diff --git a/fr/lang/fr/README.md b/fr/lang/fr/README.md
new file mode 100644
index 0000000000..484089f3c1
--- /dev/null
+++ b/fr/lang/fr/README.md
@@ -0,0 +1,165 @@
+# graphkit-learn
+[](https://travis-ci.org/jajupmochi/graphkit-learn) [](https://ci.appveyor.com/project/jajupmochi/graphkit-learn) [](https://codecov.io/gh/jajupmochi/graphkit-learn) [](https://graphkit-learn.readthedocs.io/en/master/?badge=master) [](https://badge.fury.io/py/graphkit-learn)
+
+A Python package for graph kernels, graph edit distances, and the graph pre-image problem.
+
+## Requirements
+
+* python>=3.6
+* numpy>=1.16.2
+* scipy>=1.1.0
+* matplotlib>=3.1.0
+* networkx>=2.2
+* scikit-learn>=0.20.0
+* tabulate>=0.8.2
+* tqdm>=4.26.0
+* control>=0.8.2 (for generalized random walk kernels only)
+* slycot==0.3.3 (for generalized random walk kernels only; requires a Fortran compiler, e.g. gfortran)
+
+## How to use?
+
+### Install the library
+
+* Install the stable version from PyPI (may not be up-to-date):
+```
+$ pip install graphkit-learn
+```
+
+* Install the latest version from GitHub:
+```
+$ git clone https://github.com/jajupmochi/graphkit-learn.git
+$ cd graphkit-learn/
+$ python setup.py install
+```
+
+### Run the test
+
+A series of [tests](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/tests) can be run to check if the library works correctly:
+```
+$ pip install -U pip pytest codecov coverage pytest-cov
+$ pytest -v --cov-config=.coveragerc --cov-report term --cov=gklearn gklearn/tests/
+```
+
+### Check examples
+
+A series of demos showing how to use the library can be found on [Google Colab](https://drive.google.com/drive/folders/1r2gtPuFzIys2_MZw1wXqE2w3oCoVoQUG?usp=sharing) and in the [`examples`](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/examples) folder.
+
+### Other demos
+
+Check the [`notebooks`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks) directory for more demos:
+* the [`notebooks`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks) directory itself contains test code for graph kernels based on linear patterns;
+* the [`notebooks/tests`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks/tests) directory contains code that tests some libraries and functions;
+* the [`notebooks/utils`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks/utils) directory contains useful tools, such as a Gram matrix checker and a function to get properties of datasets;
+* the [`notebooks/else`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks/else) directory contains other code that we used for experiments.
+
+### Documentation
+
+The docs of the library can be found [here](https://graphkit-learn.readthedocs.io/en/master/?badge=master).
+
+## Main contents
+
+### 1 List of graph kernels
+
+* Based on walks
+ * [The common walk kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/common_walk.py) [1]
+ * Exponential
+ * Geometric
+  * [The marginalized kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/marginalized.py)
+ * With tottering [2]
+ * Without tottering [7]
+ * [The generalized random walk kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/random_walk.py) [3]
+ * [Sylvester equation](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/sylvester_equation.py)
+ * Conjugate gradient
+ * Fixed-point iterations
+ * [Spectral decomposition](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/spectral_decomposition.py)
+* Based on paths
+ * [The shortest path kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/shortest_path.py) [4]
+ * [The structural shortest path kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/structural_sp.py) [5]
+ * [The path kernel up to length h](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/path_up_to_h.py) [6]
+ * The Tanimoto kernel
+ * The MinMax kernel
+* Non-linear kernels
+ * [The treelet kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/treelet.py) [10]
+ * [Weisfeiler-Lehman kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/weisfeiler_lehman.py) [11]
+ * [Subtree](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/weisfeiler_lehman.py#L479)
+
+A demo of computing graph kernels can be found on [Google Colab](https://colab.research.google.com/drive/17Q2QCl9CAtDweGF8LiWnWoN2laeJqT0u?usp=sharing) and in the [`examples`](https://github.com/jajupmochi/graphkit-learn/blob/master/gklearn/examples/compute_graph_kernel.py) folder.
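+
+As a from-scratch illustration of what such a kernel computes (independent of the library's own API; see the linked example script for the real usage), the sketch below implements a toy shortest-path kernel in the spirit of [4], comparing two unlabeled `networkx` graphs through a Dirac kernel on their shortest-path lengths. The function name and the choice of toy graphs are illustrative only:
+
+```
+import networkx as nx
+
+def toy_shortest_path_kernel(g1, g2):
+    """Count pairs of shortest paths (one from each graph) with equal length."""
+    d1 = dict(nx.all_pairs_shortest_path_length(g1))
+    d2 = dict(nx.all_pairs_shortest_path_length(g2))
+    lengths1 = [l for src in d1 for l in d1[src].values() if l > 0]
+    lengths2 = [l for src in d2 for l in d2[src].values() if l > 0]
+    # Dirac kernel on path lengths: k(g1, g2) = number of pairs with equal length.
+    return sum(l1 == l2 for l1 in lengths1 for l2 in lengths2)
+
+g1 = nx.cycle_graph(4)  # toy unlabeled graphs
+g2 = nx.path_graph(4)
+print(toy_shortest_path_kernel(g1, g2))
+```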
+
+### 2 Graph Edit Distances
+
+### 3 Graph preimage methods
+
+A demo of generating graph preimages can be found on [Google Colab](https://colab.research.google.com/drive/1PIDvHOcmiLEQ5Np3bgBDdu0kLOquOMQK?usp=sharing) and in the [`examples`](https://github.com/jajupmochi/graphkit-learn/blob/master/gklearn/examples/median_preimege_generator.py) folder.
+
+### 4 Interface to `GEDLIB`
+
+[`GEDLIB`](https://github.com/dbblumenthal/gedlib) is an easily extensible C++ library for (suboptimally) computing the graph edit distance between attributed graphs. [A Python interface](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/gedlib) for `GEDLIB` is integrated into this library, based on the [`gedlibpy`](https://github.com/Ryurin/gedlibpy) library.
+
+### 5 Computation optimization methods
+
+* Python’s `multiprocessing.Pool` module is used to **parallelize** the computation of all kernels as well as the model selection.
+* **The Fast Computation of Shortest Path Kernel (FCSP) method** [8] is implemented in *the random walk kernel*, *the shortest path kernel*, and *the structural shortest path kernel*, where FCSP is applied to both vertex and edge kernels.
+* **The trie data structure** [9] is employed in *the path kernel up to length h* to store paths in graphs.
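+
+The sketch below gives a minimal idea of such a trie, storing paths as sequences of node labels with a count at each terminal node; it is only an illustration, not the library's actual implementation:
+
+```
+class Trie:
+    """Minimal trie over sequences of node labels, counting stored paths."""
+    def __init__(self):
+        self.children = {}
+        self.count = 0  # number of stored paths ending at this node
+
+    def insert(self, labels):
+        node = self
+        for label in labels:
+            node = node.children.setdefault(label, Trie())
+        node.count += 1
+
+    def total(self):
+        return self.count + sum(child.total() for child in self.children.values())
+
+trie = Trie()
+for path in [('C', 'O'), ('C', 'O'), ('C', 'N', 'O')]:  # toy label sequences
+    trie.insert(path)
+print(trie.total())  # 3
+```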
+
+## Issues
+
+* This library uses the `multiprocessing.Pool.imap_unordered` function for parallelization, which may not run correctly on Windows. For now, Windows users may need to comment out the parallel code and uncomment the serial code below it. We will consider adding a parameter to switch between serial and parallel computation.
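+
+A simplified sketch of this parallel pattern, with a trivial stand-in pairwise function in place of a real graph kernel, might look like the following (the serial fallback is shown as comments):
+
+```
+from itertools import combinations_with_replacement
+from multiprocessing import Pool
+
+import networkx as nx
+import numpy as np
+
+graphs = [nx.cycle_graph(n) for n in range(3, 8)]  # toy dataset
+
+def kernel_entry(pair):
+    i, j = pair
+    # Any pairwise graph kernel would go here; a trivial stand-in is used instead.
+    return i, j, graphs[i].number_of_edges() * graphs[j].number_of_edges()
+
+if __name__ == '__main__':
+    pairs = list(combinations_with_replacement(range(len(graphs)), 2))
+    gram = np.zeros((len(graphs), len(graphs)))
+    with Pool() as pool:
+        for i, j, k in pool.imap_unordered(kernel_entry, pairs):
+            gram[i, j] = gram[j, i] = k
+    # Serial fallback (e.g. on Windows):
+    # for i, j, k in map(kernel_entry, pairs):
+    #     gram[i, j] = gram[j, i] = k
+    print(gram)
+```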
+
+* Some modules (such as `numpy`, `scipy` and `scikit-learn`) use [`OpenBLAS`](https://www.openblas.net/) for parallel computation by default, which conflicts with other parallelization modules such as `multiprocessing.Pool` and greatly increases the computing time. Setting its number of threads to 1 forces `OpenBLAS` to use a single thread/CPU, which avoids these conflicts. For now, this has to be done manually. On Linux, type this command in a terminal before running the code:
+```
+$ export OPENBLAS_NUM_THREADS=1
+```
+Or add `export OPENBLAS_NUM_THREADS=1` at the end of your `~/.bashrc` file, then run
+```
+$ source ~/.bashrc
+```
+to make the change permanent.
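+
+Alternatively, the variable can be set from Python itself, provided this happens before `numpy`/`scipy` are first imported in your script:
+
+```
+import os
+os.environ['OPENBLAS_NUM_THREADS'] = '1'  # must be set before the first numpy import
+
+import numpy as np  # OpenBLAS now runs single-threaded in this process
+```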
+
+## Results
+
+See this paper for a detailed description of the graph kernels and the experimental results:
+
+Linlin Jia, Benoit Gaüzère, and Paul Honeine. Graph Kernels Based on Linear Patterns: Theoretical and Experimental Comparisons. working paper or preprint, March 2019. URL https://hal-normandie-univ.archives-ouvertes.fr/hal-02053946.
+
+A comparison of the performance of graph kernels on benchmark datasets can be found [here](https://graphkit-learn.readthedocs.io/en/master/experiments.html).
+
+## How to contribute
+
+Fork the library and open a pull request! Make your own contribution to the community!
+
+## Authors
+
+* [Linlin Jia](https://jajupmochi.github.io/), LITIS, INSA Rouen Normandie
+* [Benoit Gaüzère](http://pagesperso.litislab.fr/~bgauzere/#contact_en), LITIS, INSA Rouen Normandie
+* [Paul Honeine](http://honeine.fr/paul/Welcome.html), LITIS, Université de Rouen Normandie
+
+## Citation
+
+Still waiting...
+
+## Acknowledgments
+
+This research was supported by CSC (China Scholarship Council) and the French national research agency (ANR) under the grant APi (ANR-18-CE23-0014). The authors would like to thank the CRIANN (Le Centre Régional Informatique et d’Applications Numériques de Normandie) for providing computational resources.
+
+## References
+[1] Thomas Gärtner, Peter Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient alternatives. Learning Theory and Kernel Machines, pages 129–143, 2003.
+
+[2] H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning, Washington, DC, United States, 2003.
+
+[3] Vishwanathan, S.V.N., Schraudolph, N.N., Kondor, R., Borgwardt, K.M., 2010. Graph kernels. Journal of Machine Learning Research 11, 1201–1242.
+
+[4] K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In Proceedings of the International Conference on Data Mining, pages 74-81, 2005.
+
+[5] Liva Ralaivola, Sanjay J Swamidass, Hiroto Saigo, and Pierre Baldi. Graph kernels for chemical informatics. Neural networks, 18(8):1093–1110, 2005.
+
+[6] Suard F, Rakotomamonjy A, Bensrhair A. Kernel on Bag of Paths For Measuring Similarity of Shapes. In ESANN 2007, Apr 25 (pp. 355-360).
+
+[7] Mahé, P., Ueda, N., Akutsu, T., Perret, J.L., Vert, J.P., 2004. Extensions of marginalized graph kernels, in: Proc. the twenty-first international conference on Machine learning, ACM. p. 70.
+
+[8] Lifan Xu, Wei Wang, M Alvarez, John Cavazos, and Dongping Zhang. Parallelization of shortest path graph kernels on multi-core cpus and gpus. Proceedings of the Programmability Issues for Heterogeneous Multicores (MultiProg), Vienna, Austria, 2014.
+
+[9] Edward Fredkin. Trie memory. Communications of the ACM, 3(9):490–499, 1960.
+
+[10] Gaüzere, B., Brun, L., Villemin, D., 2012. Two new graphs kernels in chemoinformatics. Pattern Recognition Letters 33, 2038–2047.
+
+[11] Shervashidze, N., Schweitzer, P., Leeuwen, E.J.v., Mehlhorn, K., Borgwardt, K.M., 2011. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research 12, 2539–2561.
diff --git a/lang/fr/.appveyor.yml b/lang/fr/.appveyor.yml
new file mode 100644
index 0000000000..d63af3a00f
--- /dev/null
+++ b/lang/fr/.appveyor.yml
@@ -0,0 +1,29 @@
+---
+environment:
+ matrix:
+ -
+ PYTHON: "C:\\Python36"
+ -
+ PYTHON: "C:\\Python36-x64"
+ -
+ PYTHON: "C:\\Python37"
+ -
+ PYTHON: "C:\\Python37-x64"
+ -
+ PYTHON: "C:\\Python38"
+ -
+ PYTHON: "C:\\Python38-x64"
+#skip_commits:
+#files:
+#- "*.yml"
+#- "*.rst"
+#- "LICENSE"
+install:
+ - "%PYTHON%\\python.exe -m pip install -U pip"
+ - "%PYTHON%\\python.exe -m pip install wheel"
+ - "%PYTHON%\\python.exe -m pip install -r requirements.txt"
+ - "%PYTHON%\\python.exe -m pip install -U pytest"
+build: false
+test_script:
+ - "%PYTHON%\\python.exe setup.py bdist_wheel"
+ - "%PYTHON%\\python.exe -m pytest -v gklearn/tests/ --ignore=gklearn/tests/test_median_preimage_generator.py"
diff --git a/lang/fr/.coveragerc b/lang/fr/.coveragerc
new file mode 100644
index 0000000000..1acf8611f6
--- /dev/null
+++ b/lang/fr/.coveragerc
@@ -0,0 +1,4 @@
+[run]
+omit =
+ gklearn/tests/*
+ gklearn/examples/*
diff --git a/lang/fr/.gitignore b/lang/fr/.gitignore
new file mode 100644
index 0000000000..8954c13d9e
--- /dev/null
+++ b/lang/fr/.gitignore
@@ -0,0 +1,81 @@
+# Jupyter Notebook
+.ipynb_checkpoints
+datasets/*
+!datasets/ds.py
+!datasets/Alkane/
+!datasets/acyclic/
+!datasets/Acyclic/
+!datasets/MAO/
+!datasets/PAH/
+!datasets/MUTAG/
+!datasets/Letter-med/
+!datasets/ENZYMES_txt/
+!datasets/DD/
+!datasets/NCI1/
+!datasets/NCI109/
+!datasets/AIDS/
+!datasets/monoterpenoides/
+!datasets/Monoterpenoides/
+!datasets/Fingerprint/*.txt
+!datasets/Cuneiform/*.txt
+notebooks/results/*
+notebooks/check_gm/*
+notebooks/test_parallel/*
+requirements/*
+gklearn/model.py
+gklearn/kernels/*_sym.py
+*.npy
+*.eps
+*.dat
+*.pyc
+
+gklearn/preimage/*
+!gklearn/preimage/*.py
+!gklearn/preimage/experiments/*.py
+!gklearn/preimage/experiments/tools/*.py
+
+__pycache__
+##*#
+
+docs/build/*
+!docs/build/latex/*.pdf
+docs/log*
+
+*.egg-info
+dist/
+build/
+
+.coverage
+htmlcov
+
+virtualenv
+
+.vscode/
+
+# gedlibpy
+gklearn/gedlib/build/
+gklearn/gedlib/build/__pycache__/
+gklearn/gedlib/collections/
+gklearn/gedlib/Median_Example/
+gklearn/gedlib/build/include/gedlib-master/median/collections/
+gklearn/gedlib/include/
+gklearn/gedlib/libgxlgedlib.so
+
+# misc
+notebooks/preimage/
+notebooks/unfinished
+gklearn/kernels/else/
+gklearn/kernels/unfinished/
+gklearn/kernels/.tags
+
+# pyenv
+.python-version
+
+# docker travis debug.
+ci.sh
+
+# outputs.
+outputs/
+
+# pyCharm.
+.idea/
diff --git a/lang/fr/.readthedocs.yml b/lang/fr/.readthedocs.yml
new file mode 100644
index 0000000000..32329e3116
--- /dev/null
+++ b/lang/fr/.readthedocs.yml
@@ -0,0 +1,27 @@
+---
+#.readthedocs.yml
+#Read the Docs configuration file
+#See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+#Required
+version: 2
+#Build documentation in the docs/ directory with Sphinx
+sphinx:
+ configuration: docs/source/conf.py
+#Build documentation with MkDocs
+#mkdocs:
+#configuration: mkdocs.yml
+#Optionally build your docs in additional formats such as PDF and ePub
+formats: all
+#Optionally set the version of Python and requirements required to build your docs
+python:
+ version: 3.6
+ install:
+ -
+ requirements: docs/requirements.txt
+ -
+ requirements: requirements.txt
+ -
+ method: pip
+ path: .
+ extra_requirements:
+ - docs
diff --git a/lang/fr/.travis.yml b/lang/fr/.travis.yml
new file mode 100644
index 0000000000..d7786c7a6a
--- /dev/null
+++ b/lang/fr/.travis.yml
@@ -0,0 +1,22 @@
+---
+language: python
+python:
+ - '3.6'
+ - '3.7'
+ - '3.8'
+before_install:
+ - python --version
+ - pip install -U pip
+ - pip install -U pytest
+ - pip install codecov
+ - pip install coverage
+ - pip install pytest-cov
+ - sudo apt-get -y install gfortran
+install:
+ - pip install -r requirements.txt
+ - pip install wheel
+script:
+ - python setup.py bdist_wheel
+ - if [ $TRAVIS_PYTHON_VERSION == 3.6 ]; then pytest -v --cov-config=.coveragerc --cov-report term --cov=gklearn gklearn/tests/; else pytest -v --cov-config=.coveragerc --cov-report term --cov=gklearn gklearn/tests/ --ignore=gklearn/tests/test_median_preimage_generator.py; fi
+after_success:
+ - codecov
diff --git a/lang/fr/LICENSE b/lang/fr/LICENSE
new file mode 100644
index 0000000000..94a9ed024d
--- /dev/null
+++ b/lang/fr/LICENSE
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+ The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/lang/fr/Problems.md b/lang/fr/Problems.md
new file mode 100644
index 0000000000..cb7dd1e1b1
--- /dev/null
+++ b/lang/fr/Problems.md
@@ -0,0 +1,23 @@
+# About graph kernels.
+
+## (Random walk) Sylvester equation kernel.
+
+### ImportError: cannot import name 'frange' from 'matplotlib.mlab'
+
+You are using an outdated `control` package with a recent `matplotlib`. `mlab.frange` was removed in `matplotlib-3.1.0`, and `control` removed the call to it in `control-0.8.2`.
+
+Update your `control` package.
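+
+For example, in a pip-based environment:
+
+```
+$ pip install -U control
+```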
+
+### Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
+
+The Intel Math Kernel Library (MKL) is missing or not properly configured. MKL appears to be required by the `control` module.
+
+Install MKL. Then add the following to your environment:
+
+```
+export PATH=/opt/intel/bin:$PATH
+
+export LD_LIBRARY_PATH=/opt/intel/lib/intel64:/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
+
+export LD_PRELOAD=/opt/intel/mkl/lib/intel64/libmkl_def.so:/opt/intel/mkl/lib/intel64/libmkl_avx2.so:/opt/intel/mkl/lib/intel64/libmkl_core.so:/opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so:/opt/intel/mkl/lib/intel64/libmkl_intel_thread.so:/opt/intel/lib/intel64_lin/libiomp5.so
+```
diff --git a/lang/fr/README.md b/lang/fr/README.md
new file mode 100644
index 0000000000..d980044e31
--- /dev/null
+++ b/lang/fr/README.md
@@ -0,0 +1,165 @@
+# graphkit-learn
+[](https://travis-ci.org/jajupmochi/graphkit-learn) [](https://ci.appveyor.com/project/jajupmochi/graphkit-learn) [](https://codecov.io/gh/jajupmochi/graphkit-learn) [](https://graphkit-learn.readthedocs.io/en/master/?badge=master) [](https://badge.fury.io/py/graphkit-learn)
+
+A Python package for graph kernels, graph edit distances, and the graph pre-image problem.
+
+## Requirements
+
+* python>=3.6
+* numpy>=1.16.2
+* scipy>=1.1.0
+* matplotlib>=3.1.0
+* networkx>=2.2
+* scikit-learn>=0.20.0
+* tabulate>=0.8.2
+* tqdm>=4.26.0
+* control>=0.8.2 (for generalized random walk kernels only)
+* slycot>0.4.0 (for generalized random walk kernels only; requires a Fortran compiler, e.g. gfortran)
+
+## How to use?
+
+### Install the library
+
+* Install the stable version from PyPI (may not be up-to-date):
+```
+$ pip install graphkit-learn
+```
+
+* Install the latest version from GitHub:
+```
+$ git clone https://github.com/jajupmochi/graphkit-learn.git
+$ cd graphkit-learn/
+$ python setup.py install
+```
+
+### Run the test
+
+A series of [tests](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/tests) can be run to check if the library works correctly:
+```
+$ pip install -U pip pytest codecov coverage pytest-cov
+$ pytest -v --cov-config=.coveragerc --cov-report term --cov=gklearn gklearn/tests/
+```
+
+### Check examples
+
+A series of demos showing how to use the library can be found on [Google Colab](https://drive.google.com/drive/folders/1r2gtPuFzIys2_MZw1wXqE2w3oCoVoQUG?usp=sharing) and in the [`examples`](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/examples) folder.
+
+### Other demos
+
+Check the [`notebooks`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks) directory for more demos:
+* the [`notebooks`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks) directory itself contains test code for graph kernels based on linear patterns;
+* the [`notebooks/tests`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks/tests) directory contains code that tests some libraries and functions;
+* the [`notebooks/utils`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks/utils) directory contains useful tools, such as a Gram matrix checker and a function to get properties of datasets;
+* the [`notebooks/else`](https://github.com/jajupmochi/graphkit-learn/tree/master/notebooks/else) directory contains other code that we used for experiments.
+
+### Documentation
+
+The docs of the library can be found [here](https://graphkit-learn.readthedocs.io/en/master/?badge=master).
+
+## Main contents
+
+### 1 List of graph kernels
+
+* Based on walks
+ * [The common walk kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/common_walk.py) [1]
+ * Exponential
+ * Geometric
+  * [The marginalized kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/marginalized.py)
+ * With tottering [2]
+ * Without tottering [7]
+ * [The generalized random walk kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/random_walk.py) [3]
+ * [Sylvester equation](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/sylvester_equation.py)
+ * Conjugate gradient
+ * Fixed-point iterations
+ * [Spectral decomposition](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/spectral_decomposition.py)
+* Based on paths
+ * [The shortest path kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/shortest_path.py) [4]
+ * [The structural shortest path kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/structural_sp.py) [5]
+ * [The path kernel up to length h](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/path_up_to_h.py) [6]
+ * The Tanimoto kernel
+ * The MinMax kernel
+* Non-linear kernels
+ * [The treelet kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/treelet.py) [10]
+ * [Weisfeiler-Lehman kernel](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/weisfeiler_lehman.py) [11]
+ * [Subtree](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/kernels/weisfeiler_lehman.py#L479)
+
+A demo of computing graph kernels can be found on [Google Colab](https://colab.research.google.com/drive/17Q2QCl9CAtDweGF8LiWnWoN2laeJqT0u?usp=sharing) and in the [`examples`](https://github.com/jajupmochi/graphkit-learn/blob/master/gklearn/examples/compute_graph_kernel.py) folder.
+
+### 2 Graph Edit Distances
+
+### 3 Graph preimage methods
+
+A demo of generating graph preimages can be found on [Google Colab](https://colab.research.google.com/drive/1PIDvHOcmiLEQ5Np3bgBDdu0kLOquOMQK?usp=sharing) and in the [`examples`](https://github.com/jajupmochi/graphkit-learn/blob/master/gklearn/examples/median_preimege_generator.py) folder.
+
+### 4 Interface to `GEDLIB`
+
+[`GEDLIB`](https://github.com/dbblumenthal/gedlib) is an easily extensible C++ library for (suboptimally) computing the graph edit distance between attributed graphs. [A Python interface](https://github.com/jajupmochi/graphkit-learn/tree/master/gklearn/gedlib) for `GEDLIB` is integrated into this library, based on the [`gedlibpy`](https://github.com/Ryurin/gedlibpy) library.
+
+### 5 Computation optimization methods
+
+* Python’s `multiprocessing.Pool` module is used to **parallelize** the computation of all kernels as well as the model selection.
+* **The Fast Computation of Shortest Path Kernel (FCSP) method** [8] is implemented in *the random walk kernel*, *the shortest path kernel*, and *the structural shortest path kernel*, where FCSP is applied to both vertex and edge kernels.
+* **The trie data structure** [9] is employed in *the path kernel up to length h* to store paths in graphs.
+
+## Issues
+
+* This library uses the `multiprocessing.Pool.imap_unordered` function for parallelization, which may not run correctly on Windows. For now, Windows users may need to comment out the parallel code and uncomment the serial code below it. We will consider adding a parameter to switch between serial and parallel computation.
+
+* Some modules (such as `Numpy`, `Scipy`, `sklearn`) use [`OpenBLAS`](https://www.openblas.net/) to perform parallel computation by default, which conflicts with other parallelization modules such as `multiprocessing.Pool` and greatly increases the computing time. Setting its thread count to 1 forces `OpenBLAS` to use a single thread/CPU and avoids these conflicts. For now, this has to be done manually. Under Linux, type this command in a terminal before running the code:
+```
+$ export OPENBLAS_NUM_THREADS=1
+```
+Or add `export OPENBLAS_NUM_THREADS=1` at the end of your `~/.bashrc` file, then run
+```
+$ source ~/.bashrc
+```
+to make the change permanent.
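+
+Alternatively (a suggestion, not something the library does for you), the variable can be set from within Python, provided it is set before `numpy`/`scipy` are imported for the first time:
+```
+import os
+os.environ['OPENBLAS_NUM_THREADS'] = '1'  # must run before the first import of numpy/scipy.
+
+import numpy as np  # OpenBLAS now uses a single thread.
+```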
+
+## Results
+
+Check this paper for a detailed description of the graph kernels and the experimental results:
+
+Linlin Jia, Benoit Gaüzère, and Paul Honeine. Graph Kernels Based on Linear Patterns: Theoretical and Experimental Comparisons. working paper or preprint, March 2019. URL https://hal-normandie-univ.archives-ouvertes.fr/hal-02053946.
+
+A comparison of the performance of graph kernels on benchmark datasets can be found [here](https://graphkit-learn.readthedocs.io/en/master/experiments.html).
+
+## How to contribute
+
+Fork the library and open a pull request! Make your own contribution to the community!
+
+## Authors
+
+* [Linlin Jia](https://jajupmochi.github.io/), LITIS, INSA Rouen Normandie
+* [Benoit Gaüzère](http://pagesperso.litislab.fr/~bgauzere/#contact_en), LITIS, INSA Rouen Normandie
+* [Paul Honeine](http://honeine.fr/paul/Welcome.html), LITIS, Université de Rouen Normandie
+
+## Citation
+
+Still waiting...
+
+## Acknowledgments
+
+This research was supported by CSC (China Scholarship Council) and the French national research agency (ANR) under the grant APi (ANR-18-CE23-0014). The authors would like to thank the CRIANN (Le Centre Régional Informatique et d’Applications Numériques de Normandie) for providing computational resources.
+
+## References
+[1] Thomas Gärtner, Peter Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient alternatives. Learning Theory and Kernel Machines, pages 129–143, 2003.
+
+[2] H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning, Washington, DC, United States, 2003.
+
+[3] S.V.N. Vishwanathan, N.N. Schraudolph, R. Kondor, and K.M. Borgwardt. Graph kernels. Journal of Machine Learning Research, 11:1201–1242, 2010.
+
+[4] K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In Proceedings of the International Conference on Data Mining, pages 74-81, 2005.
+
+[5] Liva Ralaivola, Sanjay J Swamidass, Hiroto Saigo, and Pierre Baldi. Graph kernels for chemical informatics. Neural networks, 18(8):1093–1110, 2005.
+
+[6] F. Suard, A. Rakotomamonjy, and A. Bensrhair. Kernel on bag of paths for measuring similarity of shapes. In ESANN, pages 355–360, 2007.
+
+[7] P. Mahé, N. Ueda, T. Akutsu, J.-L. Perret, and J.-P. Vert. Extensions of marginalized graph kernels. In Proceedings of the Twenty-First International Conference on Machine Learning, page 70. ACM, 2004.
+
+[8] Lifan Xu, Wei Wang, M. Alvarez, John Cavazos, and Dongping Zhang. Parallelization of shortest path graph kernels on multi-core CPUs and GPUs. Proceedings of the Programmability Issues for Heterogeneous Multicores (MultiProg), Vienna, Austria, 2014.
+
+[9] Edward Fredkin. Trie memory. Communications of the ACM, 3(9):490–499, 1960.
+
+[10] B. Gaüzère, L. Brun, and D. Villemin. Two new graphs kernels in chemoinformatics. Pattern Recognition Letters, 33:2038–2047, 2012.
+
+[11] N. Shervashidze, P. Schweitzer, E.J. van Leeuwen, K. Mehlhorn, and K.M. Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12:2539–2561, 2011.
diff --git a/lang/fr/docs/Makefile b/lang/fr/docs/Makefile
new file mode 100644
index 0000000000..69fe55ecfa
--- /dev/null
+++ b/lang/fr/docs/Makefile
@@ -0,0 +1,19 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+SOURCEDIR = source
+BUILDDIR = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file
diff --git a/lang/fr/docs/commands.md b/lang/fr/docs/commands.md
new file mode 100644
index 0000000000..ff7cc4cd79
--- /dev/null
+++ b/lang/fr/docs/commands.md
@@ -0,0 +1,5 @@
+sphinx-apidoc -o docs/ gklearn/ --separate
+
+sphinx-apidoc -o source/ ../gklearn/ --separate --force --module-first --no-toc
+
+make html
diff --git a/lang/fr/docs/make.bat b/lang/fr/docs/make.bat
new file mode 100644
index 0000000000..543c6b13b4
--- /dev/null
+++ b/lang/fr/docs/make.bat
@@ -0,0 +1,35 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=source
+set BUILDDIR=build
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+ echo.
+ echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+ echo.installed, then set the SPHINXBUILD environment variable to point
+ echo.to the full path of the 'sphinx-build' executable. Alternatively you
+ echo.may add the Sphinx directory to PATH.
+ echo.
+ echo.If you don't have Sphinx installed, grab it from
+ echo.http://sphinx-doc.org/
+ exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+
+:end
+popd
diff --git a/lang/fr/docs/requirements.txt b/lang/fr/docs/requirements.txt
new file mode 100644
index 0000000000..52189409e2
--- /dev/null
+++ b/lang/fr/docs/requirements.txt
@@ -0,0 +1,4 @@
+sphinx
+m2r
+nbsphinx
+ipykernel
diff --git a/lang/fr/docs/source/conf.py b/lang/fr/docs/source/conf.py
new file mode 100644
index 0000000000..b0fae5a482
--- /dev/null
+++ b/lang/fr/docs/source/conf.py
@@ -0,0 +1,194 @@
+# -*- coding: utf-8 -*-
+#
+# Configuration file for the Sphinx documentation builder.
+#
+# This file does only contain a selection of the most common options. For a
+# full list see the documentation:
+# http://www.sphinx-doc.org/en/master/config
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('.'))
+# sys.path.insert(0, os.path.abspath('..'))
+sys.path.insert(0, '../')
+sys.path.insert(0, '../../')
+
+# -- Project information -----------------------------------------------------
+
+project = 'graphkit-learn'
+copyright = '2020, Linlin Jia'
+author = 'Linlin Jia'
+
+# The short X.Y version
+version = ''
+# The full version, including alpha/beta/rc tags
+release = '1.0.0'
+
+
+# -- General configuration ---------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx.ext.autodoc',
+ 'sphinx.ext.doctest',
+ 'sphinx.ext.todo',
+ 'sphinx.ext.coverage',
+ 'sphinx.ext.mathjax',
+ 'sphinx.ext.ifconfig',
+ 'sphinx.ext.viewcode',
+ 'm2r',
+]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+source_suffix = ['.rst', '.md']
+# source_suffix = '.rst'
+
+# The master toctree document.
+master_doc = 'index'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = None
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+# html_theme = 'alabaster'
+html_theme = 'sphinx_rtd_theme'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# The default sidebars (for documents that don't match any pattern) are
+# defined by theme itself. Builtin themes are using these templates by
+# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
+# 'searchbox.html']``.
+#
+# html_sidebars = {}
+
+
+# -- Options for HTMLHelp output ---------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'graphkit-learndoc'
+
+
+# -- Options for LaTeX output ------------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, 'graphkit-learn.tex', 'graphkit-learn Documentation',
+ 'Linlin Jia', 'manual'),
+]
+
+
+# -- Options for manual page output ------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'graphkit-learn', 'graphkit-learn Documentation',
+ [author], 1)
+]
+
+
+# -- Options for Texinfo output ----------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'graphkit-learn', 'graphkit-learn Documentation',
+ author, 'graphkit-learn', 'One line description of project.',
+ 'Miscellaneous'),
+]
+
+
+# -- Options for Epub output -------------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = project
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#
+# epub_identifier = ''
+
+# A unique identification for the text.
+#
+# epub_uid = ''
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = ['search.html']
+
+
+# -- Extension configuration -------------------------------------------------
+
+# -- Options for todo extension ----------------------------------------------
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = True
+
+add_module_names = False
diff --git a/lang/fr/docs/source/experiments.rst b/lang/fr/docs/source/experiments.rst
new file mode 100644
index 0000000000..7d8d477afd
--- /dev/null
+++ b/lang/fr/docs/source/experiments.rst
@@ -0,0 +1,22 @@
+Experiments
+===========
+
+To demonstrate the effectiveness and practicality of the `graphkit-learn` library, we tested it on several benchmark datasets. See `(Kersting et al., 2016) `__ for details on these datasets.
+
+A two-layer nested cross-validation (CV) is applied to select and evaluate models: the outer CV randomly splits the dataset into 10 folds, using 9 of them as the validation set; the inner CV then randomly splits the validation set into 10 folds, using 9 of them as the training set. The whole procedure is performed 30 times, and the average performance is computed over these trials. Possible parameters of each graph kernel are also tuned during this procedure.
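+
+An illustrative outline of this scheme (not the exact experiment code; it assumes scikit-learn's ``ShuffleSplit`` and an SVM on a precomputed Gram matrix, with ``c_grid`` standing in for the tuned parameters):
+
+.. code-block:: python
+
+    import numpy as np
+    from sklearn.model_selection import ShuffleSplit
+    from sklearn.svm import SVC
+
+    def nested_cv(gram, y, c_grid):
+        # gram: precomputed Gram matrix (NumPy array); y: targets (NumPy array).
+        # Outer CV: 10 random splits, 90% model-selection ("validation") set, 10% test set.
+        outer = ShuffleSplit(n_splits=10, test_size=0.1)
+        accuracies = []
+        for val_idx, test_idx in outer.split(y):
+            # Inner CV: 10 random splits of the validation set, 90% training, 10% evaluation.
+            inner = ShuffleSplit(n_splits=10, test_size=0.1)
+            best_c, best_score = None, -np.inf
+            for C in c_grid:
+                scores = []
+                for tr, te in inner.split(val_idx):
+                    tr_idx, te_idx = val_idx[tr], val_idx[te]
+                    clf = SVC(kernel='precomputed', C=C)
+                    clf.fit(gram[np.ix_(tr_idx, tr_idx)], y[tr_idx])
+                    scores.append(clf.score(gram[np.ix_(te_idx, tr_idx)], y[te_idx]))
+                if np.mean(scores) > best_score:
+                    best_c, best_score = C, np.mean(scores)
+            # Retrain on the whole validation set with the selected C, then test on the held-out fold.
+            clf = SVC(kernel='precomputed', C=best_c)
+            clf.fit(gram[np.ix_(val_idx, val_idx)], y[val_idx])
+            accuracies.append(clf.score(gram[np.ix_(test_idx, val_idx)], y[test_idx]))
+        return np.mean(accuracies)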
+
+The machine used to execute the experiments is a cluster with 28 CPU cores of Intel(R) Xeon(R) E5-2680 v4 @ 2.40GHz, 252GB of memory, and the 64-bit operating system CentOS Linux release 7.3.1611. All experiments were run with Python 3.5.2.
+
+The figure below shows the accuracies achieved by the graph kernels implemented in the `graphkit-learn` library, in terms of regression error (upper table) and classification rate (lower table). Red indicates the worst results and dark green the best ones. Gray cells with the “inf” marker indicate that the computation of the graph kernel on that dataset was omitted because it requires far more computational resources than the other kernels.
+
+.. image:: figures/all_test_accuracy.svg
+ :width: 600
+ :alt: accuracies
+
+The figure below displays the computational time needed to compute the Gram matrix of each graph kernel (in :math:`\log_{10}` of seconds) on each dataset. Color legends have the same meaning as in the figure above.
+
+.. image:: figures/all_ave_gm_times.svg
+ :width: 600
+ :alt: computational time
+
diff --git a/lang/fr/docs/source/figures/all_ave_gm_times.svg b/lang/fr/docs/source/figures/all_ave_gm_times.svg
new file mode 100644
index 0000000000..037a6a1cd6
--- /dev/null
+++ b/lang/fr/docs/source/figures/all_ave_gm_times.svg
@@ -0,0 +1,2059 @@
+<!-- SVG markup omitted (figure: average Gram matrix computation times of the graph kernels on each dataset). -->
diff --git a/lang/fr/docs/source/figures/all_test_accuracy.svg b/lang/fr/docs/source/figures/all_test_accuracy.svg
new file mode 100644
index 0000000000..13fa813bb2
--- /dev/null
+++ b/lang/fr/docs/source/figures/all_test_accuracy.svg
@@ -0,0 +1,2131 @@
+<!-- SVG markup omitted (figure: test accuracies of the graph kernels on each dataset). -->
diff --git a/lang/fr/docs/source/gklearn.kernels.commonWalkKernel.rst b/lang/fr/docs/source/gklearn.kernels.commonWalkKernel.rst
new file mode 100644
index 0000000000..1b4b4d8d9d
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.commonWalkKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.commonWalkKernel
+================================
+
+.. automodule:: gklearn.kernels.commonWalkKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.marginalizedKernel.rst b/lang/fr/docs/source/gklearn.kernels.marginalizedKernel.rst
new file mode 100644
index 0000000000..70141f7a16
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.marginalizedKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.marginalizedKernel
+==================================
+
+.. automodule:: gklearn.kernels.marginalizedKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.randomWalkKernel.rst b/lang/fr/docs/source/gklearn.kernels.randomWalkKernel.rst
new file mode 100644
index 0000000000..f6a24d6618
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.randomWalkKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.randomWalkKernel
+================================
+
+.. automodule:: gklearn.kernels.randomWalkKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.rst b/lang/fr/docs/source/gklearn.kernels.rst
new file mode 100644
index 0000000000..404d2d3641
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.rst
@@ -0,0 +1,19 @@
+gklearn.kernels
+===============
+
+.. automodule:: gklearn.kernels
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. toctree::
+
+ gklearn.kernels.commonWalkKernel
+ gklearn.kernels.marginalizedKernel
+ gklearn.kernels.randomWalkKernel
+ gklearn.kernels.spKernel
+ gklearn.kernels.structuralspKernel
+ gklearn.kernels.treeletKernel
+ gklearn.kernels.untilHPathKernel
+ gklearn.kernels.weisfeilerLehmanKernel
+
diff --git a/lang/fr/docs/source/gklearn.kernels.spKernel.rst b/lang/fr/docs/source/gklearn.kernels.spKernel.rst
new file mode 100644
index 0000000000..d9da9bcdcf
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.spKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.spKernel
+========================
+
+.. automodule:: gklearn.kernels.spKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.structuralspKernel.rst b/lang/fr/docs/source/gklearn.kernels.structuralspKernel.rst
new file mode 100644
index 0000000000..90c0fe3c2d
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.structuralspKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.structuralspKernel
+==================================
+
+.. automodule:: gklearn.kernels.structuralspKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.treeletKernel.rst b/lang/fr/docs/source/gklearn.kernels.treeletKernel.rst
new file mode 100644
index 0000000000..c88016dcb8
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.treeletKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.treeletKernel
+=============================
+
+.. automodule:: gklearn.kernels.treeletKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.untilHPathKernel.rst b/lang/fr/docs/source/gklearn.kernels.untilHPathKernel.rst
new file mode 100644
index 0000000000..76f39105bb
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.untilHPathKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.untilHPathKernel
+================================
+
+.. automodule:: gklearn.kernels.untilHPathKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.kernels.weisfeilerLehmanKernel.rst b/lang/fr/docs/source/gklearn.kernels.weisfeilerLehmanKernel.rst
new file mode 100644
index 0000000000..f5797a2217
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.kernels.weisfeilerLehmanKernel.rst
@@ -0,0 +1,7 @@
+gklearn.kernels.weisfeilerLehmanKernel
+======================================
+
+.. automodule:: gklearn.kernels.weisfeilerLehmanKernel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.rst b/lang/fr/docs/source/gklearn.rst
new file mode 100644
index 0000000000..d7de14a196
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.rst
@@ -0,0 +1,13 @@
+gklearn
+=======
+
+.. automodule:: gklearn
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. toctree::
+
+ gklearn.kernels
+ gklearn.utils
+
diff --git a/lang/fr/docs/source/gklearn.utils.graphdataset.rst b/lang/fr/docs/source/gklearn.utils.graphdataset.rst
new file mode 100644
index 0000000000..4e2aae17db
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.graphdataset.rst
@@ -0,0 +1,7 @@
+gklearn.utils.graphdataset
+==========================
+
+.. automodule:: gklearn.utils.graphdataset
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.utils.graphfiles.rst b/lang/fr/docs/source/gklearn.utils.graphfiles.rst
new file mode 100644
index 0000000000..48b5e06277
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.graphfiles.rst
@@ -0,0 +1,7 @@
+gklearn.utils.graphfiles
+========================
+
+.. automodule:: gklearn.utils.graphfiles
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.utils.kernels.rst b/lang/fr/docs/source/gklearn.utils.kernels.rst
new file mode 100644
index 0000000000..023cb3ec32
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.kernels.rst
@@ -0,0 +1,7 @@
+gklearn.utils.kernels
+=====================
+
+.. automodule:: gklearn.utils.kernels
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.utils.model_selection_precomputed.rst b/lang/fr/docs/source/gklearn.utils.model_selection_precomputed.rst
new file mode 100644
index 0000000000..b80e8fcc5e
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.model_selection_precomputed.rst
@@ -0,0 +1,7 @@
+gklearn.utils.model\_selection\_precomputed
+===========================================
+
+.. automodule:: gklearn.utils.model_selection_precomputed
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.utils.parallel.rst b/lang/fr/docs/source/gklearn.utils.parallel.rst
new file mode 100644
index 0000000000..8469b0a87a
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.parallel.rst
@@ -0,0 +1,7 @@
+gklearn.utils.parallel
+======================
+
+.. automodule:: gklearn.utils.parallel
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.utils.rst b/lang/fr/docs/source/gklearn.utils.rst
new file mode 100644
index 0000000000..3d8a0e6933
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.rst
@@ -0,0 +1,19 @@
+gklearn.utils
+=============
+
+.. automodule:: gklearn.utils
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+
+.. toctree::
+
+ gklearn.utils.graphdataset
+ gklearn.utils.graphfiles
+ gklearn.utils.kernels
+ gklearn.utils.model_selection_precomputed
+ gklearn.utils.parallel
+ gklearn.utils.trie
+ gklearn.utils.utils
+
diff --git a/lang/fr/docs/source/gklearn.utils.trie.rst b/lang/fr/docs/source/gklearn.utils.trie.rst
new file mode 100644
index 0000000000..1310cb13db
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.trie.rst
@@ -0,0 +1,7 @@
+gklearn.utils.trie
+==================
+
+.. automodule:: gklearn.utils.trie
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/gklearn.utils.utils.rst b/lang/fr/docs/source/gklearn.utils.utils.rst
new file mode 100644
index 0000000000..004db5886f
--- /dev/null
+++ b/lang/fr/docs/source/gklearn.utils.utils.rst
@@ -0,0 +1,7 @@
+gklearn.utils.utils
+===================
+
+.. automodule:: gklearn.utils.utils
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/lang/fr/docs/source/index.rst b/lang/fr/docs/source/index.rst
new file mode 100644
index 0000000000..b531ba1fc5
--- /dev/null
+++ b/lang/fr/docs/source/index.rst
@@ -0,0 +1,24 @@
+.. graphkit-learn documentation master file, created by
+ sphinx-quickstart on Wed Feb 12 15:06:37 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+.. mdinclude:: ../../README.md
+
+Documentation
+-------------
+
+.. toctree::
+ :maxdepth: 1
+
+ modules.rst
+ experiments.rst
+
+
+
+Indices and tables
+------------------
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/lang/fr/docs/source/modules.rst b/lang/fr/docs/source/modules.rst
new file mode 100644
index 0000000000..536f81ba2b
--- /dev/null
+++ b/lang/fr/docs/source/modules.rst
@@ -0,0 +1,7 @@
+Modules
+=======
+
+.. toctree::
+ :maxdepth: 4
+
+ gklearn
diff --git a/lang/fr/gklearn/__init__.py b/lang/fr/gklearn/__init__.py
new file mode 100644
index 0000000000..08ca4ed6c7
--- /dev/null
+++ b/lang/fr/gklearn/__init__.py
@@ -0,0 +1,21 @@
+# -*-coding:utf-8 -*-
+"""
+gklearn
+
+This package contains the following sub-packages:
+ * c_ext : bindings to C++ code
+ * ged : computation of graph edit distances between networkX graphs
+ * kernels : computation of graph kernels, i.e. graph similarity measures compatible with SVMs
+ * notebooks : examples of code using this library
+ * utils : diverse computations on graphs
+"""
+
+# info
+__version__ = "0.1"
+__author__ = "Benoit Gaüzère"
+__date__ = "November 2017"
+
+# import sub modules
+# from gklearn import c_ext
+# from gklearn import ged
+# from gklearn import utils
diff --git a/lang/fr/gklearn/examples/ged/compute_graph_edit_distance.py b/lang/fr/gklearn/examples/ged/compute_graph_edit_distance.py
new file mode 100644
index 0000000000..027d1e4cd0
--- /dev/null
+++ b/lang/fr/gklearn/examples/ged/compute_graph_edit_distance.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+"""compute_graph_edit_distance.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+ https://colab.research.google.com/drive/1Wfgn7WVuyOQQgwOvdUQBz0BzEVdp0YM3
+
+**This script demonstrates how to compute a graph edit distance.**
+---
+
+**0. Install `graphkit-learn`.**
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils import Dataset
+
+# Predefined dataset name, use dataset "MUTAG".
+ds_name = 'MUTAG'
+
+# Initialize a Dataset.
+dataset = Dataset()
+# Load predefined dataset "MUTAG".
+dataset.load_predefined_dataset(ds_name)
+graph1 = dataset.graphs[0]
+graph2 = dataset.graphs[1]
+print(graph1, graph2)
+
+"""**2. Compute graph edit distance.**"""
+
+from gklearn.ged.env import GEDEnv
+
+
+ged_env = GEDEnv() # initialize GED environment.
+ged_env.set_edit_cost('CONSTANT', # GED cost type.
+ edit_cost_constants=[3, 3, 1, 3, 3, 1] # edit costs.
+ )
+ged_env.add_nx_graph(graph1, '') # add graph1
+ged_env.add_nx_graph(graph2, '') # add graph2
+listID = ged_env.get_all_graph_ids() # get list IDs of graphs
+ged_env.init(init_type='LAZY_WITHOUT_SHUFFLED_COPIES') # initialize GED environment.
+options = {'initialization_method': 'RANDOM', # or 'NODE', etc.
+ 'threads': 1 # parallel threads.
+ }
+ged_env.set_method('BIPARTITE', # GED method.
+ options # options for GED method.
+ )
+ged_env.init_method() # initialize GED method.
+
+ged_env.run_method(listID[0], listID[1]) # run.
+
+pi_forward = ged_env.get_forward_map(listID[0], listID[1]) # forward map.
+pi_backward = ged_env.get_backward_map(listID[0], listID[1]) # backward map.
+dis = ged_env.get_upper_bound(listID[0], listID[1]) # GED between two graphs.
+print(pi_forward)
+print(pi_backward)
+print(dis)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/kernels/compute_distance_in_kernel_space.py b/lang/fr/gklearn/examples/kernels/compute_distance_in_kernel_space.py
new file mode 100644
index 0000000000..76c74947ce
--- /dev/null
+++ b/lang/fr/gklearn/examples/kernels/compute_distance_in_kernel_space.py
@@ -0,0 +1,73 @@
+# -*- coding: utf-8 -*-
+"""compute_distance_in_kernel_space.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+ https://colab.research.google.com/drive/17tZP6IrineQmzo9sRtfZOnHpHx6HnlMA
+
+**This script demonstrates how to compute distance in kernel space between the image of a graph and the mean of images of a group of graphs.**
+---
+
+**0. Install `graphkit-learn`.**
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils import Dataset
+
+# Predefined dataset name, use dataset "MUTAG".
+ds_name = 'MUTAG'
+
+# Initialize a Dataset.
+dataset = Dataset()
+# Load predefined dataset "MUTAG".
+dataset.load_predefined_dataset(ds_name)
+len(dataset.graphs)
+
+"""**2. Compute graph kernel.**"""
+
+from gklearn.kernels import PathUpToH
+import multiprocessing
+
+# Initialize parameters for graph kernel computation.
+kernel_options = {'depth': 3,
+ 'k_func': 'MinMax',
+ 'compute_method': 'trie'
+ }
+
+# Initialize graph kernel.
+graph_kernel = PathUpToH(node_labels=dataset.node_labels, # list of node label names.
+ edge_labels=dataset.edge_labels, # list of edge label names.
+ ds_infos=dataset.get_dataset_infos(keys=['directed']), # dataset information required for computation.
+ **kernel_options, # options for computation.
+ )
+
+# Compute Gram matrix.
+gram_matrix, run_time = graph_kernel.compute(dataset.graphs,
+ parallel='imap_unordered', # or None.
+ n_jobs=multiprocessing.cpu_count(), # number of parallel jobs.
+ normalize=True, # whether to return normalized Gram matrix.
+ verbose=2 # whether to print out results.
+ )
+
+"""**3. Compute distance in kernel space.**
+
+Given a dataset $\mathcal{G}_N$, compute the distance in kernel space between the image of $G_1 \in \mathcal{G}_N$ and the mean of images of $\mathcal{G}_k \subset \mathcal{G}_N$.
+"""
+
+from gklearn.preimage.utils import compute_k_dis
+
+# Index of $G_1$.
+idx_1 = 10
+# Indices of graphs in $\mathcal{G}_k$.
+idx_graphs = range(0, 10)
+
+# Compute the distance in kernel space.
+dis_k = compute_k_dis(idx_1,
+ idx_graphs,
+ [1 / len(idx_graphs)] * len(idx_graphs), # weights for images of graphs in $\mathcal{G}_k$; all equal when computing the mean.
+ gram_matrix, # Gram matrix of all graphs.
+ withterm3=False
+ )
+print(dis_k)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/kernels/compute_graph_kernel.py b/lang/fr/gklearn/examples/kernels/compute_graph_kernel.py
new file mode 100644
index 0000000000..2fe8d529c9
--- /dev/null
+++ b/lang/fr/gklearn/examples/kernels/compute_graph_kernel.py
@@ -0,0 +1,87 @@
+# -*- coding: utf-8 -*-
+"""compute_graph_kernel.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+ https://colab.research.google.com/drive/17Q2QCl9CAtDweGF8LiWnWoN2laeJqT0u
+
+**This script demonstrates how to compute a graph kernel.**
+---
+
+**0. Install `graphkit-learn`.**
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils import Dataset
+
+# Predefined dataset name, use dataset "MUTAG".
+ds_name = 'MUTAG'
+
+# Initialize a Dataset.
+dataset = Dataset()
+# Load predefined dataset "MUTAG".
+dataset.load_predefined_dataset(ds_name)
+len(dataset.graphs)
+
+"""**2. Compute graph kernel.**"""
+
+from gklearn.kernels import PathUpToH
+
+# Initialize parameters for graph kernel computation.
+kernel_options = {'depth': 3,
+ 'k_func': 'MinMax',
+ 'compute_method': 'trie'
+ }
+
+# Initialize graph kernel.
+graph_kernel = PathUpToH(node_labels=dataset.node_labels, # list of node label names.
+ edge_labels=dataset.edge_labels, # list of edge label names.
+ ds_infos=dataset.get_dataset_infos(keys=['directed']), # dataset information required for computation.
+ **kernel_options, # options for computation.
+ )
+
+print('done.')
+
+import multiprocessing
+import matplotlib.pyplot as plt
+
+# Compute Gram matrix.
+gram_matrix, run_time = graph_kernel.compute(dataset.graphs,
+ parallel='imap_unordered', # or None.
+ n_jobs=multiprocessing.cpu_count(), # number of parallel jobs.
+ normalize=True, # whether to return normalized Gram matrix.
+ verbose=2 # whether to print out results.
+ )
+# Print results.
+print()
+print(gram_matrix)
+print(run_time)
+plt.imshow(gram_matrix)
+
+import multiprocessing
+
+# Compute graph kernels between a graph and a list of graphs.
+kernel_list, run_time = graph_kernel.compute(dataset.graphs, # a list of graphs.
+ dataset.graphs[0], # a single graph.
+ parallel='imap_unordered', # or None.
+ n_jobs=multiprocessing.cpu_count(), # number of parallel jobs.
+ verbose=2 # whether to print out results.
+ )
+# Print results.
+print()
+print(kernel_list)
+print(run_time)
+
+import multiprocessing
+
+# Compute a graph kernel between two graphs.
+kernel, run_time = graph_kernel.compute(dataset.graphs[0], # a single graph.
+ dataset.graphs[1], # another single graph.
+ verbose=2 # whether to print out results.
+ )
+# Print results.
+print()
+print(kernel)
+print(run_time)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/kernels/compute_graph_kernel_old.py b/lang/fr/gklearn/examples/kernels/compute_graph_kernel_old.py
new file mode 100644
index 0000000000..7149c68c97
--- /dev/null
+++ b/lang/fr/gklearn/examples/kernels/compute_graph_kernel_old.py
@@ -0,0 +1,31 @@
+# -*- coding: utf-8 -*-
+"""compute_graph_kernel_v0.1.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+ https://colab.research.google.com/drive/10jUz7-ahPiE_T1qvFrh2NvCVs1e47noj
+
+**This script demonstrates how to compute a graph kernel.**
+---
+
+**0. Install `graphkit-learn`.**
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils.graphfiles import loadDataset
+
+graphs, targets = loadDataset('../../../datasets/MUTAG/MUTAG_A.txt')
+
+"""**2. Compute graph kernel.**"""
+
+from gklearn.kernels import untilhpathkernel
+
+gram_matrix, run_time = untilhpathkernel(
+ graphs, # The list of input graphs.
+ depth=5, # The longest length of paths.
+ k_func='MinMax', # Or 'tanimoto'.
+ compute_method='trie', # Or 'naive'.
+ n_jobs=1, # The number of jobs to run in parallel.
+ verbose=True)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/kernels/model_selection_old.py b/lang/fr/gklearn/examples/kernels/model_selection_old.py
new file mode 100644
index 0000000000..ca66be6ea8
--- /dev/null
+++ b/lang/fr/gklearn/examples/kernels/model_selection_old.py
@@ -0,0 +1,38 @@
+# -*- coding: utf-8 -*-
+"""model_selection_old.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+ https://colab.research.google.com/drive/1uVkl7scNgEPrimX8ks6iEC5ijuhB8L_D
+
+**This script demonstrates how to perform model selection and classification with a graph kernel.**
+---
+
+**0. Install `graphkit-learn`.**
+"""
+
+"""**1. Perform model seletion and classification.**"""
+
+from gklearn.utils import model_selection_for_precomputed_kernel
+from gklearn.kernels import untilhpathkernel
+import numpy as np
+
+# Set parameters.
+datafile = '../../../datasets/MUTAG/MUTAG_A.txt'
+param_grid_precomputed = {'depth': np.linspace(1, 10, 10),
+ 'k_func': ['MinMax', 'tanimoto'],
+ 'compute_method': ['trie']}
+param_grid = {'C': np.logspace(-10, 10, num=41, base=10)}
+
+# Perform model selection and classification.
+model_selection_for_precomputed_kernel(
+ datafile, # The path of dataset file.
+ untilhpathkernel, # The graph kernel used for estimation.
+ param_grid_precomputed, # The parameters used to compute gram matrices.
+ param_grid, # The penalty parameters used for the penalty term.
+ 'classification', # Or 'regression'.
+ NUM_TRIALS=30, # The number of the random trials of the outer CV loop.
+ ds_name='MUTAG', # The name of the dataset.
+ n_jobs=1,
+ verbose=True)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/preimage/median_preimege_generator.py b/lang/fr/gklearn/examples/preimage/median_preimege_generator.py
new file mode 100644
index 0000000000..9afc7bd4d4
--- /dev/null
+++ b/lang/fr/gklearn/examples/preimage/median_preimege_generator.py
@@ -0,0 +1,115 @@
+# -*- coding: utf-8 -*-
+"""example_median_preimege_generator.ipynb
+
+Automatically generated by Colaboratory.
+
+Original file is located at
+ https://colab.research.google.com/drive/1PIDvHOcmiLEQ5Np3bgBDdu0kLOquOMQK
+
+**This script demonstrates how to generate a graph preimage using Boria's method.**
+---
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils import Dataset, split_dataset_by_target
+
+# Predefined dataset name, use dataset "MAO".
+ds_name = 'MAO'
+# The node/edge labels that will not be used in the computation.
+irrelevant_labels = {'node_attrs': ['x', 'y', 'z'], 'edge_labels': ['bond_stereo']}
+
+# Initialize a Dataset.
+dataset_all = Dataset()
+# Load predefined dataset "MAO".
+dataset_all.load_predefined_dataset(ds_name)
+# Remove irrelevant labels.
+dataset_all.remove_labels(**irrelevant_labels)
+# Split the whole dataset according to the classification targets.
+datasets = split_dataset_by_target(dataset_all)
+# Get the first class of graphs, whose median preimage will be computed.
+dataset = datasets[0]
+len(dataset.graphs)
+
+"""**2. Set parameters.**"""
+
+import multiprocessing
+
+# Parameters for MedianPreimageGenerator (our method).
+mpg_options = {'fit_method': 'k-graphs', # how to fit edit costs. "k-graphs" means using all graphs in the median set when fitting.
+ 'init_ecc': [4, 4, 2, 1, 1, 1], # initial edit costs.
+ 'ds_name': ds_name, # name of the dataset.
+ 'parallel': True, # whether the parallel scheme is to be used.
+ 'time_limit_in_sec': 0, # maximum time limit to compute the preimage. If set to 0 then no limit.
+ 'max_itrs': 100, # maximum iteration limit to optimize edit costs. If set to 0 then no limit.
+ 'max_itrs_without_update': 3, # If the edit costs are not updated for more than this number of iterations, the optimization stops.
+ 'epsilon_residual': 0.01, # In optimization, the residual is only considered changed if the change is bigger than this number.
+ 'epsilon_ec': 0.1, # In optimization, the edit costs are only considered changed if the changes are bigger than this number.
+ 'verbose': 2 # whether to print out results.
+ }
+# Parameters for graph kernel computation.
+kernel_options = {'name': 'PathUpToH', # use path kernel up to length h.
+ 'depth': 9,
+ 'k_func': 'MinMax',
+ 'compute_method': 'trie',
+ 'parallel': 'imap_unordered', # or None
+ 'n_jobs': multiprocessing.cpu_count(),
+ 'normalize': True, # whether to use normalized Gram matrix to optimize edit costs.
+ 'verbose': 2 # whether to print out results.
+ }
+# Parameters for GED computation.
+ged_options = {'method': 'IPFP', # use the IPFP heuristic.
+ 'initialization_method': 'RANDOM', # or 'NODE', etc.
+ 'initial_solutions': 10, # when bigger than 1, then the method is considered mIPFP.
+ 'edit_cost': 'CONSTANT', # use CONSTANT cost.
+ 'attr_distance': 'euclidean', # the distance between non-symbolic node/edge labels is computed by euclidean distance.
+ 'ratio_runs_from_initial_solutions': 1,
+ 'threads': multiprocessing.cpu_count(), # parallel threads. Has no effect if mpg_options['parallel'] = False.
+ 'init_option': 'EAGER_WITHOUT_SHUFFLED_COPIES'
+ }
+# Parameters for MedianGraphEstimator (Boria's method).
+mge_options = {'init_type': 'MEDOID', # how to initialize the median (compute the set-median). "MEDOID" uses the graph with the smallest SOD.
+ 'random_inits': 10, # number of random initialization when 'init_type' = 'RANDOM'.
+ 'time_limit': 600, # maximum time limit to compute the generalized median. If set to 0 then no limit.
+ 'verbose': 2, # whether to print out results.
+ 'refine': False # whether to refine the final SODs or not.
+ }
+print('done.')
+
+"""**3. Run median preimage generator.**"""
+
+from gklearn.preimage import MedianPreimageGenerator
+
+# Create median preimage generator instance.
+mpg = MedianPreimageGenerator()
+# Add dataset.
+mpg.dataset = dataset
+# Set parameters.
+mpg.set_options(**mpg_options.copy())
+mpg.kernel_options = kernel_options.copy()
+mpg.ged_options = ged_options.copy()
+mpg.mge_options = mge_options.copy()
+# Run.
+mpg.run()
+
+"""**4. Get results.**"""
+
+# Get results.
+import pprint
+pp = pprint.PrettyPrinter(indent=4) # pretty print
+results = mpg.get_results()
+pp.pprint(results)
+
+# Draw generated graphs.
+def draw_graph(graph):
+ import matplotlib.pyplot as plt
+ import networkx as nx
+ plt.figure()
+ pos = nx.spring_layout(graph)
+ nx.draw(graph, pos, node_size=500, labels=nx.get_node_attributes(graph, 'atom_symbol'), font_color='w', width=3, with_labels=True)
+ plt.show()
+ plt.clf()
+ plt.close()
+
+draw_graph(mpg.set_median)
+draw_graph(mpg.gen_median)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/preimage/median_preimege_generator_cml.py b/lang/fr/gklearn/examples/preimage/median_preimege_generator_cml.py
new file mode 100644
index 0000000000..314be9787b
--- /dev/null
+++ b/lang/fr/gklearn/examples/preimage/median_preimege_generator_cml.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Jun 16 15:41:26 2020
+
+@author: ljia
+
+**This script demonstrates how to generate a graph preimage using Boria's method with cost matrix learning.**
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils import Dataset, split_dataset_by_target
+
+# Predefined dataset name, use dataset "MAO".
+ds_name = 'MAO'
+# The node/edge labels that will not be used in the computation.
+irrelevant_labels = {'node_attrs': ['x', 'y', 'z'], 'edge_labels': ['bond_stereo']}
+
+# Initialize a Dataset.
+dataset_all = Dataset()
+# Load predefined dataset "MAO".
+dataset_all.load_predefined_dataset(ds_name)
+# Remove irrelevant labels.
+dataset_all.remove_labels(**irrelevant_labels)
+# Split the whole dataset according to the classification targets.
+datasets = split_dataset_by_target(dataset_all)
+# Get the first class of graphs, whose median preimage will be computed.
+dataset = datasets[0]
+len(dataset.graphs)
+
+"""**2. Set parameters.**"""
+
+import multiprocessing
+
+# Parameters for MedianPreimageGenerator (our method).
+mpg_options = {'init_method': 'random', # how to initialize node label cost vector. "random" means to initialize randomly.
+ 'init_ecc': [4, 4, 2, 1, 1, 1], # initial edit costs.
+ 'ds_name': ds_name, # name of the dataset.
+ 'parallel': True, # @todo: whether the parallel scheme is to be used.
+ 'time_limit_in_sec': 0, # maximum time limit to compute the preimage. If set to 0 then no limit.
+ 'max_itrs': 3, # maximum iteration limit to optimize edit costs. If set to 0 then no limit.
+ 'max_itrs_without_update': 3, # If the edit costs are not updated for more than this number of iterations, the optimization stops.
+ 'epsilon_residual': 0.01, # In optimization, the residual is only considered changed if the change is bigger than this number.
+ 'epsilon_ec': 0.1, # In optimization, the edit costs are only considered changed if the changes are bigger than this number.
+ 'verbose': 2 # whether to print out results.
+ }
+# Parameters for graph kernel computation.
+kernel_options = {'name': 'PathUpToH', # use path kernel up to length h.
+ 'depth': 9,
+ 'k_func': 'MinMax',
+ 'compute_method': 'trie',
+ 'parallel': 'imap_unordered', # or None
+ 'n_jobs': multiprocessing.cpu_count(),
+ 'normalize': True, # whether to use normalized Gram matrix to optimize edit costs.
+ 'verbose': 2 # whether to print out results.
+ }
+# Parameters for GED computation.
+ged_options = {'method': 'BIPARTITE', # use the bipartite heuristic.
+ 'initialization_method': 'RANDOM', # or 'NODE', etc.
+ 'initial_solutions': 10, # when bigger than 1, then the method is considered mIPFP.
+ 'edit_cost': 'CONSTANT', # @todo: not needed. use CONSTANT cost.
+ 'attr_distance': 'euclidean', # @todo: not needed. the distance between non-symbolic node/edge labels is computed by euclidean distance.
+ 'ratio_runs_from_initial_solutions': 1,
+ 'threads': multiprocessing.cpu_count(), # parallel threads. Has no effect if mpg_options['parallel'] = False.
+ 'init_option': 'LAZY_WITHOUT_SHUFFLED_COPIES' # 'EAGER_WITHOUT_SHUFFLED_COPIES'
+ }
+# Parameters for MedianGraphEstimator (Boria's method).
+mge_options = {'init_type': 'MEDOID', # how to initialize the median (compute the set-median). "MEDOID" uses the graph with the smallest SOD.
+ 'random_inits': 10, # number of random initialization when 'init_type' = 'RANDOM'.
+ 'time_limit': 600, # maximum time limit to compute the generalized median. If set to 0 then no limit.
+ 'verbose': 2, # whether to print out results.
+ 'refine': False # whether to refine the final SODs or not.
+ }
+print('done.')
+
+"""**3. Run median preimage generator.**"""
+
+from gklearn.preimage import MedianPreimageGeneratorCML
+
+# Create median preimage generator instance.
+mpg = MedianPreimageGeneratorCML()
+# Add dataset.
+mpg.dataset = dataset
+# Set parameters.
+mpg.set_options(**mpg_options.copy())
+mpg.kernel_options = kernel_options.copy()
+mpg.ged_options = ged_options.copy()
+mpg.mge_options = mge_options.copy()
+# Run.
+mpg.run()
+
+"""**4. Get results.**"""
+
+# Get results.
+import pprint
+pp = pprint.PrettyPrinter(indent=4) # pretty print
+results = mpg.get_results()
+pp.pprint(results)
+
+# Draw generated graphs.
+def draw_graph(graph):
+ import matplotlib.pyplot as plt
+ import networkx as nx
+ plt.figure()
+ pos = nx.spring_layout(graph)
+ nx.draw(graph, pos, node_size=500, labels=nx.get_node_attributes(graph, 'atom_symbol'), font_color='w', width=3, with_labels=True)
+ plt.show()
+ plt.clf()
+ plt.close()
+
+draw_graph(mpg.set_median)
+draw_graph(mpg.gen_median)
\ No newline at end of file
diff --git a/lang/fr/gklearn/examples/preimage/median_preimege_generator_py.py b/lang/fr/gklearn/examples/preimage/median_preimege_generator_py.py
new file mode 100644
index 0000000000..5b8152eb80
--- /dev/null
+++ b/lang/fr/gklearn/examples/preimage/median_preimege_generator_py.py
@@ -0,0 +1,114 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Jun 16 15:41:26 2020
+
+@author: ljia
+
+**This script demonstrates how to generate a graph preimage using Boria's method with cost matrix learning.**
+"""
+
+"""**1. Get dataset.**"""
+
+from gklearn.utils import Dataset, split_dataset_by_target
+
+# Predefined dataset name, use dataset "MAO".
+ds_name = 'MAO'
+# The node/edge labels that will not be used in the computation.
+irrelevant_labels = {'node_attrs': ['x', 'y', 'z'], 'edge_labels': ['bond_stereo']}
+
+# Initialize a Dataset.
+dataset_all = Dataset()
+# Load predefined dataset "MAO".
+dataset_all.load_predefined_dataset(ds_name)
+# Remove irrelevant labels.
+dataset_all.remove_labels(**irrelevant_labels)
+# Split the whole dataset according to the classification targets.
+datasets = split_dataset_by_target(dataset_all)
+# Get the first class of graphs, whose median preimage will be computed.
+dataset = datasets[0]
+# dataset.cut_graphs(range(0, 10))
+len(dataset.graphs)
+
+"""**2. Set parameters.**"""
+
+import multiprocessing
+
+# Parameters for MedianPreimageGenerator (our method).
+mpg_options = {'fit_method': 'k-graphs', # how to fit edit costs. "k-graphs" means using all graphs in the median set when fitting.
+ 'init_ecc': [4, 4, 2, 1, 1, 1], # initial edit costs.
+ 'ds_name': ds_name, # name of the dataset.
+ 'parallel': True, # @todo: whether the parallel scheme is to be used.
+ 'time_limit_in_sec': 0, # maximum time limit to compute the preimage. If set to 0 then no limit.
+ 'max_itrs': 100, # maximum iteration limit to optimize edit costs. If set to 0 then no limit.
+ 'max_itrs_without_update': 3, # If the edit costs are not updated for more than this number of iterations, the optimization stops.
+ 'epsilon_residual': 0.01, # In optimization, the residual is only considered changed if the change is bigger than this number.
+ 'epsilon_ec': 0.1, # In optimization, the edit costs are only considered changed if the changes are bigger than this number.
+ 'verbose': 2 # whether to print out results.
+ }
+# Parameters for graph kernel computation.
+kernel_options = {'name': 'PathUpToH', # use path kernel up to length h.
+ 'depth': 9,
+ 'k_func': 'MinMax',
+ 'compute_method': 'trie',
+ 'parallel': 'imap_unordered', # or None
+ 'n_jobs': multiprocessing.cpu_count(),
+ 'normalize': True, # whether to use normalized Gram matrix to optimize edit costs.
+ 'verbose': 2 # whether to print out results.
+ }
+# Parameters for GED computation.
+ged_options = {'method': 'BIPARTITE', # use the bipartite heuristic.
+ 'initialization_method': 'RANDOM', # or 'NODE', etc.
+ 'initial_solutions': 10, # when bigger than 1, then the method is considered mIPFP.
+ 'edit_cost': 'CONSTANT', # use CONSTANT cost.
+ 'attr_distance': 'euclidean', # the distance between non-symbolic node/edge labels is computed by euclidean distance.
+ 'ratio_runs_from_initial_solutions': 1,
+ 'threads': multiprocessing.cpu_count(), # parallel threads. Has no effect if mpg_options['parallel'] = False.
+ 'init_option': 'LAZY_WITHOUT_SHUFFLED_COPIES' # 'EAGER_WITHOUT_SHUFFLED_COPIES'
+ }
+# Parameters for MedianGraphEstimator (Boria's method).
+mge_options = {'init_type': 'MEDOID', # how to initialize the median (compute the set-median). "MEDOID" uses the graph with the smallest SOD.
+ 'random_inits': 10, # number of random initialization when 'init_type' = 'RANDOM'.
+ 'time_limit': 600, # maximum time limit to compute the generalized median. If set to 0 then no limit.
+ 'verbose': 2, # whether to print out results.
+ 'refine': False # whether to refine the final SODs or not.
+ }
+print('done.')
+
+"""**3. Run median preimage generator.**"""
+
+from gklearn.preimage import MedianPreimageGeneratorPy
+
+# Create median preimage generator instance.
+mpg = MedianPreimageGeneratorPy()
+# Add dataset.
+mpg.dataset = dataset
+# Set parameters.
+mpg.set_options(**mpg_options.copy())
+mpg.kernel_options = kernel_options.copy()
+mpg.ged_options = ged_options.copy()
+mpg.mge_options = mge_options.copy()
+# Run.
+mpg.run()
+
+"""**4. Get results.**"""
+
+# Get results.
+import pprint
+pp = pprint.PrettyPrinter(indent=4) # pretty print
+results = mpg.get_results()
+pp.pprint(results)
+
+# Draw generated graphs.
+def draw_graph(graph):
+ import matplotlib.pyplot as plt
+ import networkx as nx
+ plt.figure()
+ pos = nx.spring_layout(graph)
+ nx.draw(graph, pos, node_size=500, labels=nx.get_node_attributes(graph, 'atom_symbol'), font_color='w', width=3, with_labels=True)
+ plt.show()
+ plt.clf()
+ plt.close()
+
+draw_graph(mpg.set_median)
+draw_graph(mpg.gen_median)
\ No newline at end of file
diff --git a/lang/fr/gklearn/experiments/ged/check_results_of_ged_env.py b/lang/fr/gklearn/experiments/ged/check_results_of_ged_env.py
new file mode 100644
index 0000000000..7c81c5d4af
--- /dev/null
+++ b/lang/fr/gklearn/experiments/ged/check_results_of_ged_env.py
@@ -0,0 +1,126 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Jun 25 11:31:46 2020
+
+@author: ljia
+"""
+
+def xp_check_results_of_GEDEnv():
+ """Compare results of GEDEnv to GEDLIB.
+ """
+ """**1. Get dataset.**"""
+
+ from gklearn.utils import Dataset
+
+ # Predefined dataset name, use dataset "MUTAG".
+ ds_name = 'MUTAG'
+
+ # Initialize a Dataset.
+ dataset = Dataset()
+ # Load predefined dataset "MUTAG".
+ dataset.load_predefined_dataset(ds_name)
+
+ results1 = compute_geds_by_GEDEnv(dataset)
+ results2 = compute_geds_by_GEDLIB(dataset)
+
+ # Show results.
+ import pprint
+ pp = pprint.PrettyPrinter(indent=4) # pretty print
+	print('Results using GEDEnv:')
+ pp.pprint(results1)
+ print()
+	print('Results using GEDLIB:')
+ pp.pprint(results2)
+
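+	# Optional numerical check (not performed here): the GED matrices of the two
+	# backends can be compared, e.g. with numpy.allclose(results1['ged_mat'], results2['ged_mat']).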
+ return results1, results2
+
+
+def compute_geds_by_GEDEnv(dataset):
+ from gklearn.ged.env import GEDEnv
+ import numpy as np
+
+ graph1 = dataset.graphs[0]
+ graph2 = dataset.graphs[1]
+
+	ged_env = GEDEnv() # initialize GED environment.
+ ged_env.set_edit_cost('CONSTANT', # GED cost type.
+ edit_cost_constants=[3, 3, 1, 3, 3, 1] # edit costs.
+ )
+ for g in dataset.graphs[0:10]:
+ ged_env.add_nx_graph(g, '')
+# ged_env.add_nx_graph(graph1, '') # add graph1
+# ged_env.add_nx_graph(graph2, '') # add graph2
+ listID = ged_env.get_all_graph_ids() # get list IDs of graphs
+ ged_env.init(init_type='LAZY_WITHOUT_SHUFFLED_COPIES') # initialize GED environment.
+ options = {'threads': 1 # parallel threads.
+ }
+ ged_env.set_method('BIPARTITE', # GED method.
+ options # options for GED method.
+ )
+ ged_env.init_method() # initialize GED method.
+
+ ged_mat = np.empty((10, 10))
+ for i in range(0, 10):
+ for j in range(i, 10):
+ ged_env.run_method(i, j) # run.
+ ged_mat[i, j] = ged_env.get_upper_bound(i, j)
+ ged_mat[j, i] = ged_mat[i, j]
+
+ results = {}
+ results['pi_forward'] = ged_env.get_forward_map(listID[0], listID[1]) # forward map.
+ results['pi_backward'] = ged_env.get_backward_map(listID[0], listID[1]) # backward map.
+	results['upper_bound'] = ged_env.get_upper_bound(listID[0], listID[1]) # GED between two graphs.
+ results['runtime'] = ged_env.get_runtime(listID[0], listID[1])
+ results['init_time'] = ged_env.get_init_time()
+ results['ged_mat'] = ged_mat
+
+ return results
+
+
+def compute_geds_by_GEDLIB(dataset):
+ from gklearn.gedlib import librariesImport, gedlibpy
+ from gklearn.ged.util import ged_options_to_string
+ import numpy as np
+
+ graph1 = dataset.graphs[5]
+ graph2 = dataset.graphs[6]
+
+	ged_env = gedlibpy.GEDEnv() # initialize GED environment.
+ ged_env.set_edit_cost('CONSTANT', # GED cost type.
+ edit_cost_constant=[3, 3, 1, 3, 3, 1] # edit costs.
+ )
+# ged_env.add_nx_graph(graph1, '') # add graph1
+# ged_env.add_nx_graph(graph2, '') # add graph2
+ for g in dataset.graphs[0:10]:
+ ged_env.add_nx_graph(g, '')
+ listID = ged_env.get_all_graph_ids() # get list IDs of graphs
+ ged_env.init(init_option='LAZY_WITHOUT_SHUFFLED_COPIES') # initialize GED environment.
+ options = {'initialization-method': 'RANDOM', # or 'NODE', etc.
+ 'threads': 1 # parallel threads.
+ }
+ ged_env.set_method('BIPARTITE', # GED method.
+ ged_options_to_string(options) # options for GED method.
+ )
+ ged_env.init_method() # initialize GED method.
+
+ ged_mat = np.empty((10, 10))
+ for i in range(0, 10):
+ for j in range(i, 10):
+ ged_env.run_method(i, j) # run.
+ ged_mat[i, j] = ged_env.get_upper_bound(i, j)
+ ged_mat[j, i] = ged_mat[i, j]
+
+ results = {}
+ results['pi_forward'] = ged_env.get_forward_map(listID[0], listID[1]) # forward map.
+ results['pi_backward'] = ged_env.get_backward_map(listID[0], listID[1]) # backward map.
+	results['upper_bound'] = ged_env.get_upper_bound(listID[0], listID[1]) # GED between two graphs.
+ results['runtime'] = ged_env.get_runtime(listID[0], listID[1])
+ results['init_time'] = ged_env.get_init_time()
+ results['ged_mat'] = ged_mat
+
+ return results
+
+
+if __name__ == '__main__':
+ results1, results2 = xp_check_results_of_GEDEnv()
\ No newline at end of file
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/accuracy_diff_entropy.py b/lang/fr/gklearn/experiments/papers/PRL_2020/accuracy_diff_entropy.py
new file mode 100644
index 0000000000..0ababc3fcf
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/accuracy_diff_entropy.py
@@ -0,0 +1,196 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Oct 5 16:08:33 2020
+
+@author: ljia
+
+This script computes the classification accuracy of each graph kernel on datasets
+with different entropies of the degree distribution.
+"""
+from utils import Graph_Kernel_List, cross_validate
+import numpy as np
+import logging
+
+num_nodes = 40
+half_num_graphs = 100
+
+
+def generate_graphs():
+# from gklearn.utils.graph_synthesizer import GraphSynthesizer
+# gsyzer = GraphSynthesizer()
+# graphs = gsyzer.unified_graphs(num_graphs=1000, num_nodes=20, num_edges=40, num_node_labels=0, num_edge_labels=0, seed=None, directed=False)
+# return graphs
+ import networkx as nx
+
+ degrees11 = [5] * num_nodes
+# degrees12 = [2] * num_nodes
+ degrees12 = [5] * num_nodes
+ degrees21 = list(range(1, 11)) * 6
+# degrees22 = [5 * i for i in list(range(1, 11)) * 6]
+ degrees22 = list(range(1, 11)) * 6
+
+ # method 1
+ graphs11 = [nx.configuration_model(degrees11, create_using=nx.Graph) for i in range(half_num_graphs)]
+ graphs12 = [nx.configuration_model(degrees12, create_using=nx.Graph) for i in range(half_num_graphs)]
+
+ for g in graphs11:
+ g.remove_edges_from(nx.selfloop_edges(g))
+ for g in graphs12:
+ g.remove_edges_from(nx.selfloop_edges(g))
+
+ # method 2: can easily generate isomorphic graphs.
+# graphs11 = [nx.random_regular_graph(2, num_nodes, seed=None) for i in range(half_num_graphs)]
+# graphs12 = [nx.random_regular_graph(10, num_nodes, seed=None) for i in range(half_num_graphs)]
+
+ # Add node labels.
+ for g in graphs11:
+ for n in g.nodes():
+ g.nodes[n]['atom'] = 0
+ for g in graphs12:
+ for n in g.nodes():
+ g.nodes[n]['atom'] = 1
+
+ graphs1 = graphs11 + graphs12
+
+	# method 1: the entropy of the two classes is not the same.
+ graphs21 = [nx.configuration_model(degrees21, create_using=nx.Graph) for i in range(half_num_graphs)]
+ graphs22 = [nx.configuration_model(degrees22, create_using=nx.Graph) for i in range(half_num_graphs)]
+
+ for g in graphs21:
+ g.remove_edges_from(nx.selfloop_edges(g))
+ for g in graphs22:
+ g.remove_edges_from(nx.selfloop_edges(g))
+
+#	# method 2: too slow, and may fail.
+# graphs21 = [nx.random_degree_sequence_graph(degrees21, seed=None, tries=100) for i in range(half_num_graphs)]
+# graphs22 = [nx.random_degree_sequence_graph(degrees22, seed=None, tries=100) for i in range(half_num_graphs)]
+
+# # method 3: no randomness.
+# graphs21 = [nx.havel_hakimi_graph(degrees21, create_using=None) for i in range(half_num_graphs)]
+# graphs22 = [nx.havel_hakimi_graph(degrees22, create_using=None) for i in range(half_num_graphs)]
+
+# # method 4:
+# graphs21 = [nx.configuration_model(degrees21, create_using=nx.Graph) for i in range(half_num_graphs)]
+# graphs22 = [nx.degree_sequence_tree(degrees21, create_using=nx.Graph) for i in range(half_num_graphs)]
+
+#	# method 5: the entropy of the two classes is not the same.
+# graphs21 = [nx.expected_degree_graph(degrees21, seed=None, selfloops=False) for i in range(half_num_graphs)]
+# graphs22 = [nx.expected_degree_graph(degrees22, seed=None, selfloops=False) for i in range(half_num_graphs)]
+
+#	# method 6: seems there is no randomness.
+# graphs21 = [nx.random_powerlaw_tree(num_nodes, gamma=3, seed=None, tries=10000) for i in range(half_num_graphs)]
+# graphs22 = [nx.random_powerlaw_tree(num_nodes, gamma=3, seed=None, tries=10000) for i in range(half_num_graphs)]
+
+ # Add node labels.
+ for g in graphs21:
+ for n in g.nodes():
+ g.nodes[n]['atom'] = 0
+ for g in graphs22:
+ for n in g.nodes():
+ g.nodes[n]['atom'] = 1
+
+ graphs2 = graphs21 + graphs22
+
+# # check for isomorphism.
+# iso_mat1 = np.zeros((len(graphs1), len(graphs1)))
+# num1 = 0
+# num2 = 0
+# for i in range(len(graphs1)):
+# for j in range(i + 1, len(graphs1)):
+# if nx.is_isomorphic(graphs1[i], graphs1[j]):
+# iso_mat1[i, j] = 1
+# iso_mat1[j, i] = 1
+# num1 += 1
+# print('iso:', num1, ':', i, ',', j)
+# else:
+# num2 += 1
+# print('not iso:', num2, ':', i, ',', j)
+#
+# iso_mat2 = np.zeros((len(graphs2), len(graphs2)))
+# num1 = 0
+# num2 = 0
+# for i in range(len(graphs2)):
+# for j in range(i + 1, len(graphs2)):
+# if nx.is_isomorphic(graphs2[i], graphs2[j]):
+# iso_mat2[i, j] = 1
+# iso_mat2[j, i] = 1
+# num1 += 1
+# print('iso:', num1, ':', i, ',', j)
+# else:
+# num2 += 1
+# print('not iso:', num2, ':', i, ',', j)
+
+ return graphs1, graphs2
+
+
+def get_infos(graphs):
+	from gklearn.utils import Dataset
+	ds = Dataset()
+	ds.load_graphs(graphs)
+ infos = ds.get_dataset_infos(keys=['all_degree_entropy', 'ave_node_degree'])
+ infos['ave_degree_entropy'] = np.mean(infos['all_degree_entropy'])
+ print(infos['ave_degree_entropy'], ',', infos['ave_node_degree'])
+ return infos
+
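+
+# It is assumed (not verified here) that the 'all_degree_entropy' key above refers to
+# the Shannon entropy of each graph's empirical degree distribution. A minimal
+# illustrative sketch of such a computation (hypothetical helper, not part of gklearn):
+def _degree_entropy_sketch(graph):
+	"""Shannon entropy (base 2) of one graph's degree distribution."""
+	degrees = [d for _, d in graph.degree()]
+	_, counts = np.unique(degrees, return_counts=True)
+	probs = counts / counts.sum()  # empirical probability of each distinct degree value
+	return float(-np.sum(probs * np.log2(probs)))
+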
+
+def xp_accuracy_diff_entropy():
+
+ # Generate graphs.
+ graphs1, graphs2 = generate_graphs()
+
+
+ # Compute entropy of degree distribution of the generated graphs.
+ info11 = get_infos(graphs1[0:half_num_graphs])
+ info12 = get_infos(graphs1[half_num_graphs:])
+ info21 = get_infos(graphs2[0:half_num_graphs])
+ info22 = get_infos(graphs2[half_num_graphs:])
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/accuracy_diff_entropy/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ accuracies = {}
+ confidences = {}
+
+ for kernel_name in Graph_Kernel_List:
+ print()
+ print('Kernel:', kernel_name)
+
+ accuracies[kernel_name] = []
+ confidences[kernel_name] = []
+ for set_i, graphs in enumerate([graphs1, graphs2]):
+ print()
+ print('Graph set', set_i)
+
+ tmp_graphs = [g.copy() for g in graphs]
+ targets = [0] * half_num_graphs + [1] * half_num_graphs
+
+ accuracy = 'error'
+ confidence = 'error'
+ try:
+ accuracy, confidence = cross_validate(tmp_graphs, targets, kernel_name, ds_name=str(set_i), output_dir=save_dir) #, n_jobs=1)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('\n' + kernel_name + ', ' + str(set_i) + ':')
+ print(repr(exp))
+ accuracies[kernel_name].append(accuracy)
+ confidences[kernel_name].append(confidence)
+
+ pickle.dump(accuracy, open(save_dir + 'accuracy.' + kernel_name + '.' + str(set_i) + '.pkl', 'wb'))
+ pickle.dump(confidence, open(save_dir + 'confidence.' + kernel_name + '.' + str(set_i) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(accuracies, open(save_dir + 'accuracies.pkl', 'wb'))
+ pickle.dump(confidences, open(save_dir + 'confidences.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_accuracy_diff_entropy()
\ No newline at end of file
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/runtimes_28cores.py b/lang/fr/gklearn/experiments/papers/PRL_2020/runtimes_28cores.py
new file mode 100644
index 0000000000..0e25f4656e
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/runtimes_28cores.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List, Dataset_List, compute_graph_kernel
+from gklearn.utils.graphdataset import load_predefined_dataset
+import logging
+
+
+def xp_runtimes_of_all_28cores():
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/runtimes_of_all_28cores/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for ds_name in Dataset_List:
+ print()
+ print('Dataset:', ds_name)
+
+ run_times[ds_name] = []
+ for kernel_name in Graph_Kernel_List:
+ print()
+ print('Kernel:', kernel_name)
+
+ # get graphs.
+ graphs, _ = load_predefined_dataset(ds_name)
+
+ # Compute Gram matrix.
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(graphs, kernel_name, n_jobs=28)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[ds_name].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + ds_name + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_runtimes_of_all_28cores()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/runtimes_diff_chunksizes.py b/lang/fr/gklearn/experiments/papers/PRL_2020/runtimes_diff_chunksizes.py
new file mode 100644
index 0000000000..6d118d8b74
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/runtimes_diff_chunksizes.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List, Dataset_List, compute_graph_kernel
+from gklearn.utils.graphdataset import load_predefined_dataset
+import logging
+
+
+def xp_runtimes_diff_chunksizes():
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/runtimes_diff_chunksizes/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for ds_name in Dataset_List:
+ print()
+ print('Dataset:', ds_name)
+
+ run_times[ds_name] = []
+ for kernel_name in Graph_Kernel_List:
+ print()
+ print('Kernel:', kernel_name)
+
+ run_times[ds_name].append([])
+ for chunksize in [1, 5, 10, 50, 100, 500, 1000, 5000, 10000, 50000, 100000]:
+ print()
+ print('Chunksize:', chunksize)
+
+ # get graphs.
+ graphs, _ = load_predefined_dataset(ds_name)
+
+ # Compute Gram matrix.
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(graphs, kernel_name, chunksize=chunksize)
+ except Exception as exp:
+					print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[ds_name][-1].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + ds_name + '.' + str(chunksize) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_runtimes_diff_chunksizes()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_N.py b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_N.py
new file mode 100644
index 0000000000..891ae4c919
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_N.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List, compute_graph_kernel
+import logging
+
+
+def generate_graphs():
+ from gklearn.utils.graph_synthesizer import GraphSynthesizer
+ gsyzer = GraphSynthesizer()
+ graphs = gsyzer.unified_graphs(num_graphs=1000, num_nodes=20, num_edges=40, num_node_labels=0, num_edge_labels=0, seed=None, directed=False)
+ return graphs
+
+
+def xp_synthesized_graphs_dataset_size():
+
+ # Generate graphs.
+ graphs = generate_graphs()
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/synthesized_graphs_N/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for kernel_name in Graph_Kernel_List:
+ print()
+ print('Kernel:', kernel_name)
+
+ run_times[kernel_name] = []
+ for num_graphs in [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]:
+ print()
+ print('Number of graphs:', num_graphs)
+
+ sub_graphs = [g.copy() for g in graphs[0:num_graphs]]
+
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(sub_graphs, kernel_name)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[kernel_name].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + str(num_graphs) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_synthesized_graphs_dataset_size()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_degrees.py b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_degrees.py
new file mode 100644
index 0000000000..f005172b8f
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_degrees.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List, compute_graph_kernel
+import logging
+
+
+def generate_graphs(degree):
+ from gklearn.utils.graph_synthesizer import GraphSynthesizer
+ gsyzer = GraphSynthesizer()
+ graphs = gsyzer.unified_graphs(num_graphs=100, num_nodes=20, num_edges=int(10*degree), num_node_labels=0, num_edge_labels=0, seed=None, directed=False)
+ return graphs
+
+
+def xp_synthesized_graphs_degrees():
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/synthesized_graphs_degrees/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for kernel_name in Graph_Kernel_List:
+ print()
+ print('Kernel:', kernel_name)
+
+ run_times[kernel_name] = []
+ for degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
+ print()
+ print('Degree:', degree)
+
+ # Generate graphs.
+ graphs = generate_graphs(degree)
+
+ # Compute Gram matrix.
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(graphs, kernel_name)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[kernel_name].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + str(degree) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_synthesized_graphs_degrees()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_el.py b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_el.py
new file mode 100644
index 0000000000..8e35c74fbf
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_el.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List_ESym, compute_graph_kernel
+import logging
+
+
+def generate_graphs(num_el_alp):
+ from gklearn.utils.graph_synthesizer import GraphSynthesizer
+ gsyzer = GraphSynthesizer()
+ graphs = gsyzer.unified_graphs(num_graphs=100, num_nodes=20, num_edges=40, num_node_labels=0, num_edge_labels=num_el_alp, seed=None, directed=False)
+ return graphs
+
+
+def xp_synthesized_graphs_num_edge_label_alphabet():
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/synthesized_graphs_num_edge_label_alphabet/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for kernel_name in Graph_Kernel_List_ESym:
+ print()
+ print('Kernel:', kernel_name)
+
+ run_times[kernel_name] = []
+ for num_el_alp in [0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40]:
+ print()
+ print('Number of edge label alphabet:', num_el_alp)
+
+ # Generate graphs.
+ graphs = generate_graphs(num_el_alp)
+
+ # Compute Gram matrix.
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(graphs, kernel_name)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[kernel_name].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + str(num_el_alp) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_synthesized_graphs_num_edge_label_alphabet()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_nl.py b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_nl.py
new file mode 100644
index 0000000000..51e1382ff5
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_nl.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List_VSym, compute_graph_kernel
+import logging
+
+
+def generate_graphs(num_nl_alp):
+ from gklearn.utils.graph_synthesizer import GraphSynthesizer
+ gsyzer = GraphSynthesizer()
+ graphs = gsyzer.unified_graphs(num_graphs=100, num_nodes=20, num_edges=40, num_node_labels=num_nl_alp, num_edge_labels=0, seed=None, directed=False)
+ return graphs
+
+
+def xp_synthesized_graphs_num_node_label_alphabet():
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/synthesized_graphs_num_node_label_alphabet/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for kernel_name in Graph_Kernel_List_VSym:
+ print()
+ print('Kernel:', kernel_name)
+
+ run_times[kernel_name] = []
+ for num_nl_alp in [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]:
+ print()
+ print('Number of node label alphabet:', num_nl_alp)
+
+ # Generate graphs.
+ graphs = generate_graphs(num_nl_alp)
+
+ # Compute Gram matrix.
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(graphs, kernel_name)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[kernel_name].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + str(num_nl_alp) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_synthesized_graphs_num_node_label_alphabet()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_nodes.py b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_nodes.py
new file mode 100644
index 0000000000..f63c404588
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/synthesized_graphs_num_nodes.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Sep 21 10:34:26 2020
+
+@author: ljia
+"""
+from utils import Graph_Kernel_List, compute_graph_kernel
+import logging
+
+
+def generate_graphs(num_nodes):
+ from gklearn.utils.graph_synthesizer import GraphSynthesizer
+ gsyzer = GraphSynthesizer()
+ graphs = gsyzer.unified_graphs(num_graphs=100, num_nodes=num_nodes, num_edges=int(num_nodes*2), num_node_labels=0, num_edge_labels=0, seed=None, directed=False)
+ return graphs
+
+
+def xp_synthesized_graphs_num_nodes():
+
+ # Run and save.
+ import pickle
+ import os
+ save_dir = 'outputs/synthesized_graphs_num_nodes/'
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ run_times = {}
+
+ for kernel_name in Graph_Kernel_List:
+ print()
+ print('Kernel:', kernel_name)
+
+ run_times[kernel_name] = []
+ for num_nodes in [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]:
+ print()
+ print('Number of nodes:', num_nodes)
+
+ # Generate graphs.
+ graphs = generate_graphs(num_nodes)
+
+ # Compute Gram matrix.
+ run_time = 'error'
+ try:
+ gram_matrix, run_time = compute_graph_kernel(graphs, kernel_name)
+ except Exception as exp:
+				print('An exception occurred when running this experiment:')
+ LOG_FILENAME = save_dir + 'error.txt'
+ logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
+ logging.exception('')
+ print(repr(exp))
+ run_times[kernel_name].append(run_time)
+
+ pickle.dump(run_time, open(save_dir + 'run_time.' + kernel_name + '.' + str(num_nodes) + '.pkl', 'wb'))
+
+ # Save all.
+ pickle.dump(run_times, open(save_dir + 'run_times.pkl', 'wb'))
+
+ return
+
+
+if __name__ == '__main__':
+ xp_synthesized_graphs_num_nodes()
diff --git a/lang/fr/gklearn/experiments/papers/PRL_2020/utils.py b/lang/fr/gklearn/experiments/papers/PRL_2020/utils.py
new file mode 100644
index 0000000000..b676af0021
--- /dev/null
+++ b/lang/fr/gklearn/experiments/papers/PRL_2020/utils.py
@@ -0,0 +1,236 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Sep 22 11:33:28 2020
+
+@author: ljia
+"""
+import multiprocessing
+import numpy as np
+from gklearn.utils import model_selection_for_precomputed_kernel
+
+
+Graph_Kernel_List = ['PathUpToH', 'WLSubtree', 'SylvesterEquation', 'Marginalized', 'ShortestPath', 'Treelet', 'ConjugateGradient', 'FixedPoint', 'SpectralDecomposition', 'StructuralSP', 'CommonWalk']
+# Graph_Kernel_List = ['CommonWalk', 'Marginalized', 'SylvesterEquation', 'ConjugateGradient', 'FixedPoint', 'SpectralDecomposition', 'ShortestPath', 'StructuralSP', 'PathUpToH', 'Treelet', 'WLSubtree']
+
+
+Graph_Kernel_List_VSym = ['PathUpToH', 'WLSubtree', 'Marginalized', 'ShortestPath', 'Treelet', 'ConjugateGradient', 'FixedPoint', 'StructuralSP', 'CommonWalk']
+
+
+Graph_Kernel_List_ESym = ['PathUpToH', 'Marginalized', 'Treelet', 'ConjugateGradient', 'FixedPoint', 'StructuralSP', 'CommonWalk']
+
+
+Graph_Kernel_List_VCon = ['ShortestPath', 'ConjugateGradient', 'FixedPoint', 'StructuralSP']
+
+
+Graph_Kernel_List_ECon = ['ConjugateGradient', 'FixedPoint', 'StructuralSP']
+
+
+Dataset_List = ['Alkane', 'Acyclic', 'MAO', 'PAH', 'MUTAG', 'Letter-med', 'ENZYMES', 'AIDS', 'NCI1', 'NCI109', 'DD']
+
+
+def compute_graph_kernel(graphs, kernel_name, n_jobs=multiprocessing.cpu_count(), chunksize=None):
+
+ if kernel_name == 'CommonWalk':
+ from gklearn.kernels.commonWalkKernel import commonwalkkernel
+ estimator = commonwalkkernel
+ params = {'compute_method': 'geo', 'weight': 0.1}
+
+ elif kernel_name == 'Marginalized':
+ from gklearn.kernels.marginalizedKernel import marginalizedkernel
+ estimator = marginalizedkernel
+ params = {'p_quit': 0.5, 'n_iteration': 5, 'remove_totters': False}
+
+ elif kernel_name == 'SylvesterEquation':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ params = {'compute_method': 'sylvester', 'weight': 0.1}
+
+ elif kernel_name == 'ConjugateGradient':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ params = {'compute_method': 'conjugate', 'weight': 0.1, 'node_kernels': sub_kernel, 'edge_kernels': sub_kernel}
+
+ elif kernel_name == 'FixedPoint':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ params = {'compute_method': 'fp', 'weight': 1e-4, 'node_kernels': sub_kernel, 'edge_kernels': sub_kernel}
+
+ elif kernel_name == 'SpectralDecomposition':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ params = {'compute_method': 'spectral', 'sub_kernel': 'geo', 'weight': 0.1}
+
+ elif kernel_name == 'ShortestPath':
+ from gklearn.kernels.spKernel import spkernel
+ estimator = spkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ params = {'node_kernels': sub_kernel}
+
+ elif kernel_name == 'StructuralSP':
+ from gklearn.kernels.structuralspKernel import structuralspkernel
+ estimator = structuralspkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ params = {'node_kernels': sub_kernel, 'edge_kernels': sub_kernel}
+
+ elif kernel_name == 'PathUpToH':
+ from gklearn.kernels.untilHPathKernel import untilhpathkernel
+ estimator = untilhpathkernel
+ params = {'depth': 5, 'k_func': 'MinMax', 'compute_method': 'trie'}
+
+ elif kernel_name == 'Treelet':
+ from gklearn.kernels.treeletKernel import treeletkernel
+ estimator = treeletkernel
+ from gklearn.utils.kernels import polynomialkernel
+ import functools
+ sub_kernel = functools.partial(polynomialkernel, d=4, c=1e+8)
+ params = {'sub_kernel': sub_kernel}
+
+ elif kernel_name == 'WLSubtree':
+ from gklearn.kernels.weisfeilerLehmanKernel import weisfeilerlehmankernel
+ estimator = weisfeilerlehmankernel
+ params = {'base_kernel': 'subtree', 'height': 5}
+
+# params['parallel'] = None
+ params['n_jobs'] = n_jobs
+ params['chunksize'] = chunksize
+ params['verbose'] = True
+ results = estimator(graphs, **params)
+
+ return results[0], results[1]
+
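+
+# Illustrative usage (assumes graphs loaded as in the experiment scripts, e.g.
+# graphs, _ = load_predefined_dataset('MUTAG')):
+#   gram_matrix, run_time = compute_graph_kernel(graphs, 'PathUpToH', n_jobs=4)
+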
+
+def cross_validate(graphs, targets, kernel_name, output_dir='outputs/', ds_name='synthesized', n_jobs=multiprocessing.cpu_count()):
+
+ param_grid = None
+
+ if kernel_name == 'CommonWalk':
+ from gklearn.kernels.commonWalkKernel import commonwalkkernel
+ estimator = commonwalkkernel
+ param_grid_precomputed = [{'compute_method': ['geo'],
+ 'weight': np.linspace(0.01, 0.15, 15)}]
+
+ elif kernel_name == 'Marginalized':
+ from gklearn.kernels.marginalizedKernel import marginalizedkernel
+ estimator = marginalizedkernel
+ param_grid_precomputed = {'p_quit': np.linspace(0.1, 0.9, 9),
+ 'n_iteration': np.linspace(1, 19, 7),
+ 'remove_totters': [False]}
+
+ elif kernel_name == 'SylvesterEquation':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ param_grid_precomputed = {'compute_method': ['sylvester'],
+# 'weight': np.linspace(0.01, 0.10, 10)}
+ 'weight': np.logspace(-1, -10, num=10, base=10)}
+
+ elif kernel_name == 'ConjugateGradient':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ param_grid_precomputed = {'compute_method': ['conjugate'],
+ 'node_kernels': [sub_kernel], 'edge_kernels': [sub_kernel],
+ 'weight': np.logspace(-1, -10, num=10, base=10)}
+
+ elif kernel_name == 'FixedPoint':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ param_grid_precomputed = {'compute_method': ['fp'],
+ 'node_kernels': [sub_kernel], 'edge_kernels': [sub_kernel],
+ 'weight': np.logspace(-4, -10, num=7, base=10)}
+
+ elif kernel_name == 'SpectralDecomposition':
+ from gklearn.kernels.randomWalkKernel import randomwalkkernel
+ estimator = randomwalkkernel
+ param_grid_precomputed = {'compute_method': ['spectral'],
+ 'weight': np.logspace(-1, -10, num=10, base=10),
+ 'sub_kernel': ['geo', 'exp']}
+
+ elif kernel_name == 'ShortestPath':
+ from gklearn.kernels.spKernel import spkernel
+ estimator = spkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ param_grid_precomputed = {'node_kernels': [sub_kernel]}
+
+ elif kernel_name == 'StructuralSP':
+ from gklearn.kernels.structuralspKernel import structuralspkernel
+ estimator = structuralspkernel
+ from gklearn.utils.kernels import deltakernel, gaussiankernel, kernelproduct
+ import functools
+ mixkernel = functools.partial(kernelproduct, deltakernel, gaussiankernel)
+ sub_kernel = {'symb': deltakernel, 'nsymb': gaussiankernel, 'mix': mixkernel}
+ param_grid_precomputed = {'node_kernels': [sub_kernel], 'edge_kernels': [sub_kernel],
+ 'compute_method': ['naive']}
+
+ elif kernel_name == 'PathUpToH':
+ from gklearn.kernels.untilHPathKernel import untilhpathkernel
+ estimator = untilhpathkernel
+ param_grid_precomputed = {'depth': np.linspace(1, 10, 10), # [2],
+ 'k_func': ['MinMax', 'tanimoto'], # ['MinMax'], #
+ 'compute_method': ['trie']} # ['MinMax']}
+
+ elif kernel_name == 'Treelet':
+ from gklearn.kernels.treeletKernel import treeletkernel
+ estimator = treeletkernel
+ from gklearn.utils.kernels import gaussiankernel, polynomialkernel
+ import functools
+ gkernels = [functools.partial(gaussiankernel, gamma=1 / ga)
+ # for ga in np.linspace(1, 10, 10)]
+ for ga in np.logspace(0, 10, num=11, base=10)]
+ pkernels = [functools.partial(polynomialkernel, d=d, c=c) for d in range(1, 11)
+ for c in np.logspace(0, 10, num=11, base=10)]
+# pkernels = [functools.partial(polynomialkernel, d=1, c=1)]
+
+ param_grid_precomputed = {'sub_kernel': pkernels + gkernels}
+# 'parallel': [None]}
+
+ elif kernel_name == 'WLSubtree':
+ from gklearn.kernels.weisfeilerLehmanKernel import weisfeilerlehmankernel
+ estimator = weisfeilerlehmankernel
+ param_grid_precomputed = {'base_kernel': ['subtree'],
+ 'height': np.linspace(0, 10, 11)}
+ param_grid = {'C': np.logspace(-10, 4, num=29, base=10)}
+
+ if param_grid is None:
+ param_grid = {'C': np.logspace(-10, 10, num=41, base=10)}
+
+ results = model_selection_for_precomputed_kernel(
+ graphs,
+ estimator,
+ param_grid_precomputed,
+ param_grid,
+ 'classification',
+ NUM_TRIALS=28,
+ datafile_y=targets,
+ extra_params=None,
+ ds_name=ds_name,
+ output_dir=output_dir,
+ n_jobs=n_jobs,
+ read_gm_from_file=False,
+ verbose=True)
+
+ return results[0], results[1]
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/edit_costs/__init__.py b/lang/fr/gklearn/ged/edit_costs/__init__.py
new file mode 100644
index 0000000000..b2a2b12361
--- /dev/null
+++ b/lang/fr/gklearn/ged/edit_costs/__init__.py
@@ -0,0 +1,2 @@
+from gklearn.ged.edit_costs.edit_cost import EditCost
+from gklearn.ged.edit_costs.constant import Constant
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/edit_costs/constant.py b/lang/fr/gklearn/ged/edit_costs/constant.py
new file mode 100644
index 0000000000..9dca1a214e
--- /dev/null
+++ b/lang/fr/gklearn/ged/edit_costs/constant.py
@@ -0,0 +1,50 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Jun 17 17:52:23 2020
+
+@author: ljia
+"""
+from gklearn.ged.edit_costs import EditCost
+
+
+class Constant(EditCost):
+ """Implements constant edit cost functions.
+ """
+
+
+ def __init__(self, node_ins_cost=1, node_del_cost=1, node_rel_cost=1, edge_ins_cost=1, edge_del_cost=1, edge_rel_cost=1):
+ self._node_ins_cost = node_ins_cost
+ self._node_del_cost = node_del_cost
+ self._node_rel_cost = node_rel_cost
+ self._edge_ins_cost = edge_ins_cost
+ self._edge_del_cost = edge_del_cost
+ self._edge_rel_cost = edge_rel_cost
+
+
+ def node_ins_cost_fun(self, node_label):
+ return self._node_ins_cost
+
+
+ def node_del_cost_fun(self, node_label):
+ return self._node_del_cost
+
+
+ def node_rel_cost_fun(self, node_label_1, node_label_2):
+ if node_label_1 != node_label_2:
+ return self._node_rel_cost
+ return 0
+
+
+ def edge_ins_cost_fun(self, edge_label):
+ return self._edge_ins_cost
+
+
+ def edge_del_cost_fun(self, edge_label):
+ return self._edge_del_cost
+
+
+ def edge_rel_cost_fun(self, edge_label_1, edge_label_2):
+ if edge_label_1 != edge_label_2:
+ return self._edge_rel_cost
+ return 0
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/edit_costs/edit_cost.py b/lang/fr/gklearn/ged/edit_costs/edit_cost.py
new file mode 100644
index 0000000000..5d15827e5d
--- /dev/null
+++ b/lang/fr/gklearn/ged/edit_costs/edit_cost.py
@@ -0,0 +1,88 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Jun 17 17:49:24 2020
+
+@author: ljia
+"""
+
+
+class EditCost(object):
+
+
+ def __init__(self):
+ pass
+
+
+ def node_ins_cost_fun(self, node_label):
+ """
+ /*!
+	 * @brief Node insertion cost function.
+ * @param[in] node_label A node label.
+ * @return The cost of inserting a node with label @p node_label.
+ * @note Must be implemented by derived classes of ged::EditCosts.
+ */
+ """
+ return 0
+
+
+ def node_del_cost_fun(self, node_label):
+ """
+ /*!
+ * @brief Node deletion cost function.
+ * @param[in] node_label A node label.
+ * @return The cost of deleting a node with label @p node_label.
+ * @note Must be implemented by derived classes of ged::EditCosts.
+ */
+ """
+ return 0
+
+
+ def node_rel_cost_fun(self, node_label_1, node_label_2):
+ """
+ /*!
+ * @brief Node relabeling cost function.
+ * @param[in] node_label_1 A node label.
+ * @param[in] node_label_2 A node label.
+ * @return The cost of changing a node's label from @p node_label_1 to @p node_label_2.
+ * @note Must be implemented by derived classes of ged::EditCosts.
+ */
+ """
+ return 0
+
+
+ def edge_ins_cost_fun(self, edge_label):
+ """
+ /*!
+ * @brief Edge insertion cost function.
+ * @param[in] edge_label An edge label.
+ * @return The cost of inserting an edge with label @p edge_label.
+ * @note Must be implemented by derived classes of ged::EditCosts.
+ */
+ """
+ return 0
+
+
+ def edge_del_cost_fun(self, edge_label):
+ """
+ /*!
+ * @brief Edge deletion cost function.
+ * @param[in] edge_label An edge label.
+ * @return The cost of deleting an edge with label @p edge_label.
+ * @note Must be implemented by derived classes of ged::EditCosts.
+ */
+ """
+ return 0
+
+
+ def edge_rel_cost_fun(self, edge_label_1, edge_label_2):
+ """
+ /*!
+ * @brief Edge relabeling cost function.
+ * @param[in] edge_label_1 An edge label.
+ * @param[in] edge_label_2 An edge label.
+ * @return The cost of changing an edge's label from @p edge_label_1 to @p edge_label_2.
+ * @note Must be implemented by derived classes of ged::EditCosts.
+ */
+ """
+ return 0
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/env/__init__.py b/lang/fr/gklearn/ged/env/__init__.py
new file mode 100644
index 0000000000..1a5a0cefec
--- /dev/null
+++ b/lang/fr/gklearn/ged/env/__init__.py
@@ -0,0 +1,4 @@
+from gklearn.ged.env.common_types import Options, OptionsStringMap, AlgorithmState
+from gklearn.ged.env.ged_data import GEDData
+from gklearn.ged.env.ged_env import GEDEnv
+from gklearn.ged.env.node_map import NodeMap
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/env/common_types.py b/lang/fr/gklearn/ged/env/common_types.py
new file mode 100644
index 0000000000..091d952a44
--- /dev/null
+++ b/lang/fr/gklearn/ged/env/common_types.py
@@ -0,0 +1,159 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Mar 19 18:17:38 2020
+
+@author: ljia
+"""
+
+from enum import Enum, unique
+
+
+class Options(object):
+ """Contains enums for options employed by ged::GEDEnv.
+ """
+
+
+ @unique
+ class GEDMethod(Enum):
+ """Selects the method.
+ """
+# @todo: what is this? #ifdef GUROBI
+ F1 = 1 # Selects ged::F1.
+ F2 = 2 # Selects ged::F2.
+ COMPACT_MIP = 3 # Selects ged::CompactMIP.
+ BLP_NO_EDGE_LABELS = 4 # Selects ged::BLPNoEdgeLabels.
+#endif /* GUROBI */
+ BRANCH = 5 # Selects ged::Branch.
+ BRANCH_FAST = 6 # Selects ged::BranchFast.
+ BRANCH_TIGHT = 7 # Selects ged::BranchTight.
+ BRANCH_UNIFORM = 8 # Selects ged::BranchUniform.
+ BRANCH_COMPACT = 9 # Selects ged::BranchCompact.
+ PARTITION = 10 # Selects ged::Partition.
+ HYBRID = 11 # Selects ged::Hybrid.
+ RING = 12 # Selects ged::Ring.
+ ANCHOR_AWARE_GED = 13 # Selects ged::AnchorAwareGED.
+ WALKS = 14 # Selects ged::Walks.
+ IPFP = 15 # Selects ged::IPFP
+ BIPARTITE = 16 # Selects ged::Bipartite.
+ SUBGRAPH = 17 # Selects ged::Subgraph.
+ NODE = 18 # Selects ged::Node.
+ RING_ML = 19 # Selects ged::RingML.
+ BIPARTITE_ML = 20 # Selects ged::BipartiteML.
+ REFINE = 21 # Selects ged::Refine.
+ BP_BEAM = 22 # Selects ged::BPBeam.
+ SIMULATED_ANNEALING = 23 # Selects ged::SimulatedAnnealing.
+ HED = 24 # Selects ged::HED.
+ STAR = 25 # Selects ged::Star.
+
+
+ @unique
+ class EditCosts(Enum):
+ """Selects the edit costs.
+ """
+ CHEM_1 = 1 # Selects ged::CHEM1.
+ CHEM_2 = 2 # Selects ged::CHEM2.
+ CMU = 3 # Selects ged::CMU.
+ GREC_1 = 4 # Selects ged::GREC1.
+ GREC_2 = 5 # Selects ged::GREC2.
+ PROTEIN = 6 # Selects ged::Protein.
+ FINGERPRINT = 7 # Selects ged::Fingerprint.
+ LETTER = 8 # Selects ged::Letter.
+		LETTER2 = 9 # Selects ged::Letter2.
+		NON_SYMBOLIC = 10 # Selects ged::NonSymbolic.
+ CONSTANT = 11 # Selects ged::Constant.
+
+
+ @unique
+ class InitType(Enum):
+ """@brief Selects the initialization type of the environment.
+ * @details If eager initialization is selected, all edit costs are pre-computed when initializing the environment.
+ * Otherwise, they are computed at runtime. If initialization with shuffled copies is selected, shuffled copies of
+ * all graphs are created. These copies are used when calling ged::GEDEnv::run_method() with two identical graph IDs.
+ * In this case, one of the IDs is internally replaced by the ID of the shuffled copy and the graph is hence
+ * compared to an isomorphic but non-identical graph. If initialization without shuffled copies is selected, no shuffled copies
+ * are created and calling ged::GEDEnv::run_method() with two identical graph IDs amounts to comparing a graph to itself.
+ """
+ LAZY_WITHOUT_SHUFFLED_COPIES = 1 # Lazy initialization, no shuffled graph copies are constructed.
+ EAGER_WITHOUT_SHUFFLED_COPIES = 2 # Eager initialization, no shuffled graph copies are constructed.
+ LAZY_WITH_SHUFFLED_COPIES = 3 # Lazy initialization, shuffled graph copies are constructed.
+ EAGER_WITH_SHUFFLED_COPIES = 4 # Eager initialization, shuffled graph copies are constructed.
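+		# Typical usage elsewhere in this package passes the init type as a string, e.g.
+		# ged_env.init(init_type='LAZY_WITHOUT_SHUFFLED_COPIES'); the string is mapped to
+		# this enum through OptionsStringMap.InitType below.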
+
+
+ @unique
+ class AlgorithmState(Enum):
+		"""Can be used to specify the state of an algorithm.
+ """
+ CALLED = 1 # The algorithm has been called.
+ INITIALIZED = 2 # The algorithm has been initialized.
+ CONVERGED = 3 # The algorithm has converged.
+ TERMINATED = 4 # The algorithm has terminated.
+
+
+class OptionsStringMap(object):
+
+
+ # Map of available computation methods between enum type and string.
+ GEDMethod = {
+ "BRANCH": Options.GEDMethod.BRANCH,
+ "BRANCH_FAST": Options.GEDMethod.BRANCH_FAST,
+ "BRANCH_TIGHT": Options.GEDMethod.BRANCH_TIGHT,
+ "BRANCH_UNIFORM": Options.GEDMethod.BRANCH_UNIFORM,
+ "BRANCH_COMPACT": Options.GEDMethod.BRANCH_COMPACT,
+ "PARTITION": Options.GEDMethod.PARTITION,
+ "HYBRID": Options.GEDMethod.HYBRID,
+ "RING": Options.GEDMethod.RING,
+ "ANCHOR_AWARE_GED": Options.GEDMethod.ANCHOR_AWARE_GED,
+ "WALKS": Options.GEDMethod.WALKS,
+ "IPFP": Options.GEDMethod.IPFP,
+ "BIPARTITE": Options.GEDMethod.BIPARTITE,
+ "SUBGRAPH": Options.GEDMethod.SUBGRAPH,
+ "NODE": Options.GEDMethod.NODE,
+ "RING_ML": Options.GEDMethod.RING_ML,
+ "BIPARTITE_ML": Options.GEDMethod.BIPARTITE_ML,
+ "REFINE": Options.GEDMethod.REFINE,
+ "BP_BEAM": Options.GEDMethod.BP_BEAM,
+ "SIMULATED_ANNEALING": Options.GEDMethod.SIMULATED_ANNEALING,
+ "HED": Options.GEDMethod.HED,
+ "STAR": Options.GEDMethod.STAR,
+ # ifdef GUROBI
+ "F1": Options.GEDMethod.F1,
+ "F2": Options.GEDMethod.F2,
+ "COMPACT_MIP": Options.GEDMethod.COMPACT_MIP,
+ "BLP_NO_EDGE_LABELS": Options.GEDMethod.BLP_NO_EDGE_LABELS
+ }
+
+
+ # Map of available edit cost functions between enum type and string.
+ EditCosts = {
+ "CHEM_1": Options.EditCosts.CHEM_1,
+ "CHEM_2": Options.EditCosts.CHEM_2,
+ "CMU": Options.EditCosts.CMU,
+ "GREC_1": Options.EditCosts.GREC_1,
+ "GREC_2": Options.EditCosts.GREC_2,
+ "LETTER": Options.EditCosts.LETTER,
+ "LETTER2": Options.EditCosts.LETTER2,
+ "NON_SYMBOLIC": Options.EditCosts.NON_SYMBOLIC,
+ "FINGERPRINT": Options.EditCosts.FINGERPRINT,
+ "PROTEIN": Options.EditCosts.PROTEIN,
+ "CONSTANT": Options.EditCosts.CONSTANT
+ }
+
+ # Map of available initialization types of the environment between enum type and string.
+ InitType = {
+ "LAZY_WITHOUT_SHUFFLED_COPIES": Options.InitType.LAZY_WITHOUT_SHUFFLED_COPIES,
+ "EAGER_WITHOUT_SHUFFLED_COPIES": Options.InitType.EAGER_WITHOUT_SHUFFLED_COPIES,
+ "LAZY_WITH_SHUFFLED_COPIES": Options.InitType.LAZY_WITH_SHUFFLED_COPIES,
+		"EAGER_WITH_SHUFFLED_COPIES": Options.InitType.EAGER_WITH_SHUFFLED_COPIES
+ }
+
+
+@unique
+class AlgorithmState(Enum):
+	"""Can be used to specify the state of an algorithm.
+ """
+ CALLED = 1 # The algorithm has been called.
+ INITIALIZED = 2 # The algorithm has been initialized.
+ CONVERGED = 3 # The algorithm has converged.
+ TERMINATED = 4 # The algorithm has terminated.
+
diff --git a/lang/fr/gklearn/ged/env/ged_data.py b/lang/fr/gklearn/ged/env/ged_data.py
new file mode 100644
index 0000000000..0e6881fa56
--- /dev/null
+++ b/lang/fr/gklearn/ged/env/ged_data.py
@@ -0,0 +1,249 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Jun 17 15:05:01 2020
+
+@author: ljia
+"""
+from gklearn.ged.env import Options, OptionsStringMap
+from gklearn.ged.edit_costs import Constant
+from gklearn.utils import SpecialLabel, dummy_node
+
+
+class GEDData(object):
+
+
+ def __init__(self):
+ self._graphs = []
+ self._graph_names = []
+ self._graph_classes = []
+ self._num_graphs_without_shuffled_copies = 0
+ self._strings_to_internal_node_ids = []
+ self._internal_node_ids_to_strings = []
+ self._edit_cost = None
+ self._node_costs = None
+ self._edge_costs = None
+ self._node_label_costs = None
+ self._edge_label_costs = None
+ self._node_labels = []
+ self._edge_labels = []
+ self._init_type = Options.InitType.EAGER_WITHOUT_SHUFFLED_COPIES
+ self._delete_edit_cost = True
+ self._max_num_nodes = 0
+ self._max_num_edges = 0
+
+
+ def num_graphs(self):
+ """
+ /*!
+ * @brief Returns the number of graphs.
+ * @return Number of graphs in the instance.
+ */
+ """
+ return len(self._graphs)
+
+
+ def graph(self, graph_id):
+ """
+ /*!
+ * @brief Provides access to a graph.
+ * @param[in] graph_id The ID of the graph.
+ * @return Constant reference to the graph with ID @p graph_id.
+ */
+ """
+ return self._graphs[graph_id]
+
+
+ def shuffled_graph_copies_available(self):
+ """
+ /*!
+ * @brief Checks if shuffled graph copies are available.
+ * @return Boolean @p true if shuffled graph copies are available.
+ */
+ """
+ return (self._init_type == Options.InitType.EAGER_WITH_SHUFFLED_COPIES or self._init_type == Options.InitType.LAZY_WITH_SHUFFLED_COPIES)
+
+
+ def num_graphs_without_shuffled_copies(self):
+ """
+ /*!
+ * @brief Returns the number of graphs in the instance without the shuffled copies.
+ * @return Number of graphs without shuffled copies contained in the instance.
+ */
+ """
+ return self._num_graphs_without_shuffled_copies
+
+
+ def node_cost(self, label1, label2):
+ """
+ /*!
+ * @brief Returns node relabeling, insertion, or deletion cost.
+ * @param[in] label1 First node label.
+ * @param[in] label2 Second node label.
+ * @return Node relabeling cost if @p label1 and @p label2 are both different from ged::dummy_label(),
+ * node insertion cost if @p label1 equals ged::dummy_label and @p label2 does not,
+ * node deletion cost if @p label1 does not equal ged::dummy_label and @p label2 does,
+ * and 0 otherwise.
+ */
+ """
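+		# Example (assuming lazy initialization and no pre-computed label costs): with
+		# Constant edit costs [3, 3, 1, 3, 3, 1], as used in the GED experiments in this
+		# repository, node_cost(SpecialLabel.DUMMY, l) returns the insertion cost 3,
+		# node_cost(l, SpecialLabel.DUMMY) the deletion cost 3, and node_cost(l1, l2)
+		# the relabeling cost 1 whenever l1 != l2 (0 if the labels are equal).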
+ if self._node_label_costs is None:
+ if self._eager_init(): # @todo: check if correct
+ return self._node_costs[label1, label2]
+ if label1 == label2:
+ return 0
+ if label1 == SpecialLabel.DUMMY: # @todo: check dummy
+ return self._edit_cost.node_ins_cost_fun(label2) # self._node_labels[label2 - 1]) # @todo: check
+ if label2 == SpecialLabel.DUMMY: # @todo: check dummy
+ return self._edit_cost.node_del_cost_fun(label1) # self._node_labels[label1 - 1])
+ return self._edit_cost.node_rel_cost_fun(label1, label2) # self._node_labels[label1 - 1], self._node_labels[label2 - 1])
+ # use pre-computed node label costs.
+ else:
+ id1 = 0 if label1 == SpecialLabel.DUMMY else self._node_label_to_id(label1) # @todo: this is slow.
+ id2 = 0 if label2 == SpecialLabel.DUMMY else self._node_label_to_id(label2)
+ return self._node_label_costs[id1, id2]
+
+
+ def edge_cost(self, label1, label2):
+ """
+ /*!
+ * @brief Returns edge relabeling, insertion, or deletion cost.
+ * @param[in] label1 First edge label.
+ * @param[in] label2 Second edge label.
+ * @return Edge relabeling cost if @p label1 and @p label2 are both different from ged::dummy_label(),
+ * edge insertion cost if @p label1 equals ged::dummy_label and @p label2 does not,
+ * edge deletion cost if @p label1 does not equal ged::dummy_label and @p label2 does,
+ * and 0 otherwise.
+ */
+ """
+ if self._edge_label_costs is None:
+ if self._eager_init(): # @todo: check if correct
+				return self._edge_costs[label1, label2]
+ if label1 == label2:
+ return 0
+ if label1 == SpecialLabel.DUMMY:
+ return self._edit_cost.edge_ins_cost_fun(label2) # self._edge_labels[label2 - 1])
+ if label2 == SpecialLabel.DUMMY:
+ return self._edit_cost.edge_del_cost_fun(label1) # self._edge_labels[label1 - 1])
+ return self._edit_cost.edge_rel_cost_fun(label1, label2) # self._edge_labels[label1 - 1], self._edge_labels[label2 - 1])
+
+ # use pre-computed edge label costs.
+ else:
+ id1 = 0 if label1 == SpecialLabel.DUMMY else self._edge_label_to_id(label1) # @todo: this is slow.
+ id2 = 0 if label2 == SpecialLabel.DUMMY else self._edge_label_to_id(label2)
+ return self._edge_label_costs[id1, id2]
+
+
+ def compute_induced_cost(self, g, h, node_map):
+ """
+ /*!
+ * @brief Computes the edit cost between two graphs induced by a node map.
+ * @param[in] g Input graph.
+ * @param[in] h Input graph.
+ * @param[in,out] node_map Node map whose induced edit cost is to be computed.
+ */
+ """
+ cost = 0
+
+ # collect node costs
+ for node in g.nodes():
+ image = node_map.image(node)
+ label2 = (SpecialLabel.DUMMY if image == dummy_node() else h.nodes[image]['label'])
+ cost += self.node_cost(g.nodes[node]['label'], label2)
+ for node in h.nodes():
+ pre_image = node_map.pre_image(node)
+ if pre_image == dummy_node():
+ cost += self.node_cost(SpecialLabel.DUMMY, h.nodes[node]['label'])
+
+ # collect edge costs
+ for (n1, n2) in g.edges():
+ image1 = node_map.image(n1)
+ image2 = node_map.image(n2)
+ label2 = (h.edges[(image2, image1)]['label'] if h.has_edge(image2, image1) else SpecialLabel.DUMMY)
+ cost += self.edge_cost(g.edges[(n1, n2)]['label'], label2)
+ for (n1, n2) in h.edges():
+ if not g.has_edge(node_map.pre_image(n2), node_map.pre_image(n1)):
+ cost += self.edge_cost(SpecialLabel.DUMMY, h.edges[(n1, n2)]['label'])
+
+ node_map.set_induced_cost(cost)
+
+
+ def _set_edit_cost(self, edit_cost, edit_cost_constants):
+ if self._delete_edit_cost:
+ self._edit_cost = None
+
+ if isinstance(edit_cost, str):
+ edit_cost = OptionsStringMap.EditCosts[edit_cost]
+
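+		# Note: only the 'CONSTANT' branch below is backed by the Constant class imported
+		# in this module; the CHEM_1, LETTER, LETTER2 and NON_SYMBOLIC branches assume
+		# that corresponding edit cost classes (CHEM1, Letter, Letter2, NonSymbolic) are
+		# available in gklearn.ged.edit_costs.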
+ if edit_cost == Options.EditCosts.CHEM_1:
+ if len(edit_cost_constants) == 4:
+ self._edit_cost = CHEM1(edit_cost_constants[0], edit_cost_constants[1], edit_cost_constants[2], edit_cost_constants[3])
+ elif len(edit_cost_constants) == 0:
+ self._edit_cost = CHEM1()
+ else:
+ raise Exception('Wrong number of constants for selected edit costs Options::EditCosts::CHEM_1. Expected: 4 or 0; actual:', len(edit_cost_constants), '.')
+ elif edit_cost == Options.EditCosts.LETTER:
+ if len(edit_cost_constants) == 3:
+ self._edit_cost = Letter(edit_cost_constants[0], edit_cost_constants[1], edit_cost_constants[2])
+ elif len(edit_cost_constants) == 0:
+ self._edit_cost = Letter()
+ else:
+ raise Exception('Wrong number of constants for selected edit costs Options::EditCosts::LETTER. Expected: 3 or 0; actual:', len(edit_cost_constants), '.')
+ elif edit_cost == Options.EditCosts.LETTER2:
+ if len(edit_cost_constants) == 5:
+ self._edit_cost = Letter2(edit_cost_constants[0], edit_cost_constants[1], edit_cost_constants[2], edit_cost_constants[3], edit_cost_constants[4])
+ elif len(edit_cost_constants) == 0:
+ self._edit_cost = Letter2()
+ else:
+ raise Exception('Wrong number of constants for selected edit costs Options::EditCosts::LETTER2. Expected: 5 or 0; actual:', len(edit_cost_constants), '.')
+ elif edit_cost == Options.EditCosts.NON_SYMBOLIC:
+ if len(edit_cost_constants) == 6:
+ self._edit_cost = NonSymbolic(edit_cost_constants[0], edit_cost_constants[1], edit_cost_constants[2], edit_cost_constants[3], edit_cost_constants[4], edit_cost_constants[5])
+ elif len(edit_cost_constants) == 0:
+ self._edit_cost = NonSymbolic()
+ else:
+ raise Exception('Wrong number of constants for selected edit costs Options::EditCosts::NON_SYMBOLIC. Expected: 6 or 0; actual:', len(edit_cost_constants), '.')
+ elif edit_cost == Options.EditCosts.CONSTANT:
+ if len(edit_cost_constants) == 6:
+ self._edit_cost = Constant(edit_cost_constants[0], edit_cost_constants[1], edit_cost_constants[2], edit_cost_constants[3], edit_cost_constants[4], edit_cost_constants[5])
+ elif len(edit_cost_constants) == 0:
+ self._edit_cost = Constant()
+ else:
+ raise Exception('Wrong number of constants for selected edit costs Options::EditCosts::CONSTANT. Expected: 6 or 0; actual:', len(edit_cost_constants), '.')
+
+ self._delete_edit_cost = True
+
+
+ def id_to_node_label(self, label_id):
+ if label_id > len(self._node_labels) or label_id == 0:
+ raise Exception('Invalid node label ID', str(label_id), '.')
+ return self._node_labels[label_id - 1]
+
+
+ def _node_label_to_id(self, node_label):
+ n_id = 0
+ for n_l in self._node_labels:
+ if n_l == node_label:
+ return n_id + 1
+ n_id += 1
+ self._node_labels.append(node_label)
+ return n_id + 1
+
+
+ def id_to_edge_label(self, label_id):
+ if label_id > len(self._edge_labels) or label_id == 0:
+ raise Exception('Invalid edge label ID', str(label_id), '.')
+ return self._edge_labels[label_id - 1]
+
+
+ def _edge_label_to_id(self, edge_label):
+ e_id = 0
+ for e_l in self._edge_labels:
+ if e_l == edge_label:
+ return e_id + 1
+ e_id += 1
+ self._edge_labels.append(edge_label)
+ return e_id + 1
+
+
+ def _eager_init(self):
+ return (self._init_type == Options.InitType.EAGER_WITHOUT_SHUFFLED_COPIES or self._init_type == Options.InitType.EAGER_WITH_SHUFFLED_COPIES)
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/env/ged_env.py b/lang/fr/gklearn/ged/env/ged_env.py
new file mode 100644
index 0000000000..3d7644b77f
--- /dev/null
+++ b/lang/fr/gklearn/ged/env/ged_env.py
@@ -0,0 +1,733 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Jun 17 12:02:36 2020
+
+@author: ljia
+"""
+import numpy as np
+import networkx as nx
+from gklearn.ged.env import Options, OptionsStringMap
+from gklearn.ged.env import GEDData
+
+
+class GEDEnv(object):
+
+
+ def __init__(self):
+ self._initialized = False
+ self._new_graph_ids = []
+ self._ged_data = GEDData()
+ # Variables needed for approximating ged_instance_.
+ self._lower_bounds = {}
+ self._upper_bounds = {}
+ self._runtimes = {}
+ self._node_maps = {}
+ self._original_to_internal_node_ids = []
+ self._internal_to_original_node_ids = []
+ self._ged_method = None
+
+
+ def set_edit_cost(self, edit_cost, edit_cost_constants=[]):
+ """
+ /*!
+ * @brief Sets the edit costs to one of the predefined edit costs.
+ * @param[in] edit_costs Select one of the predefined edit costs.
+ * @param[in] edit_cost_constants Constants passed to the constructor of the edit cost class selected by @p edit_costs.
+ */
+ """
+ self._ged_data._set_edit_cost(edit_cost, edit_cost_constants)
+
+
+ def add_graph(self, graph_name='', graph_class=''):
+ """
+ /*!
+ * @brief Adds a new uninitialized graph to the environment. Call init() after calling this method.
+ * @param[in] graph_name The name of the added graph. Empty if not specified.
+ * @param[in] graph_class The class of the added graph. Empty if not specified.
+ * @return The ID of the newly added graph.
+ */
+ """
+ # @todo: graphs are not uninitialized.
+ self._initialized = False
+ graph_id = self._ged_data._num_graphs_without_shuffled_copies
+ self._ged_data._num_graphs_without_shuffled_copies += 1
+ self._new_graph_ids.append(graph_id)
+ self._ged_data._graphs.append(nx.Graph())
+ self._ged_data._graph_names.append(graph_name)
+ self._ged_data._graph_classes.append(graph_class)
+ self._original_to_internal_node_ids.append({})
+ self._internal_to_original_node_ids.append({})
+ self._ged_data._strings_to_internal_node_ids.append({})
+ self._ged_data._internal_node_ids_to_strings.append({})
+ return graph_id
+
+
+ def clear_graph(self, graph_id):
+ """
+ /*!
+ * @brief Clears and de-initializes a graph that has previously been added to the environment. Call init() after calling this method.
+ * @param[in] graph_id ID of graph that has to be cleared.
+ */
+ """
+		if graph_id >= self._ged_data.num_graphs_without_shuffled_copies():
+ raise Exception('The graph', self.get_graph_name(graph_id), 'has not been added to the environment.')
+ self._ged_data._graphs[graph_id].clear()
+ self._original_to_internal_node_ids[graph_id].clear()
+ self._internal_to_original_node_ids[graph_id].clear()
+ self._ged_data._strings_to_internal_node_ids[graph_id].clear()
+ self._ged_data._internal_node_ids_to_strings[graph_id].clear()
+ self._initialized = False
+
+
+ def add_node(self, graph_id, node_id, node_label):
+ """
+ /*!
+ * @brief Adds a labeled node.
+ * @param[in] graph_id ID of graph that has been added to the environment.
+ * @param[in] node_id The user-specific ID of the vertex that has to be added.
+ * @param[in] node_label The label of the vertex that has to be added. Set to ged::NoLabel() if template parameter @p UserNodeLabel equals ged::NoLabel.
+ */
+ """
+ # @todo: check ids.
+ self._initialized = False
+ internal_node_id = nx.number_of_nodes(self._ged_data._graphs[graph_id])
+ self._ged_data._graphs[graph_id].add_node(internal_node_id, label=node_label)
+ self._original_to_internal_node_ids[graph_id][node_id] = internal_node_id
+ self._internal_to_original_node_ids[graph_id][internal_node_id] = node_id
+ self._ged_data._strings_to_internal_node_ids[graph_id][str(node_id)] = internal_node_id
+ self._ged_data._internal_node_ids_to_strings[graph_id][internal_node_id] = str(node_id)
+ label_id = self._ged_data._node_label_to_id(node_label)
+ # @todo: ged_data_.graphs_[graph_id].set_label
+
+
+ def add_edge(self, graph_id, nd_from, nd_to, edge_label, ignore_duplicates=True):
+ """
+ /*!
+ * @brief Adds a labeled edge.
+ * @param[in] graph_id ID of graph that has been added to the environment.
+ * @param[in] tail The user-specific ID of the tail of the edge that has to be added.
+ * @param[in] head The user-specific ID of the head of the edge that has to be added.
+		 * @param[in] edge_label The label of the edge that has to be added. Set to ged::NoLabel() if template parameter @p UserEdgeLabel equals ged::NoLabel.
+		 * @param[in] ignore_duplicates If @p true, duplicate edges are ignored. Otherwise, an exception is thrown if an existing edge is added to the graph.
+ */
+ """
+ # @todo: check everything.
+ self._initialized = False
+ # @todo: check ignore_duplicates.
+ self._ged_data._graphs[graph_id].add_edge(self._original_to_internal_node_ids[graph_id][nd_from], self._original_to_internal_node_ids[graph_id][nd_to], label=edge_label)
+ label_id = self._ged_data._edge_label_to_id(edge_label)
+ # @todo: ged_data_.graphs_[graph_id].set_label
+
+
+ def add_nx_graph(self, g, classe, ignore_duplicates=True) :
+ """
+		Add a graph (built with networkx) to the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges.
+
+		:param g: The graph to add (networkx graph)
+		:param classe: The class of the added graph. Empty if not specified.
+		:param ignore_duplicates: If True, duplicate edges are ignored; otherwise an error is raised if an existing edge is added. True by default
+		:type g: networkx.graph
+		:type ignore_duplicates: bool
+		:return: The ID of the newly added graph
+		:rtype: size_t
+
+		.. note:: The NX graph must respect the GXL structure. Please see how a GXL graph is constructed.
+
+ """
+ graph_id = self.add_graph(g.name, classe) # check if the graph name already exists.
+ for node in g.nodes: # @todo: if the keys of labels include int and str at the same time.
+ self.add_node(graph_id, node, tuple(sorted(g.nodes[node].items(), key=lambda kv: kv[0])))
+ for edge in g.edges:
+ self.add_edge(graph_id, edge[0], edge[1], tuple(sorted(g.edges[(edge[0], edge[1])].items(), key=lambda kv: kv[0])), ignore_duplicates)
+ return graph_id
+
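+	# Illustrative sketch (assumption, not part of the original code): node and
+	# edge attribute dicts are stored as tuples of sorted (key, value) pairs, so
+	# a node with attributes {'symbol': 'C', 'charge': '0'} gets the label
+	# (('charge', '0'), ('symbol', 'C')). A hypothetical call:
+	#
+	#   g = nx.Graph(name='mol_0')
+	#   g.add_node(0, symbol='C')
+	#   g.add_node(1, symbol='O')
+	#   g.add_edge(0, 1, bond_type='1')
+	#   gid = env.add_nx_graph(g, 'active')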
+
+ def load_nx_graph(self, nx_graph, graph_id, graph_name='', graph_class=''):
+ """
+ Loads NetworkX Graph into the GED environment.
+
+ Parameters
+ ----------
+ nx_graph : NetworkX Graph object
+ The graph that should be loaded.
+
+ graph_id : int or None
+			The ID of a graph contained in the environment (the existing graph will be overwritten), or `None` to add a new graph.
+
+		graph_name : string, optional
+			The name of the newly added graph. The default is ''. Has no effect unless `graph_id` equals `None`.
+
+		graph_class : string, optional
+			The class of the newly added graph. The default is ''. Has no effect unless `graph_id` equals `None`.
+
+ Returns
+ -------
+ int
+ The ID of the newly loaded graph.
+ """
+ if graph_id is None: # @todo: undefined.
+ graph_id = self.add_graph(graph_name, graph_class)
+ else:
+ self.clear_graph(graph_id)
+ for node in nx_graph.nodes:
+ self.add_node(graph_id, node, tuple(sorted(nx_graph.nodes[node].items(), key=lambda kv: kv[0])))
+ for edge in nx_graph.edges:
+ self.add_edge(graph_id, edge[0], edge[1], tuple(sorted(nx_graph.edges[(edge[0], edge[1])].items(), key=lambda kv: kv[0])))
+ return graph_id
+
+
+ def init(self, init_type=Options.InitType.EAGER_WITHOUT_SHUFFLED_COPIES, print_to_stdout=False):
+ if isinstance(init_type, str):
+ init_type = OptionsStringMap.InitType[init_type]
+
+ # Throw an exception if no edit costs have been selected.
+ if self._ged_data._edit_cost is None:
+ raise Exception('No edit costs have been selected. Call set_edit_cost() before calling init().')
+
+ # Return if the environment is initialized.
+ if self._initialized:
+ return
+
+ # Set initialization type.
+ self._ged_data._init_type = init_type
+
+ # @todo: Construct shuffled graph copies if necessary.
+
+ # Re-initialize adjacency matrices (also previously initialized graphs must be re-initialized because of possible re-allocation).
+ # @todo: setup_adjacency_matrix, don't know if neccessary.
+ self._ged_data._max_num_nodes = np.max([nx.number_of_nodes(g) for g in self._ged_data._graphs])
+ self._ged_data._max_num_edges = np.max([nx.number_of_edges(g) for g in self._ged_data._graphs])
+
+ # Initialize cost matrices if necessary.
+ if self._ged_data._eager_init():
+ pass # @todo: init_cost_matrices_: 1. Update node cost matrix if new node labels have been added to the environment; 2. Update edge cost matrix if new edge labels have been added to the environment.
+
+ # Mark environment as initialized.
+ self._initialized = True
+ self._new_graph_ids.clear()
+
+
+ def is_initialized(self):
+ """
+ /*!
+ * @brief Check if the environment is initialized.
+ * @return True if the environment is initialized.
+ */
+ """
+ return self._initialized
+
+
+ def get_init_type(self):
+ """
+ /*!
+ * @brief Returns the initialization type of the last initialization.
+ * @return Initialization type.
+ */
+ """
+ return self._ged_data._init_type
+
+
+ def set_label_costs(self, node_label_costs=None, edge_label_costs=None):
+ """Set the costs between labels.
+ """
+ if node_label_costs is not None:
+ self._ged_data._node_label_costs = node_label_costs
+ if edge_label_costs is not None:
+ self._ged_data._edge_label_costs = edge_label_costs
+
+
+ def set_method(self, method, options=''):
+ """
+ /*!
+ * @brief Sets the GEDMethod to be used by run_method().
+ * @param[in] method Select the method that is to be used.
+		 * @param[in] options An options string of the form "[--<option> <arg>] [...]" passed to the selected method.
+ */
+ """
+ del self._ged_method
+
+ if isinstance(method, str):
+ method = OptionsStringMap.GEDMethod[method]
+
+ if method == Options.GEDMethod.BRANCH:
+ self._ged_method = Branch(self._ged_data)
+ elif method == Options.GEDMethod.BRANCH_FAST:
+ self._ged_method = BranchFast(self._ged_data)
+ elif method == Options.GEDMethod.BRANCH_TIGHT:
+ self._ged_method = BranchTight(self._ged_data)
+ elif method == Options.GEDMethod.BRANCH_UNIFORM:
+ self._ged_method = BranchUniform(self._ged_data)
+ elif method == Options.GEDMethod.BRANCH_COMPACT:
+ self._ged_method = BranchCompact(self._ged_data)
+ elif method == Options.GEDMethod.PARTITION:
+ self._ged_method = Partition(self._ged_data)
+ elif method == Options.GEDMethod.HYBRID:
+ self._ged_method = Hybrid(self._ged_data)
+ elif method == Options.GEDMethod.RING:
+ self._ged_method = Ring(self._ged_data)
+ elif method == Options.GEDMethod.ANCHOR_AWARE_GED:
+ self._ged_method = AnchorAwareGED(self._ged_data)
+ elif method == Options.GEDMethod.WALKS:
+ self._ged_method = Walks(self._ged_data)
+ elif method == Options.GEDMethod.IPFP:
+ self._ged_method = IPFP(self._ged_data)
+ elif method == Options.GEDMethod.BIPARTITE:
+ from gklearn.ged.methods import Bipartite
+ self._ged_method = Bipartite(self._ged_data)
+ elif method == Options.GEDMethod.SUBGRAPH:
+ self._ged_method = Subgraph(self._ged_data)
+ elif method == Options.GEDMethod.NODE:
+ self._ged_method = Node(self._ged_data)
+ elif method == Options.GEDMethod.RING_ML:
+ self._ged_method = RingML(self._ged_data)
+ elif method == Options.GEDMethod.BIPARTITE_ML:
+ self._ged_method = BipartiteML(self._ged_data)
+ elif method == Options.GEDMethod.REFINE:
+ self._ged_method = Refine(self._ged_data)
+ elif method == Options.GEDMethod.BP_BEAM:
+ self._ged_method = BPBeam(self._ged_data)
+ elif method == Options.GEDMethod.SIMULATED_ANNEALING:
+ self._ged_method = SimulatedAnnealing(self._ged_data)
+ elif method == Options.GEDMethod.HED:
+ self._ged_method = HED(self._ged_data)
+ elif method == Options.GEDMethod.STAR:
+ self._ged_method = STAR(self._ged_data)
+ # #ifdef GUROBI
+ elif method == Options.GEDMethod.F1:
+ self._ged_method = F1(self._ged_data)
+ elif method == Options.GEDMethod.F2:
+ self._ged_method = F2(self._ged_data)
+ elif method == Options.GEDMethod.COMPACT_MIP:
+ self._ged_method = CompactMIP(self._ged_data)
+ elif method == Options.GEDMethod.BLP_NO_EDGE_LABELS:
+ self._ged_method = BLPNoEdgeLabels(self._ged_data)
+
+ self._ged_method.set_options(options)
+
+
+ def run_method(self, g_id, h_id):
+ """
+ /*!
+ * @brief Runs the GED method specified by call to set_method() between the graphs with IDs @p g_id and @p h_id.
+ * @param[in] g_id ID of an input graph that has been added to the environment.
+ * @param[in] h_id ID of an input graph that has been added to the environment.
+ */
+ """
+ if g_id >= self._ged_data.num_graphs():
+ raise Exception('The graph with ID', str(g_id), 'has not been added to the environment.')
+ if h_id >= self._ged_data.num_graphs():
+ raise Exception('The graph with ID', str(h_id), 'has not been added to the environment.')
+ if not self._initialized:
+ raise Exception('The environment is uninitialized. Call init() after adding all graphs to the environment.')
+ if self._ged_method is None:
+ raise Exception('No method has been set. Call set_method() before calling run().')
+
+ # Call selected GEDMethod and store results.
+ if self._ged_data.shuffled_graph_copies_available() and (g_id == h_id):
+ self._ged_method.run(g_id, self._ged_data.id_shuffled_graph_copy(h_id)) # @todo: why shuffle?
+ else:
+ self._ged_method.run(g_id, h_id)
+ self._lower_bounds[(g_id, h_id)] = self._ged_method.get_lower_bound()
+ self._upper_bounds[(g_id, h_id)] = self._ged_method.get_upper_bound()
+ self._runtimes[(g_id, h_id)] = self._ged_method.get_runtime()
+ self._node_maps[(g_id, h_id)] = self._ged_method.get_node_map()
+
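+	# End-to-end usage sketch, added for illustration. The cost, init and method
+	# names are assumptions based on the string maps and enums referenced above;
+	# 'BIPARTITE' is chosen because its import is visible in set_method().
+	#
+	#   env = GEDEnv()
+	#   env.set_edit_cost('CONSTANT')
+	#   gid1 = env.add_nx_graph(g1, '')
+	#   gid2 = env.add_nx_graph(g2, '')
+	#   env.init(init_type='EAGER_WITHOUT_SHUFFLED_COPIES')
+	#   env.set_method('BIPARTITE', '')
+	#   env.init_method()
+	#   env.run_method(gid1, gid2)
+	#   ub = env.get_upper_bound(gid1, gid2)
+	#   node_map = env.get_node_map(gid1, gid2)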
+
+ def init_method(self):
+ """Initializes the method specified by call to set_method().
+ """
+ if not self._initialized:
+ raise Exception('The environment is uninitialized. Call init() before calling init_method().')
+ if self._ged_method is None:
+ raise Exception('No method has been set. Call set_method() before calling init_method().')
+ self._ged_method.init()
+
+
+ def get_num_node_labels(self):
+ """
+ /*!
+ * @brief Returns the number of node labels.
+ * @return Number of pairwise different node labels contained in the environment.
+ * @note If @p 1 is returned, the nodes are unlabeled.
+ */
+ """
+ return len(self._ged_data._node_labels)
+
+
+ def get_all_node_labels(self):
+ """
+ /*!
+ * @brief Returns the list of all node labels.
+ * @return List of pairwise different node labels contained in the environment.
+ * @note If @p 1 is returned, the nodes are unlabeled.
+ */
+ """
+ return self._ged_data._node_labels
+
+
+ def get_node_label(self, label_id, to_dict=True):
+ """
+ /*!
+ * @brief Returns node label.
+ * @param[in] label_id ID of node label that should be returned. Must be between 1 and num_node_labels().
+ * @return Node label for selected label ID.
+ */
+ """
+ if label_id < 1 or label_id > self.get_num_node_labels():
+ raise Exception('The environment does not contain a node label with ID', str(label_id), '.')
+ if to_dict:
+ return dict(self._ged_data._node_labels[label_id - 1])
+ return self._ged_data._node_labels[label_id - 1]
+
+
+ def get_num_edge_labels(self):
+ """
+ /*!
+ * @brief Returns the number of edge labels.
+ * @return Number of pairwise different edge labels contained in the environment.
+ * @note If @p 1 is returned, the edges are unlabeled.
+ */
+ """
+ return len(self._ged_data._edge_labels)
+
+
+ def get_all_edge_labels(self):
+ """
+ /*!
+ * @brief Returns the list of all edge labels.
+ * @return List of pairwise different edge labels contained in the environment.
+ * @note If @p 1 is returned, the edges are unlabeled.
+ */
+ """
+ return self._ged_data._edge_labels
+
+
+ def get_edge_label(self, label_id, to_dict=True):
+ """
+ /*!
+ * @brief Returns edge label.
+		 * @param[in] label_id ID of edge label that should be returned. Must be between 1 and num_edge_labels().
+ * @return Edge label for selected label ID.
+ */
+ """
+ if label_id < 1 or label_id > self.get_num_edge_labels():
+ raise Exception('The environment does not contain an edge label with ID', str(label_id), '.')
+ if to_dict:
+ return dict(self._ged_data._edge_labels[label_id - 1])
+ return self._ged_data._edge_labels[label_id - 1]
+
+
+ def get_upper_bound(self, g_id, h_id):
+ """
+ /*!
+ * @brief Returns upper bound for edit distance between the input graphs.
+ * @param[in] g_id ID of an input graph that has been added to the environment.
+ * @param[in] h_id ID of an input graph that has been added to the environment.
+ * @return Upper bound computed by the last call to run_method() with arguments @p g_id and @p h_id.
+ */
+ """
+ if (g_id, h_id) not in self._upper_bounds:
+ raise Exception('Call run(' + str(g_id) + ',' + str(h_id) + ') before calling get_upper_bound(' + str(g_id) + ',' + str(h_id) + ').')
+ return self._upper_bounds[(g_id, h_id)]
+
+
+ def get_lower_bound(self, g_id, h_id):
+ """
+ /*!
+ * @brief Returns lower bound for edit distance between the input graphs.
+ * @param[in] g_id ID of an input graph that has been added to the environment.
+ * @param[in] h_id ID of an input graph that has been added to the environment.
+ * @return Lower bound computed by the last call to run_method() with arguments @p g_id and @p h_id.
+ */
+ """
+ if (g_id, h_id) not in self._lower_bounds:
+ raise Exception('Call run(' + str(g_id) + ',' + str(h_id) + ') before calling get_lower_bound(' + str(g_id) + ',' + str(h_id) + ').')
+ return self._lower_bounds[(g_id, h_id)]
+
+
+ def get_runtime(self, g_id, h_id):
+ """
+ /*!
+ * @brief Returns runtime.
+ * @param[in] g_id ID of an input graph that has been added to the environment.
+ * @param[in] h_id ID of an input graph that has been added to the environment.
+ * @return Runtime of last call to run_method() with arguments @p g_id and @p h_id.
+ */
+ """
+ if (g_id, h_id) not in self._runtimes:
+ raise Exception('Call run(' + str(g_id) + ',' + str(h_id) + ') before calling get_runtime(' + str(g_id) + ',' + str(h_id) + ').')
+ return self._runtimes[(g_id, h_id)]
+
+
+ def get_init_time(self):
+ """
+ /*!
+ * @brief Returns initialization time.
+ * @return Runtime of the last call to init_method().
+ */
+ """
+ return self._ged_method.get_init_time()
+
+
+ def get_node_map(self, g_id, h_id):
+ """
+ /*!
+ * @brief Returns node map between the input graphs.
+ * @param[in] g_id ID of an input graph that has been added to the environment.
+ * @param[in] h_id ID of an input graph that has been added to the environment.
+ * @return Node map computed by the last call to run_method() with arguments @p g_id and @p h_id.
+ */
+ """
+ if (g_id, h_id) not in self._node_maps:
+ raise Exception('Call run(' + str(g_id) + ',' + str(h_id) + ') before calling get_node_map(' + str(g_id) + ',' + str(h_id) + ').')
+ return self._node_maps[(g_id, h_id)]
+
+
+ def get_forward_map(self, g_id, h_id) :
+ """
+		Returns the forward map (or half of the adjacency matrix) between nodes of the two indicated graphs.
+
+		:param g: The ID of the first compared graph
+		:param h: The ID of the second compared graph
+		:type g: size_t
+		:type h: size_t
+		:return: The forward map to the adjacency matrix between nodes of the two graphs
+		:rtype: list[npy_uint32]
+
+		.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_backward_map(), get_runtime(), quasimetric_cost(), get_node_map(), get_assignment_matrix()
+		.. warning:: run_method() between the same two graphs must be called before this function.
+		.. note:: I don't know how to connect the two maps to reconstruct the adjacency matrix. Please come back when I know how it works!
+ """
+ return self.get_node_map(g_id, h_id).forward_map
+
+
+ def get_backward_map(self, g_id, h_id) :
+ """
+		Returns the backward map (or half of the adjacency matrix) between nodes of the two indicated graphs.
+
+		:param g: The ID of the first compared graph
+		:param h: The ID of the second compared graph
+		:type g: size_t
+		:type h: size_t
+		:return: The backward map to the adjacency matrix between nodes of the two graphs
+		:rtype: list[npy_uint32]
+
+		.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_forward_map(), get_runtime(), quasimetric_cost(), get_node_map(), get_assignment_matrix()
+		.. warning:: run_method() between the same two graphs must be called before this function.
+		.. note:: I don't know how to connect the two maps to reconstruct the adjacency matrix. Please come back when I know how it works!
+ """
+ return self.get_node_map(g_id, h_id).backward_map
+
+
+ def compute_induced_cost(self, g_id, h_id, node_map):
+ """
+ /*!
+ * @brief Computes the edit cost between two graphs induced by a node map.
+ * @param[in] g_id ID of input graph.
+ * @param[in] h_id ID of input graph.
+ * @param[in,out] node_map Node map whose induced edit cost is to be computed.
+ */
+ """
+ self._ged_data.compute_induced_cost(self._ged_data._graphs[g_id], self._ged_data._graphs[h_id], node_map)
+
+
+ def get_nx_graph(self, graph_id):
+ """
+ * @brief Returns NetworkX.Graph() representation.
+ * @param[in] graph_id ID of the selected graph.
+ """
+ graph = nx.Graph() # @todo: add graph attributes.
+ graph.graph['id'] = graph_id
+
+ nb_nodes = self.get_graph_num_nodes(graph_id)
+ original_node_ids = self.get_original_node_ids(graph_id)
+ node_labels = self.get_graph_node_labels(graph_id, to_dict=True)
+ graph.graph['original_node_ids'] = original_node_ids
+
+ for node_id in range(0, nb_nodes):
+ graph.add_node(node_id, **node_labels[node_id])
+
+ edges = self.get_graph_edges(graph_id, to_dict=True)
+ for (head, tail), labels in edges.items():
+ graph.add_edge(head, tail, **labels)
+
+ return graph
+
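+	# Round-trip sketch (illustration only): after graphs have been added, the
+	# internal representation can be pulled back out as a NetworkX graph whose
+	# node attributes come from the stored labels; the original node IDs are
+	# kept in graph.graph['original_node_ids'].
+	#
+	#   g_back = env.get_nx_graph(gid1)
+	#   g_back.nodes(data=True)  # internal 0-based node IDs with label attributes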
+
+ def get_graph_node_labels(self, graph_id, to_dict=True):
+ """
+		Searches and returns all the node labels of a graph, selected by its ID.
+
+ :param graph_id: The ID of the wanted graph
+ :type graph_id: size_t
+ :return: The list of nodes' labels on the selected graph
+ :rtype: list[dict{string : string}]
+
+ .. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_original_node_ids(), get_graph_edges(), get_graph_adjacence_matrix()
+		.. note:: These functions allow collecting all the information about the graph.
+ """
+ graph = self._ged_data.graph(graph_id)
+ node_labels = []
+ for n in graph.nodes():
+ node_labels.append(graph.nodes[n]['label'])
+ if to_dict:
+ return [dict(i) for i in node_labels]
+ return node_labels
+
+
+ def get_graph_edges(self, graph_id, to_dict=True):
+ """
+		Searches and returns all the edges of a graph, selected by its ID.
+
+ :param graph_id: The ID of the wanted graph
+ :type graph_id: size_t
+ :return: The list of edges on the selected graph
+ :rtype: dict{tuple(size_t, size_t) : dict{string : string}}
+
+		.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_original_node_ids(), get_graph_node_labels(), get_graph_adjacence_matrix()
+		.. note:: These functions allow collecting all the information about the graph.
+ """
+ graph = self._ged_data.graph(graph_id)
+ if to_dict:
+ edges = {}
+ for n1, n2, attr in graph.edges(data=True):
+ edges[(n1, n2)] = dict(attr['label'])
+ return edges
+ return {(n1, n2): attr['label'] for n1, n2, attr in graph.edges(data=True)}
+
+
+
+ def get_graph_name(self, graph_id):
+ """
+ /*!
+ * @brief Returns the graph name.
+ * @param[in] graph_id ID of an input graph that has been added to the environment.
+ * @return Name of the input graph.
+ */
+ """
+ return self._ged_data._graph_names[graph_id]
+
+
+ def get_graph_num_nodes(self, graph_id):
+ """
+ /*!
+ * @brief Returns the number of nodes.
+ * @param[in] graph_id ID of an input graph that has been added to the environment.
+ * @return Number of nodes in the graph.
+ */
+ """
+ return nx.number_of_nodes(self._ged_data.graph(graph_id))
+
+
+ def get_original_node_ids(self, graph_id):
+ """
+		Searches and returns all the IDs of the nodes of a graph, selected by its ID.
+
+ :param graph_id: The ID of the wanted graph
+ :type graph_id: size_t
+		:return: The list of node IDs of the selected graph
+ :rtype: list[string]
+
+		.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_graph_node_labels(), get_graph_edges(), get_graph_adjacence_matrix()
+		.. note:: These functions allow collecting all the information about the graph.
+ """
+ return [i for i in self._internal_to_original_node_ids[graph_id].values()]
+
+
+ def get_node_cost(self, node_label_1, node_label_2):
+ return self._ged_data.node_cost(node_label_1, node_label_2)
+
+
+ def get_node_rel_cost(self, node_label_1, node_label_2):
+ """
+ /*!
+ * @brief Returns node relabeling cost.
+ * @param[in] node_label_1 First node label.
+ * @param[in] node_label_2 Second node label.
+ * @return Node relabeling cost for the given node labels.
+ */
+ """
+ if isinstance(node_label_1, dict):
+ node_label_1 = tuple(sorted(node_label_1.items(), key=lambda kv: kv[0]))
+ if isinstance(node_label_2, dict):
+ node_label_2 = tuple(sorted(node_label_2.items(), key=lambda kv: kv[0]))
+ return self._ged_data._edit_cost.node_rel_cost_fun(node_label_1, node_label_2) # @todo: may need to use node_cost() instead (or change node_cost() and modify ged_method for pre-defined cost matrices.)
+
+
+ def get_node_del_cost(self, node_label):
+ """
+ /*!
+ * @brief Returns node deletion cost.
+ * @param[in] node_label Node label.
+ * @return Cost of deleting node with given label.
+ */
+ """
+ if isinstance(node_label, dict):
+ node_label = tuple(sorted(node_label.items(), key=lambda kv: kv[0]))
+ return self._ged_data._edit_cost.node_del_cost_fun(node_label)
+
+
+ def get_node_ins_cost(self, node_label):
+ """
+ /*!
+ * @brief Returns node insertion cost.
+ * @param[in] node_label Node label.
+ * @return Cost of inserting node with given label.
+ */
+ """
+ if isinstance(node_label, dict):
+ node_label = tuple(sorted(node_label.items(), key=lambda kv: kv[0]))
+ return self._ged_data._edit_cost.node_ins_cost_fun(node_label)
+
+
+ def get_edge_cost(self, edge_label_1, edge_label_2):
+ return self._ged_data.edge_cost(edge_label_1, edge_label_2)
+
+
+ def get_edge_rel_cost(self, edge_label_1, edge_label_2):
+ """
+ /*!
+ * @brief Returns edge relabeling cost.
+ * @param[in] edge_label_1 First edge label.
+ * @param[in] edge_label_2 Second edge label.
+ * @return Edge relabeling cost for the given edge labels.
+ */
+ """
+ if isinstance(edge_label_1, dict):
+ edge_label_1 = tuple(sorted(edge_label_1.items(), key=lambda kv: kv[0]))
+ if isinstance(edge_label_2, dict):
+ edge_label_2 = tuple(sorted(edge_label_2.items(), key=lambda kv: kv[0]))
+ return self._ged_data._edit_cost.edge_rel_cost_fun(edge_label_1, edge_label_2)
+
+
+ def get_edge_del_cost(self, edge_label):
+ """
+ /*!
+ * @brief Returns edge deletion cost.
+ * @param[in] edge_label Edge label.
+ * @return Cost of deleting edge with given label.
+ */
+ """
+ if isinstance(edge_label, dict):
+ edge_label = tuple(sorted(edge_label.items(), key=lambda kv: kv[0]))
+ return self._ged_data._edit_cost.edge_del_cost_fun(edge_label)
+
+
+ def get_edge_ins_cost(self, edge_label):
+ """
+ /*!
+ * @brief Returns edge insertion cost.
+ * @param[in] edge_label Edge label.
+ * @return Cost of inserting edge with given label.
+ */
+ """
+ if isinstance(edge_label, dict):
+ edge_label = tuple(sorted(edge_label.items(), key=lambda kv: kv[0]))
+ return self._ged_data._edit_cost.edge_ins_cost_fun(edge_label)
+
+
+ def get_all_graph_ids(self):
+ return [i for i in range(0, self._ged_data._num_graphs_without_shuffled_copies)]
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/env/node_map.py b/lang/fr/gklearn/ged/env/node_map.py
new file mode 100644
index 0000000000..71b68d8502
--- /dev/null
+++ b/lang/fr/gklearn/ged/env/node_map.py
@@ -0,0 +1,102 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Apr 22 11:31:26 2020
+
+@author: ljia
+"""
+import numpy as np
+from gklearn.utils import dummy_node, undefined_node
+
+
+class NodeMap(object):
+
+ def __init__(self, num_nodes_g, num_nodes_h):
+ self._forward_map = [undefined_node()] * num_nodes_g
+ self._backward_map = [undefined_node()] * num_nodes_h
+ self._induced_cost = np.inf
+
+
+ def clear(self):
+ """
+ /*!
+ * @brief Clears the node map.
+ */
+ """
+ self._forward_map = [undefined_node() for i in range(len(self._forward_map))]
+ self._backward_map = [undefined_node() for i in range(len(self._backward_map))]
+
+
+ def num_source_nodes(self):
+ return len(self._forward_map)
+
+
+ def num_target_nodes(self):
+ return len(self._backward_map)
+
+
+ def image(self, node):
+ if node < len(self._forward_map):
+ return self._forward_map[node]
+ else:
+ raise Exception('The node with ID ', str(node), ' is not contained in the source nodes of the node map.')
+ return undefined_node()
+
+
+ def pre_image(self, node):
+ if node < len(self._backward_map):
+ return self._backward_map[node]
+ else:
+ raise Exception('The node with ID ', str(node), ' is not contained in the target nodes of the node map.')
+ return undefined_node()
+
+
+ def as_relation(self, relation):
+ relation.clear()
+ for i in range(0, len(self._forward_map)):
+ k = self._forward_map[i]
+ if k != undefined_node():
+ relation.append(tuple((i, k)))
+ for k in range(0, len(self._backward_map)):
+ i = self._backward_map[k]
+ if i == dummy_node():
+ relation.append(tuple((i, k)))
+
+
+ def add_assignment(self, i, k):
+ if i != dummy_node():
+ if i < len(self._forward_map):
+ self._forward_map[i] = k
+ else:
+ raise Exception('The node with ID ', str(i), ' is not contained in the source nodes of the node map.')
+ if k != dummy_node():
+ if k < len(self._backward_map):
+ self._backward_map[k] = i
+ else:
+ raise Exception('The node with ID ', str(k), ' is not contained in the target nodes of the node map.')
+
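+	# Small worked example (illustration only) for a map between a 2-node source
+	# graph and a 3-node target graph:
+	#
+	#   nm = NodeMap(2, 3)
+	#   nm.add_assignment(0, 2)             # source node 0 is substituted by target node 2
+	#   nm.add_assignment(1, dummy_node())  # source node 1 is deleted
+	#   nm.add_assignment(dummy_node(), 0)  # target node 0 is inserted
+	#   nm.forward_map   # [2, dummy_node()]
+	#   nm.backward_map  # [dummy_node(), undefined_node(), 0]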
+
+ def set_induced_cost(self, induced_cost):
+ self._induced_cost = induced_cost
+
+
+ def induced_cost(self):
+ return self._induced_cost
+
+
+ @property
+ def forward_map(self):
+ return self._forward_map
+
+ @forward_map.setter
+ def forward_map(self, value):
+ self._forward_map = value
+
+
+ @property
+ def backward_map(self):
+ return self._backward_map
+
+ @backward_map.setter
+ def backward_map(self, value):
+ self._backward_map = value
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/learning/__init__.py b/lang/fr/gklearn/ged/learning/__init__.py
new file mode 100644
index 0000000000..f867ab3987
--- /dev/null
+++ b/lang/fr/gklearn/ged/learning/__init__.py
@@ -0,0 +1,9 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Jul 7 16:07:25 2020
+
+@author: ljia
+"""
+
+from gklearn.ged.learning.cost_matrices_learner import CostMatricesLearner
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/learning/cost_matrices_learner.py b/lang/fr/gklearn/ged/learning/cost_matrices_learner.py
new file mode 100644
index 0000000000..d2c39c22d5
--- /dev/null
+++ b/lang/fr/gklearn/ged/learning/cost_matrices_learner.py
@@ -0,0 +1,148 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Jul 7 11:42:48 2020
+
+@author: ljia
+"""
+import numpy as np
+import cvxpy as cp
+import time
+from gklearn.ged.learning.costs_learner import CostsLearner
+from gklearn.ged.util import compute_geds_cml
+
+
+class CostMatricesLearner(CostsLearner):
+
+
+ def __init__(self, edit_cost='CONSTANT', triangle_rule=False, allow_zeros=True, parallel=False, verbose=2):
+ super().__init__(parallel, verbose)
+ self._edit_cost = edit_cost
+ self._triangle_rule = triangle_rule
+ self._allow_zeros = allow_zeros
+
+
+ def fit(self, X, y):
+ if self._edit_cost == 'LETTER':
+ raise Exception('Cannot compute for cost "LETTER".')
+ elif self._edit_cost == 'LETTER2':
+ raise Exception('Cannot compute for cost "LETTER2".')
+ elif self._edit_cost == 'NON_SYMBOLIC':
+ raise Exception('Cannot compute for cost "NON_SYMBOLIC".')
+ elif self._edit_cost == 'CONSTANT': # @todo: node/edge may not labeled.
+ if not self._triangle_rule and self._allow_zeros:
+ w = cp.Variable(X.shape[1])
+ cost_fun = cp.sum_squares(X @ w - y)
+ constraints = [w >= [0.0 for i in range(X.shape[1])]]
+ prob = cp.Problem(cp.Minimize(cost_fun), constraints)
+ self.execute_cvx(prob)
+ edit_costs_new = w.value
+ residual = np.sqrt(prob.value)
+ elif self._triangle_rule and self._allow_zeros: # @todo
+				x = cp.Variable(X.shape[1])
+				cost_fun = cp.sum_squares(X @ x - y)
+				constraints = [x >= [0.0 for i in range(X.shape[1])],
+ np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]).T@x >= 0.01,
+ np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0]).T@x >= 0.01,
+ np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]).T@x >= 0.01,
+ np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.0]).T@x >= 0.01,
+ np.array([1.0, 1.0, -1.0, 0.0, 0.0, 0.0]).T@x >= 0.0,
+ np.array([0.0, 0.0, 0.0, 1.0, 1.0, -1.0]).T@x >= 0.0]
+ prob = cp.Problem(cp.Minimize(cost_fun), constraints)
+				self.execute_cvx(prob)
+ edit_costs_new = x.value
+ residual = np.sqrt(prob.value)
+ elif not self._triangle_rule and not self._allow_zeros: # @todo
+				x = cp.Variable(X.shape[1])
+				cost_fun = cp.sum_squares(X @ x - y)
+				constraints = [x >= [0.01 for i in range(X.shape[1])]]
+ prob = cp.Problem(cp.Minimize(cost_fun), constraints)
+				self.execute_cvx(prob)
+ edit_costs_new = x.value
+ residual = np.sqrt(prob.value)
+ elif self._triangle_rule and not self._allow_zeros: # @todo
+				x = cp.Variable(X.shape[1])
+				cost_fun = cp.sum_squares(X @ x - y)
+				constraints = [x >= [0.01 for i in range(X.shape[1])],
+ np.array([1.0, 1.0, -1.0, 0.0, 0.0, 0.0]).T@x >= 0.0,
+ np.array([0.0, 0.0, 0.0, 1.0, 1.0, -1.0]).T@x >= 0.0]
+ prob = cp.Problem(cp.Minimize(cost_fun), constraints)
+				self.execute_cvx(prob)
+ edit_costs_new = x.value
+ residual = np.sqrt(prob.value)
+ else:
+			raise Exception('The edit cost "', self._edit_cost, '" is not supported for update progress.')
+
+ self._cost_list.append(edit_costs_new)
+
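+	# Hypothetical usage sketch with toy data (illustration only): fit constant
+	# edit costs to target distances y from per-pair edit-operation counts X by
+	# solving the non-negative least-squares problem of the first branch above.
+	#
+	#   learner = CostMatricesLearner(edit_cost='CONSTANT')
+	#   X = np.random.rand(20, 6)  # rows: graph pairs, columns: edit operations
+	#   y = np.random.rand(20)     # target distances
+	#   learner.fit(X, y)
+	#   fitted_costs = learner.get_results()['cost_list'][-1]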
+
+ def init_geds_and_nb_eo(self, y, graphs):
+ time0 = time.time()
+ self._cost_list.append(np.concatenate((self._ged_options['node_label_costs'],
+ self._ged_options['edge_label_costs'])))
+ ged_vec, self._nb_eo = self.compute_geds_and_nb_eo(graphs)
+ self._residual_list.append(np.sqrt(np.sum(np.square(np.array(ged_vec) - y))))
+ self._runtime_list.append(time.time() - time0)
+
+ if self._verbose >= 2:
+ print('Current node label costs:', self._cost_list[-1][0:len(self._ged_options['node_label_costs'])])
+ print('Current edge label costs:', self._cost_list[-1][len(self._ged_options['node_label_costs']):])
+ print('Residual list:', self._residual_list)
+
+
+ def update_geds_and_nb_eo(self, y, graphs, time0):
+ self._ged_options['node_label_costs'] = self._cost_list[-1][0:len(self._ged_options['node_label_costs'])]
+ self._ged_options['edge_label_costs'] = self._cost_list[-1][len(self._ged_options['node_label_costs']):]
+ ged_vec, self._nb_eo = self.compute_geds_and_nb_eo(graphs)
+ self._residual_list.append(np.sqrt(np.sum(np.square(np.array(ged_vec) - y))))
+ self._runtime_list.append(time.time() - time0)
+
+
+ def compute_geds_and_nb_eo(self, graphs):
+ ged_vec, ged_mat, n_edit_operations = compute_geds_cml(graphs, options=self._ged_options, parallel=self._parallel, verbose=(self._verbose > 1))
+ return ged_vec, np.array(n_edit_operations)
+
+
+ def check_convergency(self):
+ self._ec_changed = False
+ for i, cost in enumerate(self._cost_list[-1]):
+ if cost == 0:
+ if self._cost_list[-2][i] > self._epsilon_ec:
+ self._ec_changed = True
+ break
+ elif abs(cost - self._cost_list[-2][i]) / cost > self._epsilon_ec:
+ self._ec_changed = True
+ break
+# if abs(cost - edit_cost_list[-2][i]) > self._epsilon_ec:
+# ec_changed = True
+# break
+ self._residual_changed = False
+ if self._residual_list[-1] == 0:
+ if self._residual_list[-2] > self._epsilon_residual:
+ self._residual_changed = True
+ elif abs(self._residual_list[-1] - self._residual_list[-2]) / self._residual_list[-1] > self._epsilon_residual:
+ self._residual_changed = True
+ self._converged = not (self._ec_changed or self._residual_changed)
+ if self._converged:
+ self._itrs_without_update += 1
+ else:
+ self._itrs_without_update = 0
+ self._num_updates_ecs += 1
+
+
+ def print_current_states(self):
+ print()
+ print('-------------------------------------------------------------------------')
+ print('States of iteration', self._itrs + 1)
+ print('-------------------------------------------------------------------------')
+# print('Time spend:', self._runtime_optimize_ec)
+ print('Total number of iterations for optimizing:', self._itrs + 1)
+		print('Total number of edit cost updates:', self._num_updates_ecs)
+		print('Has the optimization of edit costs converged:', self._converged)
+ print('Did edit costs change:', self._ec_changed)
+ print('Did residual change:', self._residual_changed)
+ print('Iterations without update:', self._itrs_without_update)
+ print('Current node label costs:', self._cost_list[-1][0:len(self._ged_options['node_label_costs'])])
+ print('Current edge label costs:', self._cost_list[-1][len(self._ged_options['node_label_costs']):])
+ print('Residual list:', self._residual_list)
+ print('-------------------------------------------------------------------------')
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/learning/costs_learner.py b/lang/fr/gklearn/ged/learning/costs_learner.py
new file mode 100644
index 0000000000..844a1f5706
--- /dev/null
+++ b/lang/fr/gklearn/ged/learning/costs_learner.py
@@ -0,0 +1,175 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Jul 7 11:30:31 2020
+
+@author: ljia
+"""
+import numpy as np
+import cvxpy as cp
+import time
+from gklearn.utils import Timer
+
+
+class CostsLearner(object):
+
+
+ def __init__(self, parallel, verbose):
+ ### To set.
+ self._parallel = parallel
+ self._verbose = verbose
+ # For update().
+ self._time_limit_in_sec = 0
+ self._max_itrs = 100
+ self._max_itrs_without_update = 3
+ self._epsilon_residual = 0.01
+ self._epsilon_ec = 0.1
+ ### To compute.
+ self._residual_list = []
+ self._runtime_list = []
+ self._cost_list = []
+ self._nb_eo = None
+ # For update().
+ self._itrs = 0
+ self._converged = False
+ self._num_updates_ecs = 0
+ self._ec_changed = None
+ self._residual_changed = None
+ self._itrs_without_update = 0
+ ### Both set and get.
+ self._ged_options = None
+
+
+ def fit(self, X, y):
+ pass
+
+
+ def preprocess(self):
+ pass # @todo: remove the zero numbers of edit costs.
+
+
+ def postprocess(self):
+ for i in range(len(self._cost_list[-1])):
+ if -1e-9 <= self._cost_list[-1][i] <= 1e-9:
+ self._cost_list[-1][i] = 0
+ if self._cost_list[-1][i] < 0:
+ raise ValueError('The edit cost is negative.')
+
+
+ def set_update_params(self, **kwargs):
+ self._time_limit_in_sec = kwargs.get('time_limit_in_sec', self._time_limit_in_sec)
+ self._max_itrs = kwargs.get('max_itrs', self._max_itrs)
+ self._max_itrs_without_update = kwargs.get('max_itrs_without_update', self._max_itrs_without_update)
+ self._epsilon_residual = kwargs.get('epsilon_residual', self._epsilon_residual)
+ self._epsilon_ec = kwargs.get('epsilon_ec', self._epsilon_ec)
+
+
+ def update(self, y, graphs, ged_options, **kwargs):
+ # Set parameters.
+ self._ged_options = ged_options
+ if kwargs != {}:
+ self.set_update_params(**kwargs)
+
+ # The initial iteration.
+ if self._verbose >= 2:
+ print('\ninitial:')
+ self.init_geds_and_nb_eo(y, graphs)
+
+ self._converged = False
+ self._itrs_without_update = 0
+ self._itrs = 0
+ self._num_updates_ecs = 0
+ timer = Timer(self._time_limit_in_sec)
+ # Run iterations from initial edit costs.
+ while not self.termination_criterion_met(self._converged, timer, self._itrs, self._itrs_without_update):
+ if self._verbose >= 2:
+ print('\niteration', self._itrs + 1)
+ time0 = time.time()
+
+ # Fit GED space to the target space.
+ self.preprocess()
+ self.fit(self._nb_eo, y)
+ self.postprocess()
+
+ # Compute new GEDs and numbers of edit operations.
+ self.update_geds_and_nb_eo(y, graphs, time0)
+
+ # Check convergency.
+ self.check_convergency()
+
+ # Print current states.
+ if self._verbose >= 2:
+ self.print_current_states()
+
+ self._itrs += 1
+
+
+ def init_geds_and_nb_eo(self, y, graphs):
+ pass
+
+
+ def update_geds_and_nb_eo(self, y, graphs, time0):
+ pass
+
+
+ def compute_geds_and_nb_eo(self, graphs):
+ pass
+
+
+ def check_convergency(self):
+ pass
+
+
+ def print_current_states(self):
+ pass
+
+
+ def termination_criterion_met(self, converged, timer, itr, itrs_without_update):
+ if timer.expired() or (itr >= self._max_itrs if self._max_itrs >= 0 else False):
+# if self._state == AlgorithmState.TERMINATED:
+# self._state = AlgorithmState.INITIALIZED
+ return True
+ return converged or (itrs_without_update > self._max_itrs_without_update if self._max_itrs_without_update >= 0 else False)
+
+
+ def execute_cvx(self, prob):
+ try:
+ prob.solve(verbose=(self._verbose>=2))
+ except MemoryError as error0:
+ if self._verbose >= 2:
+ print('\nUsing solver "OSQP" caused a memory error.')
+ print('the original error message is\n', error0)
+ print('solver status: ', prob.status)
+ print('trying solver "CVXOPT" instead...\n')
+ try:
+ prob.solve(solver=cp.CVXOPT, verbose=(self._verbose>=2))
+ except Exception as error1:
+ if self._verbose >= 2:
+					print('\nAn error occurred when using solver "CVXOPT".')
+ print('the original error message is\n', error1)
+ print('solver status: ', prob.status)
+					print('trying solver "MOSEK" instead. Note that this solver is commercial and a license is required.\n')
+ prob.solve(solver=cp.MOSEK, verbose=(self._verbose>=2))
+ else:
+ if self._verbose >= 2:
+ print('solver status: ', prob.status)
+ else:
+ if self._verbose >= 2:
+ print('solver status: ', prob.status)
+ if self._verbose >= 2:
+ print()
+
+
+ def get_results(self):
+ results = {}
+ results['residual_list'] = self._residual_list
+ results['runtime_list'] = self._runtime_list
+ results['cost_list'] = self._cost_list
+ results['nb_eo'] = self._nb_eo
+ results['itrs'] = self._itrs
+ results['converged'] = self._converged
+ results['num_updates_ecs'] = self._num_updates_ecs
+ results['ec_changed'] = self._ec_changed
+ results['residual_changed'] = self._residual_changed
+ results['itrs_without_update'] = self._itrs_without_update
+ return results
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/median/__init__.py b/lang/fr/gklearn/ged/median/__init__.py
new file mode 100644
index 0000000000..9eb4384706
--- /dev/null
+++ b/lang/fr/gklearn/ged/median/__init__.py
@@ -0,0 +1,4 @@
+from gklearn.ged.median.median_graph_estimator import MedianGraphEstimator
+from gklearn.ged.median.median_graph_estimator_py import MedianGraphEstimatorPy
+from gklearn.ged.median.median_graph_estimator_cml import MedianGraphEstimatorCML
+from gklearn.ged.median.utils import constant_node_costs, mge_options_to_string
diff --git a/lang/fr/gklearn/ged/median/median_graph_estimator.py b/lang/fr/gklearn/ged/median/median_graph_estimator.py
new file mode 100644
index 0000000000..03c789290c
--- /dev/null
+++ b/lang/fr/gklearn/ged/median/median_graph_estimator.py
@@ -0,0 +1,1709 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Mar 16 18:04:55 2020
+
+@author: ljia
+"""
+import numpy as np
+from gklearn.ged.env import AlgorithmState, NodeMap
+from gklearn.ged.util import misc
+from gklearn.utils import Timer
+import time
+from tqdm import tqdm
+import sys
+import networkx as nx
+import multiprocessing
+from multiprocessing import Pool
+from functools import partial
+
+
+class MedianGraphEstimator(object): # @todo: differentiate dummy_node from undefined node?
+
+ def __init__(self, ged_env, constant_node_costs):
+ """Constructor.
+
+ Parameters
+ ----------
+ ged_env : gklearn.gedlib.gedlibpy.GEDEnv
+ Initialized GED environment. The edit costs must be set by the user.
+
+ constant_node_costs : Boolean
+ Set to True if the node relabeling costs are constant.
+ """
+ self.__ged_env = ged_env
+ self.__init_method = 'BRANCH_FAST'
+ self.__init_options = ''
+ self.__descent_method = 'BRANCH_FAST'
+ self.__descent_options = ''
+ self.__refine_method = 'IPFP'
+ self.__refine_options = ''
+ self.__constant_node_costs = constant_node_costs
+ self.__labeled_nodes = (ged_env.get_num_node_labels() > 1)
+ self.__node_del_cost = ged_env.get_node_del_cost(ged_env.get_node_label(1))
+ self.__node_ins_cost = ged_env.get_node_ins_cost(ged_env.get_node_label(1))
+ self.__labeled_edges = (ged_env.get_num_edge_labels() > 1)
+ self.__edge_del_cost = ged_env.get_edge_del_cost(ged_env.get_edge_label(1))
+ self.__edge_ins_cost = ged_env.get_edge_ins_cost(ged_env.get_edge_label(1))
+ self.__init_type = 'RANDOM'
+ self.__num_random_inits = 10
+ self.__desired_num_random_inits = 10
+ self.__use_real_randomness = True
+ self.__seed = 0
+ self.__parallel = True
+ self.__update_order = True
+ self.__sort_graphs = True # sort graphs by size when computing GEDs.
+ self.__refine = True
+ self.__time_limit_in_sec = 0
+ self.__epsilon = 0.0001
+ self.__max_itrs = 100
+ self.__max_itrs_without_update = 3
+ self.__num_inits_increase_order = 10
+ self.__init_type_increase_order = 'K-MEANS++'
+ self.__max_itrs_increase_order = 10
+ self.__print_to_stdout = 2
+ self.__median_id = np.inf # @todo: check
+ self.__node_maps_from_median = {}
+ self.__sum_of_distances = 0
+ self.__best_init_sum_of_distances = np.inf
+ self.__converged_sum_of_distances = np.inf
+ self.__runtime = None
+ self.__runtime_initialized = None
+ self.__runtime_converged = None
+ self.__itrs = [] # @todo: check: {} ?
+ self.__num_decrease_order = 0
+ self.__num_increase_order = 0
+ self.__num_converged_descents = 0
+ self.__state = AlgorithmState.TERMINATED
+ self.__label_names = {}
+
+ if ged_env is None:
+ raise Exception('The GED environment pointer passed to the constructor of MedianGraphEstimator is null.')
+ elif not ged_env.is_initialized():
+ raise Exception('The GED environment is uninitialized. Call gedlibpy.GEDEnv.init() before passing it to the constructor of MedianGraphEstimator.')
+
+
+ def set_options(self, options):
+ """Sets the options of the estimator.
+
+ Parameters
+ ----------
+ options : string
+ String that specifies with which options to run the estimator.
+ """
+ self.__set_default_options()
+ options_map = misc.options_string_to_options_map(options)
+ for opt_name, opt_val in options_map.items():
+ if opt_name == 'init-type':
+ self.__init_type = opt_val
+ if opt_val != 'MEDOID' and opt_val != 'RANDOM' and opt_val != 'MIN' and opt_val != 'MAX' and opt_val != 'MEAN':
+ raise Exception('Invalid argument ' + opt_val + ' for option init-type. Usage: options = "[--init-type RANDOM|MEDOID|EMPTY|MIN|MAX|MEAN] [...]"')
+ elif opt_name == 'random-inits':
+ try:
+ self.__num_random_inits = int(opt_val)
+ self.__desired_num_random_inits = self.__num_random_inits
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option random-inits. Usage: options = "[--random-inits ]"')
+
+ if self.__num_random_inits <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option random-inits. Usage: options = "[--random-inits ]"')
+
+ elif opt_name == 'randomness':
+ if opt_val == 'PSEUDO':
+ self.__use_real_randomness = False
+
+ elif opt_val == 'REAL':
+ self.__use_real_randomness = True
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option randomness. Usage: options = "[--randomness REAL|PSEUDO] [...]"')
+
+ elif opt_name == 'stdout':
+ if opt_val == '0':
+ self.__print_to_stdout = 0
+
+ elif opt_val == '1':
+ self.__print_to_stdout = 1
+
+ elif opt_val == '2':
+ self.__print_to_stdout = 2
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option stdout. Usage: options = "[--stdout 0|1|2] [...]"')
+
+ elif opt_name == 'parallel':
+ if opt_val == 'TRUE':
+ self.__parallel = True
+
+ elif opt_val == 'FALSE':
+ self.__parallel = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option parallel. Usage: options = "[--parallel TRUE|FALSE] [...]"')
+
+ elif opt_name == 'update-order':
+ if opt_val == 'TRUE':
+ self.__update_order = True
+
+ elif opt_val == 'FALSE':
+ self.__update_order = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option update-order. Usage: options = "[--update-order TRUE|FALSE] [...]"')
+
+ elif opt_name == 'sort-graphs':
+ if opt_val == 'TRUE':
+ self.__sort_graphs = True
+
+ elif opt_val == 'FALSE':
+ self.__sort_graphs = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option sort-graphs. Usage: options = "[--sort-graphs TRUE|FALSE] [...]"')
+
+ elif opt_name == 'refine':
+ if opt_val == 'TRUE':
+ self.__refine = True
+
+ elif opt_val == 'FALSE':
+ self.__refine = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option refine. Usage: options = "[--refine TRUE|FALSE] [...]"')
+
+ elif opt_name == 'time-limit':
+ try:
+ self.__time_limit_in_sec = float(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option time-limit. Usage: options = "[--time-limit ] [...]')
+
+ elif opt_name == 'max-itrs':
+ try:
+ self.__max_itrs = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs. Usage: options = "[--max-itrs ] [...]')
+
+ elif opt_name == 'max-itrs-without-update':
+ try:
+ self.__max_itrs_without_update = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs-without-update. Usage: options = "[--max-itrs-without-update ] [...]')
+
+ elif opt_name == 'seed':
+ try:
+ self.__seed = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option seed. Usage: options = "[--seed ] [...]')
+
+ elif opt_name == 'epsilon':
+ try:
+ self.__epsilon = float(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option epsilon. Usage: options = "[--epsilon ] [...]')
+
+ if self.__epsilon <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option epsilon. Usage: options = "[--epsilon ] [...]')
+
+ elif opt_name == 'inits-increase-order':
+ try:
+ self.__num_inits_increase_order = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option inits-increase-order. Usage: options = "[--inits-increase-order ]"')
+
+ if self.__num_inits_increase_order <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option inits-increase-order. Usage: options = "[--inits-increase-order ]"')
+
+ elif opt_name == 'init-type-increase-order':
+ self.__init_type_increase_order = opt_val
+ if opt_val != 'CLUSTERS' and opt_val != 'K-MEANS++':
+ raise Exception('Invalid argument ' + opt_val + ' for option init-type-increase-order. Usage: options = "[--init-type-increase-order CLUSTERS|K-MEANS++] [...]"')
+
+ elif opt_name == 'max-itrs-increase-order':
+ try:
+ self.__max_itrs_increase_order = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs-increase-order. Usage: options = "[--max-itrs-increase-order ] [...]')
+
+ else:
+ valid_options = '[--init-type ] [--random-inits ] [--randomness ] [--seed ] [--stdout ] '
+ valid_options += '[--time-limit ] [--max-itrs ] [--epsilon ] '
+ valid_options += '[--inits-increase-order ] [--init-type-increase-order ] [--max-itrs-increase-order ]'
+ raise Exception('Invalid option "' + opt_name + '". Usage: options = "' + valid_options + '"')
+
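+	# Example options string (values are illustrative; every option shown is
+	# parsed by the loop above, assuming the "--name value" format handled by
+	# misc.options_string_to_options_map):
+	#
+	#   mge.set_options('--init-type RANDOM --random-inits 5 --randomness PSEUDO '
+	#                   '--seed 42 --stdout 1 --parallel TRUE --refine FALSE '
+	#                   '--max-itrs 50 --time-limit 600')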
+
+ def set_init_method(self, init_method, init_options=''):
+ """Selects method to be used for computing the initial medoid graph.
+
+ Parameters
+ ----------
+ init_method : string
+ The selected method. Default: ged::Options::GEDMethod::BRANCH_UNIFORM.
+
+ init_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect unless "--init-type MEDOID" is passed to set_options().
+ """
+		self.__init_method = init_method
+		self.__init_options = init_options
+
+
+ def set_descent_method(self, descent_method, descent_options=''):
+		"""Selects method to be used for block gradient descent.
+
+ Parameters
+ ----------
+ descent_method : string
+ The selected method. Default: ged::Options::GEDMethod::BRANCH_FAST.
+
+ descent_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect unless "--init-type MEDOID" is passed to set_options().
+ """
+		self.__descent_method = descent_method
+		self.__descent_options = descent_options
+
+
+	def set_refine_method(self, refine_method, refine_options=''):
+ """Selects method to be used for improving the sum of distances and the node maps for the converged median.
+
+ Parameters
+ ----------
+ refine_method : string
+ The selected method. Default: "IPFP".
+
+ refine_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect if "--refine FALSE" is passed to set_options().
+ """
+ self.__refine_method = refine_method
+ self.__refine_options = refine_options
+
+
+ def run(self, graph_ids, set_median_id, gen_median_id):
+ """Computes a generalized median graph.
+
+ Parameters
+ ----------
+ graph_ids : list[integer]
+ The IDs of the graphs for which the median should be computed. Must have been added to the environment passed to the constructor.
+
+ set_median_id : integer
+ The ID of the computed set-median. A dummy graph with this ID must have been added to the environment passed to the constructor. Upon termination, the computed median can be obtained via gklearn.gedlib.gedlibpy.GEDEnv.get_graph().
+
+
+ gen_median_id : integer
+ The ID of the computed generalized median. Upon termination, the computed median can be obtained via gklearn.gedlib.gedlibpy.GEDEnv.get_graph().
+ """
+ # Sanity checks.
+ if len(graph_ids) == 0:
+ raise Exception('Empty vector of graph IDs, unable to compute median.')
+ all_graphs_empty = True
+ for graph_id in graph_ids:
+ if self.__ged_env.get_graph_num_nodes(graph_id) > 0:
+ all_graphs_empty = False
+ break
+ if all_graphs_empty:
+ raise Exception('All graphs in the collection are empty.')
+
+ # Start timer and record start time.
+ start = time.time()
+ timer = Timer(self.__time_limit_in_sec)
+ self.__median_id = gen_median_id
+ self.__state = AlgorithmState.TERMINATED
+
+ # Get NetworkX graph representations of the input graphs.
+ graphs = {}
+ for graph_id in graph_ids:
+ # @todo: get_nx_graph() function may need to be modified according to the coming code.
+ graphs[graph_id] = self.__ged_env.get_nx_graph(graph_id, True, True, False)
+# print(self.__ged_env.get_graph_internal_id(0))
+# print(graphs[0].graph)
+# print(graphs[0].nodes(data=True))
+# print(graphs[0].edges(data=True))
+# print(nx.adjacency_matrix(graphs[0]))
+
+ # Construct initial medians.
+ medians = []
+ self.__construct_initial_medians(graph_ids, timer, medians)
+ end_init = time.time()
+ self.__runtime_initialized = end_init - start
+# print(medians[0].graph)
+# print(medians[0].nodes(data=True))
+# print(medians[0].edges(data=True))
+# print(nx.adjacency_matrix(medians[0]))
+
+ # Reset information about iterations and number of times the median decreases and increases.
+ self.__itrs = [0] * len(medians)
+ self.__num_decrease_order = 0
+ self.__num_increase_order = 0
+ self.__num_converged_descents = 0
+
+ # Initialize the best median.
+ best_sum_of_distances = np.inf
+ self.__best_init_sum_of_distances = np.inf
+ node_maps_from_best_median = {}
+
+ # Run block gradient descent from all initial medians.
+ self.__ged_env.set_method(self.__descent_method, self.__descent_options)
+ for median_pos in range(0, len(medians)):
+
+ # Terminate if the timer has expired and at least one SOD has been computed.
+ if timer.expired() and median_pos > 0:
+ break
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Block gradient descent for initial median', str(median_pos + 1), 'of', str(len(medians)), '.')
+ print('-----------------------------------------------------------')
+
+ # Get reference to the median.
+ median = medians[median_pos]
+
+ # Load initial median into the environment.
+ self.__ged_env.load_nx_graph(median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+
+ # Compute node maps and sum of distances for initial median.
+# xxx = self.__node_maps_from_median
+ self.__compute_init_node_maps(graph_ids, gen_median_id)
+# yyy = self.__node_maps_from_median
+
+ self.__best_init_sum_of_distances = min(self.__best_init_sum_of_distances, self.__sum_of_distances)
+ self.__ged_env.load_nx_graph(median, set_median_id)
+# print(self.__best_init_sum_of_distances)
+
+ # Run block gradient descent from initial median.
+ converged = False
+ itrs_without_update = 0
+ while not self.__termination_criterion_met(converged, timer, self.__itrs[median_pos], itrs_without_update):
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Iteration', str(self.__itrs[median_pos] + 1), 'for initial median', str(median_pos + 1), 'of', str(len(medians)), '.')
+ print('-----------------------------------------------------------')
+
+ # Initialize flags that tell us what happened in the iteration.
+ median_modified = False
+ node_maps_modified = False
+ decreased_order = False
+ increased_order = False
+
+ # Update the median.
+ median_modified = self.__update_median(graphs, median)
+ if self.__update_order:
+ if not median_modified or self.__itrs[median_pos] == 0:
+ decreased_order = self.__decrease_order(graphs, median)
+ if not decreased_order or self.__itrs[median_pos] == 0:
+ increased_order = self.__increase_order(graphs, median)
+
+ # Update the number of iterations without update of the median.
+ if median_modified or decreased_order or increased_order:
+ itrs_without_update = 0
+ else:
+ itrs_without_update += 1
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Loading median to environment: ... ', end='')
+
+ # Load the median into the environment.
+ # @todo: should this function use the original node label?
+ self.__ged_env.load_nx_graph(median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Updating induced costs: ... ', end='')
+
+ # Compute induced costs of the old node maps w.r.t. the updated median.
+ for graph_id in graph_ids:
+# print(self.__node_maps_from_median[graph_id].induced_cost())
+# xxx = self.__node_maps_from_median[graph_id]
+ self.__ged_env.compute_induced_cost(gen_median_id, graph_id, self.__node_maps_from_median[graph_id])
+# print('---------------------------------------')
+# print(self.__node_maps_from_median[graph_id].induced_cost())
+ # @todo: !!! This value is slightly different from the one computed by the C++ program, which might be a bug! Use it very carefully!
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Update the node maps.
+ node_maps_modified = self.__update_node_maps()
+
+ # Update the order of the median if no improvement can be found with the current order.
+
+ # Update the sum of distances.
+ old_sum_of_distances = self.__sum_of_distances
+ self.__sum_of_distances = 0
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ self.__sum_of_distances += node_map.induced_cost()
+# print(self.__sum_of_distances)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Old local SOD: ', old_sum_of_distances)
+ print('New local SOD: ', self.__sum_of_distances)
+ print('Best converged SOD: ', best_sum_of_distances)
+ print('Modified median: ', median_modified)
+ print('Modified node maps: ', node_maps_modified)
+ print('Decreased order: ', decreased_order)
+ print('Increased order: ', increased_order)
+ print('===========================================================\n')
+
+ converged = not (median_modified or node_maps_modified or decreased_order or increased_order)
+
+ self.__itrs[median_pos] += 1
+
+ # Update the best median.
+ if self.__sum_of_distances < best_sum_of_distances:
+ best_sum_of_distances = self.__sum_of_distances
+ node_maps_from_best_median = self.__node_maps_from_median.copy() # @todo: this is a shallow copy, not sure if it is enough.
+ best_median = median
+
+ # Update the number of converged descents.
+ if converged:
+ self.__num_converged_descents += 1
+
+ # Store the best encountered median.
+ self.__sum_of_distances = best_sum_of_distances
+ self.__node_maps_from_median = node_maps_from_best_median
+ self.__ged_env.load_nx_graph(best_median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+ end_descent = time.time()
+ self.__runtime_converged = end_descent - start
+
+ # Refine the sum of distances and the node maps for the converged median.
+ self.__converged_sum_of_distances = self.__sum_of_distances
+ if self.__refine:
+ self.__improve_sum_of_distances(timer)
+
+ # Record end time, set runtime and reset the number of initial medians.
+ end = time.time()
+ self.__runtime = end - start
+ self.__num_random_inits = self.__desired_num_random_inits
+
+ # Print global information.
+ if self.__print_to_stdout != 0:
+ print('\n===========================================================')
+ print('Finished computation of generalized median graph.')
+ print('-----------------------------------------------------------')
+ print('Best SOD after initialization: ', self.__best_init_sum_of_distances)
+ print('Converged SOD: ', self.__converged_sum_of_distances)
+ if self.__refine:
+ print('Refined SOD: ', self.__sum_of_distances)
+ print('Overall runtime: ', self.__runtime)
+ print('Runtime of initialization: ', self.__runtime_initialized)
+ print('Runtime of block gradient descent: ', self.__runtime_converged - self.__runtime_initialized)
+ if self.__refine:
+ print('Runtime of refinement: ', self.__runtime - self.__runtime_converged)
+ print('Number of initial medians: ', len(medians))
+ total_itr = 0
+ num_started_descents = 0
+ for itr in self.__itrs:
+ total_itr += itr
+ if itr > 0:
+ num_started_descents += 1
+ print('Size of graph collection: ', len(graph_ids))
+ print('Number of started descents: ', num_started_descents)
+ print('Number of converged descents: ', self.__num_converged_descents)
+ print('Overall number of iterations: ', total_itr)
+ print('Overall number of times the order decreased: ', self.__num_decrease_order)
+ print('Overall number of times the order increased: ', self.__num_increase_order)
+ print('===========================================================\n')
+
+
+ def __improve_sum_of_distances(self, timer): # @todo: go through and test
+ # Use method selected for refinement phase.
+ self.__ged_env.set_method(self.__refine_method, self.__refine_options)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Improving node maps', total=len(self.__node_maps_from_median), file=sys.stdout)
+ print('\n===========================================================')
+ print('Improving node maps and SOD for converged median.')
+ print('-----------------------------------------------------------')
+ progress.update(1)
+
+ # Improving the node maps.
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__gen_median_id)
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ if timer.expired():
+ if self.__state == AlgorithmState.TERMINATED:
+ self.__state = AlgorithmState.CONVERGED
+ break
+
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(self.__gen_median_id, graph_id)
+ if self.__ged_env.get_upper_bound(self.__gen_median_id, graph_id) < node_map.induced_cost():
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(self.__gen_median_id, graph_id)
+ else:
+ self.__ged_env.run_method(graph_id, self.__gen_median_id)
+ if self.__ged_env.get_upper_bound(graph_id, self.__gen_median_id) < node_map.induced_cost():
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, self.__gen_median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+
+ self.__sum_of_distances += self.__node_maps_from_median[graph_id].induced_cost()
+
+ # Print information.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ self.__sum_of_distances = 0.0
+ for key, val in self.__node_maps_from_median.items():
+ self.__sum_of_distances += val.induced_cost()
+
+ # Print information.
+ if self.__print_to_stdout == 2:
+ print('===========================================================\n')
+
+
+ def __median_available(self):
+ return self.__median_id != np.inf
+
+
+ def get_state(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_state().')
+ return self.__state
+
+
+ def get_sum_of_distances(self, state=''):
+ """Returns the sum of distances.
+
+ Parameters
+ ----------
+ state : string
+ The state of the estimator. Can be 'initialized' or 'converged'. Default: ""
+
+ Returns
+ -------
+ float
+ The sum of distances (SOD) of the median when the estimator was in the state `state` during the last call to run(). If `state` is not given, the converged SOD (without refinement) or refined SOD (with refinement) is returned.
+ """
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_sum_of_distances().')
+ if state == 'initialized':
+ return self.__best_init_sum_of_distances
+ if state == 'converged':
+ return self.__converged_sum_of_distances
+ return self.__sum_of_distances
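+
+ # A minimal usage sketch (hypothetical variable name `mge`), assuming run() has already
+ # terminated on this estimator:
+ #   sod_init = mge.get_sum_of_distances('initialized')   # best SOD after initialization
+ #   sod_conv = mge.get_sum_of_distances('converged')     # SOD of the converged median
+ #   sod_final = mge.get_sum_of_distances()               # refined SOD if refinement is enabled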
+
+
+ def get_runtime(self, state):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_runtime().')
+ if state == AlgorithmState.INITIALIZED:
+ return self.__runtime_initialized
+ if state == AlgorithmState.CONVERGED:
+ return self.__runtime_converged
+ return self.__runtime
+
+
+ def get_num_itrs(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_itrs().')
+ return self.__itrs
+
+
+ def get_num_times_order_decreased(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_times_order_decreased().')
+ return self.__num_decrease_order
+
+
+ def get_num_times_order_increased(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_times_order_increased().')
+ return self.__num_increase_order
+
+
+ def get_num_converged_descents(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_converged_descents().')
+ return self.__num_converged_descents
+
+
+ def get_ged_env(self):
+ return self.__ged_env
+
+
+ def __set_default_options(self):
+ self.__init_type = 'RANDOM'
+ self.__num_random_inits = 10
+ self.__desired_num_random_inits = 10
+ self.__use_real_randomness = True
+ self.__seed = 0
+ self.__parallel = True
+ self.__update_order = True
+ self.__sort_graphs = True
+ self.__refine = True
+ self.__time_limit_in_sec = 0
+ self.__epsilon = 0.0001
+ self.__max_itrs = 100
+ self.__max_itrs_without_update = 3
+ self.__num_inits_increase_order = 10
+ self.__init_type_increase_order = 'K-MEANS++'
+ self.__max_itrs_increase_order = 10
+ self.__print_to_stdout = 2
+ self.__label_names = {}
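+
+ # Sketch of an options string matching some of these defaults (option names as parsed
+ # by set_options(); not an exhaustive list):
+ #   estimator.set_options('--init-type RANDOM --random-inits 10 --refine TRUE --stdout 2')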
+
+
+ def __construct_initial_medians(self, graph_ids, timer, initial_medians):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Constructing initial median(s).')
+ print('-----------------------------------------------------------')
+
+ # Compute or sample the initial median(s).
+ initial_medians.clear()
+ if self.__init_type == 'MEDOID':
+ self.__compute_medoid(graph_ids, timer, initial_medians)
+ elif self.__init_type == 'MAX':
+ pass # @todo
+# compute_max_order_graph_(graph_ids, initial_medians)
+ elif self.__init_type == 'MIN':
+ pass # @todo
+# compute_min_order_graph_(graph_ids, initial_medians)
+ elif self.__init_type == 'MEAN':
+ pass # @todo
+# compute_mean_order_graph_(graph_ids, initial_medians)
+ else:
+ pass # @todo
+# sample_initial_medians_(graph_ids, initial_medians)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('===========================================================')
+
+
+ def __compute_medoid(self, graph_ids, timer, initial_medians):
+ # Use method selected for initialization phase.
+ self.__ged_env.set_method(self.__init_method, self.__init_options)
+
+ # Compute the medoid.
+ if self.__parallel:
+ # @todo: notice when parallel self.__ged_env is not modified.
+ sum_of_distances_list = [np.inf] * len(graph_ids)
+ len_itr = len(graph_ids)
+ itr = zip(graph_ids, range(0, len(graph_ids)))
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
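+ # The pool initializer below shares the GED environment with the worker processes
+ # through a module-level global (presumably to avoid re-serializing it for every task).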
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ do_fun = partial(_compute_medoid_parallel, graph_ids, self.__sort_graphs)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Computing medoid', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for i, dis in iterator:
+ sum_of_distances_list[i] = dis
+ pool.close()
+ pool.join()
+
+ medoid_id = np.argmin(sum_of_distances_list)
+ best_sum_of_distances = sum_of_distances_list[medoid_id]
+
+ initial_medians.append(self.__ged_env.get_nx_graph(medoid_id, True, True, False)) # @todo
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Computing medoid', total=len(graph_ids), file=sys.stdout)
+
+ medoid_id = graph_ids[0]
+ best_sum_of_distances = np.inf
+ for g_id in graph_ids:
+ if timer.expired():
+ self.__state = AlgorithmState.CALLED
+ break
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(g_id)
+ sum_of_distances = 0
+ for h_id in graph_ids:
+ nb_nodes_h = self.__ged_env.get_graph_num_nodes(h_id)
+ if nb_nodes_g <= nb_nodes_h or not self.__sort_graphs:
+ self.__ged_env.run_method(g_id, h_id)
+ sum_of_distances += self.__ged_env.get_upper_bound(g_id, h_id)
+ else:
+ self.__ged_env.run_method(h_id, g_id)
+ sum_of_distances += self.__ged_env.get_upper_bound(h_id, g_id)
+ if sum_of_distances < best_sum_of_distances:
+ best_sum_of_distances = sum_of_distances
+ medoid_id = g_id
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ initial_medians.append(self.__ged_env.get_nx_graph(medoid_id, True, True, False)) # @todo
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+
+ def __compute_init_node_maps(self, graph_ids, gen_median_id):
+ # Compute node maps and sum of distances for initial median.
+ if self.__parallel:
+ # @todo: notice when parallel self.__ged_env is not modified.
+ self.__sum_of_distances = 0
+ self.__node_maps_from_median.clear()
+ sum_of_distances_list = [0] * len(graph_ids)
+
+ len_itr = len(graph_ids)
+ itr = graph_ids
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(gen_median_id)
+ do_fun = partial(_compute_init_node_maps_parallel, gen_median_id, self.__sort_graphs, nb_nodes_median)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Computing initial node maps', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for g_id, sod, node_maps in iterator:
+ sum_of_distances_list[g_id] = sod
+ self.__node_maps_from_median[g_id] = node_maps
+ pool.close()
+ pool.join()
+
+ self.__sum_of_distances = np.sum(sum_of_distances_list)
+# xxx = self.__node_maps_from_median
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Computing initial node maps', total=len(graph_ids), file=sys.stdout)
+
+ self.__sum_of_distances = 0
+ self.__node_maps_from_median.clear()
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(gen_median_id)
+ for graph_id in graph_ids:
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(gen_median_id, graph_id)
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(gen_median_id, graph_id)
+ else:
+ self.__ged_env.run_method(graph_id, gen_median_id)
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, gen_median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+ # print(self.__node_maps_from_median[graph_id])
+ self.__sum_of_distances += self.__node_maps_from_median[graph_id].induced_cost()
+ # print(self.__sum_of_distances)
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+
+ def __termination_criterion_met(self, converged, timer, itr, itrs_without_update):
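+ # The descent stops when the time limit has expired, the maximum number of iterations
+ # has been reached (a negative max-itrs disables this check), the iteration converged,
+ # or too many iterations passed without an update of the median (a negative
+ # max-itrs-without-update disables this check).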
+ if timer.expired() or (itr >= self.__max_itrs if self.__max_itrs >= 0 else False):
+ if self.__state == AlgorithmState.TERMINATED:
+ self.__state = AlgorithmState.INITIALIZED
+ return True
+ return converged or (itrs_without_update > self.__max_itrs_without_update if self.__max_itrs_without_update >= 0 else False)
+
+
+ def __update_median(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Updating median: ', end='')
+
+ # Store copy of the old median.
+ old_median = median.copy() # @todo: this is just a shallow copy.
+
+ # Update the node labels.
+ if self.__labeled_nodes:
+ self.__update_node_labels(graphs, median)
+
+ # Update the edges and their labels.
+ self.__update_edges(graphs, median)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ return not self.__are_graphs_equal(median, old_median)
+
+
+ def __update_node_labels(self, graphs, median):
+# print('----------------------------')
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('nodes ... ', end='')
+
+ # Iterate through all nodes of the median.
+ for i in range(0, nx.number_of_nodes(median)):
+# print('i: ', i)
+ # Collect the labels of the substituted nodes.
+ node_labels = []
+ for graph_id, graph in graphs.items():
+# print('graph_id: ', graph_id)
+# print(self.__node_maps_from_median[graph_id])
+# print(self.__node_maps_from_median[graph_id].forward_map, self.__node_maps_from_median[graph_id].backward_map)
+ k = self.__node_maps_from_median[graph_id].image(i)
+# print('k: ', k)
+ if k != np.inf:
+ node_labels.append(graph.nodes[k])
+
+ # Compute the median label and update the median.
+ if len(node_labels) > 0:
+# median_label = self.__ged_env.get_median_node_label(node_labels)
+ median_label = self.__get_median_node_label(node_labels)
+ if self.__ged_env.get_node_rel_cost(median.nodes[i], median_label) > self.__epsilon:
+ nx.set_node_attributes(median, {i: median_label})
+
+
+ def __update_edges(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('edges ... ', end='')
+
+# # Clear the adjacency lists of the median and reset number of edges to 0.
+# median_edges = list(median.edges)
+# for (head, tail) in median_edges:
+# median.remove_edge(head, tail)
+
+ # @todo: what if edge is not labeled?
+ # Iterate through all possible edges (i,j) of the median.
+ for i in range(0, nx.number_of_nodes(median)):
+ for j in range(i + 1, nx.number_of_nodes(median)):
+
+ # Collect the labels of the edges to which (i,j) is mapped by the node maps.
+ edge_labels = []
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ l = self.__node_maps_from_median[graph_id].image(j)
+ if k != np.inf and l != np.inf:
+ if graph.has_edge(k, l):
+ edge_labels.append(graph.edges[(k, l)])
+
+ # Compute the median edge label and the overall edge relabeling cost.
+ rel_cost = 0
+ median_label = self.__ged_env.get_edge_label(1)
+ if median.has_edge(i, j):
+ median_label = median.edges[(i, j)]
+ if self.__labeled_edges and len(edge_labels) > 0:
+ new_median_label = self.__get_median_edge_label(edge_labels)
+ if self.__ged_env.get_edge_rel_cost(median_label, new_median_label) > self.__epsilon:
+ median_label = new_median_label
+ for edge_label in edge_labels:
+ rel_cost += self.__ged_env.get_edge_rel_cost(median_label, edge_label)
+
+ # Update the median.
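+ # The rule below keeps edge (i,j) in the median only if its total relabeling cost is
+ # small compared to the edge insertion/deletion costs, i.e. if
+ #   rel_cost < (c_edge_ins + c_edge_del) * |edge_labels| - c_edge_del * |graphs|.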
+ if median.has_edge(i, j):
+ median.remove_edge(i, j)
+ if rel_cost < (self.__edge_ins_cost + self.__edge_del_cost) * len(edge_labels) - self.__edge_del_cost * len(graphs):
+ median.add_edge(i, j, **median_label)
+# else:
+# if median.has_edge(i, j):
+# median.remove_edge(i, j)
+
+
+ def __update_node_maps(self):
+ # Update the node maps.
+ if self.__parallel:
+ # @todo: notice when parallel self.__ged_env is not modified.
+ node_maps_were_modified = False
+# xxx = self.__node_maps_from_median.copy()
+
+ len_itr = len(self.__node_maps_from_median)
+ itr = [item for item in self.__node_maps_from_median.items()]
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ do_fun = partial(_update_node_maps_parallel, self.__median_id, self.__epsilon, self.__sort_graphs, nb_nodes_median)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Updating node maps', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for g_id, node_map, nm_modified in iterator:
+ self.__node_maps_from_median[g_id] = node_map
+ if nm_modified:
+ node_maps_were_modified = True
+ pool.close()
+ pool.join()
+# yyy = self.__node_maps_from_median.copy()
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Updating node maps', total=len(self.__node_maps_from_median), file=sys.stdout)
+
+ node_maps_were_modified = False
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(self.__median_id, graph_id)
+ if self.__ged_env.get_upper_bound(self.__median_id, graph_id) < node_map.induced_cost() - self.__epsilon:
+ # xxx = self.__node_maps_from_median[graph_id]
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(self.__median_id, graph_id)
+ node_maps_were_modified = True
+
+ else:
+ self.__ged_env.run_method(graph_id, self.__median_id)
+ if self.__ged_env.get_upper_bound(graph_id, self.__median_id) < node_map.induced_cost() - self.__epsilon:
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, self.__median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+ node_maps_were_modified = True
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+ # Return true if the node maps were modified.
+ return node_maps_were_modified
+
+
+ def __decrease_order(self, graphs, median):
+ # Print information about current iteration
+ if self.__print_to_stdout == 2:
+ print('Trying to decrease order: ... ', end='')
+
+ if nx.number_of_nodes(median) <= 1:
+ if self.__print_to_stdout == 2:
+ print('median graph has only 1 node; skipping decrease.')
+ return False
+
+ # Initialize ID of the node that is to be deleted.
+ id_deleted_node = [None] # @todo: or np.inf
+ decreased_order = False
+
+ # Decrease the order as long as the best deletion delta is negative.
+ while self.__compute_best_deletion_delta(graphs, median, id_deleted_node) < -self.__epsilon:
+ decreased_order = True
+ self.__delete_node_from_median(id_deleted_node[0], median)
+ if nx.number_of_nodes(median) <= 1:
+ if self.__print_to_stdout == 2:
+ print('decrease stopped because the median graph has only 1 node left. ', end='')
+ break
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Return true iff the order was decreased.
+ return decreased_order
+
+
+ def __compute_best_deletion_delta(self, graphs, median, id_deleted_node):
+ best_delta = 0.0
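+ # A negative delta means that deleting node i from the median would decrease the sum
+ # of distances; the loop below accumulates the node and edge cost changes implied by
+ # the current node maps.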
+
+ # Determine node that should be deleted (if any).
+ for i in range(0, nx.number_of_nodes(median)):
+ # Compute cost delta.
+ delta = 0.0
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ if k == np.inf:
+ delta -= self.__node_del_cost
+ else:
+ delta += self.__node_ins_cost - self.__ged_env.get_node_rel_cost(median.nodes[i], graph.nodes[k])
+ for j, j_label in median[i].items():
+ l = self.__node_maps_from_median[graph_id].image(j)
+ if k == np.inf or l == np.inf:
+ delta -= self.__edge_del_cost
+ elif not graph.has_edge(k, l):
+ delta -= self.__edge_del_cost
+ else:
+ delta += self.__edge_ins_cost - self.__ged_env.get_edge_rel_cost(j_label, graph.edges[(k, l)])
+
+ # Update best deletion delta.
+ if delta < best_delta - self.__epsilon:
+ best_delta = delta
+ id_deleted_node[0] = i
+# id_deleted_node[0] = 3 # @todo:
+
+ return best_delta
+
+
+ def __delete_node_from_median(self, id_deleted_node, median):
+ # Update the median.
+ mapping = {}
+ for i in range(0, nx.number_of_nodes(median)):
+ if i != id_deleted_node:
+ new_i = (i if i < id_deleted_node else (i - 1))
+ mapping[i] = new_i
+ median.remove_node(id_deleted_node)
+ nx.relabel_nodes(median, mapping, copy=False)
+
+ # Update the node maps.
+# xxx = self.__node_maps_from_median
+ for key, node_map in self.__node_maps_from_median.items():
+ new_node_map = NodeMap(nx.number_of_nodes(median), node_map.num_target_nodes())
+ is_unassigned_target_node = [True] * node_map.num_target_nodes()
+ for i in range(0, nx.number_of_nodes(median) + 1):
+ if i != id_deleted_node:
+ new_i = (i if i < id_deleted_node else (i - 1))
+ k = node_map.image(i)
+ new_node_map.add_assignment(new_i, k)
+ if k != np.inf:
+ is_unassigned_target_node[k] = False
+ for k in range(0, node_map.num_target_nodes()):
+ if is_unassigned_target_node[k]:
+ new_node_map.add_assignment(np.inf, k)
+# print(self.__node_maps_from_median[key].forward_map, self.__node_maps_from_median[key].backward_map)
+# print(new_node_map.forward_map, new_node_map.backward_map)
+ self.__node_maps_from_median[key] = new_node_map
+
+ # Increase overall number of decreases.
+ self.__num_decrease_order += 1
+
+
+ def __increase_order(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Trying to increase order: ... ', end='')
+
+ # Initialize the best configuration and the best label of the node that is to be inserted.
+ best_config = {}
+ best_label = self.__ged_env.get_node_label(1)
+ increased_order = False
+
+ # Increase the order as long as the best insertion delta is negative.
+ while self.__compute_best_insertion_delta(graphs, best_config, best_label) < - self.__epsilon:
+ increased_order = True
+ self.__add_node_to_median(best_config, best_label, median)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Return true iff the order was increased.
+ return increased_order
+
+
+ def __compute_best_insertion_delta(self, graphs, best_config, best_label):
+ # Construct sets of inserted nodes.
+ no_inserted_node = True
+ inserted_nodes = {}
+ for graph_id, graph in graphs.items():
+ inserted_nodes[graph_id] = []
+ best_config[graph_id] = np.inf
+ for k in range(nx.number_of_nodes(graph)):
+ if self.__node_maps_from_median[graph_id].pre_image(k) == np.inf:
+ no_inserted_node = False
+ inserted_nodes[graph_id].append((k, tuple(item for item in graph.nodes[k].items()))) # @todo: can the order of label names be guaranteed?
+
+ # Return 0.0 if no node is inserted in any of the graphs.
+ if no_inserted_node:
+ return 0.0
+
+ # Compute insertion configuration, label, and delta.
+ best_delta = 0.0 # @todo
+ if len(self.__label_names['node_labels']) == 0 and len(self.__label_names['node_attrs']) == 0: # @todo
+ best_delta = self.__compute_insertion_delta_unlabeled(inserted_nodes, best_config, best_label)
+ elif len(self.__label_names['node_labels']) > 0: # self.__constant_node_costs:
+ best_delta = self.__compute_insertion_delta_constant(inserted_nodes, best_config, best_label)
+ else:
+ best_delta = self.__compute_insertion_delta_generic(inserted_nodes, best_config, best_label)
+
+ # Return the best delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_unlabeled(self, inserted_nodes, best_config, best_label): # @todo: go through and test.
+ # Construct the best configuration and compute its insertion delta.
+ best_delta = 0.0
+ best_config.clear()
+ for graph_id, node_set in inserted_nodes.items():
+ if len(node_set) == 0:
+ best_config[graph_id] = np.inf
+ best_delta += self.__node_del_cost
+ else:
+ best_config[graph_id] = node_set[0][0]
+ best_delta -= self.__node_ins_cost
+
+ # Return the best insertion delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_constant(self, inserted_nodes, best_config, best_label):
+ # Construct histogram and inverse label maps.
+ hist = {}
+ inverse_label_maps = {}
+ for graph_id, node_set in inserted_nodes.items():
+ inverse_label_maps[graph_id] = {}
+ for node in node_set:
+ k = node[0]
+ label = node[1]
+ if label not in inverse_label_maps[graph_id]:
+ inverse_label_maps[graph_id][label] = k
+ if label not in hist:
+ hist[label] = 1
+ else:
+ hist[label] += 1
+
+ # Determine the best label.
+ best_count = 0
+ for key, val in hist.items():
+ if val > best_count:
+ best_count = val
+ best_label_tuple = key
+
+ # get best label.
+ best_label.clear()
+ for key, val in best_label_tuple:
+ best_label[key] = val
+
+ # Construct the best configuration and compute its insertion delta.
+ best_config.clear()
+ best_delta = 0.0
+ node_rel_cost = self.__ged_env.get_node_rel_cost(self.__ged_env.get_node_label(1), self.__ged_env.get_node_label(2))
+ triangle_ineq_holds = (node_rel_cost <= self.__node_del_cost + self.__node_ins_cost)
+ for graph_id, _ in inserted_nodes.items():
+ if best_label_tuple in inverse_label_maps[graph_id]:
+ best_config[graph_id] = inverse_label_maps[graph_id][best_label_tuple]
+ best_delta -= self.__node_ins_cost
+ elif triangle_ineq_holds and not len(inserted_nodes[graph_id]) == 0:
+ best_config[graph_id] = inserted_nodes[graph_id][0][0]
+ best_delta += node_rel_cost - self.__node_ins_cost
+ else:
+ best_config[graph_id] = np.inf
+ best_delta += self.__node_del_cost
+
+ # Return the best insertion delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_generic(self, inserted_nodes, best_config, best_label):
+ # Collect all node labels of inserted nodes.
+ node_labels = []
+ for _, node_set in inserted_nodes.items():
+ for node in node_set:
+ node_labels.append(node[1])
+
+ # Compute node label medians that serve as initial solutions for block gradient descent.
+ initial_node_labels = []
+ self.__compute_initial_node_labels(node_labels, initial_node_labels)
+
+ # Determine best insertion configuration, label, and delta via parallel block gradient descent from all initial node labels.
+ best_delta = 0.0
+ for node_label in initial_node_labels:
+ # Construct local configuration.
+ config = {}
+ for graph_id, _ in inserted_nodes.items():
+ config[graph_id] = tuple((np.inf, tuple(item for item in self.__ged_env.get_node_label(1).items())))
+
+ # Run block gradient descent.
+ converged = False
+ itr = 0
+ while not self.__insertion_termination_criterion_met(converged, itr):
+ converged = not self.__update_config(node_label, inserted_nodes, config, node_labels)
+ node_label_dict = dict(node_label)
+ converged = converged and (not self.__update_node_label([dict(item) for item in node_labels], node_label_dict)) # @todo: the dict is tupled again in the function, can be better.
+ node_label = tuple(item for item in node_label_dict.items()) # @todo: watch out: initial_node_labels[i] is not modified here.
+
+ itr += 1
+
+ # Compute insertion delta of converged solution.
+ delta = 0.0
+ for _, node in config.items():
+ if node[0] == np.inf:
+ delta += self.__node_del_cost
+ else:
+ delta += self.__ged_env.get_node_rel_cost(dict(node_label), dict(node[1])) - self.__node_ins_cost
+
+ # Update best delta and global configuration if improvement has been found.
+ if delta < best_delta - self.__epsilon:
+ best_delta = delta
+ best_label.clear()
+ for key, val in node_label:
+ best_label[key] = val
+ best_config.clear()
+ for graph_id, val in config.items():
+ best_config[graph_id] = val[0]
+
+ # Return the best delta.
+ return best_delta
+
+
+ def __compute_initial_node_labels(self, node_labels, median_labels):
+ median_labels.clear()
+ if self.__use_real_randomness: # @todo: may not work if parallelized.
+ rng = np.random.randint(0, high=2**32 - 1, size=1)
+ urng = np.random.RandomState(seed=rng[0])
+ else:
+ urng = np.random.RandomState(seed=self.__seed)
+
+ # Generate the initial node label medians.
+ if self.__init_type_increase_order == 'K-MEANS++':
+ # Use k-means++ heuristic to generate the initial node label medians.
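+ # k-means++ sketch: after a uniformly random first seed, each further seed is drawn
+ # with probability proportional to its distance (node relabeling cost) to the closest
+ # already selected median label.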
+ already_selected = [False] * len(node_labels)
+ selected_label_id = urng.randint(low=0, high=len(node_labels), size=1)[0] # c++ test: 23
+ median_labels.append(node_labels[selected_label_id])
+ already_selected[selected_label_id] = True
+# xxx = [41, 0, 18, 9, 6, 14, 21, 25, 33] for c++ test
+# iii = 0 for c++ test
+ while len(median_labels) < self.__num_inits_increase_order:
+ weights = [np.inf] * len(node_labels)
+ for label_id in range(0, len(node_labels)):
+ if already_selected[label_id]:
+ weights[label_id] = 0
+ continue
+ for label in median_labels:
+ weights[label_id] = min(weights[label_id], self.__ged_env.get_node_rel_cost(dict(label), dict(node_labels[label_id])))
+
+ # get non-zero weights.
+ weights_p, idx_p = [], []
+ for i, w in enumerate(weights):
+ if w != 0:
+ weights_p.append(w)
+ idx_p.append(i)
+ if len(weights_p) > 0:
+ p = np.array(weights_p) / np.sum(weights_p)
+ selected_label_id = urng.choice(range(0, len(weights_p)), size=1, p=p)[0] # for c++ test: xxx[iii]
+ selected_label_id = idx_p[selected_label_id]
+# iii += 1 for c++ test
+ median_labels.append(node_labels[selected_label_id])
+ already_selected[selected_label_id] = True
+ else: # skip the loop when all node_labels are selected. This happens when len(node_labels) <= self.__num_inits_increase_order.
+ break
+ else:
+ # Compute the initial node medians as the medians of randomly generated clusters of (roughly) equal size.
+ # @todo: go through and test.
+ shuffled_node_labels = [label for label in node_labels] # copy the labels before shuffling.
+ urng.shuffle(shuffled_node_labels) # Python counterpart of std::shuffle in the C++ version.
+ cluster_size = len(node_labels) / self.__num_inits_increase_order
+ pos = 0 # integer list index; compared against the (possibly fractional) cluster boundaries.
+ cluster = []
+ while len(median_labels) < self.__num_inits_increase_order - 1:
+ while pos < (len(median_labels) + 1) * cluster_size:
+ cluster.append(shuffled_node_labels[pos])
+ pos += 1
+ median_labels.append(self.__get_median_node_label(cluster))
+ cluster.clear()
+ while pos < len(shuffled_node_labels):
+ cluster.append(shuffled_node_labels[pos])
+ pos += 1
+ median_labels.append(self.__get_median_node_label(cluster))
+ cluster.clear()
+
+ # Run Lloyd's Algorithm.
+ converged = False
+ closest_median_ids = [np.inf] * len(node_labels)
+ clusters = [[] for _ in range(len(median_labels))]
+ itr = 1
+ while not self.__insertion_termination_criterion_met(converged, itr):
+ converged = not self.__update_clusters(node_labels, median_labels, closest_median_ids)
+ if not converged:
+ for cluster in clusters:
+ cluster.clear()
+ for label_id in range(0, len(node_labels)):
+ clusters[closest_median_ids[label_id]].append(node_labels[label_id])
+ for cluster_id in range(0, len(clusters)):
+ node_label = dict(median_labels[cluster_id])
+ self.__update_node_label([dict(item) for item in clusters[cluster_id]], node_label) # @todo: the dict is tupled again in the function, can be better.
+ median_labels[cluster_id] = tuple(item for item in node_label.items())
+ itr += 1
+
+
+ def __insertion_termination_criterion_met(self, converged, itr):
+ return converged or (itr >= self.__max_itrs_increase_order if self.__max_itrs_increase_order > 0 else False)
+
+
+ def __update_config(self, node_label, inserted_nodes, config, node_labels):
+ # Determine the best configuration.
+ config_modified = False
+ for graph_id, node_set in inserted_nodes.items():
+ best_assignment = config[graph_id]
+ best_cost = 0.0
+ if best_assignment[0] == np.inf:
+ best_cost = self.__node_del_cost
+ else:
+ best_cost = self.__ged_env.get_node_rel_cost(dict(node_label), dict(best_assignment[1])) - self.__node_ins_cost
+ for node in node_set:
+ cost = self.__ged_env.get_node_rel_cost(dict(node_label), dict(node[1])) - self.__node_ins_cost
+ if cost < best_cost - self.__epsilon:
+ best_cost = cost
+ best_assignment = node
+ config_modified = True
+ if self.__node_del_cost < best_cost - self.__epsilon:
+ best_cost = self.__node_del_cost
+ best_assignment = tuple((np.inf, best_assignment[1]))
+ config_modified = True
+ config[graph_id] = best_assignment
+
+ # Collect the node labels contained in the best configuration.
+ node_labels.clear()
+ for key, val in config.items():
+ if val[0] != np.inf:
+ node_labels.append(val[1])
+
+ # Return true if the configuration was modified.
+ return config_modified
+
+
+ def __update_node_label(self, node_labels, node_label):
+ if len(node_labels) == 0: # @todo: check if this is the correct solution. Especially after calling __update_config().
+ return False
+ new_node_label = self.__get_median_node_label(node_labels)
+ if self.__ged_env.get_node_rel_cost(new_node_label, node_label) > self.__epsilon:
+ node_label.clear()
+ for key, val in new_node_label.items():
+ node_label[key] = val
+ return True
+ return False
+
+
+ def __update_clusters(self, node_labels, median_labels, closest_median_ids):
+ # Determine the closest median for each node label.
+ clusters_modified = False
+ for label_id in range(0, len(node_labels)):
+ closest_median_id = np.inf
+ dist_to_closest_median = np.inf
+ for median_id in range(0, len(median_labels)):
+ dist_to_median = self.__ged_env.get_node_rel_cost(dict(median_labels[median_id]), dict(node_labels[label_id]))
+ if dist_to_median < dist_to_closest_median - self.__epsilon:
+ dist_to_closest_median = dist_to_median
+ closest_median_id = median_id
+ if closest_median_id != closest_median_ids[label_id]:
+ closest_median_ids[label_id] = closest_median_id
+ clusters_modified = True
+
+ # Return true if the clusters were modified.
+ return clusters_modified
+
+
+ def __add_node_to_median(self, best_config, best_label, median):
+ # Update the median.
+ nb_nodes_median = nx.number_of_nodes(median)
+ median.add_node(nb_nodes_median, **best_label)
+
+ # Update the node maps.
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ node_map_as_rel = []
+ node_map.as_relation(node_map_as_rel)
+ new_node_map = NodeMap(nx.number_of_nodes(median), node_map.num_target_nodes())
+ for assignment in node_map_as_rel:
+ new_node_map.add_assignment(assignment[0], assignment[1])
+ new_node_map.add_assignment(nx.number_of_nodes(median) - 1, best_config[graph_id])
+ self.__node_maps_from_median[graph_id] = new_node_map
+
+ # Increase overall number of increases.
+ self.__num_increase_order += 1
+
+
+ def __are_graphs_equal(self, g1, g2):
+ """
+ Check if the two graphs are equal.
+
+ Parameters
+ ----------
+ g1 : NetworkX graph object
+ Graph 1 to be compared.
+
+ g2 : NetworkX graph object
+ Graph 2 to be compared.
+
+ Returns
+ -------
+ bool
+ True if the two graphs are equal.
+
+ Notes
+ -----
+ This is not an identical check. Here the two graphs are equal if and only if their original_node_ids, nodes, all node labels, edges and all edge labels are equal. This function is specifically designed for class `MedianGraphEstimator` and should not be used elsewhere.
+ """
+ # check original node ids.
+ if not g1.graph['original_node_ids'] == g2.graph['original_node_ids']:
+ return False
+ # check nodes.
+ nlist1 = [n for n in g1.nodes(data=True)]
+ nlist2 = [n for n in g2.nodes(data=True)]
+ if not nlist1 == nlist2:
+ return False
+ # check edges.
+ elist1 = [n for n in g1.edges(data=True)]
+ elist2 = [n for n in g2.edges(data=True)]
+ if not elist1 == elist2:
+ return False
+
+ return True
+
+
+ def compute_my_cost(self, g, h, node_map): # @todo: placeholder; the actual cost computation is not implemented yet.
+ cost = 0.0
+ for node in g.nodes:
+ cost += 0
+ return cost
+
+
+ def set_label_names(self, node_labels=[], edge_labels=[], node_attrs=[], edge_attrs=[]):
+ self.__label_names = {'node_labels': node_labels, 'edge_labels': edge_labels,
+ 'node_attrs': node_attrs, 'edge_attrs': edge_attrs}
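+
+ # A minimal sketch (hypothetical label names; use the names actually attached to your graphs):
+ #   estimator.set_label_names(node_labels=['chem'], edge_labels=['valence'])
+ #   estimator.set_label_names(node_attrs=['x', 'y'])  # for non-symbolic (numeric) node attributes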
+
+
+ def __get_median_node_label(self, node_labels):
+ if len(self.__label_names['node_labels']) > 0:
+ return self.__get_median_label_symbolic(node_labels)
+ elif len(self.__label_names['node_attrs']) > 0:
+ return self.__get_median_label_nonsymbolic(node_labels)
+ else:
+ raise Exception('Node label names are not given.')
+
+
+ def __get_median_edge_label(self, edge_labels):
+ if len(self.__label_names['edge_labels']) > 0:
+ return self.__get_median_label_symbolic(edge_labels)
+ elif len(self.__label_names['edge_attrs']) > 0:
+ return self.__get_median_label_nonsymbolic(edge_labels)
+ else:
+ raise Exception('Edge label names are not given.')
+
+
+ def __get_median_label_symbolic(self, labels):
+ # Construct histogram.
+ hist = {}
+ for label in labels:
+ label = tuple([kv for kv in label.items()]) # @todo: this may be slow.
+ if label not in hist:
+ hist[label] = 1
+ else:
+ hist[label] += 1
+
+ # Return the label that appears most frequently.
+ best_count = 0
+ median_label = {}
+ for label, count in hist.items():
+ if count > best_count:
+ best_count = count
+ median_label = {kv[0]: kv[1] for kv in label}
+
+ return median_label
+
+
+ def __get_median_label_nonsymbolic(self, labels):
+ if len(labels) == 0:
+ return {} # @todo
+ else:
+ # Transform the labels into coordinates and compute mean label as initial solution.
+ labels_as_coords = []
+ sums = {}
+ for key, val in labels[0].items():
+ sums[key] = 0
+ for label in labels:
+ coords = {}
+ for key, val in label.items():
+ label_f = float(val)
+ sums[key] += label_f
+ coords[key] = label_f
+ labels_as_coords.append(coords)
+ median = {}
+ for key, val in sums.items():
+ median[key] = val / len(labels)
+
+ # Run main loop of Weiszfeld's Algorithm.
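+ # Weiszfeld update (sketch): m_{t+1} = (sum_i x_i / ||x_i - m_t||) / (sum_i 1 / ||x_i - m_t||),
+ # a distance-weighted mean that converges to the geometric median of the label coordinates;
+ # coordinates equal to the current median (norm == 0) are skipped below.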
+ epsilon = 0.0001
+ delta = 1.0
+ num_itrs = 0
+ all_equal = False
+ while ((delta > epsilon) and (num_itrs < 100) and (not all_equal)):
+ numerator = {}
+ for key, val in sums.items():
+ numerator[key] = 0
+ denominator = 0
+ for label_as_coord in labels_as_coords:
+ norm = 0
+ for key, val in label_as_coord.items():
+ norm += (val - median[key]) ** 2
+ norm = np.sqrt(norm)
+ if norm > 0:
+ for key, val in label_as_coord.items():
+ numerator[key] += val / norm
+ denominator += 1.0 / norm
+ if denominator == 0:
+ all_equal = True
+ else:
+ new_median = {}
+ delta = 0.0
+ for key, val in numerator.items():
+ this_median = val / denominator
+ new_median[key] = this_median
+ delta += np.abs(median[key] - this_median)
+ median = new_median
+
+ num_itrs += 1
+
+ # Transform the solution to strings and return it.
+ median_label = {}
+ for key, val in median.items():
+ median_label[key] = str(val)
+ return median_label
+
+
+# def __get_median_edge_label_symbolic(self, edge_labels):
+# pass
+
+
+# def __get_median_edge_label_nonsymbolic(self, edge_labels):
+# if len(edge_labels) == 0:
+# return {}
+# else:
+# # Transform the labels into coordinates and compute mean label as initial solution.
+# edge_labels_as_coords = []
+# sums = {}
+# for key, val in edge_labels[0].items():
+# sums[key] = 0
+# for edge_label in edge_labels:
+# coords = {}
+# for key, val in edge_label.items():
+# label = float(val)
+# sums[key] += label
+# coords[key] = label
+# edge_labels_as_coords.append(coords)
+# median = {}
+# for key, val in sums.items():
+# median[key] = val / len(edge_labels)
+#
+# # Run main loop of Weiszfeld's Algorithm.
+# epsilon = 0.0001
+# delta = 1.0
+# num_itrs = 0
+# all_equal = False
+# while ((delta > epsilon) and (num_itrs < 100) and (not all_equal)):
+# numerator = {}
+# for key, val in sums.items():
+# numerator[key] = 0
+# denominator = 0
+# for edge_label_as_coord in edge_labels_as_coords:
+# norm = 0
+# for key, val in edge_label_as_coord.items():
+# norm += (val - median[key]) ** 2
+# norm += np.sqrt(norm)
+# if norm > 0:
+# for key, val in edge_label_as_coord.items():
+# numerator[key] += val / norm
+# denominator += 1.0 / norm
+# if denominator == 0:
+# all_equal = True
+# else:
+# new_median = {}
+# delta = 0.0
+# for key, val in numerator.items():
+# this_median = val / denominator
+# new_median[key] = this_median
+# delta += np.abs(median[key] - this_median)
+# median = new_median
+#
+# num_itrs += 1
+#
+# # Transform the solution to ged::GXLLabel and return it.
+# median_label = {}
+# for key, val in median.items():
+# median_label[key] = str(val)
+# return median_label
+
+
+def _compute_medoid_parallel(graph_ids, sort, itr):
+ g_id = itr[0]
+ i = itr[1]
+ # @todo: timer not considered here.
+# if timer.expired():
+# self.__state = AlgorithmState.CALLED
+# break
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(g_id)
+ sum_of_distances = 0
+ for h_id in graph_ids:
+ nb_nodes_h = G_ged_env.get_graph_num_nodes(h_id)
+ if nb_nodes_g <= nb_nodes_h or not sort:
+ G_ged_env.run_method(g_id, h_id)
+ sum_of_distances += G_ged_env.get_upper_bound(g_id, h_id)
+ else:
+ G_ged_env.run_method(h_id, g_id)
+ sum_of_distances += G_ged_env.get_upper_bound(h_id, g_id)
+ return i, sum_of_distances
+
+
+def _compute_init_node_maps_parallel(gen_median_id, sort, nb_nodes_median, itr):
+ graph_id = itr
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not sort:
+ G_ged_env.run_method(gen_median_id, graph_id)
+ node_map = G_ged_env.get_node_map(gen_median_id, graph_id)
+# print(self.__node_maps_from_median[graph_id])
+ else:
+ G_ged_env.run_method(graph_id, gen_median_id)
+ node_map = G_ged_env.get_node_map(graph_id, gen_median_id)
+ node_map.forward_map, node_map.backward_map = node_map.backward_map, node_map.forward_map
+ sum_of_distance = node_map.induced_cost()
+# print(self.__sum_of_distances)
+ return graph_id, sum_of_distance, node_map
+
+
+def _update_node_maps_parallel(median_id, epsilon, sort, nb_nodes_median, itr):
+ graph_id = itr[0]
+ node_map = itr[1]
+
+ node_maps_were_modified = False
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not sort:
+ G_ged_env.run_method(median_id, graph_id)
+ if G_ged_env.get_upper_bound(median_id, graph_id) < node_map.induced_cost() - epsilon:
+ node_map = G_ged_env.get_node_map(median_id, graph_id)
+ node_maps_were_modified = True
+ else:
+ G_ged_env.run_method(graph_id, median_id)
+ if G_ged_env.get_upper_bound(graph_id, median_id) < node_map.induced_cost() - epsilon:
+ node_map = G_ged_env.get_node_map(graph_id, median_id)
+ node_map.forward_map, node_map.backward_map = node_map.backward_map, node_map.forward_map
+ node_maps_were_modified = True
+
+ return graph_id, node_map, node_maps_were_modified
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/median/median_graph_estimator_cml.py b/lang/fr/gklearn/ged/median/median_graph_estimator_cml.py
new file mode 100644
index 0000000000..2d5b110868
--- /dev/null
+++ b/lang/fr/gklearn/ged/median/median_graph_estimator_cml.py
@@ -0,0 +1,1676 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Mar 16 18:04:55 2020
+
+@author: ljia
+"""
+import numpy as np
+import time
+from tqdm import tqdm
+import sys
+import networkx as nx
+import multiprocessing
+from multiprocessing import Pool
+from functools import partial
+from gklearn.ged.env import AlgorithmState, NodeMap
+from gklearn.ged.util import misc
+from gklearn.utils import Timer, SpecialLabel
+
+
+class MedianGraphEstimatorCML(object): # @todo: distinguish dummy_node from undefined node?
+ """Estimate median graphs using the pure Python version of GEDEnv.
+ """
+
+ def __init__(self, ged_env, constant_node_costs):
+ """Constructor.
+
+ Parameters
+ ----------
+ ged_env : gklearn.gedlib.gedlibpy.GEDEnv
+ Initialized GED environment. The edit costs must be set by the user.
+
+ constant_node_costs : Boolean
+ Set to True if the node relabeling costs are constant.
+ """
+ self.__ged_env = ged_env
+ self.__init_method = 'BRANCH_FAST'
+ self.__init_options = ''
+ self.__descent_method = 'BRANCH_FAST'
+ self.__descent_options = ''
+ self.__refine_method = 'IPFP'
+ self.__refine_options = ''
+ self.__constant_node_costs = constant_node_costs
+ self.__labeled_nodes = (ged_env.get_num_node_labels() > 1)
+ self.__node_del_cost = ged_env.get_node_del_cost(ged_env.get_node_label(1, to_dict=False))
+ self.__node_ins_cost = ged_env.get_node_ins_cost(ged_env.get_node_label(1, to_dict=False))
+ self.__labeled_edges = (ged_env.get_num_edge_labels() > 1)
+ self.__edge_del_cost = ged_env.get_edge_del_cost(ged_env.get_edge_label(1, to_dict=False))
+ self.__edge_ins_cost = ged_env.get_edge_ins_cost(ged_env.get_edge_label(1, to_dict=False))
+ self.__init_type = 'RANDOM'
+ self.__num_random_inits = 10
+ self.__desired_num_random_inits = 10
+ self.__use_real_randomness = True
+ self.__seed = 0
+ self.__parallel = True
+ self.__update_order = True
+ self.__sort_graphs = True # sort graphs by size when computing GEDs.
+ self.__refine = True
+ self.__time_limit_in_sec = 0
+ self.__epsilon = 0.0001
+ self.__max_itrs = 100
+ self.__max_itrs_without_update = 3
+ self.__num_inits_increase_order = 10
+ self.__init_type_increase_order = 'K-MEANS++'
+ self.__max_itrs_increase_order = 10
+ self.__print_to_stdout = 2
+ self.__median_id = np.inf # @todo: check
+ self.__node_maps_from_median = {}
+ self.__sum_of_distances = 0
+ self.__best_init_sum_of_distances = np.inf
+ self.__converged_sum_of_distances = np.inf
+ self.__runtime = None
+ self.__runtime_initialized = None
+ self.__runtime_converged = None
+ self.__itrs = [] # @todo: check: {} ?
+ self.__num_decrease_order = 0
+ self.__num_increase_order = 0
+ self.__num_converged_descents = 0
+ self.__state = AlgorithmState.TERMINATED
+ self.__label_names = {}
+
+ if ged_env is None:
+ raise Exception('The GED environment pointer passed to the constructor of MedianGraphEstimator is null.')
+ elif not ged_env.is_initialized():
+ raise Exception('The GED environment is uninitialized. Call gedlibpy.GEDEnv.init() before passing it to the constructor of MedianGraphEstimator.')
+
+
+ def set_options(self, options):
+ """Sets the options of the estimator.
+
+ Parameters
+ ----------
+ options : string
+ String that specifies with which options to run the estimator.
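+
+ Examples
+ --------
+ A minimal sketch (only some of the supported options are shown)::
+
+ estimator.set_options('--init-type MEDOID --random-inits 5 --refine TRUE --stdout 1')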
+ """
+ self.__set_default_options()
+ options_map = misc.options_string_to_options_map(options)
+ for opt_name, opt_val in options_map.items():
+ if opt_name == 'init-type':
+ self.__init_type = opt_val
+ if opt_val != 'MEDOID' and opt_val != 'RANDOM' and opt_val != 'MIN' and opt_val != 'MAX' and opt_val != 'MEAN':
+ raise Exception('Invalid argument ' + opt_val + ' for option init-type. Usage: options = "[--init-type RANDOM|MEDOID|EMPTY|MIN|MAX|MEAN] [...]"')
+ elif opt_name == 'random-inits':
+ try:
+ self.__num_random_inits = int(opt_val)
+ self.__desired_num_random_inits = self.__num_random_inits
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option random-inits. Usage: options = "[--random-inits <convertible to int greater 0>]"')
+
+ if self.__num_random_inits <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option random-inits. Usage: options = "[--random-inits <convertible to int greater 0>]"')
+
+ elif opt_name == 'randomness':
+ if opt_val == 'PSEUDO':
+ self.__use_real_randomness = False
+
+ elif opt_val == 'REAL':
+ self.__use_real_randomness = True
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option randomness. Usage: options = "[--randomness REAL|PSEUDO] [...]"')
+
+ elif opt_name == 'stdout':
+ if opt_val == '0':
+ self.__print_to_stdout = 0
+
+ elif opt_val == '1':
+ self.__print_to_stdout = 1
+
+ elif opt_val == '2':
+ self.__print_to_stdout = 2
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option stdout. Usage: options = "[--stdout 0|1|2] [...]"')
+
+ elif opt_name == 'parallel':
+ if opt_val == 'TRUE':
+ self.__parallel = True
+
+ elif opt_val == 'FALSE':
+ self.__parallel = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option parallel. Usage: options = "[--parallel TRUE|FALSE] [...]"')
+
+ elif opt_name == 'update-order':
+ if opt_val == 'TRUE':
+ self.__update_order = True
+
+ elif opt_val == 'FALSE':
+ self.__update_order = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option update-order. Usage: options = "[--update-order TRUE|FALSE] [...]"')
+
+ elif opt_name == 'sort-graphs':
+ if opt_val == 'TRUE':
+ self.__sort_graphs = True
+
+ elif opt_val == 'FALSE':
+ self.__sort_graphs = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option sort-graphs. Usage: options = "[--sort-graphs TRUE|FALSE] [...]"')
+
+ elif opt_name == 'refine':
+ if opt_val == 'TRUE':
+ self.__refine = True
+
+ elif opt_val == 'FALSE':
+ self.__refine = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option refine. Usage: options = "[--refine TRUE|FALSE] [...]"')
+
+ elif opt_name == 'time-limit':
+ try:
+ self.__time_limit_in_sec = float(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option time-limit. Usage: options = "[--time-limit <convertible to float>] [...]"')
+
+ elif opt_name == 'max-itrs':
+ try:
+ self.__max_itrs = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs. Usage: options = "[--max-itrs <convertible to int>] [...]"')
+
+ elif opt_name == 'max-itrs-without-update':
+ try:
+ self.__max_itrs_without_update = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs-without-update. Usage: options = "[--max-itrs-without-update <convertible to int>] [...]"')
+
+ elif opt_name == 'seed':
+ try:
+ self.__seed = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option seed. Usage: options = "[--seed <convertible to int greater equal 0>] [...]"')
+
+ elif opt_name == 'epsilon':
+ try:
+ self.__epsilon = float(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option epsilon. Usage: options = "[--epsilon <convertible to float greater 0>] [...]"')
+
+ if self.__epsilon <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option epsilon. Usage: options = "[--epsilon <convertible to float greater 0>] [...]"')
+
+ elif opt_name == 'inits-increase-order':
+ try:
+ self.__num_inits_increase_order = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option inits-increase-order. Usage: options = "[--inits-increase-order <convertible to int greater 0>]"')
+
+ if self.__num_inits_increase_order <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option inits-increase-order. Usage: options = "[--inits-increase-order <convertible to int greater 0>]"')
+
+ elif opt_name == 'init-type-increase-order':
+ self.__init_type_increase_order = opt_val
+ if opt_val != 'CLUSTERS' and opt_val != 'K-MEANS++':
+ raise Exception('Invalid argument ' + opt_val + ' for option init-type-increase-order. Usage: options = "[--init-type-increase-order CLUSTERS|K-MEANS++] [...]"')
+
+ elif opt_name == 'max-itrs-increase-order':
+ try:
+ self.__max_itrs_increase_order = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs-increase-order. Usage: options = "[--max-itrs-increase-order <convertible to int>] [...]"')
+
+ else:
+ valid_options = '[--init-type <arg>] [--random-inits <arg>] [--randomness <arg>] [--seed <arg>] [--stdout <arg>] '
+ valid_options += '[--time-limit <arg>] [--max-itrs <arg>] [--epsilon <arg>] '
+ valid_options += '[--inits-increase-order <arg>] [--init-type-increase-order <arg>] [--max-itrs-increase-order <arg>]'
+ raise Exception('Invalid option "' + opt_name + '". Usage: options = "' + valid_options + '"')
+
+
+ def set_init_method(self, init_method, init_options=''):
+ """Selects method to be used for computing the initial medoid graph.
+
+ Parameters
+ ----------
+ init_method : string
+ The selected method. Default: ged::Options::GEDMethod::BRANCH_FAST.
+
+ init_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect unless "--init-type MEDOID" is passed to set_options().
+ """
+ self.__init_method = init_method
+ self.__init_options = init_options
+
+
+ def set_descent_method(self, descent_method, descent_options=''):
+ """Selects method to be used for block gradient descent..
+
+ Parameters
+ ----------
+ descent_method : string
+ The selected method. Default: ged::Options::GEDMethod::BRANCH_FAST.
+
+ descent_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ This method is used for the block gradient descent itself, regardless of the options passed to set_options().
+ """
+ self.__descent_method = descent_method
+ self.__descent_options = descent_options
+
+
+ def set_refine_method(self, refine_method, refine_options):
+ """Selects method to be used for improving the sum of distances and the node maps for the converged median.
+
+ Parameters
+ ----------
+ refine_method : string
+ The selected method. Default: "IPFP".
+
+ refine_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect if "--refine FALSE" is passed to set_options().
+ """
+ self.__refine_method = refine_method
+ self.__refine_options = refine_options
+
+
+ def run(self, graph_ids, set_median_id, gen_median_id):
+ """Computes a generalized median graph.
+
+ Parameters
+ ----------
+ graph_ids : list[integer]
+ The IDs of the graphs for which the median should be computed. Must have been added to the environment passed to the constructor.
+
+ set_median_id : integer
+ The ID of the computed set-median. A dummy graph with this ID must have been added to the environment passed to the constructor. Upon termination, the computed median can be obtained via gklearn.gedlib.gedlibpy.GEDEnv.get_graph().
+
+ gen_median_id : integer
+ The ID of the computed generalized median. Upon termination, the computed median can be obtained via gklearn.gedlib.gedlibpy.GEDEnv.get_graph().
+ """
+ # Sanity checks.
+ if len(graph_ids) == 0:
+ raise Exception('Empty vector of graph IDs, unable to compute median.')
+ all_graphs_empty = True
+ for graph_id in graph_ids:
+ if self.__ged_env.get_graph_num_nodes(graph_id) > 0:
+ all_graphs_empty = False
+ break
+ if all_graphs_empty:
+ raise Exception('All graphs in the collection are empty.')
+
+ # Start timer and record start time.
+ start = time.time()
+ timer = Timer(self.__time_limit_in_sec)
+ self.__median_id = gen_median_id
+ self.__state = AlgorithmState.TERMINATED
+
+ # Get NetworkX graph representations of the input graphs.
+ graphs = {}
+ for graph_id in graph_ids:
+ # @todo: get_nx_graph() function may need to be modified according to the coming code.
+ graphs[graph_id] = self.__ged_env.get_nx_graph(graph_id)
+# print(self.__ged_env.get_graph_internal_id(0))
+# print(graphs[0].graph)
+# print(graphs[0].nodes(data=True))
+# print(graphs[0].edges(data=True))
+# print(nx.adjacency_matrix(graphs[0]))
+
+ # Construct initial medians.
+ medians = []
+ self.__construct_initial_medians(graph_ids, timer, medians)
+ end_init = time.time()
+ self.__runtime_initialized = end_init - start
+# print(medians[0].graph)
+# print(medians[0].nodes(data=True))
+# print(medians[0].edges(data=True))
+# print(nx.adjacency_matrix(medians[0]))
+
+ # Reset information about iterations and number of times the median decreases and increases.
+ self.__itrs = [0] * len(medians)
+ self.__num_decrease_order = 0
+ self.__num_increase_order = 0
+ self.__num_converged_descents = 0
+
+ # Initialize the best median.
+ best_sum_of_distances = np.inf
+ self.__best_init_sum_of_distances = np.inf
+ node_maps_from_best_median = {}
+
+ # Run block gradient descent from all initial medians.
+ self.__ged_env.set_method(self.__descent_method, self.__descent_options)
+ for median_pos in range(0, len(medians)):
+
+ # Terminate if the timer has expired and at least one SOD has been computed.
+ if timer.expired() and median_pos > 0:
+ break
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Block gradient descent for initial median', str(median_pos + 1), 'of', str(len(medians)), '.')
+ print('-----------------------------------------------------------')
+
+ # Get reference to the median.
+ median = medians[median_pos]
+
+ # Load initial median into the environment.
+ self.__ged_env.load_nx_graph(median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+
+ # Compute node maps and sum of distances for initial median.
+# xxx = self.__node_maps_from_median
+ self.__compute_init_node_maps(graph_ids, gen_median_id)
+# yyy = self.__node_maps_from_median
+
+ self.__best_init_sum_of_distances = min(self.__best_init_sum_of_distances, self.__sum_of_distances)
+ self.__ged_env.load_nx_graph(median, set_median_id)
+# print(self.__best_init_sum_of_distances)
+
+ # Run block gradient descent from initial median.
+ converged = False
+ itrs_without_update = 0
+ while not self.__termination_criterion_met(converged, timer, self.__itrs[median_pos], itrs_without_update):
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Iteration', str(self.__itrs[median_pos] + 1), 'for initial median', str(median_pos + 1), 'of', str(len(medians)), '.')
+ print('-----------------------------------------------------------')
+
+ # Initialize flags that tell us what happened in the iteration.
+ median_modified = False
+ node_maps_modified = False
+ decreased_order = False
+ increased_order = False
+
+ # Update the median.
+ median_modified = self.__update_median(graphs, median)
+ if self.__update_order:
+ pass # @todo:
+# if not median_modified or self.__itrs[median_pos] == 0:
+# decreased_order = self.__decrease_order(graphs, median)
+# if not decreased_order or self.__itrs[median_pos] == 0:
+# increased_order = self.__increase_order(graphs, median)
+
+ # Update the number of iterations without update of the median.
+ if median_modified or decreased_order or increased_order:
+ itrs_without_update = 0
+ else:
+ itrs_without_update += 1
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Loading median to environment: ... ', end='')
+
+ # Load the median into the environment.
+ # @todo: should this function use the original node label?
+ self.__ged_env.load_nx_graph(median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Updating induced costs: ... ', end='')
+
+ # Compute induced costs of the old node maps w.r.t. the updated median.
+ for graph_id in graph_ids:
+# print(self.__node_maps_from_median[graph_id].induced_cost())
+# xxx = self.__node_maps_from_median[graph_id]
+ self.__ged_env.compute_induced_cost(gen_median_id, graph_id, self.__node_maps_from_median[graph_id])
+# print('---------------------------------------')
+# print(self.__node_maps_from_median[graph_id].induced_cost())
+ # @todo: !!! This value is slightly different from the C++ program, which might be a bug! Use it very carefully!
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Update the node maps.
+ node_maps_modified = self.__update_node_maps()
+
+ # Update the order of the median if no improvement can be found with the current order.
+
+ # Update the sum of distances.
+ old_sum_of_distances = self.__sum_of_distances
+ self.__sum_of_distances = 0
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ self.__sum_of_distances += node_map.induced_cost()
+# print(self.__sum_of_distances)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Old local SOD: ', old_sum_of_distances)
+ print('New local SOD: ', self.__sum_of_distances)
+ print('Best converged SOD: ', best_sum_of_distances)
+ print('Modified median: ', median_modified)
+ print('Modified node maps: ', node_maps_modified)
+ print('Decreased order: ', decreased_order)
+ print('Increased order: ', increased_order)
+ print('===========================================================\n')
+
+ converged = not (median_modified or node_maps_modified or decreased_order or increased_order)
+
+ self.__itrs[median_pos] += 1
+
+ # Update the best median.
+ if self.__sum_of_distances < best_sum_of_distances:
+ best_sum_of_distances = self.__sum_of_distances
+ node_maps_from_best_median = self.__node_maps_from_median.copy() # @todo: this is a shallow copy, not sure if it is enough.
+ best_median = median
+
+ # Update the number of converged descents.
+ if converged:
+ self.__num_converged_descents += 1
+
+ # Store the best encountered median.
+ self.__sum_of_distances = best_sum_of_distances
+ self.__node_maps_from_median = node_maps_from_best_median
+ self.__ged_env.load_nx_graph(best_median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+ end_descent = time.time()
+ self.__runtime_converged = end_descent - start
+
+ # Refine the sum of distances and the node maps for the converged median.
+ self.__converged_sum_of_distances = self.__sum_of_distances
+ if self.__refine:
+ self.__improve_sum_of_distances(timer)
+
+ # Record end time, set runtime and reset the number of initial medians.
+ end = time.time()
+ self.__runtime = end - start
+ self.__num_random_inits = self.__desired_num_random_inits
+
+ # Print global information.
+ if self.__print_to_stdout != 0:
+ print('\n===========================================================')
+ print('Finished computation of generalized median graph.')
+ print('-----------------------------------------------------------')
+ print('Best SOD after initialization: ', self.__best_init_sum_of_distances)
+ print('Converged SOD: ', self.__converged_sum_of_distances)
+ if self.__refine:
+ print('Refined SOD: ', self.__sum_of_distances)
+ print('Overall runtime: ', self.__runtime)
+ print('Runtime of initialization: ', self.__runtime_initialized)
+ print('Runtime of block gradient descent: ', self.__runtime_converged - self.__runtime_initialized)
+ if self.__refine:
+ print('Runtime of refinement: ', self.__runtime - self.__runtime_converged)
+ print('Number of initial medians: ', len(medians))
+ total_itr = 0
+ num_started_descents = 0
+ for itr in self.__itrs:
+ total_itr += itr
+ if itr > 0:
+ num_started_descents += 1
+ print('Size of graph collection: ', len(graph_ids))
+ print('Number of started descents: ', num_started_descents)
+ print('Number of converged descents: ', self.__num_converged_descents)
+ print('Overall number of iterations: ', total_itr)
+ print('Overall number of times the order decreased: ', self.__num_decrease_order)
+ print('Overall number of times the order increased: ', self.__num_increase_order)
+ print('===========================================================\n')
+
+
+ def __improve_sum_of_distances(self, timer): # @todo: go through and test
+ # Use method selected for refinement phase.
+ self.__ged_env.set_method(self.__refine_method, self.__refine_options)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Improving node maps', total=len(self.__node_maps_from_median), file=sys.stdout)
+ print('\n===========================================================')
+ print('Improving node maps and SOD for converged median.')
+ print('-----------------------------------------------------------')
+ progress.update(1)
+
+ # Improving the node maps.
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__gen_median_id)
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ if timer.expired():
+ if self.__state == AlgorithmState.TERMINATED:
+ self.__state = AlgorithmState.CONVERGED
+ break
+
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(self.__gen_median_id, graph_id)
+ if self.__ged_env.get_upper_bound(self.__gen_median_id, graph_id) < node_map.induced_cost():
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(self.__gen_median_id, graph_id)
+ else:
+ self.__ged_env.run_method(graph_id, self.__gen_median_id)
+ if self.__ged_env.get_upper_bound(graph_id, self.__gen_median_id) < node_map.induced_cost():
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, self.__gen_median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+
+ self.__sum_of_distances += self.__node_maps_from_median[graph_id].induced_cost()
+
+ # Print information.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ self.__sum_of_distances = 0.0
+ for key, val in self.__node_maps_from_median.items():
+ self.__sum_of_distances += val.induced_cost()
+
+ # Print information.
+ if self.__print_to_stdout == 2:
+ print('===========================================================\n')
+
+
+ def __median_available(self):
+ return self.__median_id != np.inf
+
+
+ def get_state(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_state().')
+ return self.__state
+
+
+ def get_sum_of_distances(self, state=''):
+ """Returns the sum of distances.
+
+ Parameters
+ ----------
+ state : string
+ The state of the estimator. Can be 'initialized' or 'converged'. Default: ""
+
+ Returns
+ -------
+ float
+ The sum of distances (SOD) of the median when the estimator was in the state `state` during the last call to run(). If `state` is not given, the converged SOD (without refinement) or refined SOD (with refinement) is returned.
+ """
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_sum_of_distances().')
+ if state == 'initialized':
+ return self.__best_init_sum_of_distances
+ if state == 'converged':
+ return self.__converged_sum_of_distances
+ return self.__sum_of_distances
+
+
+ def get_runtime(self, state):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_runtime().')
+ if state == AlgorithmState.INITIALIZED:
+ return self.__runtime_initialized
+ if state == AlgorithmState.CONVERGED:
+ return self.__runtime_converged
+ return self.__runtime
+
+
+ def get_num_itrs(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_itrs().')
+ return self.__itrs
+
+
+ def get_num_times_order_decreased(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_times_order_decreased().')
+ return self.__num_decrease_order
+
+
+ def get_num_times_order_increased(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_times_order_increased().')
+ return self.__num_increase_order
+
+
+ def get_num_converged_descents(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_converged_descents().')
+ return self.__num_converged_descents
+
+
+ def get_ged_env(self):
+ return self.__ged_env
+
+
+ def __set_default_options(self):
+ self.__init_type = 'RANDOM'
+ self.__num_random_inits = 10
+ self.__desired_num_random_inits = 10
+ self.__use_real_randomness = True
+ self.__seed = 0
+ self.__parallel = True
+ self.__update_order = True
+ self.__sort_graphs = True
+ self.__refine = True
+ self.__time_limit_in_sec = 0
+ self.__epsilon = 0.0001
+ self.__max_itrs = 100
+ self.__max_itrs_without_update = 3
+ self.__num_inits_increase_order = 10
+ self.__init_type_increase_order = 'K-MEANS++'
+ self.__max_itrs_increase_order = 10
+ self.__print_to_stdout = 2
+ self.__label_names = {}
+
+
+ def __construct_initial_medians(self, graph_ids, timer, initial_medians):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Constructing initial median(s).')
+ print('-----------------------------------------------------------')
+
+ # Compute or sample the initial median(s).
+ initial_medians.clear()
+ if self.__init_type == 'MEDOID':
+ self.__compute_medoid(graph_ids, timer, initial_medians)
+ elif self.__init_type == 'MAX':
+ pass # @todo
+# compute_max_order_graph_(graph_ids, initial_medians)
+ elif self.__init_type == 'MIN':
+ pass # @todo
+# compute_min_order_graph_(graph_ids, initial_medians)
+ elif self.__init_type == 'MEAN':
+ pass # @todo
+# compute_mean_order_graph_(graph_ids, initial_medians)
+ else:
+ pass # @todo
+# sample_initial_medians_(graph_ids, initial_medians)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('===========================================================')
+
+
+ def __compute_medoid(self, graph_ids, timer, initial_medians):
+ # Use method selected for initialization phase.
+ self.__ged_env.set_method(self.__init_method, self.__init_options)
+
+ # Compute the medoid.
+ if self.__parallel:
+ # @todo: note that in parallel mode self.__ged_env is not modified.
+ sum_of_distances_list = [np.inf] * len(graph_ids)
+ len_itr = len(graph_ids)
+ itr = zip(graph_ids, range(0, len(graph_ids)))
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ do_fun = partial(_compute_medoid_parallel, graph_ids, self.__sort_graphs)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Computing medoid', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for i, dis in iterator:
+ sum_of_distances_list[i] = dis
+ pool.close()
+ pool.join()
+
+ medoid_id = np.argmin(sum_of_distances_list)
+ best_sum_of_distances = sum_of_distances_list[medoid_id]
+
+ initial_medians.append(self.__ged_env.get_nx_graph(medoid_id)) # @todo
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Computing medoid', total=len(graph_ids), file=sys.stdout)
+
+ medoid_id = graph_ids[0]
+ best_sum_of_distances = np.inf
+ for g_id in graph_ids:
+ if timer.expired():
+ self.__state = AlgorithmState.CALLED
+ break
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(g_id)
+ sum_of_distances = 0
+ for h_id in graph_ids: # @todo: this can be faster, only a half is needed.
+ nb_nodes_h = self.__ged_env.get_graph_num_nodes(h_id)
+ if nb_nodes_g <= nb_nodes_h or not self.__sort_graphs:
+ self.__ged_env.run_method(g_id, h_id) # @todo
+ sum_of_distances += self.__ged_env.get_upper_bound(g_id, h_id)
+ else:
+ self.__ged_env.run_method(h_id, g_id)
+ sum_of_distances += self.__ged_env.get_upper_bound(h_id, g_id)
+ if sum_of_distances < best_sum_of_distances:
+ best_sum_of_distances = sum_of_distances
+ medoid_id = g_id
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ initial_medians.append(self.__ged_env.get_nx_graph(medoid_id)) # @todo
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+
+ def __compute_init_node_maps(self, graph_ids, gen_median_id):
+ # Compute node maps and sum of distances for initial median.
+ if self.__parallel:
+ # @todo: note that in parallel mode self.__ged_env is not modified.
+ self.__sum_of_distances = 0
+ self.__node_maps_from_median.clear()
+ sum_of_distances_list = [0] * len(graph_ids)
+
+ len_itr = len(graph_ids)
+ itr = graph_ids
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(gen_median_id)
+ do_fun = partial(_compute_init_node_maps_parallel, gen_median_id, self.__sort_graphs, nb_nodes_median)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Computing initial node maps', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for g_id, sod, node_maps in iterator:
+ sum_of_distances_list[g_id] = sod
+ self.__node_maps_from_median[g_id] = node_maps
+ pool.close()
+ pool.join()
+
+ self.__sum_of_distances = np.sum(sum_of_distances_list)
+# xxx = self.__node_maps_from_median
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Computing initial node maps', total=len(graph_ids), file=sys.stdout)
+
+ self.__sum_of_distances = 0
+ self.__node_maps_from_median.clear()
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(gen_median_id)
+ for graph_id in graph_ids:
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(gen_median_id, graph_id)
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(gen_median_id, graph_id)
+ else:
+ self.__ged_env.run_method(graph_id, gen_median_id)
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, gen_median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+ # print(self.__node_maps_from_median[graph_id])
+ self.__sum_of_distances += self.__node_maps_from_median[graph_id].induced_cost()
+ # print(self.__sum_of_distances)
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+
+ def __termination_criterion_met(self, converged, timer, itr, itrs_without_update):
+ if timer.expired() or (itr >= self.__max_itrs if self.__max_itrs >= 0 else False):
+ if self.__state == AlgorithmState.TERMINATED:
+ self.__state = AlgorithmState.INITIALIZED
+ return True
+ return converged or (itrs_without_update > self.__max_itrs_without_update if self.__max_itrs_without_update >= 0 else False)
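+ # With the defaults set in __set_default_options() (max-itrs 100, max-itrs-without-update 3),
+ # a descent therefore stops when it converges, reaches 100 iterations, exceeds the time limit,
+ # or runs more than 3 consecutive iterations without an update of the median.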
+
+
+ def __update_median(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Updating median: ', end='')
+
+ # Store copy of the old median.
+ old_median = median.copy() # @todo: this is just a shallow copy.
+
+ # Update the node labels.
+ if self.__labeled_nodes:
+ self.__update_node_labels(graphs, median)
+
+ # Update the edges and their labels.
+ self.__update_edges(graphs, median)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ return not self.__are_graphs_equal(median, old_median)
+
+
+ def __update_node_labels(self, graphs, median):
+# print('----------------------------')
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('nodes ... ', end='')
+
+ # Collect all possible node labels.
+ all_labels = self.__ged_env.get_all_node_labels()
+
+ # Iterate through all nodes of the median.
+ for i in range(0, nx.number_of_nodes(median)):
+# print('i: ', i)
+
+ # Collect the labels of the substituted nodes.
+ node_labels = []
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ if k != np.inf:
+ node_labels.append(tuple(graph.nodes[k].items())) # @todo: sort
+ else:
+ node_labels.append(SpecialLabel.DUMMY)
+
+ # Compute the median label and update the median.
+ if len(node_labels) > 0:
+ fi_min = np.inf
+ median_label = tuple()
+
+ for label1 in all_labels:
+ fi = 0
+ for label2 in node_labels:
+ fi += self.__ged_env.get_node_cost(label1, label2) # @todo: check inside, this might be slow
+ if fi < fi_min: # @todo: fi easily becomes zero; use <= or consider multiple optimal labels.
+ fi_min = fi
+ median_label = label1
+
+ median_label = {kv[0]: kv[1] for kv in median_label}
+ nx.set_node_attributes(median, {i: median_label})
+
+# median_label = self.__get_median_node_label(node_labels)
+# if self.__ged_env.get_node_rel_cost(median.nodes[i], median_label) > self.__epsilon:
+# nx.set_node_attributes(median, {i: median_label})
+
+
+ def __update_edges(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('edges ... ', end='')
+
+ # Collect all possible edge labels.
+ all_labels = self.__ged_env.get_all_edge_labels()
+
+ # @todo: what if edge is not labeled?
+ # Iterate through all possible edges (i,j) of the median.
+ for i in range(0, nx.number_of_nodes(median)):
+ for j in range(i + 1, nx.number_of_nodes(median)):
+
+ # Collect the labels of the edges to which (i,j) is mapped by the node maps.
+ edge_labels = []
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ l = self.__node_maps_from_median[graph_id].image(j)
+ if k != np.inf and l != np.inf and graph.has_edge(k, l):
+ edge_labels.append(tuple(graph.edges[(k, l)].items())) # @todo: sort
+ else:
+ edge_labels.append(SpecialLabel.DUMMY)
+
+ # Compute the median edge label and the overall edge relabeling cost.
+ if self.__labeled_edges and len(edge_labels) > 0:
+ fij1_min = np.inf
+ median_label = tuple()
+
+ # Compute f_ij^0.
+ fij0 = 0
+ for label2 in edge_labels:
+ fij0 += self.__ged_env.get_edge_cost(SpecialLabel.DUMMY, label2)
+
+ for label1 in all_labels:
+ fij1 = 0
+ for label2 in edge_labels:
+ fij1 += self.__ged_env.get_edge_cost(label1, label2)
+
+ if fij1 < fij1_min:
+ fij1_min = fij1
+ median_label = label1
+
+ # Update the median.
+ if median.has_edge(i, j):
+ median.remove_edge(i, j)
+ if fij1_min < fij0: # @todo: this never happens.
+ median_label = {kv[0]: kv[1] for kv in median_label}
+ median.add_edge(i, j, **median_label)
+
+# if self.__ged_env.get_edge_rel_cost(median_label, new_median_label) > self.__epsilon:
+# median_label = new_median_label
+
+
+ def __update_node_maps(self):
+ # Update the node maps.
+ if self.__parallel:
+ # @todo: note that in parallel mode self.__ged_env is not modified.
+ node_maps_were_modified = False
+# xxx = self.__node_maps_from_median.copy()
+
+ len_itr = len(self.__node_maps_from_median)
+ itr = [item for item in self.__node_maps_from_median.items()]
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ do_fun = partial(_update_node_maps_parallel, self.__median_id, self.__epsilon, self.__sort_graphs, nb_nodes_median)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Updating node maps', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for g_id, node_map, nm_modified in iterator:
+ self.__node_maps_from_median[g_id] = node_map
+ if nm_modified:
+ node_maps_were_modified = True
+ pool.close()
+ pool.join()
+# yyy = self.__node_maps_from_median.copy()
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Updating node maps', total=len(self.__node_maps_from_median), file=sys.stdout)
+
+ node_maps_were_modified = False
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(self.__median_id, graph_id)
+ if self.__ged_env.get_upper_bound(self.__median_id, graph_id) < node_map.induced_cost() - self.__epsilon:
+ # xxx = self.__node_maps_from_median[graph_id]
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(self.__median_id, graph_id)
+ node_maps_were_modified = True
+
+ else:
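+ # The median has more nodes than this graph and sort-graphs is enabled, so the GED is
+ # computed in the reverse direction and the resulting node map is flipped by swapping
+ # its forward and backward maps.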
+ self.__ged_env.run_method(graph_id, self.__median_id)
+ if self.__ged_env.get_upper_bound(graph_id, self.__median_id) < node_map.induced_cost() - self.__epsilon:
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, self.__median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+ node_maps_were_modified = True
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+ # Return true if the node maps were modified.
+ return node_maps_were_modified
+
+
+ def __decrease_order(self, graphs, median):
+ # Print information about current iteration
+ if self.__print_to_stdout == 2:
+ print('Trying to decrease order: ... ', end='')
+
+ if nx.number_of_nodes(median) <= 1:
+ if self.__print_to_stdout == 2:
+ print('median graph has only 1 node, skip decrease.')
+ return False
+
+ # Initialize ID of the node that is to be deleted.
+ id_deleted_node = [None] # @todo: or np.inf
+ decreased_order = False
+
+ # Decrease the order as long as the best deletion delta is negative.
+ while self.__compute_best_deletion_delta(graphs, median, id_deleted_node) < -self.__epsilon:
+ decreased_order = True
+ self.__delete_node_from_median(id_deleted_node[0], median)
+ if nx.number_of_nodes(median) <= 1:
+ if self.__print_to_stdout == 2:
+ print('decrease stopped because only 1 node remains in the median graph. ', end='')
+ break
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Return true iff the order was decreased.
+ return decreased_order
+
+
+ def __compute_best_deletion_delta(self, graphs, median, id_deleted_node):
+ best_delta = 0.0
+
+ # Determine node that should be deleted (if any).
+ for i in range(0, nx.number_of_nodes(median)):
+ # Compute cost delta.
+ delta = 0.0
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ if k == np.inf:
+ delta -= self.__node_del_cost
+ else:
+ delta += self.__node_ins_cost - self.__ged_env.get_node_rel_cost(median.nodes[i], graph.nodes[k])
+ for j, j_label in median[i].items():
+ l = self.__node_maps_from_median[graph_id].image(j)
+ if k == np.inf or l == np.inf:
+ delta -= self.__edge_del_cost
+ elif not graph.has_edge(k, l):
+ delta -= self.__edge_del_cost
+ else:
+ delta += self.__edge_ins_cost - self.__ged_env.get_edge_rel_cost(j_label, graph.edges[(k, l)])
+
+ # Update best deletion delta.
+ if delta < best_delta - self.__epsilon:
+ best_delta = delta
+ id_deleted_node[0] = i
+# id_deleted_node[0] = 3 # @todo:
+
+ return best_delta
+
+
+ def __delete_node_from_median(self, id_deleted_node, median):
+ # Update the median.
+ mapping = {}
+ for i in range(0, nx.number_of_nodes(median)):
+ if i != id_deleted_node:
+ new_i = (i if i < id_deleted_node else (i - 1))
+ mapping[i] = new_i
+ median.remove_node(id_deleted_node)
+ nx.relabel_nodes(median, mapping, copy=False)
+
+ # Update the node maps.
+# xxx = self.__node_maps_from_median
+ for key, node_map in self.__node_maps_from_median.items():
+ new_node_map = NodeMap(nx.number_of_nodes(median), node_map.num_target_nodes())
+ is_unassigned_target_node = [True] * node_map.num_target_nodes()
+ for i in range(0, nx.number_of_nodes(median) + 1):
+ if i != id_deleted_node:
+ new_i = (i if i < id_deleted_node else (i - 1))
+ k = node_map.image(i)
+ new_node_map.add_assignment(new_i, k)
+ if k != np.inf:
+ is_unassigned_target_node[k] = False
+ for k in range(0, node_map.num_target_nodes()):
+ if is_unassigned_target_node[k]:
+ new_node_map.add_assignment(np.inf, k)
+# print(self.__node_maps_from_median[key].forward_map, self.__node_maps_from_median[key].backward_map)
+# print(new_node_map.forward_map, new_node_map.backward_map
+ self.__node_maps_from_median[key] = new_node_map
+
+ # Increase overall number of decreases.
+ self.__num_decrease_order += 1
+
+
+ def __increase_order(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Trying to increase order: ... ', end='')
+
+ # Initialize the best configuration and the best label of the node that is to be inserted.
+ best_config = {}
+ best_label = self.__ged_env.get_node_label(1, to_dict=True)
+ increased_order = False
+
+ # Increase the order as long as the best insertion delta is negative.
+ while self.__compute_best_insertion_delta(graphs, best_config, best_label) < - self.__epsilon:
+ increased_order = True
+ self.__add_node_to_median(best_config, best_label, median)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Return true iff the order was increased.
+ return increased_order
+
+
+ def __compute_best_insertion_delta(self, graphs, best_config, best_label):
+ # Construct sets of inserted nodes.
+ no_inserted_node = True
+ inserted_nodes = {}
+ for graph_id, graph in graphs.items():
+ inserted_nodes[graph_id] = []
+ best_config[graph_id] = np.inf
+ for k in range(nx.number_of_nodes(graph)):
+ if self.__node_maps_from_median[graph_id].pre_image(k) == np.inf:
+ no_inserted_node = False
+ inserted_nodes[graph_id].append((k, tuple(item for item in graph.nodes[k].items()))) # @todo: can order of label names be guaranteed?
+
+ # Return 0.0 if no node is inserted in any of the graphs.
+ if no_inserted_node:
+ return 0.0
+
+ # Compute insertion configuration, label, and delta.
+ best_delta = 0.0 # @todo
+ if len(self.__label_names['node_labels']) == 0 and len(self.__label_names['node_attrs']) == 0: # @todo
+ best_delta = self.__compute_insertion_delta_unlabeled(inserted_nodes, best_config, best_label)
+ elif len(self.__label_names['node_labels']) > 0: # self.__constant_node_costs:
+ best_delta = self.__compute_insertion_delta_constant(inserted_nodes, best_config, best_label)
+ else:
+ best_delta = self.__compute_insertion_delta_generic(inserted_nodes, best_config, best_label)
+
+ # Return the best delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_unlabeled(self, inserted_nodes, best_config, best_label): # @todo: go through and test.
+ # Construct the best configuration and compute its insertion delta.
+ best_delta = 0.0
+ best_config.clear()
+ for graph_id, node_set in inserted_nodes.items():
+ if len(node_set) == 0:
+ best_config[graph_id] = np.inf
+ best_delta += self.__node_del_cost
+ else:
+ best_config[graph_id] = node_set[0][0]
+ best_delta -= self.__node_ins_cost
+
+ # Return the best insertion delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_constant(self, inserted_nodes, best_config, best_label):
+ # Construct histogram and inverse label maps.
+ hist = {}
+ inverse_label_maps = {}
+ for graph_id, node_set in inserted_nodes.items():
+ inverse_label_maps[graph_id] = {}
+ for node in node_set:
+ k = node[0]
+ label = node[1]
+ if label not in inverse_label_maps[graph_id]:
+ inverse_label_maps[graph_id][label] = k
+ if label not in hist:
+ hist[label] = 1
+ else:
+ hist[label] += 1
+
+ # Determine the best label.
+ best_count = 0
+ for key, val in hist.items():
+ if val > best_count:
+ best_count = val
+ best_label_tuple = key
+
+ # get best label.
+ best_label.clear()
+ for key, val in best_label_tuple:
+ best_label[key] = val
+
+ # Construct the best configuration and compute its insertion delta.
+ best_config.clear()
+ best_delta = 0.0
+ node_rel_cost = self.__ged_env.get_node_rel_cost(self.__ged_env.get_node_label(1, to_dict=False), self.__ged_env.get_node_label(2, to_dict=False))
+ triangle_ineq_holds = (node_rel_cost <= self.__node_del_cost + self.__node_ins_cost)
+ for graph_id, _ in inserted_nodes.items():
+ if best_label_tuple in inverse_label_maps[graph_id]:
+ best_config[graph_id] = inverse_label_maps[graph_id][best_label_tuple]
+ best_delta -= self.__node_ins_cost
+ elif triangle_ineq_holds and not len(inserted_nodes[graph_id]) == 0:
+ best_config[graph_id] = inserted_nodes[graph_id][0][0]
+ best_delta += node_rel_cost - self.__node_ins_cost
+ else:
+ best_config[graph_id] = np.inf
+ best_delta += self.__node_del_cost
+
+ # Return the best insertion delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_generic(self, inserted_nodes, best_config, best_label):
+ # Collect all node labels of inserted nodes.
+ node_labels = []
+ for _, node_set in inserted_nodes.items():
+ for node in node_set:
+ node_labels.append(node[1])
+
+ # Compute node label medians that serve as initial solutions for block gradient descent.
+ initial_node_labels = []
+ self.__compute_initial_node_labels(node_labels, initial_node_labels)
+
+ # Determine best insertion configuration, label, and delta via parallel block gradient descent from all initial node labels.
+ best_delta = 0.0
+ for node_label in initial_node_labels:
+ # Construct local configuration.
+ config = {}
+ for graph_id, _ in inserted_nodes.items():
+ config[graph_id] = tuple((np.inf, self.__ged_env.get_node_label(1, to_dict=False)))
+
+ # Run block gradient descent.
+ converged = False
+ itr = 0
+ while not self.__insertion_termination_criterion_met(converged, itr):
+ converged = not self.__update_config(node_label, inserted_nodes, config, node_labels)
+ node_label_dict = dict(node_label)
+ converged = converged and (not self.__update_node_label([dict(item) for item in node_labels], node_label_dict)) # @todo: the dict is tupled again in the function, can be better.
+ node_label = tuple(item for item in node_label_dict.items()) # @todo: watch out: initial_node_labels[i] is not modified here.
+
+ itr += 1
+
+ # Compute insertion delta of converged solution.
+ delta = 0.0
+ for _, node in config.items():
+ if node[0] == np.inf:
+ delta += self.__node_del_cost
+ else:
+ delta += self.__ged_env.get_node_rel_cost(dict(node_label), dict(node[1])) - self.__node_ins_cost
+
+ # Update best delta and global configuration if improvement has been found.
+ if delta < best_delta - self.__epsilon:
+ best_delta = delta
+ best_label.clear()
+ for key, val in node_label:
+ best_label[key] = val
+ best_config.clear()
+ for graph_id, val in config.items():
+ best_config[graph_id] = val[0]
+
+ # Return the best delta.
+ return best_delta
+
+
+ def __compute_initial_node_labels(self, node_labels, median_labels):
+ median_labels.clear()
+ if self.__use_real_randomness: # @todo: may not work if parallelized.
+ rng = np.random.randint(0, high=2**32 - 1, size=1)
+ urng = np.random.RandomState(seed=rng[0])
+ else:
+ urng = np.random.RandomState(seed=self.__seed)
+
+ # Generate the initial node label medians.
+ if self.__init_type_increase_order == 'K-MEANS++':
+ # Use k-means++ heuristic to generate the initial node label medians.
+ already_selected = [False] * len(node_labels)
+ selected_label_id = urng.randint(low=0, high=len(node_labels), size=1)[0] # c++ test: 23
+ median_labels.append(node_labels[selected_label_id])
+ already_selected[selected_label_id] = True
+# xxx = [41, 0, 18, 9, 6, 14, 21, 25, 33] for c++ test
+# iii = 0 for c++ test
+ while len(median_labels) < self.__num_inits_increase_order:
+ weights = [np.inf] * len(node_labels)
+ for label_id in range(0, len(node_labels)):
+ if already_selected[label_id]:
+ weights[label_id] = 0
+ continue
+ for label in median_labels:
+ weights[label_id] = min(weights[label_id], self.__ged_env.get_node_rel_cost(dict(label), dict(node_labels[label_id])))
+
+ # get non-zero weights.
+ weights_p, idx_p = [], []
+ for i, w in enumerate(weights):
+ if w != 0:
+ weights_p.append(w)
+ idx_p.append(i)
+ if len(weights_p) > 0:
+ p = np.array(weights_p) / np.sum(weights_p)
+ selected_label_id = urng.choice(range(0, len(weights_p)), size=1, p=p)[0] # for c++ test: xxx[iii]
+ selected_label_id = idx_p[selected_label_id]
+# iii += 1 for c++ test
+ median_labels.append(node_labels[selected_label_id])
+ already_selected[selected_label_id] = True
+ else: # skip the loop when all node_labels are selected. This happens when len(node_labels) <= self.__num_inits_increase_order.
+ break
+ else:
+ # Compute the initial node medians as the medians of randomly generated clusters of (roughly) equal size.
+ # @todo: go through and test.
+ shuffled_node_labels = [np.inf] * len(node_labels) #@todo: random?
+ # @todo: std::shuffle(shuffled_node_labels.begin(), shuffled_node_labels.end(), urng);?
+ cluster_size = len(node_labels) // self.__num_inits_increase_order
+ pos = 0
+ cluster = []
+ while len(median_labels) < self.__num_inits_increase_order - 1:
+ while pos < (len(median_labels) + 1) * cluster_size:
+ cluster.append(shuffled_node_labels[pos])
+ pos += 1
+ median_labels.append(self.__get_median_node_label(cluster))
+ cluster.clear()
+ while pos < len(shuffled_node_labels):
+ cluster.append(shuffled_node_labels[pos])
+ pos += 1
+ median_labels.append(self.__get_median_node_label(cluster))
+ cluster.clear()
+
+ # Run Lloyd's Algorithm.
+ converged = False
+ closest_median_ids = [np.inf] * len(node_labels)
+ clusters = [[] for _ in range(len(median_labels))]
+ itr = 1
+ while not self.__insertion_termination_criterion_met(converged, itr):
+ converged = not self.__update_clusters(node_labels, median_labels, closest_median_ids)
+ if not converged:
+ for cluster in clusters:
+ cluster.clear()
+ for label_id in range(0, len(node_labels)):
+ clusters[closest_median_ids[label_id]].append(node_labels[label_id])
+ for cluster_id in range(0, len(clusters)):
+ node_label = dict(median_labels[cluster_id])
+ self.__update_node_label([dict(item) for item in clusters[cluster_id]], node_label) # @todo: the dict is tupled again in the function, can be better.
+ median_labels[cluster_id] = tuple(item for item in node_label.items())
+ itr += 1
+
+
+ def __insertion_termination_criterion_met(self, converged, itr):
+ return converged or (itr >= self.__max_itrs_increase_order if self.__max_itrs_increase_order > 0 else False)
+
+
+ def __update_config(self, node_label, inserted_nodes, config, node_labels):
+ # Determine the best configuration.
+ config_modified = False
+ for graph_id, node_set in inserted_nodes.items():
+ best_assignment = config[graph_id]
+ best_cost = 0.0
+ if best_assignment[0] == np.inf:
+ best_cost = self.__node_del_cost
+ else:
+ best_cost = self.__ged_env.get_node_rel_cost(dict(node_label), dict(best_assignment[1])) - self.__node_ins_cost
+ for node in node_set:
+ cost = self.__ged_env.get_node_rel_cost(dict(node_label), dict(node[1])) - self.__node_ins_cost
+ if cost < best_cost - self.__epsilon:
+ best_cost = cost
+ best_assignment = node
+ config_modified = True
+ if self.__node_del_cost < best_cost - self.__epsilon:
+ best_cost = self.__node_del_cost
+ best_assignment = tuple((np.inf, best_assignment[1]))
+ config_modified = True
+ config[graph_id] = best_assignment
+
+ # Collect the node labels contained in the best configuration.
+ node_labels.clear()
+ for key, val in config.items():
+ if val[0] != np.inf:
+ node_labels.append(val[1])
+
+ # Return true if the configuration was modified.
+ return config_modified
+
+
+ def __update_node_label(self, node_labels, node_label):
+ if len(node_labels) == 0: # @todo: check if this is the correct solution. Especially after calling __update_config().
+ return False
+ new_node_label = self.__get_median_node_label(node_labels)
+ if self.__ged_env.get_node_rel_cost(new_node_label, node_label) > self.__epsilon:
+ node_label.clear()
+ for key, val in new_node_label.items():
+ node_label[key] = val
+ return True
+ return False
+
+
+ def __update_clusters(self, node_labels, median_labels, closest_median_ids):
+ # Determine the closest median for each node label.
+ clusters_modified = False
+ for label_id in range(0, len(node_labels)):
+ closest_median_id = np.inf
+ dist_to_closest_median = np.inf
+ for median_id in range(0, len(median_labels)):
+ dist_to_median = self.__ged_env.get_node_rel_cost(dict(median_labels[median_id]), dict(node_labels[label_id]))
+ if dist_to_median < dist_to_closest_median - self.__epsilon:
+ dist_to_closest_median = dist_to_median
+ closest_median_id = median_id
+ if closest_median_id != closest_median_ids[label_id]:
+ closest_median_ids[label_id] = closest_median_id
+ clusters_modified = True
+
+ # Return true if the clusters were modified.
+ return clusters_modified
+
+
+ def __add_node_to_median(self, best_config, best_label, median):
+ # Update the median.
+ nb_nodes_median = nx.number_of_nodes(median)
+ median.add_node(nb_nodes_median, **best_label)
+
+ # Update the node maps.
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ node_map_as_rel = []
+ node_map.as_relation(node_map_as_rel)
+ new_node_map = NodeMap(nx.number_of_nodes(median), node_map.num_target_nodes())
+ for assignment in node_map_as_rel:
+ new_node_map.add_assignment(assignment[0], assignment[1])
+ new_node_map.add_assignment(nx.number_of_nodes(median) - 1, best_config[graph_id])
+ self.__node_maps_from_median[graph_id] = new_node_map
+
+ # Increase overall number of increases.
+ self.__num_increase_order += 1
+
+
+ def __are_graphs_equal(self, g1, g2):
+ """
+ Check if the two graphs are equal.
+
+ Parameters
+ ----------
+ g1 : NetworkX graph object
+ Graph 1 to be compared.
+
+ g2 : NetworkX graph object
+ Graph 2 to be compared.
+
+ Returns
+ -------
+ bool
+ True if the two graph are equal.
+
+ Notes
+ -----
+ This is not an identical check. Here the two graphs are equal if and only if their original_node_ids, nodes, all node labels, edges and all edge labels are equal. This function is specifically designed for class `MedianGraphEstimator` and should not be used elsewhere.
+ """
+ # check original node ids.
+ if not g1.graph['original_node_ids'] == g2.graph['original_node_ids']:
+ return False # @todo: why check this?
+ # check nodes.
+ nlist1 = [n for n in g1.nodes(data=True)] # @todo: shallow?
+ nlist2 = [n for n in g2.nodes(data=True)]
+ if not nlist1 == nlist2:
+ return False
+ # check edges.
+ elist1 = [n for n in g1.edges(data=True)]
+ elist2 = [n for n in g2.edges(data=True)]
+ if not elist1 == elist2:
+ return False
+
+ return True
+
+
+ def compute_my_cost(self, g, h, node_map): # @todo: incomplete placeholder.
+ cost = 0.0
+ for node in g.nodes:
+ cost += 0
+ return cost
+
+
+ def set_label_names(self, node_labels=[], edge_labels=[], node_attrs=[], edge_attrs=[]):
+ self.__label_names = {'node_labels': node_labels, 'edge_labels': edge_labels,
+ 'node_attrs': node_attrs, 'edge_attrs': edge_attrs}
+
+
+# def __get_median_node_label(self, node_labels):
+# if len(self.__label_names['node_labels']) > 0:
+# return self.__get_median_label_symbolic(node_labels)
+# elif len(self.__label_names['node_attrs']) > 0:
+# return self.__get_median_label_nonsymbolic(node_labels)
+# else:
+# raise Exception('Node label names are not given.')
+#
+#
+# def __get_median_edge_label(self, edge_labels):
+# if len(self.__label_names['edge_labels']) > 0:
+# return self.__get_median_label_symbolic(edge_labels)
+# elif len(self.__label_names['edge_attrs']) > 0:
+# return self.__get_median_label_nonsymbolic(edge_labels)
+# else:
+# raise Exception('Edge label names are not given.')
+#
+#
+# def __get_median_label_symbolic(self, labels):
+# f_i = np.inf
+#
+# for label in labels:
+# pass
+#
+# # Construct histogram.
+# hist = {}
+# for label in labels:
+# label = tuple([kv for kv in label.items()]) # @todo: this may be slow.
+# if label not in hist:
+# hist[label] = 1
+# else:
+# hist[label] += 1
+#
+# # Return the label that appears most frequently.
+# best_count = 0
+# median_label = {}
+# for label, count in hist.items():
+# if count > best_count:
+# best_count = count
+# median_label = {kv[0]: kv[1] for kv in label}
+#
+# return median_label
+#
+#
+# def __get_median_label_nonsymbolic(self, labels):
+# if len(labels) == 0:
+# return {} # @todo
+# else:
+# # Transform the labels into coordinates and compute mean label as initial solution.
+# labels_as_coords = []
+# sums = {}
+# for key, val in labels[0].items():
+# sums[key] = 0
+# for label in labels:
+# coords = {}
+# for key, val in label.items():
+# label_f = float(val)
+# sums[key] += label_f
+# coords[key] = label_f
+# labels_as_coords.append(coords)
+# median = {}
+# for key, val in sums.items():
+# median[key] = val / len(labels)
+#
+# # Run main loop of Weiszfeld's Algorithm.
+# epsilon = 0.0001
+# delta = 1.0
+# num_itrs = 0
+# all_equal = False
+# while ((delta > epsilon) and (num_itrs < 100) and (not all_equal)):
+# numerator = {}
+# for key, val in sums.items():
+# numerator[key] = 0
+# denominator = 0
+# for label_as_coord in labels_as_coords:
+# norm = 0
+# for key, val in label_as_coord.items():
+# norm += (val - median[key]) ** 2
+# norm = np.sqrt(norm)
+# if norm > 0:
+# for key, val in label_as_coord.items():
+# numerator[key] += val / norm
+# denominator += 1.0 / norm
+# if denominator == 0:
+# all_equal = True
+# else:
+# new_median = {}
+# delta = 0.0
+# for key, val in numerator.items():
+# this_median = val / denominator
+# new_median[key] = this_median
+# delta += np.abs(median[key] - this_median)
+# median = new_median
+#
+# num_itrs += 1
+#
+# # Transform the solution to strings and return it.
+# median_label = {}
+# for key, val in median.items():
+# median_label[key] = str(val)
+# return median_label
+
+
+def _compute_medoid_parallel(graph_ids, sort, itr):
+ g_id = itr[0]
+ i = itr[1]
+ # @todo: timer not considered here.
+# if timer.expired():
+# self.__state = AlgorithmState.CALLED
+# break
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(g_id)
+ sum_of_distances = 0
+ for h_id in graph_ids:
+ nb_nodes_h = G_ged_env.get_graph_num_nodes(h_id)
+ if nb_nodes_g <= nb_nodes_h or not sort:
+ G_ged_env.run_method(g_id, h_id)
+ sum_of_distances += G_ged_env.get_upper_bound(g_id, h_id)
+ else:
+ G_ged_env.run_method(h_id, g_id)
+ sum_of_distances += G_ged_env.get_upper_bound(h_id, g_id)
+ return i, sum_of_distances
+
+
+def _compute_init_node_maps_parallel(gen_median_id, sort, nb_nodes_median, itr):
+ graph_id = itr
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not sort:
+ G_ged_env.run_method(gen_median_id, graph_id)
+ node_map = G_ged_env.get_node_map(gen_median_id, graph_id)
+# print(self.__node_maps_from_median[graph_id])
+ else:
+ G_ged_env.run_method(graph_id, gen_median_id)
+ node_map = G_ged_env.get_node_map(graph_id, gen_median_id)
+ node_map.forward_map, node_map.backward_map = node_map.backward_map, node_map.forward_map
+ sum_of_distance = node_map.induced_cost()
+# print(self.__sum_of_distances)
+ return graph_id, sum_of_distance, node_map
+
+
+def _update_node_maps_parallel(median_id, epsilon, sort, nb_nodes_median, itr):
+ graph_id = itr[0]
+ node_map = itr[1]
+
+ node_maps_were_modified = False
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not sort:
+ G_ged_env.run_method(median_id, graph_id)
+ if G_ged_env.get_upper_bound(median_id, graph_id) < node_map.induced_cost() - epsilon:
+ node_map = G_ged_env.get_node_map(median_id, graph_id)
+ node_maps_were_modified = True
+ else:
+ G_ged_env.run_method(graph_id, median_id)
+ if G_ged_env.get_upper_bound(graph_id, median_id) < node_map.induced_cost() - epsilon:
+ node_map = G_ged_env.get_node_map(graph_id, median_id)
+ node_map.forward_map, node_map.backward_map = node_map.backward_map, node_map.forward_map
+ node_maps_were_modified = True
+
+ return graph_id, node_map, node_maps_were_modified
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/median/median_graph_estimator_py.py b/lang/fr/gklearn/ged/median/median_graph_estimator_py.py
new file mode 100644
index 0000000000..41dc3c91e3
--- /dev/null
+++ b/lang/fr/gklearn/ged/median/median_graph_estimator_py.py
@@ -0,0 +1,1711 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Mar 16 18:04:55 2020
+
+@author: ljia
+"""
+import numpy as np
+from gklearn.ged.env import AlgorithmState, NodeMap
+from gklearn.ged.util import misc
+from gklearn.utils import Timer
+import time
+from tqdm import tqdm
+import sys
+import networkx as nx
+import multiprocessing
+from multiprocessing import Pool
+from functools import partial
+
+
+class MedianGraphEstimatorPy(object): # @todo: distinguish dummy_node from undefined node?
+ """Estimate median graphs using the pure Python version of GEDEnv.
+ """
+
+ def __init__(self, ged_env, constant_node_costs):
+ """Constructor.
+
+ Parameters
+ ----------
+ ged_env : gklearn.gedlib.gedlibpy.GEDEnv
+ Initialized GED environment. The edit costs must be set by the user.
+
+ constant_node_costs : Boolean
+ Set to True if the node relabeling costs are constant.
+ """
+ self.__ged_env = ged_env
+ self.__init_method = 'BRANCH_FAST'
+ self.__init_options = ''
+ self.__descent_method = 'BRANCH_FAST'
+ self.__descent_options = ''
+ self.__refine_method = 'IPFP'
+ self.__refine_options = ''
+ self.__constant_node_costs = constant_node_costs
+ self.__labeled_nodes = (ged_env.get_num_node_labels() > 1)
+ self.__node_del_cost = ged_env.get_node_del_cost(ged_env.get_node_label(1, to_dict=False))
+ self.__node_ins_cost = ged_env.get_node_ins_cost(ged_env.get_node_label(1, to_dict=False))
+ self.__labeled_edges = (ged_env.get_num_edge_labels() > 1)
+ self.__edge_del_cost = ged_env.get_edge_del_cost(ged_env.get_edge_label(1, to_dict=False))
+ self.__edge_ins_cost = ged_env.get_edge_ins_cost(ged_env.get_edge_label(1, to_dict=False))
+ self.__init_type = 'RANDOM'
+ self.__num_random_inits = 10
+ self.__desired_num_random_inits = 10
+ self.__use_real_randomness = True
+ self.__seed = 0
+ self.__parallel = True
+ self.__update_order = True
+ self.__sort_graphs = True # sort graphs by size when computing GEDs.
+ self.__refine = True
+ self.__time_limit_in_sec = 0
+ self.__epsilon = 0.0001
+ self.__max_itrs = 100
+ self.__max_itrs_without_update = 3
+ self.__num_inits_increase_order = 10
+ self.__init_type_increase_order = 'K-MEANS++'
+ self.__max_itrs_increase_order = 10
+ self.__print_to_stdout = 2
+ self.__median_id = np.inf # @todo: check
+ self.__node_maps_from_median = {}
+ self.__sum_of_distances = 0
+ self.__best_init_sum_of_distances = np.inf
+ self.__converged_sum_of_distances = np.inf
+ self.__runtime = None
+ self.__runtime_initialized = None
+ self.__runtime_converged = None
+ self.__itrs = [] # @todo: check: {} ?
+ self.__num_decrease_order = 0
+ self.__num_increase_order = 0
+ self.__num_converged_descents = 0
+ self.__state = AlgorithmState.TERMINATED
+ self.__label_names = {}
+
+ if ged_env is None:
+ raise Exception('The GED environment pointer passed to the constructor of MedianGraphEstimator is null.')
+ elif not ged_env.is_initialized():
+ raise Exception('The GED environment is uninitialized. Call gedlibpy.GEDEnv.init() before passing it to the constructor of MedianGraphEstimator.')
+
+
+ def set_options(self, options):
+ """Sets the options of the estimator.
+
+ Parameters
+ ----------
+ options : string
+ String that specifies with which options to run the estimator.
+ """
+ self.__set_default_options()
+ options_map = misc.options_string_to_options_map(options)
+ for opt_name, opt_val in options_map.items():
+ if opt_name == 'init-type':
+ self.__init_type = opt_val
+ if opt_val != 'MEDOID' and opt_val != 'RANDOM' and opt_val != 'MIN' and opt_val != 'MAX' and opt_val != 'MEAN':
+ raise Exception('Invalid argument ' + opt_val + ' for option init-type. Usage: options = "[--init-type RANDOM|MEDOID|EMPTY|MIN|MAX|MEAN] [...]"')
+ elif opt_name == 'random-inits':
+ try:
+ self.__num_random_inits = int(opt_val)
+ self.__desired_num_random_inits = self.__num_random_inits
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option random-inits. Usage: options = "[--random-inits <convertible to int greater 0>]"')
+
+ if self.__num_random_inits <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option random-inits. Usage: options = "[--random-inits <convertible to int greater 0>]"')
+
+ elif opt_name == 'randomness':
+ if opt_val == 'PSEUDO':
+ self.__use_real_randomness = False
+
+ elif opt_val == 'REAL':
+ self.__use_real_randomness = True
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option randomness. Usage: options = "[--randomness REAL|PSEUDO] [...]"')
+
+ elif opt_name == 'stdout':
+ if opt_val == '0':
+ self.__print_to_stdout = 0
+
+ elif opt_val == '1':
+ self.__print_to_stdout = 1
+
+ elif opt_val == '2':
+ self.__print_to_stdout = 2
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option stdout. Usage: options = "[--stdout 0|1|2] [...]"')
+
+ elif opt_name == 'parallel':
+ if opt_val == 'TRUE':
+ self.__parallel = True
+
+ elif opt_val == 'FALSE':
+ self.__parallel = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option parallel. Usage: options = "[--parallel TRUE|FALSE] [...]"')
+
+ elif opt_name == 'update-order':
+ if opt_val == 'TRUE':
+ self.__update_order = True
+
+ elif opt_val == 'FALSE':
+ self.__update_order = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option update-order. Usage: options = "[--update-order TRUE|FALSE] [...]"')
+
+ elif opt_name == 'sort-graphs':
+ if opt_val == 'TRUE':
+ self.__sort_graphs = True
+
+ elif opt_val == 'FALSE':
+ self.__sort_graphs = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option sort-graphs. Usage: options = "[--sort-graphs TRUE|FALSE] [...]"')
+
+ elif opt_name == 'refine':
+ if opt_val == 'TRUE':
+ self.__refine = True
+
+ elif opt_val == 'FALSE':
+ self.__refine = False
+
+ else:
+ raise Exception('Invalid argument "' + opt_val + '" for option refine. Usage: options = "[--refine TRUE|FALSE] [...]"')
+
+ elif opt_name == 'time-limit':
+ try:
+ self.__time_limit_in_sec = float(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option time-limit. Usage: options = "[--time-limit <convertible to double>] [...]"')
+
+ elif opt_name == 'max-itrs':
+ try:
+ self.__max_itrs = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs. Usage: options = "[--max-itrs <convertible to int>] [...]"')
+
+ elif opt_name == 'max-itrs-without-update':
+ try:
+ self.__max_itrs_without_update = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs-without-update. Usage: options = "[--max-itrs-without-update <convertible to int>] [...]"')
+
+ elif opt_name == 'seed':
+ try:
+ self.__seed = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option seed. Usage: options = "[--seed <convertible to int greater equal 0>] [...]"')
+
+ elif opt_name == 'epsilon':
+ try:
+ self.__epsilon = float(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option epsilon. Usage: options = "[--epsilon <convertible to double greater 0>] [...]"')
+
+ if self.__epsilon <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option epsilon. Usage: options = "[--epsilon <convertible to double greater 0>] [...]"')
+
+ elif opt_name == 'inits-increase-order':
+ try:
+ self.__num_inits_increase_order = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option inits-increase-order. Usage: options = "[--inits-increase-order <convertible to int greater 0>]"')
+
+ if self.__num_inits_increase_order <= 0:
+ raise Exception('Invalid argument "' + opt_val + '" for option inits-increase-order. Usage: options = "[--inits-increase-order <convertible to int greater 0>]"')
+
+ elif opt_name == 'init-type-increase-order':
+ self.__init_type_increase_order = opt_val
+ if opt_val != 'CLUSTERS' and opt_val != 'K-MEANS++':
+ raise Exception('Invalid argument ' + opt_val + ' for option init-type-increase-order. Usage: options = "[--init-type-increase-order CLUSTERS|K-MEANS++] [...]"')
+
+ elif opt_name == 'max-itrs-increase-order':
+ try:
+ self.__max_itrs_increase_order = int(opt_val)
+
+ except:
+ raise Exception('Invalid argument "' + opt_val + '" for option max-itrs-increase-order. Usage: options = "[--max-itrs-increase-order <convertible to int>] [...]"')
+
+ else:
+ valid_options = '[--init-type <arg>] [--random-inits <arg>] [--randomness <arg>] [--seed <arg>] [--stdout <arg>] '
+ valid_options += '[--time-limit <arg>] [--max-itrs <arg>] [--epsilon <arg>] '
+ valid_options += '[--inits-increase-order <arg>] [--init-type-increase-order <arg>] [--max-itrs-increase-order <arg>]'
+ raise Exception('Invalid option "' + opt_name + '". Usage: options = "' + valid_options + '"')
+
+
+ def set_init_method(self, init_method, init_options=''):
+ """Selects method to be used for computing the initial medoid graph.
+
+ Parameters
+ ----------
+ init_method : string
+ The selected method. Default: ged::Options::GEDMethod::BRANCH_UNIFORM.
+
+ init_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect unless "--init-type MEDOID" is passed to set_options().
+ """
+ self.__init_method = init_method
+ self.__init_options = init_options
+
+
+ def set_descent_method(self, descent_method, descent_options=''):
+ """Selects method to be used for block gradient descent..
+
+ Parameters
+ ----------
+ descent_method : string
+ The selected method. Default: ged::Options::GEDMethod::BRANCH_FAST.
+
+ descent_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect unless "--init-type MEDOID" is passed to set_options().
+ """
+ self.__descent_method = descent_method
+ self.__descent_options = descent_options
+
+
+ def set_refine_method(self, refine_method, refine_options):
+ """Selects method to be used for improving the sum of distances and the node maps for the converged median.
+
+ Parameters
+ ----------
+ refine_method : string
+ The selected method. Default: "IPFP".
+
+ refine_options : string
+ The options for the selected method. Default: "".
+
+ Notes
+ -----
+ Has no effect if "--refine FALSE" is passed to set_options().
+ """
+ self.__refine_method = refine_method
+ self.__refine_options = refine_options
+
+
+ def run(self, graph_ids, set_median_id, gen_median_id):
+ """Computes a generalized median graph.
+
+ Parameters
+ ----------
+ graph_ids : list[integer]
+ The IDs of the graphs for which the median should be computed. Must have been added to the environment passed to the constructor.
+
+ set_median_id : integer
+ The ID of the computed set-median. A dummy graph with this ID must have been added to the environment passed to the constructor. Upon termination, the computed median can be obtained via gklearn.gedlib.gedlibpy.GEDEnv.get_graph().
+
+
+ gen_median_id : integer
+ The ID of the computed generalized median. Upon termination, the computed median can be obtained via gklearn.gedlib.gedlibpy.GEDEnv.get_graph().
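+
+ Notes
+ -----
+ A typical call sequence (see test_median_graph_estimator.py below) is: set_options(...), set_label_names(...), set_init_method(...), set_descent_method(...), then run(graph_ids, set_median_id, gen_median_id), where set_median_id and gen_median_id are the IDs of dummy graphs previously added to the environment.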
+ """
+ # Sanity checks.
+ if len(graph_ids) == 0:
+ raise Exception('Empty vector of graph IDs, unable to compute median.')
+ all_graphs_empty = True
+ for graph_id in graph_ids:
+ if self.__ged_env.get_graph_num_nodes(graph_id) > 0:
+ all_graphs_empty = False
+ break
+ if all_graphs_empty:
+ raise Exception('All graphs in the collection are empty.')
+
+ # Start timer and record start time.
+ start = time.time()
+ timer = Timer(self.__time_limit_in_sec)
+ self.__median_id = gen_median_id
+ self.__state = AlgorithmState.TERMINATED
+
+ # Get NetworkX graph representations of the input graphs.
+ graphs = {}
+ for graph_id in graph_ids:
+ # @todo: get_nx_graph() function may need to be modified according to the coming code.
+ graphs[graph_id] = self.__ged_env.get_nx_graph(graph_id)
+# print(self.__ged_env.get_graph_internal_id(0))
+# print(graphs[0].graph)
+# print(graphs[0].nodes(data=True))
+# print(graphs[0].edges(data=True))
+# print(nx.adjacency_matrix(graphs[0]))
+
+ # Construct initial medians.
+ medians = []
+ self.__construct_initial_medians(graph_ids, timer, medians)
+ end_init = time.time()
+ self.__runtime_initialized = end_init - start
+# print(medians[0].graph)
+# print(medians[0].nodes(data=True))
+# print(medians[0].edges(data=True))
+# print(nx.adjacency_matrix(medians[0]))
+
+ # Reset information about iterations and number of times the median decreases and increases.
+ self.__itrs = [0] * len(medians)
+ self.__num_decrease_order = 0
+ self.__num_increase_order = 0
+ self.__num_converged_descents = 0
+
+ # Initialize the best median.
+ best_sum_of_distances = np.inf
+ self.__best_init_sum_of_distances = np.inf
+ node_maps_from_best_median = {}
+
+ # Run block gradient descent from all initial medians.
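+ # For each initial median: load it into the GED environment, compute node maps
+ # to all input graphs, then alternately update the median (node labels, edges,
+ # optionally its order) and the node maps until the termination criterion is met.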
+ self.__ged_env.set_method(self.__descent_method, self.__descent_options)
+ for median_pos in range(0, len(medians)):
+
+ # Terminate if the timer has expired and at least one SOD has been computed.
+ if timer.expired() and median_pos > 0:
+ break
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Block gradient descent for initial median', str(median_pos + 1), 'of', str(len(medians)), '.')
+ print('-----------------------------------------------------------')
+
+ # Get reference to the median.
+ median = medians[median_pos]
+
+ # Load initial median into the environment.
+ self.__ged_env.load_nx_graph(median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+
+ # Compute node maps and sum of distances for initial median.
+# xxx = self.__node_maps_from_median
+ self.__compute_init_node_maps(graph_ids, gen_median_id)
+# yyy = self.__node_maps_from_median
+
+ self.__best_init_sum_of_distances = min(self.__best_init_sum_of_distances, self.__sum_of_distances)
+ self.__ged_env.load_nx_graph(median, set_median_id)
+# print(self.__best_init_sum_of_distances)
+
+ # Run block gradient descent from initial median.
+ converged = False
+ itrs_without_update = 0
+ while not self.__termination_criterion_met(converged, timer, self.__itrs[median_pos], itrs_without_update):
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Iteration', str(self.__itrs[median_pos] + 1), 'for initial median', str(median_pos + 1), 'of', str(len(medians)), '.')
+ print('-----------------------------------------------------------')
+
+ # Initialize flags that tell us what happened in the iteration.
+ median_modified = False
+ node_maps_modified = False
+ decreased_order = False
+ increased_order = False
+
+ # Update the median.
+ median_modified = self.__update_median(graphs, median)
+ if self.__update_order:
+ if not median_modified or self.__itrs[median_pos] == 0:
+ decreased_order = self.__decrease_order(graphs, median)
+ if not decreased_order or self.__itrs[median_pos] == 0:
+ increased_order = self.__increase_order(graphs, median)
+
+ # Update the number of iterations without update of the median.
+ if median_modified or decreased_order or increased_order:
+ itrs_without_update = 0
+ else:
+ itrs_without_update += 1
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Loading median to environment: ... ', end='')
+
+ # Load the median into the environment.
+ # @todo: should this function use the original node label?
+ self.__ged_env.load_nx_graph(median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Updating induced costs: ... ', end='')
+
+ # Compute induced costs of the old node maps w.r.t. the updated median.
+ for graph_id in graph_ids:
+# print(self.__node_maps_from_median[graph_id].induced_cost())
+# xxx = self.__node_maps_from_median[graph_id]
+ self.__ged_env.compute_induced_cost(gen_median_id, graph_id, self.__node_maps_from_median[graph_id])
+# print('---------------------------------------')
+# print(self.__node_maps_from_median[graph_id].induced_cost())
+ # @todo: !!! This value is slightly different from the one computed by the C++ program, which might be a bug! Use it very carefully!
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Update the node maps.
+ node_maps_modified = self.__update_node_maps()
+
+ # Update the order of the median if no improvement can be found with the current order.
+
+ # Update the sum of distances.
+ old_sum_of_distances = self.__sum_of_distances
+ self.__sum_of_distances = 0
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ self.__sum_of_distances += node_map.induced_cost()
+# print(self.__sum_of_distances)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Old local SOD: ', old_sum_of_distances)
+ print('New local SOD: ', self.__sum_of_distances)
+ print('Best converged SOD: ', best_sum_of_distances)
+ print('Modified median: ', median_modified)
+ print('Modified node maps: ', node_maps_modified)
+ print('Decreased order: ', decreased_order)
+ print('Increased order: ', increased_order)
+ print('===========================================================\n')
+
+ converged = not (median_modified or node_maps_modified or decreased_order or increased_order)
+
+ self.__itrs[median_pos] += 1
+
+ # Update the best median.
+ if self.__sum_of_distances < best_sum_of_distances:
+ best_sum_of_distances = self.__sum_of_distances
+ node_maps_from_best_median = self.__node_maps_from_median.copy() # @todo: this is a shallow copy, not sure if it is enough.
+ best_median = median
+
+ # Update the number of converged descents.
+ if converged:
+ self.__num_converged_descents += 1
+
+ # Store the best encountered median.
+ self.__sum_of_distances = best_sum_of_distances
+ self.__node_maps_from_median = node_maps_from_best_median
+ self.__ged_env.load_nx_graph(best_median, gen_median_id)
+ self.__ged_env.init(self.__ged_env.get_init_type())
+ end_descent = time.time()
+ self.__runtime_converged = end_descent - start
+
+ # Refine the sum of distances and the node maps for the converged median.
+ self.__converged_sum_of_distances = self.__sum_of_distances
+ if self.__refine:
+ self.__improve_sum_of_distances(timer)
+
+ # Record end time, set runtime and reset the number of initial medians.
+ end = time.time()
+ self.__runtime = end - start
+ self.__num_random_inits = self.__desired_num_random_inits
+
+ # Print global information.
+ if self.__print_to_stdout != 0:
+ print('\n===========================================================')
+ print('Finished computation of generalized median graph.')
+ print('-----------------------------------------------------------')
+ print('Best SOD after initialization: ', self.__best_init_sum_of_distances)
+ print('Converged SOD: ', self.__converged_sum_of_distances)
+ if self.__refine:
+ print('Refined SOD: ', self.__sum_of_distances)
+ print('Overall runtime: ', self.__runtime)
+ print('Runtime of initialization: ', self.__runtime_initialized)
+ print('Runtime of block gradient descent: ', self.__runtime_converged - self.__runtime_initialized)
+ if self.__refine:
+ print('Runtime of refinement: ', self.__runtime - self.__runtime_converged)
+ print('Number of initial medians: ', len(medians))
+ total_itr = 0
+ num_started_descents = 0
+ for itr in self.__itrs:
+ total_itr += itr
+ if itr > 0:
+ num_started_descents += 1
+ print('Size of graph collection: ', len(graph_ids))
+ print('Number of started descents: ', num_started_descents)
+ print('Number of converged descents: ', self.__num_converged_descents)
+ print('Overall number of iterations: ', total_itr)
+ print('Overall number of times the order decreased: ', self.__num_decrease_order)
+ print('Overall number of times the order increased: ', self.__num_increase_order)
+ print('===========================================================\n')
+
+
+ def __improve_sum_of_distances(self, timer): # @todo: go through and test
+ # Use method selected for refinement phase.
+ self.__ged_env.set_method(self.__refine_method, self.__refine_options)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Improving node maps', total=len(self.__node_maps_from_median), file=sys.stdout)
+ print('\n===========================================================')
+ print('Improving node maps and SOD for converged median.')
+ print('-----------------------------------------------------------')
+ progress.update(1)
+
+ # Improving the node maps.
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ if timer.expired():
+ if self.__state == AlgorithmState.TERMINATED:
+ self.__state = AlgorithmState.CONVERGED
+ break
+
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(self.__median_id, graph_id)
+ if self.__ged_env.get_upper_bound(self.__median_id, graph_id) < node_map.induced_cost():
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(self.__median_id, graph_id)
+ else:
+ self.__ged_env.run_method(graph_id, self.__median_id)
+ if self.__ged_env.get_upper_bound(graph_id, self.__median_id) < node_map.induced_cost():
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, self.__median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+
+ self.__sum_of_distances += self.__node_maps_from_median[graph_id].induced_cost()
+
+ # Print information.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ self.__sum_of_distances = 0.0
+ for key, val in self.__node_maps_from_median.items():
+ self.__sum_of_distances += val.induced_cost()
+
+ # Print information.
+ if self.__print_to_stdout == 2:
+ print('===========================================================\n')
+
+
+ def __median_available(self):
+ return self.__median_id != np.inf
+
+
+ def get_state(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_state().')
+ return self.__state
+
+
+ def get_sum_of_distances(self, state=''):
+ """Returns the sum of distances.
+
+ Parameters
+ ----------
+ state : string
+ The state of the estimator. Can be 'initialized' or 'converged'. Default: ""
+
+ Returns
+ -------
+ float
+ The sum of distances (SOD) of the median when the estimator was in the state `state` during the last call to run(). If `state` is not given, the converged SOD (without refinement) or refined SOD (with refinement) is returned.
+ """
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_sum_of_distances().')
+ if state == 'initialized':
+ return self.__best_init_sum_of_distances
+ if state == 'converged':
+ return self.__converged_sum_of_distances
+ return self.__sum_of_distances
+
+
+ def get_runtime(self, state):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_runtime().')
+ if state == AlgorithmState.INITIALIZED:
+ return self.__runtime_initialized
+ if state == AlgorithmState.CONVERGED:
+ return self.__runtime_converged
+ return self.__runtime
+
+
+ def get_num_itrs(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_itrs().')
+ return self.__itrs
+
+
+ def get_num_times_order_decreased(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_times_order_decreased().')
+ return self.__num_decrease_order
+
+
+ def get_num_times_order_increased(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_times_order_increased().')
+ return self.__num_increase_order
+
+
+ def get_num_converged_descents(self):
+ if not self.__median_available():
+ raise Exception('No median has been computed. Call run() before calling get_num_converged_descents().')
+ return self.__num_converged_descents
+
+
+ def get_ged_env(self):
+ return self.__ged_env
+
+
+ def __set_default_options(self):
+ self.__init_type = 'RANDOM'
+ self.__num_random_inits = 10
+ self.__desired_num_random_inits = 10
+ self.__use_real_randomness = True
+ self.__seed = 0
+ self.__parallel = True
+ self.__update_order = True
+ self.__sort_graphs = True
+ self.__refine = True
+ self.__time_limit_in_sec = 0
+ self.__epsilon = 0.0001
+ self.__max_itrs = 100
+ self.__max_itrs_without_update = 3
+ self.__num_inits_increase_order = 10
+ self.__init_type_increase_order = 'K-MEANS++'
+ self.__max_itrs_increase_order = 10
+ self.__print_to_stdout = 2
+ self.__label_names = {}
+
+
+ def __construct_initial_medians(self, graph_ids, timer, initial_medians):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n===========================================================')
+ print('Constructing initial median(s).')
+ print('-----------------------------------------------------------')
+
+ # Compute or sample the initial median(s).
+ initial_medians.clear()
+ if self.__init_type == 'MEDOID':
+ self.__compute_medoid(graph_ids, timer, initial_medians)
+ elif self.__init_type == 'MAX':
+ pass # @todo
+# compute_max_order_graph_(graph_ids, initial_medians)
+ elif self.__init_type == 'MIN':
+ pass # @todo
+# compute_min_order_graph_(graph_ids, initial_medians)
+ elif self.__init_type == 'MEAN':
+ pass # @todo
+# compute_mean_order_graph_(graph_ids, initial_medians)
+ else:
+ pass # @todo
+# sample_initial_medians_(graph_ids, initial_medians)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('===========================================================')
+
+
+ def __compute_medoid(self, graph_ids, timer, initial_medians):
+ # Use method selected for initialization phase.
+ self.__ged_env.set_method(self.__init_method, self.__init_options)
+
+ # Compute the medoid.
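+ # The medoid is the input graph with the smallest sum of GED upper bounds to
+ # all other input graphs; it is returned as the initial median.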
+ if self.__parallel:
+ # @todo: note that when running in parallel, self.__ged_env is not modified.
+ sum_of_distances_list = [np.inf] * len(graph_ids)
+ len_itr = len(graph_ids)
+ itr = zip(graph_ids, range(0, len(graph_ids)))
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ do_fun = partial(_compute_medoid_parallel, graph_ids, self.__sort_graphs)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Computing medoid', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for i, dis in iterator:
+ sum_of_distances_list[i] = dis
+ pool.close()
+ pool.join()
+
+ medoid_id = np.argmin(sum_of_distances_list)
+ best_sum_of_distances = sum_of_distances_list[medoid_id]
+
+ initial_medians.append(self.__ged_env.get_nx_graph(medoid_id)) # @todo
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Computing medoid', total=len(graph_ids), file=sys.stdout)
+
+ medoid_id = graph_ids[0]
+ best_sum_of_distances = np.inf
+ for g_id in graph_ids:
+ if timer.expired():
+ self.__state = AlgorithmState.CALLED
+ break
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(g_id)
+ sum_of_distances = 0
+ for h_id in graph_ids: # @todo: this can be faster, only a half is needed.
+ nb_nodes_h = self.__ged_env.get_graph_num_nodes(h_id)
+ if nb_nodes_g <= nb_nodes_h or not self.__sort_graphs:
+ self.__ged_env.run_method(g_id, h_id) # @todo
+ sum_of_distances += self.__ged_env.get_upper_bound(g_id, h_id)
+ else:
+ self.__ged_env.run_method(h_id, g_id)
+ sum_of_distances += self.__ged_env.get_upper_bound(h_id, g_id)
+ if sum_of_distances < best_sum_of_distances:
+ best_sum_of_distances = sum_of_distances
+ medoid_id = g_id
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ initial_medians.append(self.__ged_env.get_nx_graph(medoid_id)) # @todo
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+
+ def __compute_init_node_maps(self, graph_ids, gen_median_id):
+ # Compute node maps and sum of distances for initial median.
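+ # With '--sort-graphs TRUE' (default), GED is always computed from the smaller
+ # to the larger graph; when the input graph is smaller than the median, the
+ # resulting node map is swapped (forward/backward maps exchanged) so that it
+ # still maps median nodes to graph nodes.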
+ if self.__parallel:
+ # @todo: note that when running in parallel, self.__ged_env is not modified.
+ self.__sum_of_distances = 0
+ self.__node_maps_from_median.clear()
+ sum_of_distances_list = [0] * len(graph_ids)
+
+ len_itr = len(graph_ids)
+ itr = graph_ids
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(gen_median_id)
+ do_fun = partial(_compute_init_node_maps_parallel, gen_median_id, self.__sort_graphs, nb_nodes_median)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Computing initial node maps', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for g_id, sod, node_maps in iterator:
+ sum_of_distances_list[g_id] = sod
+ self.__node_maps_from_median[g_id] = node_maps
+ pool.close()
+ pool.join()
+
+ self.__sum_of_distances = np.sum(sum_of_distances_list)
+# xxx = self.__node_maps_from_median
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Computing initial node maps', total=len(graph_ids), file=sys.stdout)
+
+ self.__sum_of_distances = 0
+ self.__node_maps_from_median.clear()
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(gen_median_id)
+ for graph_id in graph_ids:
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(gen_median_id, graph_id)
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(gen_median_id, graph_id)
+ else:
+ self.__ged_env.run_method(graph_id, gen_median_id)
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, gen_median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+ # print(self.__node_maps_from_median[graph_id])
+ self.__sum_of_distances += self.__node_maps_from_median[graph_id].induced_cost()
+ # print(self.__sum_of_distances)
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+
+ def __termination_criterion_met(self, converged, timer, itr, itrs_without_update):
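+ # The descent for the current initial median stops when the time limit or the
+ # maximum number of iterations is reached, when it has converged (neither the
+ # median nor the node maps changed), or when the median has not been updated
+ # for more than max-itrs-without-update consecutive iterations.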
+ if timer.expired() or (itr >= self.__max_itrs if self.__max_itrs >= 0 else False):
+ if self.__state == AlgorithmState.TERMINATED:
+ self.__state = AlgorithmState.INITIALIZED
+ return True
+ return converged or (itrs_without_update > self.__max_itrs_without_update if self.__max_itrs_without_update >= 0 else False)
+
+
+ def __update_median(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Updating median: ', end='')
+
+ # Store copy of the old median.
+ old_median = median.copy() # @todo: this is just a shallow copy.
+
+ # Update the node labels.
+ if self.__labeled_nodes:
+ self.__update_node_labels(graphs, median)
+
+ # Update the edges and their labels.
+ self.__update_edges(graphs, median)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ return not self.__are_graphs_equal(median, old_median)
+
+
+ def __update_node_labels(self, graphs, median):
+# print('----------------------------')
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('nodes ... ', end='')
+
+ # Iterate through all nodes of the median.
+ for i in range(0, nx.number_of_nodes(median)):
+# print('i: ', i)
+ # Collect the labels of the substituted nodes.
+ node_labels = []
+ for graph_id, graph in graphs.items():
+# print('graph_id: ', graph_id)
+# print(self.__node_maps_from_median[graph_id])
+# print(self.__node_maps_from_median[graph_id].forward_map, self.__node_maps_from_median[graph_id].backward_map)
+ k = self.__node_maps_from_median[graph_id].image(i)
+# print('k: ', k)
+ if k != np.inf:
+ node_labels.append(graph.nodes[k])
+
+ # Compute the median label and update the median.
+ if len(node_labels) > 0:
+# median_label = self.__ged_env.get_median_node_label(node_labels)
+ median_label = self.__get_median_node_label(node_labels)
+ if self.__ged_env.get_node_rel_cost(median.nodes[i], median_label) > self.__epsilon:
+ nx.set_node_attributes(median, {i: median_label})
+
+
+ def __update_edges(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('edges ... ', end='')
+
+# # Clear the adjacency lists of the median and reset number of edges to 0.
+# median_edges = list(median.edges)
+# for (head, tail) in median_edges:
+# median.remove_edge(head, tail)
+
+ # @todo: what if edge is not labeled?
+ # Iterate through all possible edges (i,j) of the median.
+ for i in range(0, nx.number_of_nodes(median)):
+ for j in range(i + 1, nx.number_of_nodes(median)):
+
+ # Collect the labels of the edges to which (i,j) is mapped by the node maps.
+ edge_labels = []
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ l = self.__node_maps_from_median[graph_id].image(j)
+ if k != np.inf and l != np.inf:
+ if graph.has_edge(k, l):
+ edge_labels.append(graph.edges[(k, l)])
+
+ # Compute the median edge label and the overall edge relabeling cost.
+ rel_cost = 0
+ median_label = self.__ged_env.get_edge_label(1, to_dict=True)
+ if median.has_edge(i, j):
+ median_label = median.edges[(i, j)]
+ if self.__labeled_edges and len(edge_labels) > 0:
+ new_median_label = self.__get_median_edge_label(edge_labels)
+ if self.__ged_env.get_edge_rel_cost(median_label, new_median_label) > self.__epsilon:
+ median_label = new_median_label
+ for edge_label in edge_labels:
+ rel_cost += self.__ged_env.get_edge_rel_cost(median_label, edge_label)
+
+ # Update the median.
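+ # Keep edge (i, j) iff keeping it is cheaper than dropping it: relabeling it in
+ # the m graphs where it is mapped plus deleting it in the remaining (n - m)
+ # graphs must cost less than inserting it in those m graphs, i.e.
+ # rel_cost < (ins + del) * m - del * n.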
+ if median.has_edge(i, j):
+ median.remove_edge(i, j)
+ if rel_cost < (self.__edge_ins_cost + self.__edge_del_cost) * len(edge_labels) - self.__edge_del_cost * len(graphs):
+ median.add_edge(i, j, **median_label)
+# else:
+# if median.has_edge(i, j):
+# median.remove_edge(i, j)
+
+
+ def __update_node_maps(self):
+ # Update the node maps.
+ if self.__parallel:
+ # @todo: note that when running in parallel, self.__ged_env is not modified.
+ node_maps_were_modified = False
+# xxx = self.__node_maps_from_median.copy()
+
+ len_itr = len(self.__node_maps_from_median)
+ itr = [item for item in self.__node_maps_from_median.items()]
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(ged_env_toshare):
+ global G_ged_env
+ G_ged_env = ged_env_toshare
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ do_fun = partial(_update_node_maps_parallel, self.__median_id, self.__epsilon, self.__sort_graphs, nb_nodes_median)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(self.__ged_env,))
+ if self.__print_to_stdout == 2:
+ iterator = tqdm(pool.imap_unordered(do_fun, itr, chunksize),
+ desc='Updating node maps', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_fun, itr, chunksize)
+ for g_id, node_map, nm_modified in iterator:
+ self.__node_maps_from_median[g_id] = node_map
+ if nm_modified:
+ node_maps_were_modified = True
+ pool.close()
+ pool.join()
+# yyy = self.__node_maps_from_median.copy()
+
+ else:
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress = tqdm(desc='Updating node maps', total=len(self.__node_maps_from_median), file=sys.stdout)
+
+ node_maps_were_modified = False
+ nb_nodes_median = self.__ged_env.get_graph_num_nodes(self.__median_id)
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ nb_nodes_g = self.__ged_env.get_graph_num_nodes(graph_id)
+
+ if nb_nodes_median <= nb_nodes_g or not self.__sort_graphs:
+ self.__ged_env.run_method(self.__median_id, graph_id)
+ if self.__ged_env.get_upper_bound(self.__median_id, graph_id) < node_map.induced_cost() - self.__epsilon:
+ # xxx = self.__node_maps_from_median[graph_id]
+ self.__node_maps_from_median[graph_id] = self.__ged_env.get_node_map(self.__median_id, graph_id)
+ node_maps_were_modified = True
+
+ else:
+ self.__ged_env.run_method(graph_id, self.__median_id)
+ if self.__ged_env.get_upper_bound(graph_id, self.__median_id) < node_map.induced_cost() - self.__epsilon:
+ node_map_tmp = self.__ged_env.get_node_map(graph_id, self.__median_id)
+ node_map_tmp.forward_map, node_map_tmp.backward_map = node_map_tmp.backward_map, node_map_tmp.forward_map
+ self.__node_maps_from_median[graph_id] = node_map_tmp
+ node_maps_were_modified = True
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ progress.update(1)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('\n')
+
+ # Return true if the node maps were modified.
+ return node_maps_were_modified
+
+
+ def __decrease_order(self, graphs, median):
+ # Print information about current iteration
+ if self.__print_to_stdout == 2:
+ print('Trying to decrease order: ... ', end='')
+
+ if nx.number_of_nodes(median) <= 1:
+ if self.__print_to_stdout == 2:
+ print('median graph has only 1 node, skipping decrease.')
+ return False
+
+ # Initialize ID of the node that is to be deleted.
+ id_deleted_node = [None] # @todo: or np.inf
+ decreased_order = False
+
+ # Decrease the order as long as the best deletion delta is negative.
+ while self.__compute_best_deletion_delta(graphs, median, id_deleted_node) < -self.__epsilon:
+ decreased_order = True
+ self.__delete_node_from_median(id_deleted_node[0], median)
+ if nx.number_of_nodes(median) <= 1:
+ if self.__print_to_stdout == 2:
+ print('decrease stopped because only 1 node remains in the median graph. ', end='')
+ break
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Return true iff the order was decreased.
+ return decreased_order
+
+
+ def __compute_best_deletion_delta(self, graphs, median, id_deleted_node):
+ best_delta = 0.0
+
+ # Determine node that should be deleted (if any).
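+ # The delta estimates how the SOD would change if node i were removed from the
+ # median: substitutions of i become insertions in the input graphs, while
+ # deletions of i (and of its incident median edges) disappear. A negative delta
+ # means removing i would decrease the SOD.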
+ for i in range(0, nx.number_of_nodes(median)):
+ # Compute cost delta.
+ delta = 0.0
+ for graph_id, graph in graphs.items():
+ k = self.__node_maps_from_median[graph_id].image(i)
+ if k == np.inf:
+ delta -= self.__node_del_cost
+ else:
+ delta += self.__node_ins_cost - self.__ged_env.get_node_rel_cost(median.nodes[i], graph.nodes[k])
+ for j, j_label in median[i].items():
+ l = self.__node_maps_from_median[graph_id].image(j)
+ if k == np.inf or l == np.inf:
+ delta -= self.__edge_del_cost
+ elif not graph.has_edge(k, l):
+ delta -= self.__edge_del_cost
+ else:
+ delta += self.__edge_ins_cost - self.__ged_env.get_edge_rel_cost(j_label, graph.edges[(k, l)])
+
+ # Update best deletion delta.
+ if delta < best_delta - self.__epsilon:
+ best_delta = delta
+ id_deleted_node[0] = i
+# id_deleted_node[0] = 3 # @todo:
+
+ return best_delta
+
+
+ def __delete_node_from_median(self, id_deleted_node, median):
+ # Update the median.
+ mapping = {}
+ for i in range(0, nx.number_of_nodes(median)):
+ if i != id_deleted_node:
+ new_i = (i if i < id_deleted_node else (i - 1))
+ mapping[i] = new_i
+ median.remove_node(id_deleted_node)
+ nx.relabel_nodes(median, mapping, copy=False)
+
+ # Update the node maps.
+# xxx = self.__node_maps_from_median
+ for key, node_map in self.__node_maps_from_median.items():
+ new_node_map = NodeMap(nx.number_of_nodes(median), node_map.num_target_nodes())
+ is_unassigned_target_node = [True] * node_map.num_target_nodes()
+ for i in range(0, nx.number_of_nodes(median) + 1):
+ if i != id_deleted_node:
+ new_i = (i if i < id_deleted_node else (i - 1))
+ k = node_map.image(i)
+ new_node_map.add_assignment(new_i, k)
+ if k != np.inf:
+ is_unassigned_target_node[k] = False
+ for k in range(0, node_map.num_target_nodes()):
+ if is_unassigned_target_node[k]:
+ new_node_map.add_assignment(np.inf, k)
+# print(self.__node_maps_from_median[key].forward_map, self.__node_maps_from_median[key].backward_map)
+# print(new_node_map.forward_map, new_node_map.backward_map
+ self.__node_maps_from_median[key] = new_node_map
+
+ # Increase overall number of decreases.
+ self.__num_decrease_order += 1
+
+
+ def __increase_order(self, graphs, median):
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('Trying to increase order: ... ', end='')
+
+ # Initialize the best configuration and the best label of the node that is to be inserted.
+ best_config = {}
+ best_label = self.__ged_env.get_node_label(1, to_dict=True)
+ increased_order = False
+
+ # Increase the order as long as the best insertion delta is negative.
+ while self.__compute_best_insertion_delta(graphs, best_config, best_label) < - self.__epsilon:
+ increased_order = True
+ self.__add_node_to_median(best_config, best_label, median)
+
+ # Print information about current iteration.
+ if self.__print_to_stdout == 2:
+ print('done.')
+
+ # Return true iff the order was increased.
+ return increased_order
+
+
+ def __compute_best_insertion_delta(self, graphs, best_config, best_label):
+ # Construct sets of inserted nodes.
+ no_inserted_node = True
+ inserted_nodes = {}
+ for graph_id, graph in graphs.items():
+ inserted_nodes[graph_id] = []
+ best_config[graph_id] = np.inf
+ for k in range(nx.number_of_nodes(graph)):
+ if self.__node_maps_from_median[graph_id].pre_image(k) == np.inf:
+ no_inserted_node = False
+ inserted_nodes[graph_id].append((k, tuple(item for item in graph.nodes[k].items()))) # @todo: can the order of label names be guaranteed?
+
+ # Return 0.0 if no node is inserted in any of the graphs.
+ if no_inserted_node:
+ return 0.0
+
+ # Compute insertion configuration, label, and delta.
+ best_delta = 0.0 # @todo
+ if len(self.__label_names['node_labels']) == 0 and len(self.__label_names['node_attrs']) == 0: # @todo
+ best_delta = self.__compute_insertion_delta_unlabeled(inserted_nodes, best_config, best_label)
+ elif len(self.__label_names['node_labels']) > 0: # self.__constant_node_costs:
+ best_delta = self.__compute_insertion_delta_constant(inserted_nodes, best_config, best_label)
+ else:
+ best_delta = self.__compute_insertion_delta_generic(inserted_nodes, best_config, best_label)
+
+ # Return the best delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_unlabeled(self, inserted_nodes, best_config, best_label): # @todo: go through and test.
+ # Construct the best configuration and compute its insertion delta.
+ best_delta = 0.0
+ best_config.clear()
+ for graph_id, node_set in inserted_nodes.items():
+ if len(node_set) == 0:
+ best_config[graph_id] = np.inf
+ best_delta += self.__node_del_cost
+ else:
+ best_config[graph_id] = node_set[0][0]
+ best_delta -= self.__node_ins_cost
+
+ # Return the best insertion delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_constant(self, inserted_nodes, best_config, best_label):
+ # Construct histogram and inverse label maps.
+ hist = {}
+ inverse_label_maps = {}
+ for graph_id, node_set in inserted_nodes.items():
+ inverse_label_maps[graph_id] = {}
+ for node in node_set:
+ k = node[0]
+ label = node[1]
+ if label not in inverse_label_maps[graph_id]:
+ inverse_label_maps[graph_id][label] = k
+ if label not in hist:
+ hist[label] = 1
+ else:
+ hist[label] += 1
+
+ # Determine the best label.
+ best_count = 0
+ for key, val in hist.items():
+ if val > best_count:
+ best_count = val
+ best_label_tuple = key
+
+ # get best label.
+ best_label.clear()
+ for key, val in best_label_tuple:
+ best_label[key] = val
+
+ # Construct the best configuration and compute its insertion delta.
+ best_config.clear()
+ best_delta = 0.0
+ node_rel_cost = self.__ged_env.get_node_rel_cost(self.__ged_env.get_node_label(1, to_dict=False), self.__ged_env.get_node_label(2, to_dict=False))
+ triangle_ineq_holds = (node_rel_cost <= self.__node_del_cost + self.__node_ins_cost)
+ for graph_id, _ in inserted_nodes.items():
+ if best_label_tuple in inverse_label_maps[graph_id]:
+ best_config[graph_id] = inverse_label_maps[graph_id][best_label_tuple]
+ best_delta -= self.__node_ins_cost
+ elif triangle_ineq_holds and not len(inserted_nodes[graph_id]) == 0:
+ best_config[graph_id] = inserted_nodes[graph_id][0][0]
+ best_delta += node_rel_cost - self.__node_ins_cost
+ else:
+ best_config[graph_id] = np.inf
+ best_delta += self.__node_del_cost
+
+ # Return the best insertion delta.
+ return best_delta
+
+
+ def __compute_insertion_delta_generic(self, inserted_nodes, best_config, best_label):
+ # Collect all node labels of inserted nodes.
+ node_labels = []
+ for _, node_set in inserted_nodes.items():
+ for node in node_set:
+ node_labels.append(node[1])
+
+ # Compute node label medians that serve as initial solutions for block gradient descent.
+ initial_node_labels = []
+ self.__compute_initial_node_labels(node_labels, initial_node_labels)
+
+ # Determine best insertion configuration, label, and delta via parallel block gradient descent from all initial node labels.
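+ # A configuration assigns, to each input graph, either one of its inserted
+ # nodes or a deletion (np.inf). Starting from each initial label, the label and
+ # the configuration are updated alternately (block gradient descent) until
+ # neither changes, and the cheapest converged solution is kept.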
+ best_delta = 0.0
+ for node_label in initial_node_labels:
+ # Construct local configuration.
+ config = {}
+ for graph_id, _ in inserted_nodes.items():
+ config[graph_id] = tuple((np.inf, self.__ged_env.get_node_label(1, to_dict=False)))
+
+ # Run block gradient descent.
+ converged = False
+ itr = 0
+ while not self.__insertion_termination_criterion_met(converged, itr):
+ converged = not self.__update_config(node_label, inserted_nodes, config, node_labels)
+ node_label_dict = dict(node_label)
+ converged = converged and (not self.__update_node_label([dict(item) for item in node_labels], node_label_dict)) # @todo: the dict is tupled again in the function, can be better.
+ node_label = tuple(item for item in node_label_dict.items()) # @todo: watch out: initial_node_labels[i] is not modified here.
+
+ itr += 1
+
+ # Compute insertion delta of converged solution.
+ delta = 0.0
+ for _, node in config.items():
+ if node[0] == np.inf:
+ delta += self.__node_del_cost
+ else:
+ delta += self.__ged_env.get_node_rel_cost(dict(node_label), dict(node[1])) - self.__node_ins_cost
+
+ # Update best delta and global configuration if improvement has been found.
+ if delta < best_delta - self.__epsilon:
+ best_delta = delta
+ best_label.clear()
+ for key, val in node_label:
+ best_label[key] = val
+ best_config.clear()
+ for graph_id, val in config.items():
+ best_config[graph_id] = val[0]
+
+ # Return the best delta.
+ return best_delta
+
+
+ def __compute_initial_node_labels(self, node_labels, median_labels):
+ median_labels.clear()
+ if self.__use_real_randomness: # @todo: may not work if parallelized.
+ rng = np.random.randint(0, high=2**32 - 1, size=1)
+ urng = np.random.RandomState(seed=rng[0])
+ else:
+ urng = np.random.RandomState(seed=self.__seed)
+
+ # Generate the initial node label medians.
+ if self.__init_type_increase_order == 'K-MEANS++':
+ # Use k-means++ heuristic to generate the initial node label medians.
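+ # The first label is drawn uniformly at random; each further label is drawn
+ # with probability proportional to its relabeling cost to the closest label
+ # selected so far (already selected labels get weight 0).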
+ already_selected = [False] * len(node_labels)
+ selected_label_id = urng.randint(low=0, high=len(node_labels), size=1)[0] # c++ test: 23
+ median_labels.append(node_labels[selected_label_id])
+ already_selected[selected_label_id] = True
+# xxx = [41, 0, 18, 9, 6, 14, 21, 25, 33] for c++ test
+# iii = 0 for c++ test
+ while len(median_labels) < self.__num_inits_increase_order:
+ weights = [np.inf] * len(node_labels)
+ for label_id in range(0, len(node_labels)):
+ if already_selected[label_id]:
+ weights[label_id] = 0
+ continue
+ for label in median_labels:
+ weights[label_id] = min(weights[label_id], self.__ged_env.get_node_rel_cost(dict(label), dict(node_labels[label_id])))
+
+ # get non-zero weights.
+ weights_p, idx_p = [], []
+ for i, w in enumerate(weights):
+ if w != 0:
+ weights_p.append(w)
+ idx_p.append(i)
+ if len(weights_p) > 0:
+ p = np.array(weights_p) / np.sum(weights_p)
+ selected_label_id = urng.choice(range(0, len(weights_p)), size=1, p=p)[0] # for c++ test: xxx[iii]
+ selected_label_id = idx_p[selected_label_id]
+# iii += 1 for c++ test
+ median_labels.append(node_labels[selected_label_id])
+ already_selected[selected_label_id] = True
+ else: # skip the loop when all node_labels are selected. This happens when len(node_labels) <= self.__num_inits_increase_order.
+ break
+ else:
+ # Compute the initial node medians as the medians of randomly generated clusters of (roughly) equal size.
+ # @todo: go through and test.
+ # Shuffle a copy of the node labels (this mirrors std::shuffle in the C++ implementation).
+ shuffled_node_labels = list(node_labels)
+ urng.shuffle(shuffled_node_labels)
+ cluster_size = len(node_labels) / self.__num_inits_increase_order
+ pos = 0
+ cluster = []
+ while len(median_labels) < self.__num_inits_increase_order - 1:
+ while pos < (len(median_labels) + 1) * cluster_size:
+ cluster.append(shuffled_node_labels[pos])
+ pos += 1
+ median_labels.append(tuple(self.__get_median_node_label([dict(label) for label in cluster]).items()))
+ cluster.clear()
+ while pos < len(shuffled_node_labels):
+ cluster.append(shuffled_node_labels[pos])
+ pos += 1
+ median_labels.append(tuple(self.__get_median_node_label([dict(label) for label in cluster]).items()))
+ cluster.clear()
+
+ # Run Lloyd's Algorithm.
+ converged = False
+ closest_median_ids = [np.inf] * len(node_labels)
+ clusters = [[] for _ in range(len(median_labels))]
+ itr = 1
+ while not self.__insertion_termination_criterion_met(converged, itr):
+ converged = not self.__update_clusters(node_labels, median_labels, closest_median_ids)
+ if not converged:
+ for cluster in clusters:
+ cluster.clear()
+ for label_id in range(0, len(node_labels)):
+ clusters[closest_median_ids[label_id]].append(node_labels[label_id])
+ for cluster_id in range(0, len(clusters)):
+ node_label = dict(median_labels[cluster_id])
+ self.__update_node_label([dict(item) for item in clusters[cluster_id]], node_label) # @todo: the dict is tupled again in the function, can be better.
+ median_labels[cluster_id] = tuple(item for item in node_label.items())
+ itr += 1
+
+
+ def __insertion_termination_criterion_met(self, converged, itr):
+ return converged or (itr >= self.__max_itrs_increase_order if self.__max_itrs_increase_order > 0 else False)
+
+
+ def __update_config(self, node_label, inserted_nodes, config, node_labels):
+ # Determine the best configuration.
+ config_modified = False
+ for graph_id, node_set in inserted_nodes.items():
+ best_assignment = config[graph_id]
+ best_cost = 0.0
+ if best_assignment[0] == np.inf:
+ best_cost = self.__node_del_cost
+ else:
+ best_cost = self.__ged_env.get_node_rel_cost(dict(node_label), dict(best_assignment[1])) - self.__node_ins_cost
+ for node in node_set:
+ cost = self.__ged_env.get_node_rel_cost(dict(node_label), dict(node[1])) - self.__node_ins_cost
+ if cost < best_cost - self.__epsilon:
+ best_cost = cost
+ best_assignment = node
+ config_modified = True
+ if self.__node_del_cost < best_cost - self.__epsilon:
+ best_cost = self.__node_del_cost
+ best_assignment = tuple((np.inf, best_assignment[1]))
+ config_modified = True
+ config[graph_id] = best_assignment
+
+ # Collect the node labels contained in the best configuration.
+ node_labels.clear()
+ for key, val in config.items():
+ if val[0] != np.inf:
+ node_labels.append(val[1])
+
+ # Return true if the configuration was modified.
+ return config_modified
+
+
+ def __update_node_label(self, node_labels, node_label):
+ if len(node_labels) == 0: # @todo: check if this is the correct solution. Especially after calling __update_config().
+ return False
+ new_node_label = self.__get_median_node_label(node_labels)
+ if self.__ged_env.get_node_rel_cost(new_node_label, node_label) > self.__epsilon:
+ node_label.clear()
+ for key, val in new_node_label.items():
+ node_label[key] = val
+ return True
+ return False
+
+
+ def __update_clusters(self, node_labels, median_labels, closest_median_ids):
+ # Determine the closest median for each node label.
+ clusters_modified = False
+ for label_id in range(0, len(node_labels)):
+ closest_median_id = np.inf
+ dist_to_closest_median = np.inf
+ for median_id in range(0, len(median_labels)):
+ dist_to_median = self.__ged_env.get_node_rel_cost(dict(median_labels[median_id]), dict(node_labels[label_id]))
+ if dist_to_median < dist_to_closest_median - self.__epsilon:
+ dist_to_closest_median = dist_to_median
+ closest_median_id = median_id
+ if closest_median_id != closest_median_ids[label_id]:
+ closest_median_ids[label_id] = closest_median_id
+ clusters_modified = True
+
+ # Return true if the clusters were modified.
+ return clusters_modified
+
+
+ def __add_node_to_median(self, best_config, best_label, median):
+ # Update the median.
+ nb_nodes_median = nx.number_of_nodes(median)
+ median.add_node(nb_nodes_median, **best_label)
+
+ # Update the node maps.
+ for graph_id, node_map in self.__node_maps_from_median.items():
+ node_map_as_rel = []
+ node_map.as_relation(node_map_as_rel)
+ new_node_map = NodeMap(nx.number_of_nodes(median), node_map.num_target_nodes())
+ for assignment in node_map_as_rel:
+ new_node_map.add_assignment(assignment[0], assignment[1])
+ new_node_map.add_assignment(nx.number_of_nodes(median) - 1, best_config[graph_id])
+ self.__node_maps_from_median[graph_id] = new_node_map
+
+ # Increase overall number of increases.
+ self.__num_increase_order += 1
+
+
+ def __are_graphs_equal(self, g1, g2):
+ """
+ Check if the two graphs are equal.
+
+ Parameters
+ ----------
+ g1 : NetworkX graph object
+ Graph 1 to be compared.
+
+ g2 : NetworkX graph object
+ Graph 2 to be compared.
+
+ Returns
+ -------
+ bool
+ True if the two graph are equal.
+
+ Notes
+ -----
+ This is not an identical check. Here the two graphs are equal if and only if their original_node_ids, nodes, all node labels, edges and all edge labels are equal. This function is specifically designed for class `MedianGraphEstimator` and should not be used elsewhere.
+ """
+ # check original node ids.
+ if not g1.graph['original_node_ids'] == g2.graph['original_node_ids']:
+ return False # @todo: why check this?
+ # check nodes.
+ nlist1 = [n for n in g1.nodes(data=True)] # @todo: shallow?
+ nlist2 = [n for n in g2.nodes(data=True)]
+ if not nlist1 == nlist2:
+ return False
+ # check edges.
+ elist1 = [n for n in g1.edges(data=True)]
+ elist2 = [n for n in g2.edges(data=True)]
+ if not elist1 == elist2:
+ return False
+
+ return True
+
+
+ def compute_my_cost(self, g, h, node_map): # @todo: unfinished placeholder, currently unused.
+ cost = 0.0
+ for node in g.nodes:
+ cost += 0
+ return cost
+
+
+ def set_label_names(self, node_labels=[], edge_labels=[], node_attrs=[], edge_attrs=[]):
+ self.__label_names = {'node_labels': node_labels, 'edge_labels': edge_labels,
+ 'node_attrs': node_attrs, 'edge_attrs': edge_attrs}
+
+
+ def __get_median_node_label(self, node_labels):
+ if len(self.__label_names['node_labels']) > 0:
+ return self.__get_median_label_symbolic(node_labels)
+ elif len(self.__label_names['node_attrs']) > 0:
+ return self.__get_median_label_nonsymbolic(node_labels)
+ else:
+ raise Exception('Node label names are not given.')
+
+
+ def __get_median_edge_label(self, edge_labels):
+ if len(self.__label_names['edge_labels']) > 0:
+ return self.__get_median_label_symbolic(edge_labels)
+ elif len(self.__label_names['edge_attrs']) > 0:
+ return self.__get_median_label_nonsymbolic(edge_labels)
+ else:
+ raise Exception('Edge label names are not given.')
+
+
+ def __get_median_label_symbolic(self, labels):
+ # Construct histogram.
+ hist = {}
+ for label in labels:
+ label = tuple([kv for kv in label.items()]) # @todo: this may be slow.
+ if label not in hist:
+ hist[label] = 1
+ else:
+ hist[label] += 1
+
+ # Return the label that appears most frequently.
+ best_count = 0
+ median_label = {}
+ for label, count in hist.items():
+ if count > best_count:
+ best_count = count
+ median_label = {kv[0]: kv[1] for kv in label}
+
+ return median_label
+
+
+ def __get_median_label_nonsymbolic(self, labels):
+ if len(labels) == 0:
+ return {} # @todo
+ else:
+ # Transform the labels into coordinates and compute mean label as initial solution.
+ labels_as_coords = []
+ sums = {}
+ for key, val in labels[0].items():
+ sums[key] = 0
+ for label in labels:
+ coords = {}
+ for key, val in label.items():
+ label_f = float(val)
+ sums[key] += label_f
+ coords[key] = label_f
+ labels_as_coords.append(coords)
+ median = {}
+ for key, val in sums.items():
+ median[key] = val / len(labels)
+
+ # Run main loop of Weiszfeld's Algorithm.
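+ # Weiszfeld's algorithm refines the geometric median: each label (viewed as a
+ # point) is weighted by the inverse of its Euclidean distance to the current
+ # estimate, the estimate is moved to the weighted mean, and the loop stops once
+ # the total update falls below epsilon (or all points coincide with the estimate).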
+ epsilon = 0.0001
+ delta = 1.0
+ num_itrs = 0
+ all_equal = False
+ while ((delta > epsilon) and (num_itrs < 100) and (not all_equal)):
+ numerator = {}
+ for key, val in sums.items():
+ numerator[key] = 0
+ denominator = 0
+ for label_as_coord in labels_as_coords:
+ norm = 0
+ for key, val in label_as_coord.items():
+ norm += (val - median[key]) ** 2
+ norm = np.sqrt(norm)
+ if norm > 0:
+ for key, val in label_as_coord.items():
+ numerator[key] += val / norm
+ denominator += 1.0 / norm
+ if denominator == 0:
+ all_equal = True
+ else:
+ new_median = {}
+ delta = 0.0
+ for key, val in numerator.items():
+ this_median = val / denominator
+ new_median[key] = this_median
+ delta += np.abs(median[key] - this_median)
+ median = new_median
+
+ num_itrs += 1
+
+ # Transform the solution to strings and return it.
+ median_label = {}
+ for key, val in median.items():
+ median_label[key] = str(val)
+ return median_label
+
+
+# def __get_median_edge_label_symbolic(self, edge_labels):
+# pass
+
+
+# def __get_median_edge_label_nonsymbolic(self, edge_labels):
+# if len(edge_labels) == 0:
+# return {}
+# else:
+# # Transform the labels into coordinates and compute mean label as initial solution.
+# edge_labels_as_coords = []
+# sums = {}
+# for key, val in edge_labels[0].items():
+# sums[key] = 0
+# for edge_label in edge_labels:
+# coords = {}
+# for key, val in edge_label.items():
+# label = float(val)
+# sums[key] += label
+# coords[key] = label
+# edge_labels_as_coords.append(coords)
+# median = {}
+# for key, val in sums.items():
+# median[key] = val / len(edge_labels)
+#
+# # Run main loop of Weiszfeld's Algorithm.
+# epsilon = 0.0001
+# delta = 1.0
+# num_itrs = 0
+# all_equal = False
+# while ((delta > epsilon) and (num_itrs < 100) and (not all_equal)):
+# numerator = {}
+# for key, val in sums.items():
+# numerator[key] = 0
+# denominator = 0
+# for edge_label_as_coord in edge_labels_as_coords:
+# norm = 0
+# for key, val in edge_label_as_coord.items():
+# norm += (val - median[key]) ** 2
+# norm += np.sqrt(norm)
+# if norm > 0:
+# for key, val in edge_label_as_coord.items():
+# numerator[key] += val / norm
+# denominator += 1.0 / norm
+# if denominator == 0:
+# all_equal = True
+# else:
+# new_median = {}
+# delta = 0.0
+# for key, val in numerator.items():
+# this_median = val / denominator
+# new_median[key] = this_median
+# delta += np.abs(median[key] - this_median)
+# median = new_median
+#
+# num_itrs += 1
+#
+# # Transform the solution to ged::GXLLabel and return it.
+# median_label = {}
+# for key, val in median.items():
+# median_label[key] = str(val)
+# return median_label
+
+
+def _compute_medoid_parallel(graph_ids, sort, itr):
+ g_id = itr[0]
+ i = itr[1]
+ # @todo: timer not considered here.
+# if timer.expired():
+# self.__state = AlgorithmState.CALLED
+# break
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(g_id)
+ sum_of_distances = 0
+ for h_id in graph_ids:
+ nb_nodes_h = G_ged_env.get_graph_num_nodes(h_id)
+ if nb_nodes_g <= nb_nodes_h or not sort:
+ G_ged_env.run_method(g_id, h_id)
+ sum_of_distances += G_ged_env.get_upper_bound(g_id, h_id)
+ else:
+ G_ged_env.run_method(h_id, g_id)
+ sum_of_distances += G_ged_env.get_upper_bound(h_id, g_id)
+ return i, sum_of_distances
+
+
+def _compute_init_node_maps_parallel(gen_median_id, sort, nb_nodes_median, itr):
+ graph_id = itr
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not sort:
+ G_ged_env.run_method(gen_median_id, graph_id)
+ node_map = G_ged_env.get_node_map(gen_median_id, graph_id)
+# print(self.__node_maps_from_median[graph_id])
+ else:
+ G_ged_env.run_method(graph_id, gen_median_id)
+ node_map = G_ged_env.get_node_map(graph_id, gen_median_id)
+ node_map.forward_map, node_map.backward_map = node_map.backward_map, node_map.forward_map
+ sum_of_distance = node_map.induced_cost()
+# print(self.__sum_of_distances)
+ return graph_id, sum_of_distance, node_map
+
+
+def _update_node_maps_parallel(median_id, epsilon, sort, nb_nodes_median, itr):
+ graph_id = itr[0]
+ node_map = itr[1]
+
+ node_maps_were_modified = False
+ nb_nodes_g = G_ged_env.get_graph_num_nodes(graph_id)
+ if nb_nodes_median <= nb_nodes_g or not sort:
+ G_ged_env.run_method(median_id, graph_id)
+ if G_ged_env.get_upper_bound(median_id, graph_id) < node_map.induced_cost() - epsilon:
+ node_map = G_ged_env.get_node_map(median_id, graph_id)
+ node_maps_were_modified = True
+ else:
+ G_ged_env.run_method(graph_id, median_id)
+ if G_ged_env.get_upper_bound(graph_id, median_id) < node_map.induced_cost() - epsilon:
+ node_map = G_ged_env.get_node_map(graph_id, median_id)
+ node_map.forward_map, node_map.backward_map = node_map.backward_map, node_map.forward_map
+ node_maps_were_modified = True
+
+ return graph_id, node_map, node_maps_were_modified
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/median/test_median_graph_estimator.py b/lang/fr/gklearn/ged/median/test_median_graph_estimator.py
new file mode 100644
index 0000000000..60bce83260
--- /dev/null
+++ b/lang/fr/gklearn/ged/median/test_median_graph_estimator.py
@@ -0,0 +1,159 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Mar 16 17:26:40 2020
+
+@author: ljia
+"""
+
+def test_median_graph_estimator():
+ from gklearn.utils import load_dataset
+ from gklearn.ged.median import MedianGraphEstimator, constant_node_costs
+ from gklearn.gedlib import librariesImport, gedlibpy
+ from gklearn.preimage.utils import get_same_item_indices
+ import multiprocessing
+
+ # estimator parameters.
+ init_type = 'MEDOID'
+ num_inits = 1
+ threads = multiprocessing.cpu_count()
+ time_limit = 60000
+
+ # algorithm parameters.
+ algo = 'IPFP'
+ initial_solutions = 1
+ algo_options_suffix = ' --initial-solutions ' + str(initial_solutions) + ' --ratio-runs-from-initial-solutions 1 --initialization-method NODE '
+
+ edit_cost_name = 'LETTER2'
+ edit_cost_constants = [0.02987291, 0.0178211, 0.01431966, 0.001, 0.001]
+ ds_name = 'Letter_high'
+
+ # Load dataset.
+ # dataset = '../../datasets/COIL-DEL/COIL-DEL_A.txt'
+ dataset = '../../../datasets/Letter-high/Letter-high_A.txt'
+ Gn, y_all, label_names = load_dataset(dataset)
+ y_idx = get_same_item_indices(y_all)
+ for i, (y, values) in enumerate(y_idx.items()):
+ Gn_i = [Gn[val] for val in values]
+ break
+
+ # Set up the environment.
+ ged_env = gedlibpy.GEDEnv()
+ # gedlibpy.restart_env()
+ ged_env.set_edit_cost(edit_cost_name, edit_cost_constant=edit_cost_constants)
+ for G in Gn_i:
+ ged_env.add_nx_graph(G, '')
+ graph_ids = ged_env.get_all_graph_ids()
+ set_median_id = ged_env.add_graph('set_median')
+ gen_median_id = ged_env.add_graph('gen_median')
+ ged_env.init(init_option='EAGER_WITHOUT_SHUFFLED_COPIES')
+
+ # Set up the estimator.
+ mge = MedianGraphEstimator(ged_env, constant_node_costs(edit_cost_name))
+ mge.set_refine_method(algo, '--threads ' + str(threads) + ' --initial-solutions ' + str(initial_solutions) + ' --ratio-runs-from-initial-solutions 1')
+
+ mge_options = '--time-limit ' + str(time_limit) + ' --stdout 2 --init-type ' + init_type
+ mge_options += ' --random-inits ' + str(num_inits) + ' --seed ' + '1' + ' --update-order TRUE --refine FALSE --randomness PSEUDO --parallel TRUE '# @todo: std::to_string(rng())
+
+ # Select the GED algorithm.
+ algo_options = '--threads ' + str(threads) + algo_options_suffix
+ mge.set_options(mge_options)
+ mge.set_label_names(node_labels=label_names['node_labels'],
+ edge_labels=label_names['edge_labels'],
+ node_attrs=label_names['node_attrs'],
+ edge_attrs=label_names['edge_attrs'])
+ mge.set_init_method(algo, algo_options)
+ mge.set_descent_method(algo, algo_options)
+
+ # Run the estimator.
+ mge.run(graph_ids, set_median_id, gen_median_id)
+
+ # Get SODs.
+ sod_sm = mge.get_sum_of_distances('initialized')
+ sod_gm = mge.get_sum_of_distances('converged')
+ print('sod_sm, sod_gm: ', sod_sm, sod_gm)
+
+ # Get median graphs.
+ set_median = ged_env.get_nx_graph(set_median_id)
+ gen_median = ged_env.get_nx_graph(gen_median_id)
+
+ return set_median, gen_median
+
+
+def test_median_graph_estimator_symb():
+ from gklearn.utils import load_dataset
+ from gklearn.ged.median import MedianGraphEstimator, constant_node_costs
+ from gklearn.gedlib import librariesImport, gedlibpy
+ from gklearn.preimage.utils import get_same_item_indices
+ import multiprocessing
+
+ # estimator parameters.
+ init_type = 'MEDOID'
+ num_inits = 1
+ threads = multiprocessing.cpu_count()
+ time_limit = 60000
+
+ # algorithm parameters.
+ algo = 'IPFP'
+ initial_solutions = 1
+ algo_options_suffix = ' --initial-solutions ' + str(initial_solutions) + ' --ratio-runs-from-initial-solutions 1 --initialization-method NODE '
+
+ edit_cost_name = 'CONSTANT'
+ edit_cost_constants = [4, 4, 2, 1, 1, 1]
+ ds_name = 'MUTAG'
+
+ # Load dataset.
+ dataset = '../../../datasets/MUTAG/MUTAG_A.txt'
+ Gn, y_all, label_names = load_dataset(dataset)
+ y_idx = get_same_item_indices(y_all)
+ for i, (y, values) in enumerate(y_idx.items()):
+ Gn_i = [Gn[val] for val in values]
+ break
+ Gn_i = Gn_i[0:10]
+
+ # Set up the environment.
+ ged_env = gedlibpy.GEDEnv()
+ # gedlibpy.restart_env()
+ ged_env.set_edit_cost(edit_cost_name, edit_cost_constant=edit_cost_constants)
+ for G in Gn_i:
+ ged_env.add_nx_graph(G, '')
+ graph_ids = ged_env.get_all_graph_ids()
+ set_median_id = ged_env.add_graph('set_median')
+ gen_median_id = ged_env.add_graph('gen_median')
+ ged_env.init(init_option='EAGER_WITHOUT_SHUFFLED_COPIES')
+
+ # Set up the estimator.
+ mge = MedianGraphEstimator(ged_env, constant_node_costs(edit_cost_name))
+ mge.set_refine_method(algo, '--threads ' + str(threads) + ' --initial-solutions ' + str(initial_solutions) + ' --ratio-runs-from-initial-solutions 1')
+
+ mge_options = '--time-limit ' + str(time_limit) + ' --stdout 2 --init-type ' + init_type
+ mge_options += ' --random-inits ' + str(num_inits) + ' --seed ' + '1' + ' --update-order TRUE --refine FALSE --randomness PSEUDO --parallel TRUE '# @todo: std::to_string(rng())
+
+ # Select the GED algorithm.
+ algo_options = '--threads ' + str(threads) + algo_options_suffix
+ mge.set_options(mge_options)
+ mge.set_label_names(node_labels=label_names['node_labels'],
+ edge_labels=label_names['edge_labels'],
+ node_attrs=label_names['node_attrs'],
+ edge_attrs=label_names['edge_attrs'])
+ mge.set_init_method(algo, algo_options)
+ mge.set_descent_method(algo, algo_options)
+
+ # Run the estimator.
+ mge.run(graph_ids, set_median_id, gen_median_id)
+
+ # Get SODs.
+ sod_sm = mge.get_sum_of_distances('initialized')
+ sod_gm = mge.get_sum_of_distances('converged')
+ print('sod_sm, sod_gm: ', sod_sm, sod_gm)
+
+ # Get median graphs.
+ set_median = ged_env.get_nx_graph(set_median_id)
+ gen_median = ged_env.get_nx_graph(gen_median_id)
+
+ return set_median, gen_median
+
+
+if __name__ == '__main__':
+ # set_median, gen_median = test_median_graph_estimator()
+ set_median, gen_median = test_median_graph_estimator_symb()
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/median/utils.py b/lang/fr/gklearn/ged/median/utils.py
new file mode 100644
index 0000000000..d27c86da51
--- /dev/null
+++ b/lang/fr/gklearn/ged/median/utils.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Apr 1 15:12:31 2020
+
+@author: ljia
+"""
+
+def constant_node_costs(edit_cost_name):
+ if edit_cost_name == 'NON_SYMBOLIC' or edit_cost_name == 'LETTER2' or edit_cost_name == 'LETTER':
+ return False
+ elif edit_cost_name == 'CONSTANT':
+ return True
+ else:
+ raise Exception('Can not recognize the given edit cost. Possible edit costs include: "NON_SYMBOLIC", "LETTER", "LETTER2", "CONSTANT".')
+# elif edit_cost_name != '':
+# # throw ged::Error("Invalid dataset " + dataset + ". Usage: ./median_tests ");
+# return False
+ # return True
+
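+# A minimal usage sketch for constant_node_costs() (edit cost names as listed above):
+#
+#     constant_node_costs('CONSTANT')   # -> True
+#     constant_node_costs('LETTER2')    # -> False
+#
+# MedianGraphEstimator takes this flag as its second constructor argument
+# (see test_median_graph_estimator.py).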
+
+def mge_options_to_string(options):
+ opt_str = ' '
+ for key, val in options.items():
+ if key == 'init_type':
+ opt_str += '--init-type ' + str(val) + ' '
+ elif key == 'random_inits':
+ opt_str += '--random-inits ' + str(val) + ' '
+ elif key == 'randomness':
+ opt_str += '--randomness ' + str(val) + ' '
+ elif key == 'verbose':
+ opt_str += '--stdout ' + str(val) + ' '
+ elif key == 'parallel':
+ opt_str += '--parallel ' + ('TRUE' if val else 'FALSE') + ' '
+ elif key == 'update_order':
+ opt_str += '--update-order ' + ('TRUE' if val else 'FALSE') + ' '
+ elif key == 'sort_graphs':
+ opt_str += '--sort-graphs ' + ('TRUE' if val else 'FALSE') + ' '
+ elif key == 'refine':
+ opt_str += '--refine ' + ('TRUE' if val else 'FALSE') + ' '
+ elif key == 'time_limit':
+ opt_str += '--time-limit ' + str(val) + ' '
+ elif key == 'max_itrs':
+ opt_str += '--max-itrs ' + str(val) + ' '
+ elif key == 'max_itrs_without_update':
+ opt_str += '--max-itrs-without-update ' + str(val) + ' '
+ elif key == 'seed':
+ opt_str += '--seed ' + str(val) + ' '
+ elif key == 'epsilon':
+ opt_str += '--epsilon ' + str(val) + ' '
+ elif key == 'inits_increase_order':
+ opt_str += '--inits-increase-order ' + str(val) + ' '
+ elif key == 'init_type_increase_order':
+ opt_str += '--init-type-increase-order ' + str(val) + ' '
+ elif key == 'max_itrs_increase_order':
+ opt_str += '--max-itrs-increase-order ' + str(val) + ' '
+# else:
+# valid_options = '[--init-type ] [--random_inits ] [--randomness ] [--seed ] [--verbose ] '
+# valid_options += '[--time_limit ] [--max_itrs ] [--epsilon ] '
+# valid_options += '[--inits_increase_order ] [--init_type_increase_order ] [--max_itrs_increase_order ]'
+# raise Exception('Invalid option "' + key + '". Options available = "' + valid_options + '"')
+
+ return opt_str
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/methods/__init__.py b/lang/fr/gklearn/ged/methods/__init__.py
new file mode 100644
index 0000000000..5879b9c54e
--- /dev/null
+++ b/lang/fr/gklearn/ged/methods/__init__.py
@@ -0,0 +1,3 @@
+from gklearn.ged.methods.ged_method import GEDMethod
+from gklearn.ged.methods.lsape_based_method import LSAPEBasedMethod
+from gklearn.ged.methods.bipartite import Bipartite
diff --git a/lang/fr/gklearn/ged/methods/bipartite.py b/lang/fr/gklearn/ged/methods/bipartite.py
new file mode 100644
index 0000000000..aa295c4cba
--- /dev/null
+++ b/lang/fr/gklearn/ged/methods/bipartite.py
@@ -0,0 +1,117 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Jun 18 16:09:29 2020
+
+@author: ljia
+"""
+import numpy as np
+import networkx as nx
+from gklearn.ged.methods import LSAPEBasedMethod
+from gklearn.ged.util import LSAPESolver
+from gklearn.utils import SpecialLabel
+
+
+class Bipartite(LSAPEBasedMethod):
+
+
+ def __init__(self, ged_data):
+ super().__init__(ged_data)
+ self._compute_lower_bound = False
+
+
+ ###########################################################################
+ # Inherited member functions from LSAPEBasedMethod.
+ ###########################################################################
+
+
+ def _lsape_populate_instance(self, g, h, master_problem):
+ # #ifdef _OPENMP
+ for row_in_master in range(0, nx.number_of_nodes(g)):
+ for col_in_master in range(0, nx.number_of_nodes(h)):
+ master_problem[row_in_master, col_in_master] = self._compute_substitution_cost(g, h, row_in_master, col_in_master)
+ for row_in_master in range(0, nx.number_of_nodes(g)):
+ master_problem[row_in_master, nx.number_of_nodes(h) + row_in_master] = self._compute_deletion_cost(g, row_in_master)
+ for col_in_master in range(0, nx.number_of_nodes(h)):
+ master_problem[nx.number_of_nodes(g) + col_in_master, col_in_master] = self._compute_insertion_cost(h, col_in_master)
+
+# for row_in_master in range(0, master_problem.shape[0]):
+# for col_in_master in range(0, master_problem.shape[1]):
+# if row_in_master < nx.number_of_nodes(g) and col_in_master < nx.number_of_nodes(h):
+# master_problem[row_in_master, col_in_master] = self._compute_substitution_cost(g, h, row_in_master, col_in_master)
+# elif row_in_master < nx.number_of_nodes(g):
+# master_problem[row_in_master, nx.number_of_nodes(h)] = self._compute_deletion_cost(g, row_in_master)
+# elif col_in_master < nx.number_of_nodes(h):
+# master_problem[nx.number_of_nodes(g), col_in_master] = self._compute_insertion_cost(h, col_in_master)
+
+
+ ###########################################################################
+ # Helper member functions.
+ ###########################################################################
+
+
+ def _compute_substitution_cost(self, g, h, u, v):
+ # Collect node substitution costs.
+ cost = self._ged_data.node_cost(g.nodes[u]['label'], h.nodes[v]['label'])
+
+ # Initialize subproblem.
+ d1, d2 = g.degree[u], h.degree[v]
+ subproblem = np.ones((d1 + d2, d1 + d2)) * np.inf
+ subproblem[d1:, d2:] = 0
+# subproblem = np.empty((g.degree[u] + 1, h.degree[v] + 1))
+
+ # Collect edge deletion costs.
+ i = 0 # @todo: should directed graphs be considered?
+		for label in g[u].values(): # iterate over all edges incident to u
+ subproblem[i, d2 + i] = self._ged_data.edge_cost(label['label'], SpecialLabel.DUMMY)
+# subproblem[i, h.degree[v]] = self._ged_data.edge_cost(label['label'], SpecialLabel.DUMMY)
+ i += 1
+
+ # Collect edge insertion costs.
+ i = 0 # @todo: should directed graphs be considered?
+		for label in h[v].values(): # iterate over all edges incident to v
+ subproblem[d1 + i, i] = self._ged_data.edge_cost(SpecialLabel.DUMMY, label['label'])
+# subproblem[g.degree[u], i] = self._ged_data.edge_cost(SpecialLabel.DUMMY, label['label'])
+ i += 1
+
+ # Collect edge relabelling costs.
+ i = 0
+ for label1 in g[u].values():
+ j = 0
+ for label2 in h[v].values():
+ subproblem[i, j] = self._ged_data.edge_cost(label1['label'], label2['label'])
+ j += 1
+ i += 1
+
+ # Solve subproblem.
+ subproblem_solver = LSAPESolver(subproblem)
+ subproblem_solver.set_model(self._lsape_model)
+ subproblem_solver.solve()
+
+ # Update and return overall substitution cost.
+ cost += subproblem_solver.minimal_cost()
+ return cost
+
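+	# An illustrative sketch (not part of the original code) of the subproblem built in
+	# _compute_substitution_cost() above. For degrees d1 = 2 (edges of u) and d2 = 1
+	# (edges of v), the (d1 + d2) x (d1 + d2) LSAPE matrix is
+	#
+	#              e_v0        del e_u0    del e_u1
+	#    e_u0   [  sub(0,0)    c_del(0)    inf      ]
+	#    e_u1   [  sub(1,0)    inf         c_del(1) ]
+	#    ins    [  c_ins(0)    0           0        ]
+	#
+	# i.e. edge relabelling costs in the top-left block, edge deletion costs on the
+	# diagonal of the top-right block, edge insertion costs on the diagonal of the
+	# bottom-left block, and zeros in the bottom-right block. Solving it yields the
+	# cheapest assignment between the edges incident to u and those incident to v.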
+
+ def _compute_deletion_cost(self, g, v):
+ # Collect node deletion cost.
+ cost = self._ged_data.node_cost(g.nodes[v]['label'], SpecialLabel.DUMMY)
+
+ # Collect edge deletion costs.
+ for label in g[v].values():
+ cost += self._ged_data.edge_cost(label['label'], SpecialLabel.DUMMY)
+
+ # Return overall deletion cost.
+ return cost
+
+
+ def _compute_insertion_cost(self, g, v):
+ # Collect node insertion cost.
+ cost = self._ged_data.node_cost(SpecialLabel.DUMMY, g.nodes[v]['label'])
+
+ # Collect edge insertion costs.
+ for label in g[v].values():
+ cost += self._ged_data.edge_cost(SpecialLabel.DUMMY, label['label'])
+
+ # Return overall insertion cost.
+ return cost
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/methods/ged_method.py b/lang/fr/gklearn/ged/methods/ged_method.py
new file mode 100644
index 0000000000..aecd16b5e2
--- /dev/null
+++ b/lang/fr/gklearn/ged/methods/ged_method.py
@@ -0,0 +1,195 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Jun 18 15:52:35 2020
+
+@author: ljia
+"""
+import numpy as np
+import time
+import networkx as nx
+
+
+class GEDMethod(object):
+
+
+ def __init__(self, ged_data):
+ self._initialized = False
+ self._ged_data = ged_data
+ self._options = None
+ self._lower_bound = 0
+ self._upper_bound = np.inf
+ self._node_map = [0, 0] # @todo
+ self._runtime = None
+ self._init_time = None
+
+
+ def init(self):
+ """Initializes the method with options specified by set_options().
+ """
+ start = time.time()
+ self._ged_init()
+ end = time.time()
+ self._init_time = end - start
+ self._initialized = True
+
+
+ def set_options(self, options):
+ """
+ /*!
+ * @brief Sets the options of the method.
+	 * @param[in] options String of the form [--@<option> @<arg>] [...], where @p option contains neither spaces nor single quotes,
+	 * and @p arg contains neither spaces nor single quotes or is of the form '[--@<sub-option> @<sub-arg>] [...]',
+	 * where both @p sub-option and @p sub-arg contain neither spaces nor single quotes. In this Python port, @p options is passed as a dict mapping option names to argument values.
+ */
+ """
+ self._ged_set_default_options()
+ for key, val in options.items():
+ if not self._ged_parse_option(key, val):
+ raise Exception('Invalid option "', key, '". Usage: options = "' + self._ged_valid_options_string() + '".') # @todo: not implemented.
+ self._initialized = False
+
+
+ def run(self, g_id, h_id):
+ """
+ /*!
+ * @brief Runs the method with options specified by set_options().
+ * @param[in] g_id ID of input graph.
+ * @param[in] h_id ID of input graph.
+ */
+ """
+ start = time.time()
+ result = self.run_as_util(self._ged_data._graphs[g_id], self._ged_data._graphs[h_id])
+ end = time.time()
+ self._lower_bound = result['lower_bound']
+ self._upper_bound = result['upper_bound']
+ if len(result['node_maps']) > 0:
+ self._node_map = result['node_maps'][0]
+ self._runtime = end - start
+
+
+ def run_as_util(self, g, h):
+ """
+ /*!
+ * @brief Runs the method with options specified by set_options().
+ * @param[in] g Input graph.
+ * @param[in] h Input graph.
+ * @param[out] result Result variable.
+ */
+ """
+ # Compute optimal solution and return if at least one of the two graphs is empty.
+ if nx.number_of_nodes(g) == 0 or nx.number_of_nodes(h) == 0:
+ print('This is not implemented.')
+ pass # @todo:
+
+ # Run the method.
+ return self._ged_run(g, h)
+
+
+ def get_upper_bound(self):
+ """
+ /*!
+ * @brief Returns an upper bound.
+ * @return Upper bound for graph edit distance provided by last call to run() or -1 if the method does not yield an upper bound.
+ */
+ """
+ return self._upper_bound
+
+
+ def get_lower_bound(self):
+ """
+ /*!
+ * @brief Returns a lower bound.
+ * @return Lower bound for graph edit distance provided by last call to run() or -1 if the method does not yield a lower bound.
+ */
+ """
+ return self._lower_bound
+
+
+ def get_runtime(self):
+ """
+ /*!
+ * @brief Returns the runtime.
+ * @return Runtime of last call to run() in seconds.
+ */
+ """
+ return self._runtime
+
+
+ def get_init_time(self):
+ """
+ /*!
+ * @brief Returns the initialization time.
+ * @return Runtime of last call to init() in seconds.
+ */
+ """
+ return self._init_time
+
+
+ def get_node_map(self):
+ """
+ /*!
+ * @brief Returns a graph matching.
+ * @return Constant reference to graph matching provided by last call to run() or to an empty matching if the method does not yield a matching.
+ */
+ """
+ return self._node_map
+
+
+ def _ged_init(self):
+ """
+ /*!
+ * @brief Initializes the method.
+ * @note Must be overridden by derived classes that require initialization.
+ */
+ """
+ pass
+
+
+ def _ged_parse_option(self, option, arg):
+ """
+ /*!
+ * @brief Parses one option.
+ * @param[in] option The name of the option.
+ * @param[in] arg The argument of the option.
+ * @return Boolean @p true if @p option is a valid option name for the method and @p false otherwise.
+ * @note Must be overridden by derived classes that have options.
+ */
+ """
+ return False
+
+
+ def _ged_run(self, g, h):
+ """
+ /*!
+ * @brief Runs the method with options specified by set_options().
+ * @param[in] g Input graph.
+ * @param[in] h Input graph.
+ * @param[out] result Result variable.
+ * @note Must be overridden by derived classes.
+ */
+ """
+ return {}
+
+
+
+ def _ged_valid_options_string(self):
+ """
+ /*!
+ * @brief Returns string of all valid options.
+	 * @return String of the form [--@<option> @<arg>] [...].
+ * @note Must be overridden by derived classes that have options.
+ */
+ """
+ return ''
+
+
+ def _ged_set_default_options(self):
+ """
+ /*!
+ * @brief Sets all options to default values.
+ * @note Must be overridden by derived classes that have options.
+ */
+ """
+ pass
+
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/methods/lsape_based_method.py b/lang/fr/gklearn/ged/methods/lsape_based_method.py
new file mode 100644
index 0000000000..79f7b9c662
--- /dev/null
+++ b/lang/fr/gklearn/ged/methods/lsape_based_method.py
@@ -0,0 +1,254 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Jun 18 16:01:24 2020
+
+@author: ljia
+"""
+import numpy as np
+import networkx as nx
+from gklearn.ged.methods import GEDMethod
+from gklearn.ged.util import LSAPESolver, misc
+from gklearn.ged.env import NodeMap
+
+
+class LSAPEBasedMethod(GEDMethod):
+
+
+ def __init__(self, ged_data):
+ super().__init__(ged_data)
+ self._lsape_model = None # @todo: LSAPESolver::ECBP
+ self._greedy_method = None # @todo: LSAPESolver::BASIC
+ self._compute_lower_bound = True
+ self._solve_optimally = True
+ self._num_threads = 1
+ self._centrality_method = 'NODE' # @todo
+ self._centrality_weight = 0.7
+ self._centralities = {}
+ self._max_num_solutions = 1
+
+
+ def populate_instance_and_run_as_util(self, g, h): #, lsape_instance):
+ """
+ /*!
+ * @brief Runs the method with options specified by set_options() and provides access to constructed LSAPE instance.
+ * @param[in] g Input graph.
+ * @param[in] h Input graph.
+ * @param[out] result Result variable.
+ * @param[out] lsape_instance LSAPE instance.
+ */
+ """
+ result = {'node_maps': [], 'lower_bound': 0, 'upper_bound': np.inf}
+
+ # Populate the LSAPE instance and set up the solver.
+ nb1, nb2 = nx.number_of_nodes(g), nx.number_of_nodes(h)
+ lsape_instance = np.ones((nb1 + nb2, nb1 + nb2)) * np.inf
+# lsape_instance = np.empty((nx.number_of_nodes(g) + 1, nx.number_of_nodes(h) + 1))
+ self.populate_instance(g, h, lsape_instance)
+
+# nb1, nb2 = nx.number_of_nodes(g), nx.number_of_nodes(h)
+# lsape_instance_new = np.empty((nb1 + nb2, nb1 + nb2)) * np.inf
+# lsape_instance_new[nb1:, nb2:] = 0
+# lsape_instance_new[0:nb1, 0:nb2] = lsape_instance[0:nb1, 0:nb2]
+# for i in range(nb1): # all u's neighbor
+# lsape_instance_new[i, nb2 + i] = lsape_instance[i, nb2]
+# for i in range(nb2): # all u's neighbor
+# lsape_instance_new[nb1 + i, i] = lsape_instance[nb2, i]
+# lsape_solver = LSAPESolver(lsape_instance_new)
+
+ lsape_solver = LSAPESolver(lsape_instance)
+
+ # Solve the LSAPE instance.
+ if self._solve_optimally:
+ lsape_solver.set_model(self._lsape_model)
+ else:
+ lsape_solver.set_greedy_method(self._greedy_method)
+ lsape_solver.solve(self._max_num_solutions)
+
+ # Compute and store lower and upper bound.
+ if self._compute_lower_bound and self._solve_optimally:
+ result['lower_bound'] = lsape_solver.minimal_cost() * self._lsape_lower_bound_scaling_factor(g, h) # @todo: test
+
+ for solution_id in range(0, lsape_solver.num_solutions()):
+ result['node_maps'].append(NodeMap(nx.number_of_nodes(g), nx.number_of_nodes(h)))
+ misc.construct_node_map_from_solver(lsape_solver, result['node_maps'][-1], solution_id)
+ self._ged_data.compute_induced_cost(g, h, result['node_maps'][-1])
+
+ # Add centralities and reoptimize.
+ if self._centrality_weight > 0 and self._centrality_method != 'NODE':
+ print('This is not implemented.')
+ pass # @todo
+
+ # Sort the node maps and set the upper bound.
+ if len(result['node_maps']) > 1 or len(result['node_maps']) > self._max_num_solutions:
+ print('This is not implemented.') # @todo:
+ pass
+ if len(result['node_maps']) == 0:
+ result['upper_bound'] = np.inf
+ else:
+ result['upper_bound'] = result['node_maps'][0].induced_cost()
+
+ return result
+
+
+
+ def populate_instance(self, g, h, lsape_instance):
+ """
+ /*!
+ * @brief Populates the LSAPE instance.
+ * @param[in] g Input graph.
+ * @param[in] h Input graph.
+ * @param[out] lsape_instance LSAPE instance.
+ */
+ """
+ if not self._initialized:
+ pass
+ # @todo: if (not this->initialized_) {
+ self._lsape_populate_instance(g, h, lsape_instance)
+ lsape_instance[nx.number_of_nodes(g):, nx.number_of_nodes(h):] = 0
+# lsape_instance[nx.number_of_nodes(g), nx.number_of_nodes(h)] = 0
+
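+	# Layout note (a sketch, not part of the original code): for graphs with n and m
+	# nodes, _lsape_populate_instance() is expected to fill an (n + m) x (n + m)
+	# matrix whose block [0:n, 0:m] holds node substitution costs, whose diagonal of
+	# [0:n, m:] holds node deletion costs and whose diagonal of [n:, 0:m] holds node
+	# insertion costs (see Bipartite._lsape_populate_instance). populate_instance()
+	# then zeroes the dummy-to-dummy block [n:, m:] before the LSAPESolver is run.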
+
+ ###########################################################################
+ # Member functions inherited from GEDMethod.
+ ###########################################################################
+
+
+ def _ged_init(self):
+ self._lsape_pre_graph_init(False)
+ for graph in self._ged_data._graphs:
+ self._init_graph(graph)
+ self._lsape_init()
+
+
+ def _ged_run(self, g, h):
+# lsape_instance = np.empty((0, 0))
+ result = self.populate_instance_and_run_as_util(g, h) # , lsape_instance)
+ return result
+
+
+ def _ged_parse_option(self, option, arg):
+ is_valid_option = False
+
+ if option == 'threads': # @todo: try.. catch...
+ self._num_threads = arg
+ is_valid_option = True
+ elif option == 'lsape_model':
+ self._lsape_model = arg # @todo
+ is_valid_option = True
+ elif option == 'greedy_method':
+ self._greedy_method = arg # @todo
+ is_valid_option = True
+ elif option == 'optimal':
+ self._solve_optimally = arg # @todo
+ is_valid_option = True
+ elif option == 'centrality_method':
+ self._centrality_method = arg # @todo
+ is_valid_option = True
+ elif option == 'centrality_weight':
+ self._centrality_weight = arg # @todo
+ is_valid_option = True
+ elif option == 'max_num_solutions':
+ if arg == 'ALL':
+ self._max_num_solutions = -1
+ else:
+ self._max_num_solutions = arg # @todo
+ is_valid_option = True
+
+ is_valid_option = is_valid_option or self._lsape_parse_option(option, arg)
+ is_valid_option = True # @todo: this is not in the C++ code.
+ return is_valid_option
+
+
+ def _ged_set_default_options(self):
+ self._lsape_model = None # @todo: LSAPESolver::ECBP
+ self._greedy_method = None # @todo: LSAPESolver::BASIC
+ self._solve_optimally = True
+ self._num_threads = 1
+ self._centrality_method = 'NODE' # @todo
+ self._centrality_weight = 0.7
+ self._max_num_solutions = 1
+
+
+ ###########################################################################
+ # Private helper member functions.
+ ###########################################################################
+
+
+ def _init_graph(self, graph):
+ if self._centrality_method != 'NODE':
+ self._init_centralities(graph) # @todo
+ self._lsape_init_graph(graph)
+
+
+ ###########################################################################
+ # Virtual member functions to be overridden by derived classes.
+ ###########################################################################
+
+
+ def _lsape_init(self):
+ """
+ /*!
+ * @brief Initializes the method after initializing the global variables for the graphs.
+ * @note Must be overridden by derived classes of ged::LSAPEBasedMethod that require custom initialization.
+ */
+ """
+ pass
+
+
+ def _lsape_parse_option(self, option, arg):
+ """
+ /*!
+ * @brief Parses one option that is not among the ones shared by all derived classes of ged::LSAPEBasedMethod.
+ * @param[in] option The name of the option.
+ * @param[in] arg The argument of the option.
+ * @return Returns true if @p option is a valid option name for the method and false otherwise.
+ * @note Must be overridden by derived classes of ged::LSAPEBasedMethod that have options that are not among the ones shared by all derived classes of ged::LSAPEBasedMethod.
+ */
+ """
+ return False
+
+
+ def _lsape_set_default_options(self):
+ """
+ /*!
+ * @brief Sets all options that are not among the ones shared by all derived classes of ged::LSAPEBasedMethod to default values.
+ * @note Must be overridden by derived classes of ged::LSAPEBasedMethod that have options that are not among the ones shared by all derived classes of ged::LSAPEBasedMethod.
+ */
+ """
+ pass
+
+
+ def _lsape_populate_instance(self, g, h, lsape_instance):
+ """
+ /*!
+ * @brief Populates the LSAPE instance.
+ * @param[in] g Input graph.
+ * @param[in] h Input graph.
+	 * @param[out] lsape_instance LSAPE instance of size (n + m) x (n + m), where n and m are the numbers of nodes in @p g and @p h. The additional rows and columns represent insertions and deletions.
+ * @note Must be overridden by derived classes of ged::LSAPEBasedMethod.
+ */
+ """
+ pass
+
+
+ def _lsape_init_graph(self, graph):
+ """
+ /*!
+ * @brief Initializes global variables for one graph.
+ * @param[in] graph Graph for which the global variables have to be initialized.
+ * @note Must be overridden by derived classes of ged::LSAPEBasedMethod that require to initialize custom global variables.
+ */
+ """
+ pass
+
+
+ def _lsape_pre_graph_init(self, called_at_runtime):
+ """
+ /*!
+ * @brief Initializes the method at runtime or during initialization before initializing the global variables for the graphs.
+ * @param[in] called_at_runtime Equals @p true if called at runtime and @p false if called during initialization.
+ * @brief Must be overridden by derived classes of ged::LSAPEBasedMethod that require default initialization at runtime before initializing the global variables for the graphs.
+ */
+ """
+ pass
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/util/__init__.py b/lang/fr/gklearn/ged/util/__init__.py
new file mode 100644
index 0000000000..f885b181a7
--- /dev/null
+++ b/lang/fr/gklearn/ged/util/__init__.py
@@ -0,0 +1,3 @@
+from gklearn.ged.util.lsape_solver import LSAPESolver
+from gklearn.ged.util.util import compute_geds, ged_options_to_string
+from gklearn.ged.util.util import compute_geds_cml, label_costs_to_matrix
diff --git a/lang/fr/gklearn/ged/util/cpp2python.py b/lang/fr/gklearn/ged/util/cpp2python.py
new file mode 100644
index 0000000000..9d63026dec
--- /dev/null
+++ b/lang/fr/gklearn/ged/util/cpp2python.py
@@ -0,0 +1,134 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Fri Mar 20 11:09:04 2020
+
+@author: ljia
+"""
+import re
+
+def convert_function(cpp_code):
+# f_cpp = open('cpp_code.cpp', 'r')
+# # f_cpp = open('cpp_ext/src/median_graph_estimator.ipp', 'r')
+# cpp_code = f_cpp.read()
+ python_code = cpp_code.replace('else if (', 'elif ')
+ python_code = python_code.replace('if (', 'if ')
+ python_code = python_code.replace('else {', 'else:')
+ python_code = python_code.replace(') {', ':')
+ python_code = python_code.replace(';\n', '\n')
+ python_code = re.sub('\n(.*)}\n', '\n\n', python_code)
+ # python_code = python_code.replace('}\n', '')
+ python_code = python_code.replace('throw', 'raise')
+ python_code = python_code.replace('error', 'Exception')
+ python_code = python_code.replace('"', '\'')
+ python_code = python_code.replace('\\\'', '"')
+ python_code = python_code.replace('try {', 'try:')
+ python_code = python_code.replace('true', 'True')
+ python_code = python_code.replace('false', 'False')
+ python_code = python_code.replace('catch (...', 'except')
+ # python_code = re.sub('std::string\(\'(.*)\'\)', '$1', python_code)
+
+ return python_code
+
+
+
+# # python_code = python_code.replace('}\n', '')
+
+
+
+
+# python_code = python_code.replace('option.first', 'opt_name')
+# python_code = python_code.replace('option.second', 'opt_val')
+# python_code = python_code.replace('ged::Error', 'Exception')
+# python_code = python_code.replace('std::string(\'Invalid argument "\')', '\'Invalid argument "\'')
+
+
+# f_cpp.close()
+# f_python = open('python_code.py', 'w')
+# f_python.write(python_code)
+# f_python.close()
+
+
+def convert_function_comment(cpp_fun_cmt, param_types):
+ cpp_fun_cmt = cpp_fun_cmt.replace('\t', '')
+ cpp_fun_cmt = cpp_fun_cmt.replace('\n * ', ' ')
+ # split the input comment according to key words.
+ param_split = None
+ note = None
+ cmt_split = cpp_fun_cmt.split('@brief')[1]
+ brief = cmt_split
+ if '@param' in cmt_split:
+ cmt_split = cmt_split.split('@param')
+ brief = cmt_split[0]
+ param_split = cmt_split[1:]
+ if '@note' in cmt_split[-1]:
+ note_split = cmt_split[-1].split('@note')
+ if param_split is not None:
+ param_split.pop()
+ param_split.append(note_split[0])
+ else:
+ brief = note_split[0]
+ note = note_split[1]
+
+ # get parameters.
+ if param_split is not None:
+ for idx, param in enumerate(param_split):
+ _, param_name, param_desc = param.split(' ', 2)
+ param_name = function_comment_strip(param_name, ' *\n\t/')
+ param_desc = function_comment_strip(param_desc, ' *\n\t/')
+ param_split[idx] = (param_name, param_desc)
+
+ # strip comments.
+ brief = function_comment_strip(brief, ' *\n\t/')
+ if note is not None:
+ note = function_comment_strip(note, ' *\n\t/')
+
+ # construct the Python function comment.
+ python_fun_cmt = '"""'
+ python_fun_cmt += brief + '\n'
+ if param_split is not None and len(param_split) > 0:
+ python_fun_cmt += '\nParameters\n----------'
+ for idx, param in enumerate(param_split):
+ python_fun_cmt += '\n' + param[0] + ' : ' + param_types[idx]
+ python_fun_cmt += '\n\t' + param[1] + '\n'
+ if note is not None:
+ python_fun_cmt += '\nNote\n----\n' + note + '\n'
+ python_fun_cmt += '"""'
+
+ return python_fun_cmt
+
+
+def function_comment_strip(comment, bad_chars):
+ head_removed, tail_removed = False, False
+ while not head_removed or not tail_removed:
+ if comment[0] in bad_chars:
+ comment = comment[1:]
+ head_removed = False
+ else:
+ head_removed = True
+ if comment[-1] in bad_chars:
+ comment = comment[:-1]
+ tail_removed = False
+ else:
+ tail_removed = True
+
+ return comment
+
+
+if __name__ == '__main__':
+# python_code = convert_function("""
+# if (print_to_stdout_ == 2) {
+# std::cout << "\n===========================================================\n";
+# std::cout << "Block gradient descent for initial median " << median_pos + 1 << " of " << medians.size() << ".\n";
+# std::cout << "-----------------------------------------------------------\n";
+# }
+# """)
+
+
+ python_fun_cmt = convert_function_comment("""
+ /*!
+ * @brief Returns the sum of distances.
+ * @param[in] state The state of the estimator.
+ * @return The sum of distances of the median when the estimator was in the state @p state during the last call to run().
+ */
+ """, ['string', 'string'])
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/util/cpp_code.cpp b/lang/fr/gklearn/ged/util/cpp_code.cpp
new file mode 100644
index 0000000000..acbe22a1f6
--- /dev/null
+++ b/lang/fr/gklearn/ged/util/cpp_code.cpp
@@ -0,0 +1,122 @@
+ else if (option.first == "random-inits") {
+ try {
+ num_random_inits_ = std::stoul(option.second);
+ desired_num_random_inits_ = num_random_inits_;
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option random-inits. Usage: options = \"[--random-inits ]\"");
+ }
+ if (num_random_inits_ <= 0) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option random-inits. Usage: options = \"[--random-inits ]\"");
+ }
+ }
+ else if (option.first == "randomness") {
+ if (option.second == "PSEUDO") {
+ use_real_randomness_ = false;
+ }
+ else if (option.second == "REAL") {
+ use_real_randomness_ = true;
+ }
+ else {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option randomness. Usage: options = \"[--randomness REAL|PSEUDO] [...]\"");
+ }
+ }
+ else if (option.first == "stdout") {
+ if (option.second == "0") {
+ print_to_stdout_ = 0;
+ }
+ else if (option.second == "1") {
+ print_to_stdout_ = 1;
+ }
+ else if (option.second == "2") {
+ print_to_stdout_ = 2;
+ }
+ else {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option stdout. Usage: options = \"[--stdout 0|1|2] [...]\"");
+ }
+ }
+ else if (option.first == "refine") {
+ if (option.second == "TRUE") {
+ refine_ = true;
+ }
+ else if (option.second == "FALSE") {
+ refine_ = false;
+ }
+ else {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option refine. Usage: options = \"[--refine TRUE|FALSE] [...]\"");
+ }
+ }
+ else if (option.first == "time-limit") {
+ try {
+ time_limit_in_sec_ = std::stod(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option time-limit. Usage: options = \"[--time-limit ] [...]");
+ }
+ }
+ else if (option.first == "max-itrs") {
+ try {
+ max_itrs_ = std::stoi(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option max-itrs. Usage: options = \"[--max-itrs ] [...]");
+ }
+ }
+ else if (option.first == "max-itrs-without-update") {
+ try {
+ max_itrs_without_update_ = std::stoi(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option max-itrs-without-update. Usage: options = \"[--max-itrs-without-update ] [...]");
+ }
+ }
+ else if (option.first == "seed") {
+ try {
+ seed_ = std::stoul(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option seed. Usage: options = \"[--seed ] [...]");
+ }
+ }
+ else if (option.first == "epsilon") {
+ try {
+ epsilon_ = std::stod(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option epsilon. Usage: options = \"[--epsilon ] [...]");
+ }
+ if (epsilon_ <= 0) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option epsilon. Usage: options = \"[--epsilon ] [...]");
+ }
+ }
+ else if (option.first == "inits-increase-order") {
+ try {
+ num_inits_increase_order_ = std::stoul(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option inits-increase-order. Usage: options = \"[--inits-increase-order ]\"");
+ }
+ if (num_inits_increase_order_ <= 0) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option inits-increase-order. Usage: options = \"[--inits-increase-order ]\"");
+ }
+ }
+ else if (option.first == "init-type-increase-order") {
+ init_type_increase_order_ = option.second;
+ if (option.second != "CLUSTERS" and option.second != "K-MEANS++") {
+ throw ged::Error(std::string("Invalid argument ") + option.second + " for option init-type-increase-order. Usage: options = \"[--init-type-increase-order CLUSTERS|K-MEANS++] [...]\"");
+ }
+ }
+ else if (option.first == "max-itrs-increase-order") {
+ try {
+ max_itrs_increase_order_ = std::stoi(option.second);
+ }
+ catch (...) {
+ throw Error(std::string("Invalid argument \"") + option.second + "\" for option max-itrs-increase-order. Usage: options = \"[--max-itrs-increase-order ] [...]");
+ }
+ }
+ else {
+ std::string valid_options("[--init-type ] [--random-inits ] [--randomness ] [--seed ] [--stdout ] ");
+ valid_options += "[--time-limit ] [--max-itrs ] [--epsilon ] ";
+ valid_options += "[--inits-increase-order ] [--init-type-increase-order ] [--max-itrs-increase-order ]";
+ throw Error(std::string("Invalid option \"") + option.first + "\". Usage: options = \"" + valid_options + "\"");
+ }
diff --git a/lang/fr/gklearn/ged/util/lsape_solver.py b/lang/fr/gklearn/ged/util/lsape_solver.py
new file mode 100644
index 0000000000..71739e7ef5
--- /dev/null
+++ b/lang/fr/gklearn/ged/util/lsape_solver.py
@@ -0,0 +1,122 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Jun 22 15:37:36 2020
+
+@author: ljia
+"""
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+
+
+class LSAPESolver(object):
+
+
+ def __init__(self, cost_matrix=None):
+ """
+ /*!
+ * @brief Constructs solver for LSAPE problem instance.
+ * @param[in] cost_matrix Pointer to the LSAPE problem instance that should be solved.
+ */
+ """
+ self._cost_matrix = cost_matrix
+ self._model = 'ECBP'
+ self._greedy_method = 'BASIC'
+ self._solve_optimally = True
+ self._minimal_cost = 0
+ self._row_to_col_assignments = []
+ self._col_to_row_assignments = []
+ self._dual_var_rows = [] # @todo
+ self._dual_var_cols = [] # @todo
+
+
+ def clear_solution(self):
+ """Clears a previously computed solution.
+ """
+ self._minimal_cost = 0
+ self._row_to_col_assignments.clear()
+ self._col_to_row_assignments.clear()
+ self._row_to_col_assignments.append([]) # @todo
+ self._col_to_row_assignments.append([])
+ self._dual_var_rows = [] # @todo
+ self._dual_var_cols = [] # @todo
+
+
+ def set_model(self, model):
+ """
+ /*!
+ * @brief Makes the solver use a specific model for optimal solving.
+ * @param[in] model The model that should be used.
+ */
+ """
+ self._solve_optimally = True
+ self._model = model
+
+
+ def solve(self, num_solutions=1):
+ """
+ /*!
+ * @brief Solves the LSAPE problem instance.
+ * @param[in] num_solutions The maximal number of solutions that should be computed.
+ */
+ """
+ self.clear_solution()
+ if self._solve_optimally:
+ row_ind, col_ind = linear_sum_assignment(self._cost_matrix) # @todo: only hungarianLSAPE ('ECBP') can be used.
+ self._row_to_col_assignments[0] = col_ind
+ self._col_to_row_assignments[0] = np.argsort(col_ind) # @todo: might be slow, can use row_ind
+ self._compute_cost_from_assignments()
+ if num_solutions > 1:
+ pass # @todo:
+ else:
+			print('The greedy method is not implemented.')
+ pass # @todo: greedy.
+# self._
+
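+	# A minimal usage sketch (illustrative values; only the optimal path based on
+	# scipy's linear_sum_assignment is currently implemented):
+	#
+	#     costs = np.array([[1., 4., 2.],
+	#                       [3., 1., 5.],
+	#                       [0., 0., 0.]])
+	#     solver = LSAPESolver(costs)
+	#     solver.set_model('ECBP')
+	#     solver.solve()
+	#     solver.minimal_cost()       # -> 2.0
+	#     solver.get_assigned_col(0)  # -> 0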
+
+ def minimal_cost(self):
+ """
+ /*!
+ * @brief Returns the cost of the computed solutions.
+ * @return Cost of computed solutions.
+ */
+ """
+ return self._minimal_cost
+
+
+ def get_assigned_col(self, row, solution_id=0):
+ """
+ /*!
+ * @brief Returns the assigned column.
+ * @param[in] row Row whose assigned column should be returned.
+ * @param[in] solution_id ID of the solution where the assignment should be looked up.
+ * @returns Column to which @p row is assigned to in solution with ID @p solution_id or ged::undefined() if @p row is not assigned to any column.
+ */
+ """
+ return self._row_to_col_assignments[solution_id][row]
+
+
+ def get_assigned_row(self, col, solution_id=0):
+ """
+ /*!
+ * @brief Returns the assigned row.
+ * @param[in] col Column whose assigned row should be returned.
+ * @param[in] solution_id ID of the solution where the assignment should be looked up.
+ * @returns Row to which @p col is assigned to in solution with ID @p solution_id or ged::undefined() if @p col is not assigned to any row.
+ */
+ """
+ return self._col_to_row_assignments[solution_id][col]
+
+
+ def num_solutions(self):
+ """
+ /*!
+ * @brief Returns the number of solutions.
+ * @returns Actual number of solutions computed by solve(). Might be smaller than @p num_solutions.
+ */
+ """
+ return len(self._row_to_col_assignments)
+
+
+ def _compute_cost_from_assignments(self): # @todo
+ self._minimal_cost = np.sum(self._cost_matrix[range(0, len(self._row_to_col_assignments[0])), self._row_to_col_assignments[0]])
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/util/misc.py b/lang/fr/gklearn/ged/util/misc.py
new file mode 100644
index 0000000000..457d2766a8
--- /dev/null
+++ b/lang/fr/gklearn/ged/util/misc.py
@@ -0,0 +1,129 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Thu Mar 19 18:13:56 2020
+
+@author: ljia
+"""
+from gklearn.utils import dummy_node
+
+
+def construct_node_map_from_solver(solver, node_map, solution_id):
+ node_map.clear()
+ num_nodes_g = node_map.num_source_nodes()
+ num_nodes_h = node_map.num_target_nodes()
+
+ # add deletions and substitutions
+ for row in range(0, num_nodes_g):
+ col = solver.get_assigned_col(row, solution_id)
+ if col >= num_nodes_h:
+ node_map.add_assignment(row, dummy_node())
+ else:
+ node_map.add_assignment(row, col)
+
+ # insertions.
+ for col in range(0, num_nodes_h):
+ if solver.get_assigned_row(col, solution_id) >= num_nodes_g:
+ node_map.add_assignment(dummy_node(), col)
+
+
+def options_string_to_options_map(options_string):
+ """Transforms an options string into an options map.
+
+ Parameters
+ ----------
+ options_string : string
+		Options string of the form "[--<option> <arg>] [...]".
+
+ Return
+ ------
+ options_map : dict{string : string}
+		Map with one key-value pair (<option>, <arg>) for each option contained in the string.
+ """
+ if options_string == '':
+		return {}
+ options_map = {}
+ words = []
+ tokenize(options_string, ' ', words)
+ expect_option_name = True
+ for word in words:
+ if expect_option_name:
+ is_opt_name, word = is_option_name(word)
+ if is_opt_name:
+ option_name = word
+ if option_name in options_map:
+ raise Exception('Multiple specification of option "' + option_name + '".')
+ options_map[option_name] = ''
+ else:
+				raise Exception('Invalid options "' + options_string + '". Usage: options = "[--<option> <arg>] [...]"')
+ else:
+ is_opt_name, word = is_option_name(word)
+ if is_opt_name:
+				raise Exception('Invalid options "' + options_string + '". Usage: options = "[--<option> <arg>] [...]"')
+ else:
+ options_map[option_name] = word
+ expect_option_name = not expect_option_name
+ return options_map
+
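+# A minimal usage sketch (illustrative values): tokenize() below splits the string
+# on spaces while keeping single-quoted arguments together, and is_option_name()
+# strips the leading "--" from option names, so that
+#
+#     options_string_to_options_map('--threads 4 --stdout 2')
+#     # -> {'threads': '4', 'stdout': '2'}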
+
+def tokenize(sentence, sep, words):
+ """Separates a sentence into words separated by sep (unless contained in single quotes).
+
+ Parameters
+ ----------
+ sentence : string
+ The sentence that should be tokenized.
+
+ sep : string
+ The separator. Must be different from "'".
+
+ words : list[string]
+ The obtained words.
+ """
+ outside_quotes = True
+ word_length = 0
+ pos_word_start = 0
+ for pos in range(0, len(sentence)):
+ if sentence[pos] == '\'':
+ if not outside_quotes and pos < len(sentence) - 1:
+ if sentence[pos + 1] != sep:
+ raise Exception('Sentence contains closing single quote which is followed by a char different from ' + sep + '.')
+ word_length += 1
+ outside_quotes = not outside_quotes
+ elif outside_quotes and sentence[pos] == sep:
+ if word_length > 0:
+ words.append(sentence[pos_word_start:pos_word_start + word_length])
+ pos_word_start = pos + 1
+ word_length = 0
+ else:
+ word_length += 1
+ if not outside_quotes:
+ raise Exception('Sentence contains unbalanced single quotes.')
+ if word_length > 0:
+ words.append(sentence[pos_word_start:pos_word_start + word_length])
+
+
+def is_option_name(word):
+ """Checks whether a word is an option name and, if so, removes the leading dashes.
+
+ Parameters
+ ----------
+ word : string
+ Word.
+
+ return
+ ------
+	True if word is of the form "--<option>".
+
+ word : string
+ The word without the leading dashes.
+ """
+ if word[0] == '\'':
+		word = word[1:-1] # strip the enclosing single quotes
+ return False, word
+ if len(word) < 3:
+ return False, word
+ if word[0] == '-' and word[1] == '-' and word[2] != '-':
+ word = word[2:]
+ return True, word
+ return False, word
\ No newline at end of file
diff --git a/lang/fr/gklearn/ged/util/util.py b/lang/fr/gklearn/ged/util/util.py
new file mode 100644
index 0000000000..05985a5dc9
--- /dev/null
+++ b/lang/fr/gklearn/ged/util/util.py
@@ -0,0 +1,620 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Tue Mar 31 17:06:22 2020
+
+@author: ljia
+"""
+import numpy as np
+from itertools import combinations
+import multiprocessing
+from multiprocessing import Pool
+from functools import partial
+import sys
+from tqdm import tqdm
+import networkx as nx
+from gklearn.ged.env import GEDEnv
+
+
+def compute_ged(g1, g2, options):
+ from gklearn.gedlib import librariesImport, gedlibpy
+
+ ged_env = gedlibpy.GEDEnv()
+ ged_env.set_edit_cost(options['edit_cost'], edit_cost_constant=options['edit_cost_constants'])
+ ged_env.add_nx_graph(g1, '')
+ ged_env.add_nx_graph(g2, '')
+ listID = ged_env.get_all_graph_ids()
+ ged_env.init(init_type=options['init_option'])
+ ged_env.set_method(options['method'], ged_options_to_string(options))
+ ged_env.init_method()
+
+ g = listID[0]
+ h = listID[1]
+ ged_env.run_method(g, h)
+ pi_forward = ged_env.get_forward_map(g, h)
+ pi_backward = ged_env.get_backward_map(g, h)
+ upper = ged_env.get_upper_bound(g, h)
+ dis = upper
+
+	# convert the maps from node indices to node labels (removed nodes are mapped to np.inf)
+ nodes1 = [n for n in g1.nodes()]
+ nodes2 = [n for n in g2.nodes()]
+ nb1 = nx.number_of_nodes(g1)
+ nb2 = nx.number_of_nodes(g2)
+ pi_forward = [nodes2[pi] if pi < nb2 else np.inf for pi in pi_forward]
+ pi_backward = [nodes1[pi] if pi < nb1 else np.inf for pi in pi_backward]
+# print(pi_forward)
+
+ return dis, pi_forward, pi_backward
+
+
+def compute_geds_cml(graphs, options={}, sort=True, parallel=False, verbose=True):
+
+ # initialize ged env.
+ ged_env = GEDEnv()
+ ged_env.set_edit_cost(options['edit_cost'], edit_cost_constants=options['edit_cost_constants'])
+ for g in graphs:
+ ged_env.add_nx_graph(g, '')
+ listID = ged_env.get_all_graph_ids()
+
+ node_labels = ged_env.get_all_node_labels()
+ edge_labels = ged_env.get_all_edge_labels()
+ node_label_costs = label_costs_to_matrix(options['node_label_costs'], len(node_labels)) if 'node_label_costs' in options else None
+ edge_label_costs = label_costs_to_matrix(options['edge_label_costs'], len(edge_labels)) if 'edge_label_costs' in options else None
+ ged_env.set_label_costs(node_label_costs, edge_label_costs)
+ ged_env.init(init_type=options['init_option'])
+ if parallel:
+ options['threads'] = 1
+ ged_env.set_method(options['method'], options)
+ ged_env.init_method()
+
+ # compute ged.
+ # options used to compute numbers of edit operations.
+ if node_label_costs is None and edge_label_costs is None:
+ neo_options = {'edit_cost': options['edit_cost'],
+ 'is_cml': False,
+ 'node_labels': options['node_labels'], 'edge_labels': options['edge_labels'],
+ 'node_attrs': options['node_attrs'], 'edge_attrs': options['edge_attrs']}
+ else:
+ neo_options = {'edit_cost': options['edit_cost'],
+ 'is_cml': True,
+ 'node_labels': node_labels,
+ 'edge_labels': edge_labels}
+ ged_mat = np.zeros((len(graphs), len(graphs)))
+ if parallel:
+ len_itr = int(len(graphs) * (len(graphs) - 1) / 2)
+ ged_vec = [0 for i in range(len_itr)]
+ n_edit_operations = [0 for i in range(len_itr)]
+ itr = combinations(range(0, len(graphs)), 2)
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(graphs_toshare, ged_env_toshare, listID_toshare):
+ global G_graphs, G_ged_env, G_listID
+ G_graphs = graphs_toshare
+ G_ged_env = ged_env_toshare
+ G_listID = listID_toshare
+ do_partial = partial(_wrapper_compute_ged_parallel, neo_options, sort)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(graphs, ged_env, listID))
+ if verbose:
+ iterator = tqdm(pool.imap_unordered(do_partial, itr, chunksize),
+ desc='computing GEDs', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_partial, itr, chunksize)
+# iterator = pool.imap_unordered(do_partial, itr, chunksize)
+ for i, j, dis, n_eo_tmp in iterator:
+ idx_itr = int(len(graphs) * i + j - (i + 1) * (i + 2) / 2)
+ ged_vec[idx_itr] = dis
+ ged_mat[i][j] = dis
+ ged_mat[j][i] = dis
+ n_edit_operations[idx_itr] = n_eo_tmp
+# print('\n-------------------------------------------')
+# print(i, j, idx_itr, dis)
+ pool.close()
+ pool.join()
+
+ else:
+ ged_vec = []
+ n_edit_operations = []
+ if verbose:
+ iterator = tqdm(range(len(graphs)), desc='computing GEDs', file=sys.stdout)
+ else:
+ iterator = range(len(graphs))
+ for i in iterator:
+# for i in range(len(graphs)):
+ for j in range(i + 1, len(graphs)):
+ if nx.number_of_nodes(graphs[i]) <= nx.number_of_nodes(graphs[j]) or not sort:
+ dis, pi_forward, pi_backward = _compute_ged(ged_env, listID[i], listID[j], graphs[i], graphs[j])
+ else:
+ dis, pi_backward, pi_forward = _compute_ged(ged_env, listID[j], listID[i], graphs[j], graphs[i])
+ ged_vec.append(dis)
+ ged_mat[i][j] = dis
+ ged_mat[j][i] = dis
+ n_eo_tmp = get_nb_edit_operations(graphs[i], graphs[j], pi_forward, pi_backward, **neo_options)
+ n_edit_operations.append(n_eo_tmp)
+
+ return ged_vec, ged_mat, n_edit_operations
+
+
+def compute_geds(graphs, options={}, sort=True, parallel=False, verbose=True):
+ from gklearn.gedlib import librariesImport, gedlibpy
+
+ # initialize ged env.
+ ged_env = gedlibpy.GEDEnv()
+ ged_env.set_edit_cost(options['edit_cost'], edit_cost_constant=options['edit_cost_constants'])
+ for g in graphs:
+ ged_env.add_nx_graph(g, '')
+ listID = ged_env.get_all_graph_ids()
+ ged_env.init()
+ if parallel:
+ options['threads'] = 1
+ ged_env.set_method(options['method'], ged_options_to_string(options))
+ ged_env.init_method()
+
+ # compute ged.
+ neo_options = {'edit_cost': options['edit_cost'],
+ 'node_labels': options['node_labels'], 'edge_labels': options['edge_labels'],
+ 'node_attrs': options['node_attrs'], 'edge_attrs': options['edge_attrs']}
+ ged_mat = np.zeros((len(graphs), len(graphs)))
+ if parallel:
+ len_itr = int(len(graphs) * (len(graphs) - 1) / 2)
+ ged_vec = [0 for i in range(len_itr)]
+ n_edit_operations = [0 for i in range(len_itr)]
+ itr = combinations(range(0, len(graphs)), 2)
+ n_jobs = multiprocessing.cpu_count()
+ if len_itr < 100 * n_jobs:
+ chunksize = int(len_itr / n_jobs) + 1
+ else:
+ chunksize = 100
+ def init_worker(graphs_toshare, ged_env_toshare, listID_toshare):
+ global G_graphs, G_ged_env, G_listID
+ G_graphs = graphs_toshare
+ G_ged_env = ged_env_toshare
+ G_listID = listID_toshare
+ do_partial = partial(_wrapper_compute_ged_parallel, neo_options, sort)
+ pool = Pool(processes=n_jobs, initializer=init_worker, initargs=(graphs, ged_env, listID))
+ if verbose:
+ iterator = tqdm(pool.imap_unordered(do_partial, itr, chunksize),
+ desc='computing GEDs', file=sys.stdout)
+ else:
+ iterator = pool.imap_unordered(do_partial, itr, chunksize)
+# iterator = pool.imap_unordered(do_partial, itr, chunksize)
+ for i, j, dis, n_eo_tmp in iterator:
+ idx_itr = int(len(graphs) * i + j - (i + 1) * (i + 2) / 2)
+ ged_vec[idx_itr] = dis
+ ged_mat[i][j] = dis
+ ged_mat[j][i] = dis
+ n_edit_operations[idx_itr] = n_eo_tmp
+# print('\n-------------------------------------------')
+# print(i, j, idx_itr, dis)
+ pool.close()
+ pool.join()
+
+ else:
+ ged_vec = []
+ n_edit_operations = []
+ if verbose:
+ iterator = tqdm(range(len(graphs)), desc='computing GEDs', file=sys.stdout)
+ else:
+ iterator = range(len(graphs))
+ for i in iterator:
+# for i in range(len(graphs)):
+ for j in range(i + 1, len(graphs)):
+ if nx.number_of_nodes(graphs[i]) <= nx.number_of_nodes(graphs[j]) or not sort:
+ dis, pi_forward, pi_backward = _compute_ged(ged_env, listID[i], listID[j], graphs[i], graphs[j])
+ else:
+ dis, pi_backward, pi_forward = _compute_ged(ged_env, listID[j], listID[i], graphs[j], graphs[i])
+ ged_vec.append(dis)
+ ged_mat[i][j] = dis
+ ged_mat[j][i] = dis
+ n_eo_tmp = get_nb_edit_operations(graphs[i], graphs[j], pi_forward, pi_backward, **neo_options)
+ n_edit_operations.append(n_eo_tmp)
+
+ return ged_vec, ged_mat, n_edit_operations
+
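+# A minimal usage sketch for compute_geds() (illustrative values; the exact options
+# consumed also depend on ged_options_to_string() and the chosen method):
+#
+#     options = {'edit_cost': 'CONSTANT',
+#                'edit_cost_constants': [4, 4, 2, 1, 1, 1],
+#                'method': 'IPFP',
+#                'node_labels': ['atom'], 'edge_labels': ['bond_type'],
+#                'node_attrs': [], 'edge_attrs': []}
+#     ged_vec, ged_mat, n_edit_ops = compute_geds(graphs, options=options,
+#                                                 sort=True, parallel=False)
+#
+# ged_vec holds the GED upper bounds for each pair (i, j) with i < j, ged_mat is the
+# symmetric distance matrix, and n_edit_ops counts the edit operations per pair.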
+
+def _wrapper_compute_ged_parallel(options, sort, itr):
+ i = itr[0]
+ j = itr[1]
+ dis, n_eo_tmp = _compute_ged_parallel(G_ged_env, G_listID[i], G_listID[j], G_graphs[i], G_graphs[j], options, sort)
+ return i, j, dis, n_eo_tmp
+
+
+def _compute_ged_parallel(env, gid1, gid2, g1, g2, options, sort):
+ if nx.number_of_nodes(g1) <= nx.number_of_nodes(g2) or not sort:
+ dis, pi_forward, pi_backward = _compute_ged(env, gid1, gid2, g1, g2)
+ else:
+ dis, pi_backward, pi_forward = _compute_ged(env, gid2, gid1, g2, g1)
+ n_eo_tmp = get_nb_edit_operations(g1, g2, pi_forward, pi_backward, **options) # [0,0,0,0,0,0]
+ return dis, n_eo_tmp
+
+
+def _compute_ged(env, gid1, gid2, g1, g2):
+ env.run_method(gid1, gid2)
+ pi_forward = env.get_forward_map(gid1, gid2)
+ pi_backward = env.get_backward_map(gid1, gid2)
+ upper = env.get_upper_bound(gid1, gid2)
+ dis = upper
+
+	# convert the maps from node indices to node labels (removed nodes are mapped to np.inf)
+ nodes1 = [n for n in g1.nodes()]
+ nodes2 = [n for n in g2.nodes()]
+ nb1 = nx.number_of_nodes(g1)
+ nb2 = nx.number_of_nodes(g2)
+ pi_forward = [nodes2[pi] if pi < nb2 else np.inf for pi in pi_forward]
+ pi_backward = [nodes1[pi] if pi < nb1 else np.inf for pi in pi_backward]
+
+ return dis, pi_forward, pi_backward
+
+
+def label_costs_to_matrix(costs, nb_labels):
+ """Reform a label cost vector to a matrix.
+
+ Parameters
+ ----------
+ costs : numpy.array
+		The vector containing the costs between labels, in the order of insertion costs, deletion costs and substitution costs.
+ nb_labels : integer
+ Number of labels.
+
+ Returns
+ -------
+ cost_matrix : numpy.array.
+ The reformed label cost matrix of size (nb_labels, nb_labels). Each row/column of cost_matrix corresponds to a label, and the first label is the dummy label. This is the same setting as in GEDData.
+ """
+ # Initialize label cost matrix.
+ cost_matrix = np.zeros((nb_labels + 1, nb_labels + 1))
+ i = 0
+ # Costs of insertions.
+ for col in range(1, nb_labels + 1):
+ cost_matrix[0, col] = costs[i]
+ i += 1
+ # Costs of deletions.
+ for row in range(1, nb_labels + 1):
+ cost_matrix[row, 0] = costs[i]
+ i += 1
+ # Costs of substitutions.
+ for row in range(1, nb_labels + 1):
+ for col in range(row + 1, nb_labels + 1):
+ cost_matrix[row, col] = costs[i]
+ cost_matrix[col, row] = costs[i]
+ i += 1
+
+ return cost_matrix
+
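+# A small worked example for label_costs_to_matrix() (illustrative names): with
+# nb_labels = 2 and costs = [i1, i2, d1, d2, s12] (two insertion costs, two
+# deletion costs and one substitution cost between labels 1 and 2), the result is
+#
+#     [[ 0,  i1,  i2],
+#      [d1,   0, s12],
+#      [d2, s12,   0]]
+#
+# where row/column 0 stands for the dummy label.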
+
+def get_nb_edit_operations(g1, g2, forward_map, backward_map, edit_cost=None, is_cml=False, **kwargs):
+ if is_cml:
+ if edit_cost == 'CONSTANT':
+ node_labels = kwargs.get('node_labels', [])
+ edge_labels = kwargs.get('edge_labels', [])
+ return get_nb_edit_operations_symbolic_cml(g1, g2, forward_map, backward_map,
+ node_labels=node_labels, edge_labels=edge_labels)
+ else:
+ raise Exception('Edit cost "', edit_cost, '" is not supported.')
+ else:
+ if edit_cost == 'LETTER' or edit_cost == 'LETTER2':
+ return get_nb_edit_operations_letter(g1, g2, forward_map, backward_map)
+ elif edit_cost == 'NON_SYMBOLIC':
+ node_attrs = kwargs.get('node_attrs', [])
+ edge_attrs = kwargs.get('edge_attrs', [])
+ return get_nb_edit_operations_nonsymbolic(g1, g2, forward_map, backward_map,
+ node_attrs=node_attrs, edge_attrs=edge_attrs)
+ elif edit_cost == 'CONSTANT':
+ node_labels = kwargs.get('node_labels', [])
+ edge_labels = kwargs.get('edge_labels', [])
+ return get_nb_edit_operations_symbolic(g1, g2, forward_map, backward_map,
+ node_labels=node_labels, edge_labels=edge_labels)
+ else:
+ return get_nb_edit_operations_symbolic(g1, g2, forward_map, backward_map)
+
+
+def get_nb_edit_operations_symbolic_cml(g1, g2, forward_map, backward_map,
+ node_labels=[], edge_labels=[]):
+ """Compute times that edit operations are used in an edit path for symbolic-labeled graphs, where the costs are different for each pair of nodes.
+
+ Returns
+ -------
+ list
+		A vector of numbers of times that costs between labels are used in an edit path, formed in the order of node insertion costs, node deletion costs, node substitution costs, edge insertion costs, edge deletion costs, edge substitution costs. The dummy label is the first label, and the self label costs are not included.
+ """
+ # Initialize.
+ nb_ops_node = np.zeros((1 + len(node_labels), 1 + len(node_labels)))
+ nb_ops_edge = np.zeros((1 + len(edge_labels), 1 + len(edge_labels)))
+
+ # For nodes.
+ nodes1 = [n for n in g1.nodes()]
+ for i, map_i in enumerate(forward_map):
+ label1 = tuple(g1.nodes[nodes1[i]].items()) # @todo: order and faster
+ idx_label1 = node_labels.index(label1) # @todo: faster
+ if map_i == np.inf: # deletions.
+ nb_ops_node[idx_label1 + 1, 0] += 1
+ else: # substitutions.
+ label2 = tuple(g2.nodes[map_i].items())
+ if label1 != label2:
+ idx_label2 = node_labels.index(label2) # @todo: faster
+ nb_ops_node[idx_label1 + 1, idx_label2 + 1] += 1
+ # insertions.
+ nodes2 = [n for n in g2.nodes()]
+ for i, map_i in enumerate(backward_map):
+ if map_i == np.inf:
+ label = tuple(g2.nodes[nodes2[i]].items())
+ idx_label = node_labels.index(label) # @todo: faster
+ nb_ops_node[0, idx_label + 1] += 1
+
+ # For edges.
+ edges1 = [e for e in g1.edges()]
+ edges2_marked = []
+ for nf1, nt1 in edges1:
+ label1 = tuple(g1.edges[(nf1, nt1)].items())
+ idx_label1 = edge_labels.index(label1) # @todo: faster
+ idxf1 = nodes1.index(nf1) # @todo: faster
+ idxt1 = nodes1.index(nt1) # @todo: faster
+ # At least one of the nodes is removed, thus the edge is removed.
+ if forward_map[idxf1] == np.inf or forward_map[idxt1] == np.inf:
+ nb_ops_edge[idx_label1 + 1, 0] += 1
+ # corresponding edge is in g2.
+ else:
+ nf2, nt2 = forward_map[idxf1], forward_map[idxt1]
+ if (nf2, nt2) in g2.edges():
+ edges2_marked.append((nf2, nt2))
+ # If edge labels are different.
+ label2 = tuple(g2.edges[(nf2, nt2)].items())
+ if label1 != label2:
+ idx_label2 = edge_labels.index(label2) # @todo: faster
+ nb_ops_edge[idx_label1 + 1, idx_label2 + 1] += 1
+ # Switch nf2 and nt2, for directed graphs.
+ elif (nt2, nf2) in g2.edges():
+ edges2_marked.append((nt2, nf2))
+ # If edge labels are different.
+ label2 = tuple(g2.edges[(nt2, nf2)].items())
+ if label1 != label2:
+ idx_label2 = edge_labels.index(label2) # @todo: faster
+ nb_ops_edge[idx_label1 + 1, idx_label2 + 1] += 1
+ # Corresponding nodes are in g2, however the edge is removed.
+ else:
+ nb_ops_edge[idx_label1 + 1, 0] += 1
+ # insertions.
+ for nt, nf in g2.edges():
+ if (nt, nf) not in edges2_marked and (nf, nt) not in edges2_marked: # @todo: for directed.
+ label = tuple(g2.edges[(nt, nf)].items())
+ idx_label = edge_labels.index(label) # @todo: faster
+ nb_ops_edge[0, idx_label + 1] += 1
+
+ # Reshape the numbers of edit operations into a vector.
+ nb_eo_vector = []
+ # node insertion.
+ for i in range(1, len(nb_ops_node)):
+ nb_eo_vector.append(nb_ops_node[0, i])
+ # node deletion.
+ for i in range(1, len(nb_ops_node)):
+ nb_eo_vector.append(nb_ops_node[i, 0])
+ # node substitution.
+ for i in range(1, len(nb_ops_node)):
+ for j in range(i + 1, len(nb_ops_node)):
+ nb_eo_vector.append(nb_ops_node[i, j])
+ # edge insertion.
+ for i in range(1, len(nb_ops_edge)):
+ nb_eo_vector.append(nb_ops_edge[0, i])
+ # edge deletion.
+ for i in range(1, len(nb_ops_edge)):
+ nb_eo_vector.append(nb_ops_edge[i, 0])
+ # edge substitution.
+ for i in range(1, len(nb_ops_edge)):
+ for j in range(i + 1, len(nb_ops_edge)):
+ nb_eo_vector.append(nb_ops_edge[i, j])
+
+ return nb_eo_vector
+
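+# Note (a usage sketch, not called anywhere in this module): the vector returned
+# by get_nb_edit_operations_symbolic_cml is ordered like the full label cost
+# vector (node insertions, node deletions, node substitutions, then the same for
+# edges), so the cost of the edit path under a given cost vector `costs` can be
+# obtained as:
+#
+#     dis = np.dot(nb_eo_vector, costs)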
+
+def get_nb_edit_operations_symbolic(g1, g2, forward_map, backward_map,
+ node_labels=[], edge_labels=[]):
+ """Compute the number of each edit operations for symbolic-labeled graphs.
+ """
+ n_vi = 0
+ n_vr = 0
+ n_vs = 0
+ n_ei = 0
+ n_er = 0
+ n_es = 0
+
+ nodes1 = [n for n in g1.nodes()]
+ for i, map_i in enumerate(forward_map):
+ if map_i == np.inf:
+ n_vr += 1
+ else:
+ for nl in node_labels:
+ label1 = g1.nodes[nodes1[i]][nl]
+ label2 = g2.nodes[map_i][nl]
+ if label1 != label2:
+ n_vs += 1
+ break
+ for map_i in backward_map:
+ if map_i == np.inf:
+ n_vi += 1
+
+# idx_nodes1 = range(0, len(node1))
+
+ edges1 = [e for e in g1.edges()]
+ nb_edges2_cnted = 0
+ for n1, n2 in edges1:
+ idx1 = nodes1.index(n1)
+ idx2 = nodes1.index(n2)
+ # one of the nodes is removed, thus the edge is removed.
+ if forward_map[idx1] == np.inf or forward_map[idx2] == np.inf:
+ n_er += 1
+ # corresponding edge is in g2.
+ elif (forward_map[idx1], forward_map[idx2]) in g2.edges():
+ nb_edges2_cnted += 1
+ # edge labels are different.
+ for el in edge_labels:
+ label1 = g2.edges[((forward_map[idx1], forward_map[idx2]))][el]
+ label2 = g1.edges[(n1, n2)][el]
+ if label1 != label2:
+ n_es += 1
+ break
+ elif (forward_map[idx2], forward_map[idx1]) in g2.edges():
+ nb_edges2_cnted += 1
+ # edge labels are different.
+ for el in edge_labels:
+ label1 = g2.edges[((forward_map[idx2], forward_map[idx1]))][el]
+ label2 = g1.edges[(n1, n2)][el]
+ if label1 != label2:
+ n_es += 1
+ break
+ # corresponding nodes are in g2, however the edge is removed.
+ else:
+ n_er += 1
+ n_ei = nx.number_of_edges(g2) - nb_edges2_cnted
+
+ return n_vi, n_vr, n_vs, n_ei, n_er, n_es
+
+
+def get_nb_edit_operations_letter(g1, g2, forward_map, backward_map):
+ """Compute the number of each edit operations.
+ """
+ n_vi = 0
+ n_vr = 0
+ n_vs = 0
+ sod_vs = 0
+ n_ei = 0
+ n_er = 0
+
+ nodes1 = [n for n in g1.nodes()]
+ for i, map_i in enumerate(forward_map):
+ if map_i == np.inf:
+ n_vr += 1
+ else:
+ n_vs += 1
+ diff_x = float(g1.nodes[nodes1[i]]['x']) - float(g2.nodes[map_i]['x'])
+ diff_y = float(g1.nodes[nodes1[i]]['y']) - float(g2.nodes[map_i]['y'])
+ sod_vs += np.sqrt(np.square(diff_x) + np.square(diff_y))
+ for map_i in backward_map:
+ if map_i == np.inf:
+ n_vi += 1
+
+# idx_nodes1 = range(0, len(node1))
+
+ edges1 = [e for e in g1.edges()]
+ nb_edges2_cnted = 0
+ for n1, n2 in edges1:
+ idx1 = nodes1.index(n1)
+ idx2 = nodes1.index(n2)
+ # one of the nodes is removed, thus the edge is removed.
+ if forward_map[idx1] == np.inf or forward_map[idx2] == np.inf:
+ n_er += 1
+ # corresponding edge is in g2. Edge label is not considered.
+ elif (forward_map[idx1], forward_map[idx2]) in g2.edges() or \
+ (forward_map[idx2], forward_map[idx1]) in g2.edges():
+ nb_edges2_cnted += 1
+ # corresponding nodes are in g2, however the edge is removed.
+ else:
+ n_er += 1
+ n_ei = nx.number_of_edges(g2) - nb_edges2_cnted
+
+ return n_vi, n_vr, n_vs, sod_vs, n_ei, n_er
+
+
+def get_nb_edit_operations_nonsymbolic(g1, g2, forward_map, backward_map,
+ node_attrs=[], edge_attrs=[]):
+ """Compute the number of each edit operations.
+ """
+ n_vi = 0
+ n_vr = 0
+ n_vs = 0
+ sod_vs = 0
+ n_ei = 0
+ n_er = 0
+ n_es = 0
+ sod_es = 0
+
+ nodes1 = [n for n in g1.nodes()]
+ for i, map_i in enumerate(forward_map):
+ if map_i == np.inf:
+ n_vr += 1
+ else:
+ n_vs += 1
+ sum_squares = 0
+ for a_name in node_attrs:
+ diff = float(g1.nodes[nodes1[i]][a_name]) - float(g2.nodes[map_i][a_name])
+ sum_squares += np.square(diff)
+ sod_vs += np.sqrt(sum_squares)
+ for map_i in backward_map:
+ if map_i == np.inf:
+ n_vi += 1
+
+# idx_nodes1 = range(0, len(node1))
+
+ edges1 = [e for e in g1.edges()]
+ for n1, n2 in edges1:
+ idx1 = nodes1.index(n1)
+ idx2 = nodes1.index(n2)
+ n1_g2 = forward_map[idx1]
+ n2_g2 = forward_map[idx2]
+ # one of the nodes is removed, thus the edge is removed.
+ if n1_g2 == np.inf or n2_g2 == np.inf:
+ n_er += 1
+ # corresponding edge is in g2.
+ elif (n1_g2, n2_g2) in g2.edges():
+ n_es += 1
+ sum_squares = 0
+ for a_name in edge_attrs:
+ diff = float(g1.edges[n1, n2][a_name]) - float(g2.edges[n1_g2, n2_g2][a_name])
+ sum_squares += np.square(diff)
+ sod_es += np.sqrt(sum_squares)
+ elif (n2_g2, n1_g2) in g2.edges():
+ n_es += 1
+ sum_squares = 0
+ for a_name in edge_attrs:
+ diff = float(g1.edges[n2, n1][a_name]) - float(g2.edges[n2_g2, n1_g2][a_name])
+ sum_squares += np.square(diff)
+ sod_es += np.sqrt(sum_squares)
+ # corresponding nodes are in g2, however the edge is removed.
+ else:
+ n_er += 1
+ n_ei = nx.number_of_edges(g2) - n_es
+
+ return n_vi, n_vr, sod_vs, n_ei, n_er, sod_es
+
+
+def ged_options_to_string(options):
+ opt_str = ' '
+ for key, val in options.items():
+ if key == 'initialization_method':
+ opt_str += '--initialization-method ' + str(val) + ' '
+ elif key == 'initialization_options':
+ opt_str += '--initialization-options ' + str(val) + ' '
+ elif key == 'lower_bound_method':
+ opt_str += '--lower-bound-method ' + str(val) + ' '
+ elif key == 'random_substitution_ratio':
+ opt_str += '--random-substitution-ratio ' + str(val) + ' '
+ elif key == 'initial_solutions':
+ opt_str += '--initial-solutions ' + str(val) + ' '
+ elif key == 'ratio_runs_from_initial_solutions':
+ opt_str += '--ratio-runs-from-initial-solutions ' + str(val) + ' '
+ elif key == 'threads':
+ opt_str += '--threads ' + str(val) + ' '
+ elif key == 'num_randpost_loops':
+ opt_str += '--num-randpost-loops ' + str(val) + ' '
+ elif key == 'max_randpost_retrials':
+ opt_str += '--maxrandpost-retrials ' + str(val) + ' '
+ elif key == 'randpost_penalty':
+ opt_str += '--randpost-penalty ' + str(val) + ' '
+ elif key == 'randpost_decay':
+ opt_str += '--randpost-decay ' + str(val) + ' '
+ elif key == 'log':
+ opt_str += '--log ' + str(val) + ' '
+ elif key == 'randomness':
+ opt_str += '--randomness ' + str(val) + ' '
+
+# if not isinstance(val, list):
+# opt_str += '--' + key.replace('_', '-') + ' '
+# if val == False:
+# val_str = 'FALSE'
+# else:
+# val_str = str(val)
+# opt_str += val_str + ' '
+
+ return opt_str
\ No newline at end of file
diff --git a/lang/fr/gklearn/gedlib/README.rst b/lang/fr/gklearn/gedlib/README.rst
new file mode 100644
index 0000000000..7a44bbfe7e
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/README.rst
@@ -0,0 +1,97 @@
+GEDLIBPY
+====================================
+
+Please read https://dbblumenthal.github.io/gedlib/ before using the Python code.
+You can also find the documentation of this module in the documentation/build/html folder.
+
+Make sure you have numpy installed (and Cython if you have to recompile the library). You can use pip for this.
+
+
+Running the script
+-------------------
+
+After downloading the entire folder, you can run test.py to ensure the library works.
+
+For your code, you have to make two imports::
+
+ import librariesImport
+ import gedlibpy
+
+With these imports you can call every function of the library. You can't move any folder or file of the library; please make sure that the directory structure remains the same.
+
+This library is compiled for Python 3 only. If you want to use it with Python 2, you have to recompile it with setup.py, using this command in your favorite shell::
+
+ python setup.py build_ext --inplace
+
+After this step, you can use the same import lines as with Python 3; it will work. Check the documentation inside the documentation/build/html folder before using a function. You can also copy the test examples for basic use.
+
+
+A problem with the library?
+-------------------------------
+
+If the library isn't found, you may need to recompile the Python library, because your Linux setup may differ from mine. Please delete gedlibpy.so, gedlibpy.cpp and the build folder, then use this command in a Linux shell::
+
+ python3 setup.py build_ext --inplace
+
+You can do this with Python 2 as well, but make sure you use the same Python version for the compilation and for your code.
+
+If it still doesn't work, the version of GedLib or of another library may be the problem. In that case, you can re-install GedLib on your computer. You can download it from this git: https://dbblumenthal.github.io/gedlib/
+
+After that, you have to install GedLib with the Python installer.
+Just call::
+
+ python3 install.py
+
+Make the links as indicated in the documentation. Use the same directory structure as this library, just replacing the .so files and folders with those of your installation. After that, you can recompile the Python library with the setup command.
+
+If you use Mac OS, you have to follow all of this part and install the external libraries with this command::
+
+ install_name_tool -change //
+
+For example, you have to write these lines::
+
+ install_name_tool -change libdoublefann.2.dylib lib/fann/libdoublefann.2.dylib gedlibpy.so
+ install_name_tool -change libsvm.so lib/libsvm.3.22/libsvm.so gedlibpy.so
+ install_name_tool -change libnomad.so lib/nomad/libnomad.so gedlibpy.so
+ install_name_tool -change libsgtelib.so lib/nomad/libsgtelib.so gedlibpy.so
+
+The name of the gedlibpy library file can differ if you use Python 3.
+
+If your problem persists, you can contact me at natacha.lambert@unicaen.fr.
+
+How to use this library
+-------------------------
+
+This library allows you to compute the edit distance between two graphs. You have to follow these steps to use it:
+
+- Add your graphs (GXL files, NetworkX structures or your own structure; make sure that the internal type is the same)
+- Choose your cost function
+- Initialize your environment (after that, the cost function and your graphs can't be modified)
+- Choose your computation method
+- Run the computation with the IDs of the two graphs. You can get an ID when you add the graph or with some functions
+- Retrieve the result with different functions (node map, edit distance, etc.)
+
+Here is an example of code with GXL graphs::
+
+ gedlibpy.load_GXL_graphs('include/gedlib-master/data/datasets/Mutagenicity/data/', 'collections/MUTA_10.xml')
+ listID = gedlibpy.get_all_graph_ids()
+ gedlibpy.set_edit_cost("CHEM_1")
+ gedlibpy.init()
+ gedlibpy.set_method("IPFP", "")
+ gedlibpy.init_method()
+ g = listID[0]
+ h = listID[1]
+
+ gedlibpy.run_method(g,h)
+
+ print("Node Map : ", gedlibpy.get_node_map(g,h))
+ print ("Upper Bound = " + str(gedlibpy.get_upper_bound(g,h)) + ", Lower Bound = " + str(gedlibpy.get_lower_bound(g,h)) + ", Runtime = " + str(gedlibpy.get_runtime(g,h)))
+
+
+Please read the documentation for more examples and functions.
+
+
+Advice if you don't run your code from a shell
+------------------------------------------------
+
+The Python library doesn't report every C++ error. If your program restarts because of an error in your code, please run it from a Linux shell so you can see the C++ errors.
diff --git a/lang/fr/gklearn/gedlib/__init__.py b/lang/fr/gklearn/gedlib/__init__.py
new file mode 100644
index 0000000000..1289a2c44b
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/__init__.py
@@ -0,0 +1,10 @@
+# -*-coding:utf-8 -*-
+"""
+gedlib
+
+"""
+
+# info
+__version__ = "0.1"
+__author__ = "Linlin Jia"
+__date__ = "March 2020"
diff --git a/lang/fr/gklearn/gedlib/documentation/Makefile b/lang/fr/gklearn/gedlib/documentation/Makefile
new file mode 100644
index 0000000000..d42ba399da
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+SPHINXPROJ = Cython_GedLib
+SOURCEDIR = source
+BUILDDIR = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file
diff --git a/lang/fr/gklearn/gedlib/documentation/gedlibpy.pdf b/lang/fr/gklearn/gedlib/documentation/gedlibpy.pdf
new file mode 100644
index 0000000000..3637365a66
Binary files /dev/null and b/lang/fr/gklearn/gedlib/documentation/gedlibpy.pdf differ
diff --git a/lang/fr/gklearn/gedlib/documentation/make.bat b/lang/fr/gklearn/gedlib/documentation/make.bat
new file mode 100644
index 0000000000..16f31bbd35
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/make.bat
@@ -0,0 +1,36 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=source
+set BUILDDIR=build
+set SPHINXPROJ=Cython_GedLib
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+ echo.
+ echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+ echo.installed, then set the SPHINXBUILD environment variable to point
+ echo.to the full path of the 'sphinx-build' executable. Alternatively you
+ echo.may add the Sphinx directory to PATH.
+ echo.
+ echo.If you don't have Sphinx installed, grab it from
+ echo.http://sphinx-doc.org/
+ exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+
+:end
+popd
diff --git a/lang/fr/gklearn/gedlib/documentation/source/conf.py b/lang/fr/gklearn/gedlib/documentation/source/conf.py
new file mode 100644
index 0000000000..d1836bcf27
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/source/conf.py
@@ -0,0 +1,199 @@
+# -*- coding: utf-8 -*-
+#
+# Python_GedLib documentation build configuration file, created by
+# sphinx-quickstart on Thu Jun 13 16:10:06 2019.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+#sys.path.insert(0, os.path.abspath('.'))
+sys.path.insert(0, os.path.abspath('../../'))
+sys.path.append("../../lib/fann")
+#,"lib/gedlib", "lib/libsvm.3.22","lib/nomad"
+
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ['sphinx.ext.autodoc',
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.coverage',
+ 'sphinx.ext.mathjax',
+ 'sphinx.ext.githubpages']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'GeDLiBPy'
+copyright = u'2019, Natacha Lambert'
+author = u'Natacha Lambert'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = u'1.0'
+# The full version, including alpha/beta/rc tags.
+release = u'1.0'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This patterns also effect to html_static_path and html_extra_path
+exclude_patterns = []
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'alabaster'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# This is required for the alabaster theme
+# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
+html_sidebars = {
+ '**': [
+ 'relations.html', # needs 'show_related': True theme option to display
+ 'searchbox.html',
+ ]
+}
+
+
+# -- Options for HTMLHelp output ------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'gedlibpydoc'
+
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, 'gedlibpy.tex', u'gedlibpy Documentation',
+ u'Natacha Lambert', 'manual'),
+]
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'gedlibpy', u'gedlibpy Documentation',
+ [author], 1)
+]
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'gedlibpy', u'gedlibpy Documentation',
+ author, 'gedlibpy', 'One line description of project.',
+ 'Miscellaneous'),
+]
+
+
+
+# -- Options for Epub output ----------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = project
+epub_author = author
+epub_publisher = author
+epub_copyright = copyright
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#
+# epub_identifier = ''
+
+# A unique identification for the text.
+#
+# epub_uid = ''
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = ['search.html']
+
+
+
+# Example configuration for intersphinx: refer to the Python standard library.
+intersphinx_mapping = {'https://docs.python.org/': None}
diff --git a/lang/fr/gklearn/gedlib/documentation/source/doc.rst b/lang/fr/gklearn/gedlib/documentation/source/doc.rst
new file mode 100644
index 0000000000..07ec991061
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/source/doc.rst
@@ -0,0 +1,2 @@
+.. automodule:: gedlibpy
+ :members:
diff --git a/lang/fr/gklearn/gedlib/documentation/source/editcost.rst b/lang/fr/gklearn/gedlib/documentation/source/editcost.rst
new file mode 100644
index 0000000000..ea3d36be3e
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/source/editcost.rst
@@ -0,0 +1,42 @@
+How to add your own editCost class
+=========================================
+
+When you choose your cost function, you can set some parameters to personalize it. But if your graphs have a type that doesn't correspond to the available choices, you can create your own edit cost function.
+
+For this, you have to write it in C++.
+
+C++ side
+-------------
+
+Your class must inherit from the EditCost class, which is an abstract class. You can find it here: include/gedlib-master/src/edit_costs
+
+You can take inspiration from the other edit cost classes to understand how to use it. You have to override these functions:
+
+- virtual double node_ins_cost_fun(const UserNodeLabel & node_label) const final;
+- virtual double node_del_cost_fun(const UserNodeLabel & node_label) const final;
+- virtual double node_rel_cost_fun(const UserNodeLabel & node_label_1, const UserNodeLabel & node_label_2) const final;
+- virtual double edge_ins_cost_fun(const UserEdgeLabel & edge_label) const final;
+- virtual double edge_del_cost_fun(const UserEdgeLabel & edge_label) const final;
+- virtual double edge_rel_cost_fun(const UserEdgeLabel & edge_label_1, const UserEdgeLabel & edge_label_2) const final;
+
+You can add attributes for your own parameters or more functions, but these overrides are mandatory.
+
+When your class is ready, please go to the C++ bind here: src/GedLibBind.cpp. The function is:
+
+ void setPersonalEditCost(std::vector<double> editCostConstants){env.set_edit_costs(Your EditCost Class(editCostConstants));}
+
+You just have to initialize your class there. Parameters aren't mandatory; they are empty by default, and if your class doesn't take any, you can skip this. After that, you have to recompile the project.
+
+Python side
+----------------
+
+For this, use setup.py with this command in a Linux shell::
+
+ python3 setup.py build_ext --inplace
+
+You can also do it with Python 2.
+
+Now you can use your edit cost function with the Python function set_personal_edit_cost(edit_cost_constant).
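+
+A minimal sketch of how this could look (the cost constants below are arbitrary example values; pass an empty list if your class takes no parameters):
+
+.. code-block:: python
+
+ gedlibpy.load_GXL_graphs('include/gedlib-master/data/datasets/Mutagenicity/data/', 'collections/MUTA_10.xml')
+ gedlibpy.set_personal_edit_cost([3.0, 3.0, 1.0, 3.0, 3.0, 1.0])
+ gedlibpy.init()
+ gedlibpy.set_method("IPFP", "")
+ gedlibpy.init_method()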
+
+If you want more information on the C++ side, you can check the documentation of the original library here: https://github.com/dbblumenthal/gedlib
+
diff --git a/lang/fr/gklearn/gedlib/documentation/source/examples.rst b/lang/fr/gklearn/gedlib/documentation/source/examples.rst
new file mode 100644
index 0000000000..c3496b9472
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/source/examples.rst
@@ -0,0 +1,165 @@
+Examples
+==============
+
+Before using each example, please make sure to put these lines at the beginning of your code:
+
+.. code-block:: python
+
+ import librariesImport
+ import gedlibpy
+
+Use your own path to access it, without changing the library's directory structure. After that, you are ready to use the library.
+
+When you want to start a new computation, please use this function:
+
+.. code-block:: python
+
+ gedlibpy.restart_env()
+
+All the graphs and results will be deleted, so make sure you don't need them anymore.
+
+Classic case with GXL graphs
+------------------------------------
+.. code-block:: python
+
+ gedlibpy.load_GXL_graphs('include/gedlib-master/data/datasets/Mutagenicity/data/', 'collections/MUTA_10.xml')
+ listID = gedlibpy.get_all_graph_ids()
+ gedlibpy.set_edit_cost("CHEM_1")
+
+ gedlibpy.init()
+
+ gedlibpy.set_method("IPFP", "")
+ gedlibpy.init_method()
+
+ g = listID[0]
+ h = listID[1]
+
+ gedlibpy.run_method(g,h)
+
+ print("Node Map : ", gedlibpy.get_node_map(g,h))
+ print ("Upper Bound = " + str(gedlibpy.get_upper_bound(g,h)) + ", Lower Bound = " + str(gedlibpy.get_lower_bound(g,h)) + ", Runtime = " + str(gedlibpy.get_runtime(g,h)))
+
+
+You can also use this function:
+
+.. code-block:: python
+
+ compute_edit_distance_on_GXl_graphs(path_folder, path_XML, edit_cost, method, options="", init_option = "EAGER_WITHOUT_SHUFFLED_COPIES")
+
+This function computes the edit distances between all pairs of graphs, including a graph with itself. You can access the results with some functions and the graph IDs. Please see the documentation of the function for more information.
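+
+For instance (a sketch reusing the dataset and the settings from the example above):
+
+.. code-block:: python
+
+ gedlibpy.compute_edit_distance_on_GXl_graphs('include/gedlib-master/data/datasets/Mutagenicity/data/', 'collections/MUTA_10.xml', 'CHEM_1', 'IPFP', '')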
+
+Classic case with NX graphs
+------------------------------------
+.. code-block:: python
+
+ for graph in dataset :
+ gedlibpy.add_nx_graph(graph, classe)
+ listID = gedlibpy.get_all_graph_ids()
+ gedlibpy.set_edit_cost("CHEM_1")
+
+ gedlibpy.init()
+
+ gedlibpy.set_method("IPFP", "")
+ gedlibpy.init_method()
+
+ g = listID[0]
+ h = listID[1]
+
+ gedlibpy.run_method(g,h)
+
+ print("Node Map : ", gedlibpy.get_node_map(g,h))
+ print ("Upper Bound = " + str(gedlibpy.get_upper_bound(g,h)) + ", Lower Bound = " + str(gedlibpy.get_lower_bound(g,h)) + ", Runtime = " + str(gedlibpy.get_runtime(g,h)))
+
+You can also use this function:
+
+.. code-block:: python
+
+ compute_edit_distance_on_nx_graphs(dataset, classes, edit_cost, method, options, init_option = "EAGER_WITHOUT_SHUFFLED_COPIES")
+
+This function computes the edit distances between all pairs of graphs, including a graph with itself. You can see the results in the return value and with some functions and the graph IDs. Please see the documentation of the function for more information.
+
+Or this function:
+
+.. code-block:: python
+
+ compute_ged_on_two_graphs(g1,g2, edit_cost, method, options, init_option = "EAGER_WITHOUT_SHUFFLED_COPIES")
+
+This function computes the edit distance for just two graphs. Please see the documentation of the function for more information.
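+
+For instance (a sketch, assuming ``g1`` and ``g2`` are two NetworkX graphs):
+
+.. code-block:: python
+
+ result = gedlibpy.compute_ged_on_two_graphs(g1, g2, 'CHEM_1', 'IPFP', '')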
+
+Add a graph from scratch
+------------------------------------
+.. code-block:: python
+
+ currentID = gedlibpy.add_graph()
+ gedlibpy.add_node(currentID, "_1", {"chem" : "C"})
+ gedlibpy.add_node(currentID, "_2", {"chem" : "O"})
+ gedlibpy.add_edge(currentID,"_1", "_2", {"valence": "1"} )
+
+Please make sure the types are right (strings for IDs and a dictionary for labels). If you want a symmetrical graph, you can use this function to ensure the symmetry:
+
+.. code-block:: python
+
+ add_symmetrical_edge(graph_id, tail, head, edge_label)
+
+If you have an NX structure, you can directly use this function:
+
+.. code-block:: python
+
+ add_nx_graph(g, classe, ignore_duplicates=True)
+
+Even if your graphs are in another structure, you can use this function:
+
+.. code-block:: python
+
+ add_random_graph(name, classe, list_of_nodes, list_of_edges, ignore_duplicates=True)
+
+Please read the documentation before using it, and respect the types.
+
+Median computation
+------------------------------------
+
+An example is available in the Median_Example folder. It contains what is necessary to compute a median graph. You can launch xp-letter-gbr.py to compute the median graphs of all the letters in the dataset, or median.py for the letter Z only.
+
+To summarize the usage, you can follow this example:
+
+.. code-block:: python
+
+ import pygraph #Available with the median example
+ from median import draw_Letter_graph, compute_median, compute_median_set
+
+ gedlibpy.load_GXL_graphs('../include/gedlib-master/data/datasets/Letter/HIGH/', '../include/gedlib-master/data/collections/Letter_Z.xml')
+ gedlibpy.set_edit_cost("LETTER")
+ gedlibpy.init()
+ gedlibpy.set_method("IPFP", "")
+ gedlibpy.init_method()
+ listID = gedlibpy.get_all_graph_ids()
+
+ dataset,my_y = pygraph.utils.graphfiles.loadDataset("../include/gedlib-master/data/datasets/Letter/HIGH/Letter_Z.cxl")
+ median, sod, sods_path,set_median = compute_median(gedlibpy,listID,dataset,verbose=True)
+ draw_Letter_graph(median)
+
+Please use the functions in the median.py code to simplify your work. You can adapt this example to your case. Also, some functions in the PythonGedLib module can make the work easier. Ask Benoît Gauzere if you want more information.
+
+Hungarian algorithm
+------------------------------------
+
+
+LSAPE
+~~~~~~
+
+.. code-block:: python
+
+ result = gedlibpy.hungarian_LSAPE(matrixCost)
+ print("Rho = ", result[0], " Varrho = ", result[1], " u = ", result[2], " v = ", result[3])
+
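+A possible way to build ``matrixCost`` (a sketch with arbitrary values; in the LSAPE setting the last row and the last column usually carry the deletion and insertion costs):
+
+.. code-block:: python
+
+ matrixCost = [[2., 5., 1.], [4., 3., 2.], [1., 6., 0.]]
+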
+
+LSAP
+~~~~~~
+
+.. code-block:: python
+
+ result = gedlibpy.hungarian_LSAP(matrixCost)
+ print("Rho = ", result[0], " Varrho = ", result[1], " u = ", result[2], " v = ", result[3])
+
+
+
diff --git a/lang/fr/gklearn/gedlib/documentation/source/index.rst b/lang/fr/gklearn/gedlib/documentation/source/index.rst
new file mode 100644
index 0000000000..42b70672a5
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/source/index.rst
@@ -0,0 +1,36 @@
+.. Python_GedLib documentation master file, created by
+ sphinx-quickstart on Thu Jun 13 16:10:06 2019.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+
+Welcome to GEDLIBPY's documentation!
+=========================================
+
+This module allows you to use a C++ library for edit distance between graphs (GedLib) from Python.
+
+Before using it, please read the first section to ensure a good start with the library. Then, you can follow the examples or the information about each method.
+
+.. toctree::
+ :maxdepth: 2
+ :caption: Contents:
+
+ readme
+ editcost
+ examples
+ doc
+
+Authors
+~~~~~~~~
+
+* David Blumenthal for C++ module
+* Natacha Lambert for Python module
+
+Copyright (C) 2019 by all the authors
+
+Indices and tables
+~~~~~~~~~~~~~~~~~~~~~
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/lang/fr/gklearn/gedlib/documentation/source/readme.rst b/lang/fr/gklearn/gedlib/documentation/source/readme.rst
new file mode 100644
index 0000000000..bafe5ea95a
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/documentation/source/readme.rst
@@ -0,0 +1,97 @@
+How to install this library
+====================================
+
+Please read https://dbblumenthal.github.io/gedlib/ before using the Python code.
+You can also find the documentation of this module in the documentation/build/html folder.
+
+Make sure you have numpy installed (and Cython if you have to recompile the library). You can use pip for this.
+
+
+Running the script
+-------------------
+
+After downloading the entire folder, you can run test.py to ensure the library works.
+
+For your code, you have to make two imports::
+
+ import librariesImport
+ import gedlibpy
+
+With these imports you can call every function of the library. You can't move any folder or file of the library; please make sure that the directory structure remains the same.
+
+This library is compiled for Python 3 only. If you want to use it with Python 2, you have to recompile it with setup.py, using this command in your favorite shell::
+
+ python setup.py build_ext --inplace
+
+After this step, you can use the same import lines as with Python 3; it will work. Check the documentation inside the documentation/build/html folder before using a function. You can also copy the test examples for basic use.
+
+
+A problem with the library?
+-------------------------------
+
+If the library isn't found, you may need to recompile the Python library, because your Linux setup may differ from mine. Please delete gedlibpy.so, gedlibpy.cpp and the build folder, then use this command in a Linux shell::
+
+ python3 setup.py build_ext --inplace
+
+You can do this with Python 2 as well, but make sure you use the same Python version for the compilation and for your code.
+
+If it still doesn't work, the version of GedLib or of another library may be the problem. In that case, you can re-install GedLib on your computer. You can download it from this git: https://dbblumenthal.github.io/gedlib/
+
+After that, you have to install GedLib with the Python installer.
+Just call::
+
+ python3 install.py
+
+Make the links as indicated in the documentation. Use the same directory structure as this library, just replacing the .so files and folders with those of your installation. After that, you can recompile the Python library with the setup command.
+
+If you use Mac OS, you have to follow all of this part and install the external libraries with this command::
+
+ install_name_tool -change //
+
+For example, you have to write these lines::
+
+ install_name_tool -change libdoublefann.2.dylib lib/fann/libdoublefann.2.dylib gedlibpy.so
+ install_name_tool -change libsvm.so lib/libsvm.3.22/libsvm.so gedlibpy.so
+ install_name_tool -change libnomad.so lib/nomad/libnomad.so gedlibpy.so
+ install_name_tool -change libsgtelib.so lib/nomad/libsgtelib.so gedlibpy.so
+
+The name of the gedlibpy library file can differ if you use Python 3.
+
+If your problem persists, you can contact me at natacha.lambert@unicaen.fr.
+
+How to use this library
+-------------------------
+
+This library allows you to compute the edit distance between two graphs. You have to follow these steps to use it:
+
+- Add your graphs (GXL files, NetworkX structures or your own structure; make sure that the internal type is the same)
+- Choose your cost function
+- Initialize your environment (after that, the cost function and your graphs can't be modified)
+- Choose your computation method
+- Run the computation with the IDs of the two graphs. You can get an ID when you add the graph or with some functions
+- Retrieve the result with different functions (node map, edit distance, etc.)
+
+Here is an example of code with GXL graphs::
+
+ gedlibpy.load_GXL_graphs('include/gedlib-master/data/datasets/Mutagenicity/data/', 'collections/MUTA_10.xml')
+ listID = gedlibpy.get_all_graph_ids()
+ gedlibpy.set_edit_cost("CHEM_1")
+ gedlibpy.init()
+ gedlibpy.set_method("IPFP", "")
+ gedlibpy.init_method()
+ g = listID[0]
+ h = listID[1]
+
+ gedlibpy.run_method(g,h)
+
+ print("Node Map : ", gedlibpy.get_node_map(g,h))
+ print ("Upper Bound = " + str(gedlibpy.get_upper_bound(g,h)) + ", Lower Bound = " + str(gedlibpy.get_lower_bound(g,h)) + ", Runtime = " + str(gedlibpy.get_runtime(g,h)))
+
+
+Please read the documentation for more examples and functions.
+
+
+Advice if you don't run your code from a shell
+------------------------------------------------
+
+The Python library doesn't report every C++ error. If your program restarts because of an error in your code, please run it from a Linux shell so you can see the C++ errors.
diff --git a/lang/fr/gklearn/gedlib/gedlibpy.cpp b/lang/fr/gklearn/gedlib/gedlibpy.cpp
new file mode 100644
index 0000000000..18e7cd8607
--- /dev/null
+++ b/lang/fr/gklearn/gedlib/gedlibpy.cpp
@@ -0,0 +1,26620 @@
+/* Generated by Cython 0.29.16 */
+
+/* BEGIN: Cython Metadata
+{
+ "distutils": {
+ "depends": [
+ "src/GedLibBind.hpp"
+ ],
+ "extra_compile_args": [
+ "-std=c++11"
+ ],
+ "extra_link_args": [
+ "-std=c++11"
+ ],
+ "include_dirs": [
+ "src",
+ "include",
+ "include/lsape",
+ "include/Eigen",
+ "include/nomad",
+ "include/sgtelib",
+ "include/libsvm.3.22",
+ "include/fann",
+ "include/boost_1_69_0"
+ ],
+ "language": "c++",
+ "libraries": [
+ "doublefann",
+ "sgtelib",
+ "svm",
+ "nomad"
+ ],
+ "library_dirs": [
+ "lib/fann",
+ "lib/gedlib",
+ "lib/libsvm.3.22",
+ "lib/nomad"
+ ],
+ "name": "gedlibpy",
+ "sources": [
+ "gedlibpy.pyx"
+ ]
+ },
+ "module_name": "gedlibpy"
+}
+END: Cython Metadata */
+
+#define PY_SSIZE_T_CLEAN
+#include "Python.h"
+#ifndef Py_PYTHON_H
+ #error Python headers needed to compile C extensions, please install development version of Python.
+#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
+ #error Cython requires Python 2.6+ or Python 3.3+.
+#else
+#define CYTHON_ABI "0_29_16"
+#define CYTHON_HEX_VERSION 0x001D10F0
+#define CYTHON_FUTURE_DIVISION 1
+#include <stddef.h>
+#ifndef offsetof
+ #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
+#endif
+#if !defined(WIN32) && !defined(MS_WINDOWS)
+ #ifndef __stdcall
+ #define __stdcall
+ #endif
+ #ifndef __cdecl
+ #define __cdecl
+ #endif
+ #ifndef __fastcall
+ #define __fastcall
+ #endif
+#endif
+#ifndef DL_IMPORT
+ #define DL_IMPORT(t) t
+#endif
+#ifndef DL_EXPORT
+ #define DL_EXPORT(t) t
+#endif
+#define __PYX_COMMA ,
+#ifndef HAVE_LONG_LONG
+ #if PY_VERSION_HEX >= 0x02070000
+ #define HAVE_LONG_LONG
+ #endif
+#endif
+#ifndef PY_LONG_LONG
+ #define PY_LONG_LONG LONG_LONG
+#endif
+#ifndef Py_HUGE_VAL
+ #define Py_HUGE_VAL HUGE_VAL
+#endif
+#ifdef PYPY_VERSION
+ #define CYTHON_COMPILING_IN_PYPY 1
+ #define CYTHON_COMPILING_IN_PYSTON 0
+ #define CYTHON_COMPILING_IN_CPYTHON 0
+ #undef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 0
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #if PY_VERSION_HEX < 0x03050000
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
+ #undef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 0
+ #undef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 0
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #undef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 1
+ #undef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 0
+ #undef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 0
+ #undef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 0
+ #undef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+ #undef CYTHON_USE_DICT_VERSIONS
+ #define CYTHON_USE_DICT_VERSIONS 0
+ #undef CYTHON_USE_EXC_INFO_STACK
+ #define CYTHON_USE_EXC_INFO_STACK 0
+#elif defined(PYSTON_VERSION)
+ #define CYTHON_COMPILING_IN_PYPY 0
+ #define CYTHON_COMPILING_IN_PYSTON 1
+ #define CYTHON_COMPILING_IN_CPYTHON 0
+ #ifndef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 1
+ #endif
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #undef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 0
+ #ifndef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 1
+ #endif
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #ifndef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 0
+ #endif
+ #ifndef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 1
+ #endif
+ #ifndef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 1
+ #endif
+ #undef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 0
+ #undef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+ #undef CYTHON_USE_DICT_VERSIONS
+ #define CYTHON_USE_DICT_VERSIONS 0
+ #undef CYTHON_USE_EXC_INFO_STACK
+ #define CYTHON_USE_EXC_INFO_STACK 0
+#else
+ #define CYTHON_COMPILING_IN_PYPY 0
+ #define CYTHON_COMPILING_IN_PYSTON 0
+ #define CYTHON_COMPILING_IN_CPYTHON 1
+ #ifndef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 1
+ #endif
+ #if PY_VERSION_HEX < 0x02070000
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
+ #define CYTHON_USE_PYTYPE_LOOKUP 1
+ #endif
+ #if PY_MAJOR_VERSION < 3
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
+ #if PY_VERSION_HEX < 0x02070000
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
+ #define CYTHON_USE_PYLONG_INTERNALS 1
+ #endif
+ #ifndef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 1
+ #endif
+ #ifndef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 1
+ #endif
+ #if PY_VERSION_HEX < 0x030300F0
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #elif !defined(CYTHON_USE_UNICODE_WRITER)
+ #define CYTHON_USE_UNICODE_WRITER 1
+ #endif
+ #ifndef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 0
+ #endif
+ #ifndef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 1
+ #endif
+ #ifndef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 1
+ #endif
+ #ifndef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 1
+ #endif
+ #ifndef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 1
+ #endif
+ #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
+ #endif
+ #ifndef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
+ #endif
+ #ifndef CYTHON_USE_DICT_VERSIONS
+ #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)
+ #endif
+ #ifndef CYTHON_USE_EXC_INFO_STACK
+ #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
+ #endif
+#endif
+#if !defined(CYTHON_FAST_PYCCALL)
+#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
+#endif
+#if CYTHON_USE_PYLONG_INTERNALS
+ #include "longintrepr.h"
+ #undef SHIFT
+ #undef BASE
+ #undef MASK
+ #ifdef SIZEOF_VOID_P
+ enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
+ #endif
+#endif
+#ifndef __has_attribute
+ #define __has_attribute(x) 0
+#endif
+#ifndef __has_cpp_attribute
+ #define __has_cpp_attribute(x) 0
+#endif
+#ifndef CYTHON_RESTRICT
+ #if defined(__GNUC__)
+ #define CYTHON_RESTRICT __restrict__
+ #elif defined(_MSC_VER) && _MSC_VER >= 1400
+ #define CYTHON_RESTRICT __restrict
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_RESTRICT restrict
+ #else
+ #define CYTHON_RESTRICT
+ #endif
+#endif
+#ifndef CYTHON_UNUSED
+# if defined(__GNUC__)
+# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+#endif
+#ifndef CYTHON_MAYBE_UNUSED_VAR
+# if defined(__cplusplus)
+ template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
+# else
+# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
+# endif
+#endif
+#ifndef CYTHON_NCP_UNUSED
+# if CYTHON_COMPILING_IN_CPYTHON
+# define CYTHON_NCP_UNUSED
+# else
+# define CYTHON_NCP_UNUSED CYTHON_UNUSED
+# endif
+#endif
+#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
+#ifdef _MSC_VER
+ #ifndef _MSC_STDINT_H_
+ #if _MSC_VER < 1300
+ typedef unsigned char uint8_t;
+ typedef unsigned int uint32_t;
+ #else
+ typedef unsigned __int8 uint8_t;
+ typedef unsigned __int32 uint32_t;
+ #endif
+ #endif
+#else
+ #include <stdint.h>
+#endif
+#ifndef CYTHON_FALLTHROUGH
+ #if defined(__cplusplus) && __cplusplus >= 201103L
+ #if __has_cpp_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH [[fallthrough]]
+ #elif __has_cpp_attribute(clang::fallthrough)
+ #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
+ #elif __has_cpp_attribute(gnu::fallthrough)
+ #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
+ #endif
+ #endif
+ #ifndef CYTHON_FALLTHROUGH
+ #if __has_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
+ #else
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+ #if defined(__clang__ ) && defined(__apple_build_version__)
+ #if __apple_build_version__ < 7000000
+ #undef CYTHON_FALLTHROUGH
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+#endif
+
+#ifndef __cplusplus
+ #error "Cython files generated with the C++ option must be compiled with a C++ compiler."
+#endif
+#ifndef CYTHON_INLINE
+ #if defined(__clang__)
+ #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
+ #else
+ #define CYTHON_INLINE inline
+ #endif
+#endif
+template<typename T>
+void __Pyx_call_destructor(T& x) {
+ x.~T();
+}
+template<typename T>
+class __Pyx_FakeReference {
+ public:
+ __Pyx_FakeReference() : ptr(NULL) { }
+ __Pyx_FakeReference(const T& ref) : ptr(const_cast<T*>(&ref)) { }
+ T *operator->() { return ptr; }
+ T *operator&() { return ptr; }
+ operator T&() { return *ptr; }
+ template<typename U> bool operator ==(U other) { return *ptr == other; }
+ template<typename U> bool operator !=(U other) { return *ptr != other; }
+ private:
+ T *ptr;
+};
+
+#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
+ #define Py_OptimizeFlag 0
+#endif
+#define __PYX_BUILD_PY_SSIZE_T "n"
+#define CYTHON_FORMAT_SSIZE_T "z"
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+ #define __Pyx_DefaultClassType PyClass_Type
+#else
+ #define __Pyx_BUILTIN_MODULE_NAME "builtins"
+#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+#else
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+#endif
+ #define __Pyx_DefaultClassType PyType_Type
+#endif
+#ifndef Py_TPFLAGS_CHECKTYPES
+ #define Py_TPFLAGS_CHECKTYPES 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_INDEX
+ #define Py_TPFLAGS_HAVE_INDEX 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
+ #define Py_TPFLAGS_HAVE_NEWBUFFER 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_FINALIZE
+ #define Py_TPFLAGS_HAVE_FINALIZE 0
+#endif
+#ifndef METH_STACKLESS
+ #define METH_STACKLESS 0
+#endif
+#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
+ #ifndef METH_FASTCALL
+ #define METH_FASTCALL 0x80
+ #endif
+ typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
+ typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
+ Py_ssize_t nargs, PyObject *kwnames);
+#else
+ #define __Pyx_PyCFunctionFast _PyCFunctionFast
+ #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
+#endif
+#if CYTHON_FAST_PYCCALL
+#define __Pyx_PyFastCFunction_Check(func)\
+ ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
+#else
+#define __Pyx_PyFastCFunction_Check(func) 0
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
+ #define PyObject_Malloc(s) PyMem_Malloc(s)
+ #define PyObject_Free(p) PyMem_Free(p)
+ #define PyObject_Realloc(p) PyMem_Realloc(p)
+#endif
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
+ #define PyMem_RawMalloc(n) PyMem_Malloc(n)
+ #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
+ #define PyMem_RawFree(p) PyMem_Free(p)
+#endif
+#if CYTHON_COMPILING_IN_PYSTON
+ #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
+#else
+ #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
+#endif
+#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#elif PY_VERSION_HEX >= 0x03060000
+ #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
+#elif PY_VERSION_HEX >= 0x03000000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#else
+ #define __Pyx_PyThreadState_Current _PyThreadState_Current
+#endif
+#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
+#include "pythread.h"
+#define Py_tss_NEEDS_INIT 0
+typedef int Py_tss_t;
+static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
+ *key = PyThread_create_key();
+ return 0;
+}
+static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
+ Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
+ *key = Py_tss_NEEDS_INIT;
+ return key;
+}
+static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
+ PyObject_Free(key);
+}
+static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
+ return *key != Py_tss_NEEDS_INIT;
+}
+static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
+ PyThread_delete_key(*key);
+ *key = Py_tss_NEEDS_INIT;
+}
+static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
+ return PyThread_set_key_value(*key, value);
+}
+static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
+ return PyThread_get_key_value(*key);
+}
+#endif
+#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
+#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
+#else
+#define __Pyx_PyDict_NewPresized(n) PyDict_New()
+#endif
+#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
+#else
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
+#endif
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
+#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
+#else
+#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
+#endif
+#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
+ #define CYTHON_PEP393_ENABLED 1
+ #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
+ 0 : _PyUnicode_Ready((PyObject *)(op)))
+ #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
+ #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
+ #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
+ #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
+ #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
+ #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
+ #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
+ #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
+#else
+ #define CYTHON_PEP393_ENABLED 0
+ #define PyUnicode_1BYTE_KIND 1
+ #define PyUnicode_2BYTE_KIND 2
+ #define PyUnicode_4BYTE_KIND 4
+ #define __Pyx_PyUnicode_READY(op) (0)
+ #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
+ #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
+ #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
+ #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
+ #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
+ #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
+ #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
+ #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
+#endif
+#if CYTHON_COMPILING_IN_PYPY
+ #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
+ #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
+#else
+ #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
+ #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
+ PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
+ #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
+ #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
+ #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
+#endif
+#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
+#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
+#else
+ #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
+#endif
+#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
+ #define PyObject_ASCII(o) PyObject_Repr(o)
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyBaseString_Type PyUnicode_Type
+ #define PyStringObject PyUnicodeObject
+ #define PyString_Type PyUnicode_Type
+ #define PyString_Check PyUnicode_Check
+ #define PyString_CheckExact PyUnicode_CheckExact
+#ifndef PyObject_Unicode
+ #define PyObject_Unicode PyObject_Str
+#endif
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
+ #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
+#else
+ #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
+ #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
+#endif
+#ifndef PySet_CheckExact
+ #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
+#endif
+#if CYTHON_ASSUME_SAFE_MACROS
+ #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
+#else
+ #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyIntObject PyLongObject
+ #define PyInt_Type PyLong_Type
+ #define PyInt_Check(op) PyLong_Check(op)
+ #define PyInt_CheckExact(op) PyLong_CheckExact(op)
+ #define PyInt_FromString PyLong_FromString
+ #define PyInt_FromUnicode PyLong_FromUnicode
+ #define PyInt_FromLong PyLong_FromLong
+ #define PyInt_FromSize_t PyLong_FromSize_t
+ #define PyInt_FromSsize_t PyLong_FromSsize_t
+ #define PyInt_AsLong PyLong_AsLong
+ #define PyInt_AS_LONG PyLong_AS_LONG
+ #define PyInt_AsSsize_t PyLong_AsSsize_t
+ #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
+ #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
+ #define PyNumber_Int PyNumber_Long
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyBoolObject PyLongObject
+#endif
+#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
+ #ifndef PyUnicode_InternFromString
+ #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
+ #endif
+#endif
+#if PY_VERSION_HEX < 0x030200A4
+ typedef long Py_hash_t;
+ #define __Pyx_PyInt_FromHash_t PyInt_FromLong
+ #define __Pyx_PyInt_AsHash_t PyInt_AsLong
+#else
+ #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
+ #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : (Py_INCREF(func), func))
+#else
+ #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
+#endif
+#if CYTHON_USE_ASYNC_SLOTS
+ #if PY_VERSION_HEX >= 0x030500B1
+ #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
+ #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
+ #else
+ #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
+ #endif
+#else
+ #define __Pyx_PyType_AsAsync(obj) NULL
+#endif
+#ifndef __Pyx_PyAsyncMethodsStruct
+ typedef struct {
+ unaryfunc am_await;
+ unaryfunc am_aiter;
+ unaryfunc am_anext;
+ } __Pyx_PyAsyncMethodsStruct;
+#endif
+
+#if defined(WIN32) || defined(MS_WINDOWS)
+ #define _USE_MATH_DEFINES
+#endif
+#include <math.h>
+#ifdef NAN
+#define __PYX_NAN() ((float) NAN)
+#else
+static CYTHON_INLINE float __PYX_NAN() {
+ float value;
+ memset(&value, 0xFF, sizeof(value));
+ return value;
+}
+#endif
+#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
+#define __Pyx_truncl trunc
+#else
+#define __Pyx_truncl truncl
+#endif
+
+
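+/* Error-reporting helper: records the current source file, Python line and C line, then jumps to the given error label. */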
+#define __PYX_ERR(f_index, lineno, Ln_error) \
+{ \
+ __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \
+}
+
+#ifndef __PYX_EXTERN_C
+ #ifdef __cplusplus
+ #define __PYX_EXTERN_C extern "C"
+ #else
+ #define __PYX_EXTERN_C extern
+ #endif
+#endif
+
+#define __PYX_HAVE__gedlibpy
+#define __PYX_HAVE_API__gedlibpy
+/* Early includes */
+#include "ios"
+#include "new"
+#include "stdexcept"
+#include "typeinfo"
+#include <vector>
+#include <string.h>
+#include <string>
+#include <utility>
+#include <map>
+#include <list>
+#include <stdio.h>
+#include "numpy/arrayobject.h"
+#include "numpy/ufuncobject.h"
+#include "pythread.h"
+#include "src/GedLibBind.hpp"
+#ifdef _OPENMP
+#include <omp.h>
+#endif /* _OPENMP */
+
+#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
+#define CYTHON_WITHOUT_ASSERTIONS
+#endif
+
+typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
+ const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
+
+#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
+#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0
+#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)
+#define __PYX_DEFAULT_STRING_ENCODING ""
+#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
+#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
+#define __Pyx_uchar_cast(c) ((unsigned char)c)
+#define __Pyx_long_cast(x) ((long)x)
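+/* Checks whether a value of the given C integer type fits into a Py_ssize_t, taking the type's size and signedness into account. */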
+#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
+ (sizeof(type) < sizeof(Py_ssize_t)) ||\
+ (sizeof(type) > sizeof(Py_ssize_t) &&\
+ likely(v < (type)PY_SSIZE_T_MAX ||\
+ v == (type)PY_SSIZE_T_MAX) &&\
+ (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
+ v == (type)PY_SSIZE_T_MIN))) ||\
+ (sizeof(type) == sizeof(Py_ssize_t) &&\
+ (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
+ v == (type)PY_SSIZE_T_MAX))) )
+static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
+ return (size_t) i < (size_t) limit;
+}
+#if defined (__cplusplus) && __cplusplus >= 201103L
+  #include <cstdlib>
+ #define __Pyx_sst_abs(value) std::abs(value)
+#elif SIZEOF_INT >= SIZEOF_SIZE_T
+ #define __Pyx_sst_abs(value) abs(value)
+#elif SIZEOF_LONG >= SIZEOF_SIZE_T
+ #define __Pyx_sst_abs(value) labs(value)
+#elif defined (_MSC_VER)
+ #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
+#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define __Pyx_sst_abs(value) llabs(value)
+#elif defined (__GNUC__)
+ #define __Pyx_sst_abs(value) __builtin_llabs(value)
+#else
+ #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
+#endif
+static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
+static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
+#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
+#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
+#define __Pyx_PyBytes_FromString PyBytes_FromString
+#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
+static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
+ #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
+#else
+ #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
+ #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
+#endif
+#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
+#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
+#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
+#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
+#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
+static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
+ const Py_UNICODE *u_end = u;
+ while (*u_end++) ;
+ return (size_t)(u_end - u - 1);
+}
+#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
+#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
+#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
+#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
+#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
+static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
+static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
+static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);
+static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
+#define __Pyx_PySequence_Tuple(obj)\
+ (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
+static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
+static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
+#if CYTHON_ASSUME_SAFE_MACROS
+#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
+#else
+#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
+#endif
+#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
+#if PY_MAJOR_VERSION >= 3
+#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
+#else
+#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
+#endif
+#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
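+/* When compiled with c_string_encoding=ascii on Python 2, the module verifies at import time that sys.getdefaultencoding() is a superset of ASCII. */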
+#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
+static int __Pyx_sys_getdefaultencoding_not_ascii;
+static int __Pyx_init_sys_getdefaultencoding_params(void) {
+ PyObject* sys;
+ PyObject* default_encoding = NULL;
+ PyObject* ascii_chars_u = NULL;
+ PyObject* ascii_chars_b = NULL;
+ const char* default_encoding_c;
+ sys = PyImport_ImportModule("sys");
+ if (!sys) goto bad;
+ default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
+ Py_DECREF(sys);
+ if (!default_encoding) goto bad;
+ default_encoding_c = PyBytes_AsString(default_encoding);
+ if (!default_encoding_c) goto bad;
+ if (strcmp(default_encoding_c, "ascii") == 0) {
+ __Pyx_sys_getdefaultencoding_not_ascii = 0;
+ } else {
+ char ascii_chars[128];
+ int c;
+ for (c = 0; c < 128; c++) {
+ ascii_chars[c] = c;
+ }
+ __Pyx_sys_getdefaultencoding_not_ascii = 1;
+ ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
+ if (!ascii_chars_u) goto bad;
+ ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
+ if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
+ PyErr_Format(
+ PyExc_ValueError,
+ "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
+ default_encoding_c);
+ goto bad;
+ }
+ Py_DECREF(ascii_chars_u);
+ Py_DECREF(ascii_chars_b);
+ }
+ Py_DECREF(default_encoding);
+ return 0;
+bad:
+ Py_XDECREF(default_encoding);
+ Py_XDECREF(ascii_chars_u);
+ Py_XDECREF(ascii_chars_b);
+ return -1;
+}
+#endif
+#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
+#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
+#else
+#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
+#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
+static char* __PYX_DEFAULT_STRING_ENCODING;
+static int __Pyx_init_sys_getdefaultencoding_params(void) {
+ PyObject* sys;
+ PyObject* default_encoding = NULL;
+ char* default_encoding_c;
+ sys = PyImport_ImportModule("sys");
+ if (!sys) goto bad;
+ default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
+ Py_DECREF(sys);
+ if (!default_encoding) goto bad;
+ default_encoding_c = PyBytes_AsString(default_encoding);
+ if (!default_encoding_c) goto bad;
+ __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
+ if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
+ strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
+ Py_DECREF(default_encoding);
+ return 0;
+bad:
+ Py_XDECREF(default_encoding);
+ return -1;
+}
+#endif
+#endif
+
+
+/* Test for GCC > 2.95 */
+#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
+ #define likely(x) __builtin_expect(!!(x), 1)
+ #define unlikely(x) __builtin_expect(!!(x), 0)
+#else /* !__GNUC__ or GCC < 2.95 */
+ #define likely(x) (x)
+ #define unlikely(x) (x)
+#endif /* __GNUC__ */
+static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
+
+static PyObject *__pyx_m = NULL;
+static PyObject *__pyx_d;
+static PyObject *__pyx_b;
+static PyObject *__pyx_cython_runtime = NULL;
+static PyObject *__pyx_empty_tuple;
+static PyObject *__pyx_empty_bytes;
+static PyObject *__pyx_empty_unicode;
+static int __pyx_lineno;
+static int __pyx_clineno = 0;
+static const char * __pyx_cfilenm= __FILE__;
+static const char *__pyx_filename;
+
+/* Header.proto */
+#if !defined(CYTHON_CCOMPLEX)
+ #if defined(__cplusplus)
+ #define CYTHON_CCOMPLEX 1
+ #elif defined(_Complex_I)
+ #define CYTHON_CCOMPLEX 1
+ #else
+ #define CYTHON_CCOMPLEX 0
+ #endif
+#endif
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+    #include <complex>
+ #else
+    #include <complex.h>
+ #endif
+#endif
+#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)
+ #undef _Complex_I
+ #define _Complex_I 1.0fj
+#endif
+
+
+static const char *__pyx_f[] = {
+ "gedlibpy.pyx",
+ "stringsource",
+ "__init__.pxd",
+ "array.pxd",
+ "type.pxd",
+ "bool.pxd",
+ "complex.pxd",
+};
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":776
+ * # in Cython to enable them only on the right systems.
+ *
+ * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t
+ */
+typedef npy_int8 __pyx_t_5numpy_int8_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":777
+ *
+ * ctypedef npy_int8 int8_t
+ * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int32 int32_t
+ * ctypedef npy_int64 int64_t
+ */
+typedef npy_int16 __pyx_t_5numpy_int16_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":778
+ * ctypedef npy_int8 int8_t
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int64 int64_t
+ * #ctypedef npy_int96 int96_t
+ */
+typedef npy_int32 __pyx_t_5numpy_int32_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":779
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t
+ * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_int96 int96_t
+ * #ctypedef npy_int128 int128_t
+ */
+typedef npy_int64 __pyx_t_5numpy_int64_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":783
+ * #ctypedef npy_int128 int128_t
+ *
+ * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t
+ */
+typedef npy_uint8 __pyx_t_5numpy_uint8_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":784
+ *
+ * ctypedef npy_uint8 uint8_t
+ * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint32 uint32_t
+ * ctypedef npy_uint64 uint64_t
+ */
+typedef npy_uint16 __pyx_t_5numpy_uint16_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":785
+ * ctypedef npy_uint8 uint8_t
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint64 uint64_t
+ * #ctypedef npy_uint96 uint96_t
+ */
+typedef npy_uint32 __pyx_t_5numpy_uint32_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":786
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t
+ * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_uint96 uint96_t
+ * #ctypedef npy_uint128 uint128_t
+ */
+typedef npy_uint64 __pyx_t_5numpy_uint64_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":790
+ * #ctypedef npy_uint128 uint128_t
+ *
+ * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_float64 float64_t
+ * #ctypedef npy_float80 float80_t
+ */
+typedef npy_float32 __pyx_t_5numpy_float32_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":791
+ *
+ * ctypedef npy_float32 float32_t
+ * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_float80 float80_t
+ * #ctypedef npy_float128 float128_t
+ */
+typedef npy_float64 __pyx_t_5numpy_float64_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":800
+ * # The int types are mapped a bit surprising --
+ * # numpy.int corresponds to 'l' and numpy.long to 'q'
+ * ctypedef npy_long int_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longlong long_t
+ * ctypedef npy_longlong longlong_t
+ */
+typedef npy_long __pyx_t_5numpy_int_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":801
+ * # numpy.int corresponds to 'l' and numpy.long to 'q'
+ * ctypedef npy_long int_t
+ * ctypedef npy_longlong long_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longlong longlong_t
+ *
+ */
+typedef npy_longlong __pyx_t_5numpy_long_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":802
+ * ctypedef npy_long int_t
+ * ctypedef npy_longlong long_t
+ * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_ulong uint_t
+ */
+typedef npy_longlong __pyx_t_5numpy_longlong_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":804
+ * ctypedef npy_longlong longlong_t
+ *
+ * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<<
+ * ctypedef npy_ulonglong ulong_t
+ * ctypedef npy_ulonglong ulonglong_t
+ */
+typedef npy_ulong __pyx_t_5numpy_uint_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":805
+ *
+ * ctypedef npy_ulong uint_t
+ * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<<
+ * ctypedef npy_ulonglong ulonglong_t
+ *
+ */
+typedef npy_ulonglong __pyx_t_5numpy_ulong_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":806
+ * ctypedef npy_ulong uint_t
+ * ctypedef npy_ulonglong ulong_t
+ * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_intp intp_t
+ */
+typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":808
+ * ctypedef npy_ulonglong ulonglong_t
+ *
+ * ctypedef npy_intp intp_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uintp uintp_t
+ *
+ */
+typedef npy_intp __pyx_t_5numpy_intp_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":809
+ *
+ * ctypedef npy_intp intp_t
+ * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_double float_t
+ */
+typedef npy_uintp __pyx_t_5numpy_uintp_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":811
+ * ctypedef npy_uintp uintp_t
+ *
+ * ctypedef npy_double float_t # <<<<<<<<<<<<<<
+ * ctypedef npy_double double_t
+ * ctypedef npy_longdouble longdouble_t
+ */
+typedef npy_double __pyx_t_5numpy_float_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":812
+ *
+ * ctypedef npy_double float_t
+ * ctypedef npy_double double_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longdouble longdouble_t
+ *
+ */
+typedef npy_double __pyx_t_5numpy_double_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":813
+ * ctypedef npy_double float_t
+ * ctypedef npy_double double_t
+ * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_cfloat cfloat_t
+ */
+typedef npy_longdouble __pyx_t_5numpy_longdouble_t;
+
+/* "gedlibpy.pyx":39
+ * #Long unsigned int equivalent
+ * cimport numpy as cnp
+ * ctypedef cnp.npy_uint32 UINT32_t # <<<<<<<<<<<<<<
+ * from cpython cimport array
+ *
+ */
+typedef npy_uint32 __pyx_t_8gedlibpy_UINT32_t;
+/* Declarations.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ typedef ::std::complex< float > __pyx_t_float_complex;
+ #else
+ typedef float _Complex __pyx_t_float_complex;
+ #endif
+#else
+ typedef struct { float real, imag; } __pyx_t_float_complex;
+#endif
+static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);
+
+/* Declarations.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ typedef ::std::complex< double > __pyx_t_double_complex;
+ #else
+ typedef double _Complex __pyx_t_double_complex;
+ #endif
+#else
+ typedef struct { double real, imag; } __pyx_t_double_complex;
+#endif
+static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);
+
+
+/*--- Type declarations ---*/
+#ifndef _ARRAYARRAY_H
+struct arrayobject;
+typedef struct arrayobject arrayobject;
+#endif
+struct __pyx_obj_8gedlibpy_GEDEnv;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":815
+ * ctypedef npy_longdouble longdouble_t
+ *
+ * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<<
+ * ctypedef npy_cdouble cdouble_t
+ * ctypedef npy_clongdouble clongdouble_t
+ */
+typedef npy_cfloat __pyx_t_5numpy_cfloat_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":816
+ *
+ * ctypedef npy_cfloat cfloat_t
+ * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<<
+ * ctypedef npy_clongdouble clongdouble_t
+ *
+ */
+typedef npy_cdouble __pyx_t_5numpy_cdouble_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":817
+ * ctypedef npy_cfloat cfloat_t
+ * ctypedef npy_cdouble cdouble_t
+ * ctypedef npy_clongdouble clongdouble_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_cdouble complex_t
+ */
+typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":819
+ * ctypedef npy_clongdouble clongdouble_t
+ *
+ * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew1(a):
+ */
+typedef npy_cdouble __pyx_t_5numpy_complex_t;
+
+/* "gedlibpy.pyx":182
+ *
+ * # @cython.auto_pickle(True)
+ * cdef class GEDEnv: # <<<<<<<<<<<<<<
+ * """Cython wrapper class for C++ class PyGEDEnv
+ * """
+ */
+struct __pyx_obj_8gedlibpy_GEDEnv {
+ PyObject_HEAD
+ pyged::PyGEDEnv *c_env;
+};
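+/* Each Python-level GEDEnv instance owns a pointer to the underlying C++ pyged::PyGEDEnv declared in src/GedLibBind.hpp. */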
+
+
+/* --- Runtime support code (head) --- */
+/* Refnanny.proto */
+#ifndef CYTHON_REFNANNY
+ #define CYTHON_REFNANNY 0
+#endif
+#if CYTHON_REFNANNY
+ typedef struct {
+ void (*INCREF)(void*, PyObject*, int);
+ void (*DECREF)(void*, PyObject*, int);
+ void (*GOTREF)(void*, PyObject*, int);
+ void (*GIVEREF)(void*, PyObject*, int);
+ void* (*SetupContext)(const char*, int, const char*);
+ void (*FinishContext)(void**);
+ } __Pyx_RefNannyAPIStruct;
+ static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
+ static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
+ #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
+#ifdef WITH_THREAD
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)\
+ if (acquire_gil) {\
+ PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
+ PyGILState_Release(__pyx_gilstate_save);\
+ } else {\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
+ }
+#else
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
+#endif
+ #define __Pyx_RefNannyFinishContext()\
+ __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
+ #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
+ #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
+ #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
+ #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
+#else
+ #define __Pyx_RefNannyDeclarations
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)
+ #define __Pyx_RefNannyFinishContext()
+ #define __Pyx_INCREF(r) Py_INCREF(r)
+ #define __Pyx_DECREF(r) Py_DECREF(r)
+ #define __Pyx_GOTREF(r)
+ #define __Pyx_GIVEREF(r)
+ #define __Pyx_XINCREF(r) Py_XINCREF(r)
+ #define __Pyx_XDECREF(r) Py_XDECREF(r)
+ #define __Pyx_XGOTREF(r)
+ #define __Pyx_XGIVEREF(r)
+#endif
+#define __Pyx_XDECREF_SET(r, v) do {\
+ PyObject *tmp = (PyObject *) r;\
+ r = v; __Pyx_XDECREF(tmp);\
+ } while (0)
+#define __Pyx_DECREF_SET(r, v) do {\
+ PyObject *tmp = (PyObject *) r;\
+ r = v; __Pyx_DECREF(tmp);\
+ } while (0)
+#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
+#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
+
+/* PyObjectGetAttrStr.proto */
+#if CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);
+#else
+#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
+#endif
+
+/* GetBuiltinName.proto */
+static PyObject *__Pyx_GetBuiltinName(PyObject *name);
+
+/* ListCompAppend.proto */
+#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
+static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {
+ PyListObject* L = (PyListObject*) list;
+ Py_ssize_t len = Py_SIZE(list);
+ if (likely(L->allocated > len)) {
+ Py_INCREF(x);
+ PyList_SET_ITEM(list, len, x);
+ Py_SIZE(list) = len+1;
+ return 0;
+ }
+ return PyList_Append(list, x);
+}
+#else
+#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)
+#endif
+
+/* IncludeCppStringH.proto */
+#include <string>
+
+/* decode_c_string_utf16.proto */
+static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) {
+ int byteorder = 0;
+ return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
+}
+static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) {
+ int byteorder = -1;
+ return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
+}
+static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) {
+ int byteorder = 1;
+ return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
+}
+
+/* decode_c_bytes.proto */
+static CYTHON_INLINE PyObject* __Pyx_decode_c_bytes(
+ const char* cstring, Py_ssize_t length, Py_ssize_t start, Py_ssize_t stop,
+ const char* encoding, const char* errors,
+ PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));
+
+/* decode_cpp_string.proto */
+static CYTHON_INLINE PyObject* __Pyx_decode_cpp_string(
+ std::string cppstring, Py_ssize_t start, Py_ssize_t stop,
+ const char* encoding, const char* errors,
+ PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
+ return __Pyx_decode_c_bytes(
+ cppstring.data(), cppstring.size(), start, stop, encoding, errors, decode_func);
+}
+
+/* RaiseArgTupleInvalid.proto */
+static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
+ Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
+
+/* KeywordStringCheck.proto */
+static int __Pyx_CheckKeywordStrings(PyObject *kwdict, const char* function_name, int kw_allowed);
+
+/* RaiseDoubleKeywords.proto */
+static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
+
+/* ParseKeywords.proto */
+static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
+ PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
+ const char* function_name);
+
+/* PyCFunctionFastCall.proto */
+#if CYTHON_FAST_PYCCALL
+static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
+#else
+#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
+#endif
+
+/* PyFunctionFastCall.proto */
+#if CYTHON_FAST_PYCALL
+#define __Pyx_PyFunction_FastCall(func, args, nargs)\
+ __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
+#if 1 || PY_VERSION_HEX < 0x030600B1
+static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);
+#else
+#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
+#endif
+#define __Pyx_BUILD_ASSERT_EXPR(cond)\
+ (sizeof(char [1 - 2*!(cond)]) - 1)
+#ifndef Py_MEMBER_SIZE
+#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
+#endif
+ static size_t __pyx_pyframe_localsplus_offset = 0;
+ #include "frameobject.h"
+ #define __Pxy_PyFrame_Initialize_Offsets()\
+ ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\
+ (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))
+ #define __Pyx_PyFrame_GetLocalsplus(frame)\
+ (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))
+#endif
+
+/* PyObjectCall.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
+#else
+#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
+#endif
+
+/* PyObjectCall2Args.proto */
+static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);
+
+/* PyObjectCallMethO.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
+#endif
+
+/* PyObjectCallOneArg.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
+
+/* PyDictVersioning.proto */
+#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
+#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1)
+#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
+#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\
+ (version_var) = __PYX_GET_DICT_VERSION(dict);\
+ (cache_var) = (value);
+#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\
+ static PY_UINT64_T __pyx_dict_version = 0;\
+ static PyObject *__pyx_dict_cached_value = NULL;\
+ if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
+ (VAR) = __pyx_dict_cached_value;\
+ } else {\
+ (VAR) = __pyx_dict_cached_value = (LOOKUP);\
+ __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
+ }\
+}
+static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);
+static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);
+static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);
+#else
+#define __PYX_GET_DICT_VERSION(dict) (0)
+#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)
+#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP);
+#endif
+
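+/* Module-global name lookups are cached per call site and invalidated through the dict version tag when the module dict changes. */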
+/* GetModuleGlobalName.proto */
+#if CYTHON_USE_DICT_VERSIONS
+#define __Pyx_GetModuleGlobalName(var, name) {\
+ static PY_UINT64_T __pyx_dict_version = 0;\
+ static PyObject *__pyx_dict_cached_value = NULL;\
+ (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\
+ (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\
+ __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
+}
+#define __Pyx_GetModuleGlobalNameUncached(var, name) {\
+ PY_UINT64_T __pyx_dict_version;\
+ PyObject *__pyx_dict_cached_value;\
+ (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
+}
+static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);
+#else
+#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name)
+#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name)
+static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);
+#endif
+
+/* PySequenceContains.proto */
+static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) {
+ int result = PySequence_Contains(seq, item);
+ return unlikely(result < 0) ? result : (result == (eq == Py_EQ));
+}
+
+/* PyThreadStateGet.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
+#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
+#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
+#else
+#define __Pyx_PyThreadState_declare
+#define __Pyx_PyThreadState_assign
+#define __Pyx_PyErr_Occurred() PyErr_Occurred()
+#endif
+
+/* PyErrFetchRestore.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
+#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
+#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
+#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
+#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
+static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
+#else
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#endif
+#else
+#define __Pyx_PyErr_Clear() PyErr_Clear()
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
+#endif
+
+/* RaiseException.proto */
+static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
+
+/* GetItemInt.proto */
+#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
+ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
+ __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
+ (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\
+ __Pyx_GetItemInt_Generic(o, to_py_func(i))))
+#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
+ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
+ __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
+ (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL))
+static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
+ int wraparound, int boundscheck);
+#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
+ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
+ __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
+ (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL))
+static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
+ int wraparound, int boundscheck);
+static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
+static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,
+ int is_list, int wraparound, int boundscheck);
+
+/* ObjectGetItem.proto */
+#if CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key);
+#else
+#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key)
+#endif
+
+/* PyObjectCallNoArg.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func);
+#else
+#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL)
+#endif
+
+/* IterFinish.proto */
+static CYTHON_INLINE int __Pyx_IterFinish(void);
+
+/* PyObjectGetMethod.proto */
+static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method);
+
+/* PyObjectCallMethod0.proto */
+static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name);
+
+/* RaiseNeedMoreValuesToUnpack.proto */
+static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
+
+/* RaiseTooManyValuesToUnpack.proto */
+static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
+
+/* UnpackItemEndCheck.proto */
+static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected);
+
+/* RaiseNoneIterError.proto */
+static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);
+
+/* UnpackTupleError.proto */
+static void __Pyx_UnpackTupleError(PyObject *, Py_ssize_t index);
+
+/* UnpackTuple2.proto */
+#define __Pyx_unpack_tuple2(tuple, value1, value2, is_tuple, has_known_size, decref_tuple)\
+ (likely(is_tuple || PyTuple_Check(tuple)) ?\
+ (likely(has_known_size || PyTuple_GET_SIZE(tuple) == 2) ?\
+ __Pyx_unpack_tuple2_exact(tuple, value1, value2, decref_tuple) :\
+ (__Pyx_UnpackTupleError(tuple, 2), -1)) :\
+ __Pyx_unpack_tuple2_generic(tuple, value1, value2, has_known_size, decref_tuple))
+static CYTHON_INLINE int __Pyx_unpack_tuple2_exact(
+ PyObject* tuple, PyObject** value1, PyObject** value2, int decref_tuple);
+static int __Pyx_unpack_tuple2_generic(
+ PyObject* tuple, PyObject** value1, PyObject** value2, int has_known_size, int decref_tuple);
+
+/* dict_iter.proto */
+static CYTHON_INLINE PyObject* __Pyx_dict_iterator(PyObject* dict, int is_dict, PyObject* method_name,
+ Py_ssize_t* p_orig_length, int* p_is_dict);
+static CYTHON_INLINE int __Pyx_dict_iter_next(PyObject* dict_or_iter, Py_ssize_t orig_length, Py_ssize_t* ppos,
+ PyObject** pkey, PyObject** pvalue, PyObject** pitem, int is_dict);
+
+/* PyIntBinop.proto */
+#if !CYTHON_COMPILING_IN_PYPY
+static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check);
+#else
+#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\
+ (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))
+#endif
+
+/* PyObjectSetAttrStr.proto */
+#if CYTHON_USE_TYPE_SLOTS
+#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL)
+static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value);
+#else
+#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n)
+#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v)
+#endif
+
+/* DictGetItem.proto */
+#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY
+static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key);
+#define __Pyx_PyObject_Dict_GetItem(obj, name)\
+ (likely(PyDict_CheckExact(obj)) ?\
+ __Pyx_PyDict_GetItem(obj, name) : PyObject_GetItem(obj, name))
+#else
+#define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)
+#define __Pyx_PyObject_Dict_GetItem(obj, name) PyObject_GetItem(obj, name)
+#endif
+
+/* ExtTypeTest.proto */
+static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);
+
+/* GetTopmostException.proto */
+#if CYTHON_USE_EXC_INFO_STACK
+static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
+#endif
+
+/* SaveResetException.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
+#else
+#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
+#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
+#endif
+
+/* PyErrExceptionMatches.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
+static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
+#else
+#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
+#endif
+
+/* GetException.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
+static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#else
+static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
+#endif
+
+/* PyObject_GenericGetAttrNoDict.proto */
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
+#else
+#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
+#endif
+
+/* PyObject_GenericGetAttr.proto */
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);
+#else
+#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr
+#endif
+
+/* SetupReduce.proto */
+static int __Pyx_setup_reduce(PyObject* type_obj);
+
+/* TypeImport.proto */
+#ifndef __PYX_HAVE_RT_ImportType_proto
+#define __PYX_HAVE_RT_ImportType_proto
+enum __Pyx_ImportType_CheckSize {
+ __Pyx_ImportType_CheckSize_Error = 0,
+ __Pyx_ImportType_CheckSize_Warn = 1,
+ __Pyx_ImportType_CheckSize_Ignore = 2
+};
+static PyTypeObject *__Pyx_ImportType(PyObject* module, const char *module_name, const char *class_name, size_t size, enum __Pyx_ImportType_CheckSize check_size);
+#endif
+
+/* Import.proto */
+static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
+
+/* ImportFrom.proto */
+static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);
+
+/* CalculateMetaclass.proto */
+static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases);
+
+/* Py3ClassCreate.proto */
+static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname,
+ PyObject *mkw, PyObject *modname, PyObject *doc);
+static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict,
+ PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass);
+
+/* FetchCommonType.proto */
+static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);
+
+/* CythonFunction.proto */
+#define __Pyx_CyFunction_USED 1
+#define __Pyx_CYFUNCTION_STATICMETHOD 0x01
+#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02
+#define __Pyx_CYFUNCTION_CCLASS 0x04
+#define __Pyx_CyFunction_GetClosure(f)\
+ (((__pyx_CyFunctionObject *) (f))->func_closure)
+#define __Pyx_CyFunction_GetClassObj(f)\
+ (((__pyx_CyFunctionObject *) (f))->func_classobj)
+#define __Pyx_CyFunction_Defaults(type, f)\
+ ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))
+#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\
+ ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)
+typedef struct {
+ PyCFunctionObject func;
+#if PY_VERSION_HEX < 0x030500A0
+ PyObject *func_weakreflist;
+#endif
+ PyObject *func_dict;
+ PyObject *func_name;
+ PyObject *func_qualname;
+ PyObject *func_doc;
+ PyObject *func_globals;
+ PyObject *func_code;
+ PyObject *func_closure;
+ PyObject *func_classobj;
+ void *defaults;
+ int defaults_pyobjects;
+ size_t defaults_size; // used by FusedFunction for copying defaults
+ int flags;
+ PyObject *defaults_tuple;
+ PyObject *defaults_kwdict;
+ PyObject *(*defaults_getter)(PyObject *);
+ PyObject *func_annotations;
+} __pyx_CyFunctionObject;
+static PyTypeObject *__pyx_CyFunctionType = 0;
+#define __Pyx_CyFunction_Check(obj) (__Pyx_TypeCheck(obj, __pyx_CyFunctionType))
+#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code)\
+ __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)
+static PyObject *__Pyx_CyFunction_New(PyTypeObject *, PyMethodDef *ml,
+ int flags, PyObject* qualname,
+ PyObject *self,
+ PyObject *module, PyObject *globals,
+ PyObject* code);
+static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,
+ size_t size,
+ int pyobjects);
+static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m,
+ PyObject *tuple);
+static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m,
+ PyObject *dict);
+static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m,
+ PyObject *dict);
+static int __pyx_CyFunction_init(void);
+
+/* SetNameInClass.proto */
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1
+#define __Pyx_SetNameInClass(ns, name, value)\
+ (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value))
+#elif CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_SetNameInClass(ns, name, value)\
+ (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value))
+#else
+#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value)
+#endif
+
+/* CLineInTraceback.proto */
+#ifdef CYTHON_CLINE_IN_TRACEBACK
+#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
+#else
+static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
+#endif
+
+/* CodeObjectCache.proto */
+typedef struct {
+ PyCodeObject* code_object;
+ int code_line;
+} __Pyx_CodeObjectCacheEntry;
+struct __Pyx_CodeObjectCache {
+ int count;
+ int max_count;
+ __Pyx_CodeObjectCacheEntry* entries;
+};
+static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
+static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
+static PyCodeObject *__pyx_find_code_object(int code_line);
+static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
+
+/* AddTraceback.proto */
+static void __Pyx_AddTraceback(const char *funcname, int c_line,
+ int py_line, const char *filename);
+
+/* ArrayAPI.proto */
+#ifndef _ARRAYARRAY_H
+#define _ARRAYARRAY_H
+typedef struct arraydescr {
+ int typecode;
+ int itemsize;
+ PyObject * (*getitem)(struct arrayobject *, Py_ssize_t);
+ int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *);
+#if PY_MAJOR_VERSION >= 3
+ char *formats;
+#endif
+} arraydescr;
+struct arrayobject {
+ PyObject_HEAD
+ Py_ssize_t ob_size;
+ union {
+ char *ob_item;
+ float *as_floats;
+ double *as_doubles;
+ int *as_ints;
+ unsigned int *as_uints;
+ unsigned char *as_uchars;
+ signed char *as_schars;
+ char *as_chars;
+ unsigned long *as_ulongs;
+ long *as_longs;
+#if PY_MAJOR_VERSION >= 3
+ unsigned long long *as_ulonglongs;
+ long long *as_longlongs;
+#endif
+ short *as_shorts;
+ unsigned short *as_ushorts;
+ Py_UNICODE *as_pyunicodes;
+ void *as_voidptr;
+ } data;
+ Py_ssize_t allocated;
+ struct arraydescr *ob_descr;
+ PyObject *weakreflist;
+#if PY_MAJOR_VERSION >= 3
+ int ob_exports;
+#endif
+};
+#ifndef NO_NEWARRAY_INLINE
+static CYTHON_INLINE PyObject * newarrayobject(PyTypeObject *type, Py_ssize_t size,
+ struct arraydescr *descr) {
+ arrayobject *op;
+ size_t nbytes;
+ if (size < 0) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ nbytes = size * descr->itemsize;
+ if (nbytes / descr->itemsize != (size_t)size) {
+ return PyErr_NoMemory();
+ }
+ op = (arrayobject *) type->tp_alloc(type, 0);
+ if (op == NULL) {
+ return NULL;
+ }
+ op->ob_descr = descr;
+ op->allocated = size;
+ op->weakreflist = NULL;
+ op->ob_size = size;
+ if (size <= 0) {
+ op->data.ob_item = NULL;
+ }
+ else {
+ op->data.ob_item = PyMem_NEW(char, nbytes);
+ if (op->data.ob_item == NULL) {
+ Py_DECREF(op);
+ return PyErr_NoMemory();
+ }
+ }
+ return (PyObject *) op;
+}
+#else
+PyObject* newarrayobject(PyTypeObject *type, Py_ssize_t size,
+ struct arraydescr *descr);
+#endif
+static CYTHON_INLINE int resize(arrayobject *self, Py_ssize_t n) {
+ void *items = (void*) self->data.ob_item;
+ PyMem_Resize(items, char, (size_t)(n * self->ob_descr->itemsize));
+ if (items == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ self->data.ob_item = (char*) items;
+ self->ob_size = n;
+ self->allocated = n;
+ return 0;
+}
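+/* resize_smart over-allocates (newsize = n + n/2 + 1) so repeated appends are amortised; it only reallocates when the buffer is outgrown or when no more than a quarter of the slots remain in use. */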
+static CYTHON_INLINE int resize_smart(arrayobject *self, Py_ssize_t n) {
+ void *items = (void*) self->data.ob_item;
+ Py_ssize_t newsize;
+ if (n < self->allocated && n*4 > self->allocated) {
+ self->ob_size = n;
+ return 0;
+ }
+ newsize = n + (n / 2) + 1;
+ if (newsize <= n) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ PyMem_Resize(items, char, (size_t)(newsize * self->ob_descr->itemsize));
+ if (items == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ self->data.ob_item = (char*) items;
+ self->ob_size = n;
+ self->allocated = newsize;
+ return 0;
+}
+#endif
+
+/* CppExceptionConversion.proto */
+#ifndef __Pyx_CppExn2PyErr
+#include <new>
+#include <typeinfo>
+#include <stdexcept>
+#include <ios>
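+/* Translates the active C++ exception into the closest matching Python exception; unrecognised exceptions become a RuntimeError. */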
+static void __Pyx_CppExn2PyErr() {
+ try {
+ if (PyErr_Occurred())
+ ; // let the latest Python exn pass through and ignore the current one
+ else
+ throw;
+ } catch (const std::bad_alloc& exn) {
+ PyErr_SetString(PyExc_MemoryError, exn.what());
+ } catch (const std::bad_cast& exn) {
+ PyErr_SetString(PyExc_TypeError, exn.what());
+ } catch (const std::bad_typeid& exn) {
+ PyErr_SetString(PyExc_TypeError, exn.what());
+ } catch (const std::domain_error& exn) {
+ PyErr_SetString(PyExc_ValueError, exn.what());
+ } catch (const std::invalid_argument& exn) {
+ PyErr_SetString(PyExc_ValueError, exn.what());
+ } catch (const std::ios_base::failure& exn) {
+ PyErr_SetString(PyExc_IOError, exn.what());
+ } catch (const std::out_of_range& exn) {
+ PyErr_SetString(PyExc_IndexError, exn.what());
+ } catch (const std::overflow_error& exn) {
+ PyErr_SetString(PyExc_OverflowError, exn.what());
+ } catch (const std::range_error& exn) {
+ PyErr_SetString(PyExc_ArithmeticError, exn.what());
+ } catch (const std::underflow_error& exn) {
+ PyErr_SetString(PyExc_ArithmeticError, exn.what());
+ } catch (const std::exception& exn) {
+ PyErr_SetString(PyExc_RuntimeError, exn.what());
+ }
+ catch (...)
+ {
+ PyErr_SetString(PyExc_RuntimeError, "Unknown exception");
+ }
+}
+#endif
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_npy_uint64(npy_uint64 value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
+
+/* RealImag.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ #define __Pyx_CREAL(z) ((z).real())
+ #define __Pyx_CIMAG(z) ((z).imag())
+ #else
+ #define __Pyx_CREAL(z) (__real__(z))
+ #define __Pyx_CIMAG(z) (__imag__(z))
+ #endif
+#else
+ #define __Pyx_CREAL(z) ((z).real)
+ #define __Pyx_CIMAG(z) ((z).imag)
+#endif
+#if defined(__cplusplus) && CYTHON_CCOMPLEX\
+ && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)
+ #define __Pyx_SET_CREAL(z,x) ((z).real(x))
+ #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))
+#else
+ #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)
+ #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)
+#endif
+
+/* Arithmetic.proto */
+#if CYTHON_CCOMPLEX
+ #define __Pyx_c_eq_float(a, b) ((a)==(b))
+ #define __Pyx_c_sum_float(a, b) ((a)+(b))
+ #define __Pyx_c_diff_float(a, b) ((a)-(b))
+ #define __Pyx_c_prod_float(a, b) ((a)*(b))
+ #define __Pyx_c_quot_float(a, b) ((a)/(b))
+ #define __Pyx_c_neg_float(a) (-(a))
+ #ifdef __cplusplus
+ #define __Pyx_c_is_zero_float(z) ((z)==(float)0)
+ #define __Pyx_c_conj_float(z) (::std::conj(z))
+ #if 1
+ #define __Pyx_c_abs_float(z) (::std::abs(z))
+ #define __Pyx_c_pow_float(a, b) (::std::pow(a, b))
+ #endif
+ #else
+ #define __Pyx_c_is_zero_float(z) ((z)==0)
+ #define __Pyx_c_conj_float(z) (conjf(z))
+ #if 1
+ #define __Pyx_c_abs_float(z) (cabsf(z))
+ #define __Pyx_c_pow_float(a, b) (cpowf(a, b))
+ #endif
+ #endif
+#else
+ static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);
+ static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex);
+ #if 1
+ static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ #endif
+#endif
+
+/* Arithmetic.proto */
+#if CYTHON_CCOMPLEX
+ #define __Pyx_c_eq_double(a, b) ((a)==(b))
+ #define __Pyx_c_sum_double(a, b) ((a)+(b))
+ #define __Pyx_c_diff_double(a, b) ((a)-(b))
+ #define __Pyx_c_prod_double(a, b) ((a)*(b))
+ #define __Pyx_c_quot_double(a, b) ((a)/(b))
+ #define __Pyx_c_neg_double(a) (-(a))
+ #ifdef __cplusplus
+ #define __Pyx_c_is_zero_double(z) ((z)==(double)0)
+ #define __Pyx_c_conj_double(z) (::std::conj(z))
+ #if 1
+ #define __Pyx_c_abs_double(z) (::std::abs(z))
+ #define __Pyx_c_pow_double(a, b) (::std::pow(a, b))
+ #endif
+ #else
+ #define __Pyx_c_is_zero_double(z) ((z)==0)
+ #define __Pyx_c_conj_double(z) (conj(z))
+ #if 1
+ #define __Pyx_c_abs_double(z) (cabs(z))
+ #define __Pyx_c_pow_double(a, b) (cpow(a, b))
+ #endif
+ #endif
+#else
+ static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);
+ static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);
+ #if 1
+ static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ #endif
+#endif
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
+
+/* FastTypeChecks.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
+static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
+#else
+#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
+#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
+#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
+#endif
+#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
+
+/* CStringEquals.proto */
+static CYTHON_INLINE int __Pyx_StrEq(const char *, const char *);
+
+/* CheckBinaryVersion.proto */
+static int __Pyx_check_binary_version(void);
+
+/* InitStrings.proto */
+static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
+
+
+/* Module declarations from 'libcpp.vector' */
+
+/* Module declarations from 'libc.string' */
+
+/* Module declarations from 'libcpp.string' */
+
+/* Module declarations from 'libcpp.utility' */
+
+/* Module declarations from 'libcpp.map' */
+
+/* Module declarations from 'libcpp' */
+
+/* Module declarations from 'libcpp.pair' */
+
+/* Module declarations from 'libcpp.list' */
+
+/* Module declarations from 'cpython.buffer' */
+
+/* Module declarations from 'libc.stdio' */
+
+/* Module declarations from '__builtin__' */
+
+/* Module declarations from 'cpython.type' */
+static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;
+
+/* Module declarations from 'cpython.version' */
+
+/* Module declarations from 'cpython.exc' */
+
+/* Module declarations from 'cpython.module' */
+
+/* Module declarations from 'cpython.mem' */
+
+/* Module declarations from 'cpython.tuple' */
+
+/* Module declarations from 'cpython.list' */
+
+/* Module declarations from 'cpython.sequence' */
+
+/* Module declarations from 'cpython.mapping' */
+
+/* Module declarations from 'cpython.iterator' */
+
+/* Module declarations from 'cpython.number' */
+
+/* Module declarations from 'cpython.int' */
+
+/* Module declarations from '__builtin__' */
+
+/* Module declarations from 'cpython.bool' */
+static PyTypeObject *__pyx_ptype_7cpython_4bool_bool = 0;
+
+/* Module declarations from 'cpython.long' */
+
+/* Module declarations from 'cpython.float' */
+
+/* Module declarations from '__builtin__' */
+
+/* Module declarations from 'cpython.complex' */
+static PyTypeObject *__pyx_ptype_7cpython_7complex_complex = 0;
+
+/* Module declarations from 'cpython.string' */
+
+/* Module declarations from 'cpython.unicode' */
+
+/* Module declarations from 'cpython.dict' */
+
+/* Module declarations from 'cpython.instance' */
+
+/* Module declarations from 'cpython.function' */
+
+/* Module declarations from 'cpython.method' */
+
+/* Module declarations from 'cpython.weakref' */
+
+/* Module declarations from 'cpython.getargs' */
+
+/* Module declarations from 'cpython.pythread' */
+
+/* Module declarations from 'cpython.pystate' */
+
+/* Module declarations from 'cpython.cobject' */
+
+/* Module declarations from 'cpython.oldbuffer' */
+
+/* Module declarations from 'cpython.set' */
+
+/* Module declarations from 'cpython.bytes' */
+
+/* Module declarations from 'cpython.pycapsule' */
+
+/* Module declarations from 'cpython' */
+
+/* Module declarations from 'cpython.object' */
+
+/* Module declarations from 'cpython.ref' */
+
+/* Module declarations from 'numpy' */
+
+/* Module declarations from 'numpy' */
+static PyTypeObject *__pyx_ptype_5numpy_dtype = 0;
+static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;
+static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;
+static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;
+static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;
+static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/
+
+/* Module declarations from 'array' */
+
+/* Module declarations from 'cpython.array' */
+static PyTypeObject *__pyx_ptype_7cpython_5array_array = 0;
+static CYTHON_INLINE int __pyx_f_7cpython_5array_extend_buffer(arrayobject *, char *, Py_ssize_t); /*proto*/
+
+/* Module declarations from 'cython' */
+
+/* Module declarations from 'gedlibpy' */
+static PyTypeObject *__pyx_ptype_8gedlibpy_GEDEnv = 0;
+static CYTHON_INLINE PyObject *__pyx_convert_PyObject_string_to_py_std__in_string(std::string const &); /*proto*/
+static CYTHON_INLINE PyObject *__pyx_convert_PyUnicode_string_to_py_std__in_string(std::string const &); /*proto*/
+static CYTHON_INLINE PyObject *__pyx_convert_PyStr_string_to_py_std__in_string(std::string const &); /*proto*/
+static CYTHON_INLINE PyObject *__pyx_convert_PyBytes_string_to_py_std__in_string(std::string const &); /*proto*/
+static CYTHON_INLINE PyObject *__pyx_convert_PyByteArray_string_to_py_std__in_string(std::string const &); /*proto*/
+static std::string __pyx_convert_string_from_py_std__in_string(PyObject *); /*proto*/
+static PyObject *__pyx_convert_pair_to_py_size_t____size_t(std::pair<size_t,size_t> const &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_size_t(const std::vector<size_t> &); /*proto*/
+static std::map<std::string,std::string> __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(PyObject *); /*proto*/
+static PyObject *__pyx_convert_map_to_py_std_3a__3a_string____std_3a__3a_string(std::map<std::string,std::string> const &); /*proto*/
+static PyObject *__pyx_convert_map_to_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e_______std_3a__3a_map_3c_std_3a__3a_string_2c_std_3a__3a_string_3e___(std::map<std::pair<size_t,size_t>,std::map<std::string,std::string> > const &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_std_3a__3a_vector_3c_size_t_3e___(const std::vector<std::vector<size_t> > &); /*proto*/
+static std::vector<double> __pyx_convert_vector_from_py_double(PyObject *); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_npy_uint64(const std::vector<npy_uint64> &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e___(const std::vector<std::pair<size_t,size_t> > &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_int(const std::vector<int> &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_std_3a__3a_vector_3c_int_3e___(const std::vector<std::vector<int> > &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_std_3a__3a_vector_3c_npy_uint64_3e___(const std::vector<std::vector<npy_uint64> > &); /*proto*/
+static std::vector<size_t> __pyx_convert_vector_from_py_size_t(PyObject *); /*proto*/
+static std::vector<std::vector<size_t> > __pyx_convert_vector_from_py_std_3a__3a_vector_3c_size_t_3e___(PyObject *); /*proto*/
+static std::vector<std::vector<double> > __pyx_convert_vector_from_py_std_3a__3a_vector_3c_double_3e___(PyObject *); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_double(const std::vector<double> &); /*proto*/
+static PyObject *__pyx_convert_vector_to_py_std_3a__3a_vector_3c_double_3e___(const std::vector<std::vector<double> > &); /*proto*/
+static std::vector<std::map<std::string,std::string> > __pyx_convert_vector_from_py_std_3a__3a_map_3c_std_3a__3a_string_2c_std_3a__3a_string_3e___(PyObject *); /*proto*/
+static std::pair<size_t,size_t> __pyx_convert_pair_from_py_size_t__and_size_t(PyObject *); /*proto*/
+static std::vector<std::pair<size_t,size_t> > __pyx_convert_vector_from_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e___(PyObject *); /*proto*/
+#define __Pyx_MODULE_NAME "gedlibpy"
+extern int __pyx_module_is_main_gedlibpy;
+int __pyx_module_is_main_gedlibpy = 0;
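+
+/* The conversion prototypes above are Cython-generated helpers that shuttle
+   values between C++ STL containers and Python objects: the *_to_py_*
+   functions build Python containers from C++ ones (std::vector -> list,
+   std::map -> dict, std::pair -> 2-tuple), and the *_from_py_* functions do
+   the reverse when a Python argument is passed to a C++ signature. For
+   example, __pyx_convert_vector_to_py_double returns a Python list of float
+   built from a std::vector<double>, and
+   __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string builds a
+   std::map<std::string,std::string> from a Python dict of str keys and
+   values. (This is a description of standard Cython utility code, inferred
+   from the mangled names above, not of hand-written project code.) */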
+
+/* Implementation of 'gedlibpy' */
+static PyObject *__pyx_builtin_range;
+static PyObject *__pyx_builtin_print;
+static PyObject *__pyx_builtin_enumerate;
+static PyObject *__pyx_builtin_TypeError;
+static PyObject *__pyx_builtin_ValueError;
+static PyObject *__pyx_builtin_RuntimeError;
+static PyObject *__pyx_builtin_ImportError;
+static PyObject *__pyx_builtin_MemoryError;
+static const char __pyx_k_[] = "";
+static const char __pyx_k_g[] = "g";
+static const char __pyx_k_h[] = "h";
+static const char __pyx_k_g1[] = "g1";
+static const char __pyx_k_g2[] = "g2";
+static const char __pyx_k_id[] = "id";
+static const char __pyx_k_np[] = "np";
+static const char __pyx_k_nx[] = "nx";
+static const char __pyx_k_os[] = "os";
+static const char __pyx_k__20[] = "*";
+static const char __pyx_k_doc[] = "__doc__";
+static const char __pyx_k_inf[] = "inf";
+static const char __pyx_k_key[] = "key";
+static const char __pyx_k_res[] = "res";
+static const char __pyx_k_cdll[] = "cdll";
+static const char __pyx_k_file[] = "__file__";
+static const char __pyx_k_g_id[] = "g_id";
+static const char __pyx_k_h_id[] = "h_id";
+static const char __pyx_k_head[] = "head";
+static const char __pyx_k_init[] = "init";
+static const char __pyx_k_lib1[] = "lib1";
+static const char __pyx_k_lib2[] = "lib2";
+static const char __pyx_k_lib3[] = "lib3";
+static const char __pyx_k_lib4[] = "lib4";
+static const char __pyx_k_main[] = "__main__";
+static const char __pyx_k_name[] = "name";
+static const char __pyx_k_path[] = "path";
+static const char __pyx_k_self[] = "self";
+static const char __pyx_k_tail[] = "tail";
+static const char __pyx_k_test[] = "__test__";
+static const char __pyx_k_Error[] = "Error";
+static const char __pyx_k_Graph[] = "Graph";
+static const char __pyx_k_edges[] = "edges";
+static const char __pyx_k_graph[] = "graph";
+static const char __pyx_k_items[] = "items";
+static const char __pyx_k_map_b[] = "map_b";
+static const char __pyx_k_map_u[] = "map_u";
+static const char __pyx_k_nodes[] = "nodes";
+static const char __pyx_k_numpy[] = "numpy";
+static const char __pyx_k_print[] = "print";
+static const char __pyx_k_range[] = "range";
+static const char __pyx_k_utf_8[] = "utf-8";
+static const char __pyx_k_value[] = "value";
+static const char __pyx_k_GEDEnv[] = "GEDEnv";
+static const char __pyx_k_classe[] = "classe";
+static const char __pyx_k_ctypes[] = "ctypes";
+static const char __pyx_k_decode[] = "decode";
+static const char __pyx_k_encode[] = "encode";
+static const char __pyx_k_import[] = "__import__";
+static const char __pyx_k_init_2[] = "__init__";
+static const char __pyx_k_method[] = "method";
+static const char __pyx_k_module[] = "__module__";
+static const char __pyx_k_name_2[] = "__name__";
+static const char __pyx_k_option[] = "option";
+static const char __pyx_k_reduce[] = "__reduce__";
+static const char __pyx_k_NodeMap[] = "NodeMap";
+static const char __pyx_k_classes[] = "classes";
+static const char __pyx_k_dataset[] = "dataset";
+static const char __pyx_k_dirname[] = "dirname";
+static const char __pyx_k_message[] = "message";
+static const char __pyx_k_node_id[] = "node_id";
+static const char __pyx_k_options[] = "options";
+static const char __pyx_k_prepare[] = "__prepare__";
+static const char __pyx_k_add_edge[] = "add_edge";
+static const char __pyx_k_add_node[] = "add_node";
+static const char __pyx_k_gedlibpy[] = "gedlibpy";
+static const char __pyx_k_getstate[] = "__getstate__";
+static const char __pyx_k_graph_id[] = "graph_id";
+static const char __pyx_k_networkx[] = "networkx";
+static const char __pyx_k_node_map[] = "node_map";
+static const char __pyx_k_nx_graph[] = "nx_graph";
+static const char __pyx_k_path_XML[] = "path_XML";
+static const char __pyx_k_qualname[] = "__qualname__";
+static const char __pyx_k_realpath[] = "realpath";
+static const char __pyx_k_setstate[] = "__setstate__";
+static const char __pyx_k_InitError[] = "InitError";
+static const char __pyx_k_TypeError[] = "TypeError";
+static const char __pyx_k_add_graph[] = "add_graph";
+static const char __pyx_k_adj_lists[] = "adj_lists";
+static const char __pyx_k_edge_list[] = "edge_list";
+static const char __pyx_k_edge_type[] = "edge_type";
+static const char __pyx_k_edit_cost[] = "edit_cost";
+static const char __pyx_k_enumerate[] = "enumerate";
+static const char __pyx_k_graph_ids[] = "graph_ids";
+static const char __pyx_k_iteritems[] = "iteritems";
+static const char __pyx_k_map_edges[] = "map_edges";
+static const char __pyx_k_metaclass[] = "__metaclass__";
+static const char __pyx_k_node_type[] = "node_type";
+static const char __pyx_k_reduce_ex[] = "__reduce_ex__";
+static const char __pyx_k_ValueError[] = "ValueError";
+static const char __pyx_k_adj_matrix[] = "adj_matrix";
+static const char __pyx_k_edge_label[] = "edge_label";
+static const char __pyx_k_graph_name[] = "graph_name";
+static const char __pyx_k_map_edge_b[] = "map_edge_b";
+static const char __pyx_k_node_label[] = "node_label";
+static const char __pyx_k_run_method[] = "run_method";
+static const char __pyx_k_set_method[] = "set_method";
+static const char __pyx_k_ImportError[] = "ImportError";
+static const char __pyx_k_LoadLibrary[] = "LoadLibrary";
+static const char __pyx_k_MemoryError[] = "MemoryError";
+static const char __pyx_k_MethodError[] = "MethodError";
+static const char __pyx_k_as_relation[] = "as_relation";
+static const char __pyx_k_clear_graph[] = "clear_graph";
+static const char __pyx_k_graph_class[] = "graph_class";
+static const char __pyx_k_init_method[] = "init_method";
+static const char __pyx_k_init_option[] = "init_option";
+static const char __pyx_k_path_folder[] = "path_folder";
+static const char __pyx_k_restart_env[] = "restart_env";
+static const char __pyx_k_RuntimeError[] = "RuntimeError";
+static const char __pyx_k_add_nx_graph[] = "add_nx_graph";
+static const char __pyx_k_edge_label_1[] = "edge_label_1";
+static const char __pyx_k_edge_label_2[] = "edge_label_2";
+static const char __pyx_k_edge_label_b[] = "edge_label_b";
+static const char __pyx_k_gedlibpy_pyx[] = "gedlibpy.pyx";
+static const char __pyx_k_get_node_map[] = "get_node_map";
+static const char __pyx_k_node_label_1[] = "node_label_1";
+static const char __pyx_k_node_label_2[] = "node_label_2";
+static const char __pyx_k_EditCostError[] = "EditCostError";
+static const char __pyx_k_Graphs_loaded[] = "Graphs loaded ! ";
+static const char __pyx_k_get_edge_data[] = "get_edge_data";
+static const char __pyx_k_list_of_edges[] = "list_of_edges";
+static const char __pyx_k_list_of_nodes[] = "list_of_nodes";
+static const char __pyx_k_reduce_cython[] = "__reduce_cython__";
+static const char __pyx_k_set_edit_cost[] = "set_edit_cost";
+static const char __pyx_k_add_assignment[] = "add_assignment";
+static const char __pyx_k_get_dummy_node[] = "get_dummy_node";
+static const char __pyx_k_is_initialized[] = "is_initialized";
+static const char __pyx_k_decode_your_map[] = "decode_your_map";
+static const char __pyx_k_encode_your_map[] = "encode_your_map";
+static const char __pyx_k_get_graph_edges[] = "get_graph_edges";
+static const char __pyx_k_get_upper_bound[] = "get_upper_bound";
+static const char __pyx_k_gklearn_ged_env[] = "gklearn.ged.env";
+static const char __pyx_k_load_GXL_graphs[] = "load_GXL_graphs";
+static const char __pyx_k_print_to_stdout[] = "print_to_stdout";
+static const char __pyx_k_setstate_cython[] = "__setstate_cython__";
+static const char __pyx_k_InitError___init[] = "InitError.__init__";
+static const char __pyx_k_Number_of_graphs[] = "Number of graphs = ";
+static const char __pyx_k_get_init_options[] = "get_init_options";
+static const char __pyx_k_set_induced_cost[] = "set_induced_cost";
+static const char __pyx_k_ignore_duplicates[] = "ignore_duplicates";
+static const char __pyx_k_original_node_ids[] = "original_node_ids";
+static const char __pyx_k_MethodError___init[] = "MethodError.__init__";
+static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
+static const char __pyx_k_decode_graph_edges[] = "decode_graph_edges";
+static const char __pyx_k_edit_cost_constant[] = "edit_cost_constant";
+static const char __pyx_k_get_method_options[] = "get_method_options";
+static const char __pyx_k_get_graph_num_nodes[] = "get_graph_num_nodes";
+static const char __pyx_k_EditCostError___init[] = "EditCostError.__init__";
+static const char __pyx_k_list_of_init_options[] = "list_of_init_options";
+static const char __pyx_k_get_edit_cost_options[] = "get_edit_cost_options";
+static const char __pyx_k_get_graph_node_labels[] = "get_graph_node_labels";
+static const char __pyx_k_get_original_node_ids[] = "get_original_node_ids";
+static const char __pyx_k_lib_nomad_libnomad_so[] = "/lib/nomad/libnomad.so";
+static const char __pyx_k_list_of_method_options[] = "list_of_method_options";
+static const char __pyx_k_lib_nomad_libsgtelib_so[] = "/lib/nomad/libsgtelib.so";
+static const char __pyx_k_Computation_between_graph[] = "Computation between graph ";
+static const char __pyx_k_Initialization_terminated[] = "Initialization terminated !";
+static const char __pyx_k_lib_fann_libdoublefann_so[] = "/lib/fann/libdoublefann.so";
+static const char __pyx_k_lib_libsvm_3_22_libsvm_so[] = "/lib/libsvm.3.22/libsvm.so";
+static const char __pyx_k_list_of_edit_cost_options[] = "list_of_edit_cost_options";
+static const char __pyx_k_Initialization_in_progress[] = "Initialization in progress...";
+static const char __pyx_k_Loading_graphs_in_progress[] = "Loading graphs in progress...";
+static const char __pyx_k_ndarray_is_not_C_contiguous[] = "ndarray is not C contiguous";
+static const char __pyx_k_EAGER_WITHOUT_SHUFFLED_COPIES[] = "EAGER_WITHOUT_SHUFFLED_COPIES";
+static const char __pyx_k_Class_for_Edit_Cost_Error_Raise[] = "\n\t\tClass for Edit Cost Error. Raise an error if an edit cost function doesn't exist in the library (not in list_of_edit_cost_options).\n\n\t\t:attribute message: The message to print when an error is detected.\n\t\t:type message: string\n\t";
+static const char __pyx_k_Class_for_Init_Error_Raise_an_e[] = "\n\t\tClass for Init Error. Raise an error if an init option doesn't exist in the library (not in list_of_init_options).\n\n\t\t:attribute message: The message to print when an error is detected.\n\t\t:type message: string\n\t";
+static const char __pyx_k_Class_for_Method_Error_Raise_an[] = "\n\t\tClass for Method Error. Raise an error if a computation method doesn't exist in the library (not in list_of_method_options).\n\n\t\t:attribute message: The message to print when an error is detected.\n\t\t:type message: string\n\t";
+static const char __pyx_k_Class_for_error_s_management_Th[] = "\n\t\tClass for error's management. This one is general. \n\t";
+static const char __pyx_k_Finish_The_return_contains_edit[] = "Finish ! The return contains edit distances and NodeMap but you can check the result with graphs'ID until you restart the environment";
+static const char __pyx_k_Finish_You_can_check_the_result[] = "Finish ! You can check the result with each ID of graphs ! There are in the return";
+static const char __pyx_k_Python_GedLib_module_This_modul[] = "\n\tPython GedLib module\n\t======================\n\t\n\tThis module allow to use a C++ library for edit distance between graphs (GedLib) with Python.\n\n\t\n\tAuthors\n\t-------------------\n \n\tDavid Blumenthal\n\tNatacha Lambert\n\tLinlin Jia\n\n\tCopyright (C) 2019-2020 by all the authors\n\n\tClasses & Functions\n\t-------------------\n \n";
+static const char __pyx_k_This_edit_cost_function_doesn_t[] = "This edit cost function doesn't exist, please see list_of_edit_cost_options for selecting a edit cost function";
+static const char __pyx_k_numpy_core_multiarray_failed_to[] = "numpy.core.multiarray failed to import";
+static const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = "unknown dtype code in numpy.pxd (%d)";
+static const char __pyx_k_with_all_the_others_including_h[] = " with all the others including himself.";
+static const char __pyx_k_Format_string_allocated_too_shor[] = "Format string allocated too short, see comment in numpy.pxd";
+static const char __pyx_k_Non_native_byte_order_not_suppor[] = "Non-native byte order not supported";
+static const char __pyx_k_Please_don_t_restart_the_environ[] = "Please don't restart the environment or recall this function, you will lose your results !";
+static const char __pyx_k_This_init_option_doesn_t_exist_p[] = "This init option doesn't exist, please see list_of_init_options for selecting an option. You can choose any options.";
+static const char __pyx_k_This_method_doesn_t_exist_please[] = "This method doesn't exist, please see list_of_method_options for selecting a method";
+static const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = "ndarray is not Fortran contiguous";
+static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__";
+static const char __pyx_k_numpy_core_umath_failed_to_impor[] = "numpy.core.umath failed to import";
+static const char __pyx_k_Format_string_allocated_too_shor_2[] = "Format string allocated too short.";
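+
+/* The __pyx_k_* char arrays above hold every literal string the module needs
+   (method and attribute names, docstring fragments, error messages, shared
+   library paths). The matching __pyx_n_s_* / __pyx_n_u_* / __pyx_kp_*
+   PyObject pointers declared below are interned from them once, during
+   module initialization, via __Pyx_InitStrings(). For example,
+   __pyx_k_add_edge ("add_edge") backs __pyx_n_s_add_edge, which is then used
+   wherever the name "add_edge" is needed as a Python object (e.g. attribute
+   lookups) without re-creating the string at call time. */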
+static PyObject *__pyx_kp_u_;
+static PyObject *__pyx_kp_s_Class_for_Edit_Cost_Error_Raise;
+static PyObject *__pyx_kp_s_Class_for_Init_Error_Raise_an_e;
+static PyObject *__pyx_kp_s_Class_for_Method_Error_Raise_an;
+static PyObject *__pyx_kp_s_Class_for_error_s_management_Th;
+static PyObject *__pyx_kp_u_Computation_between_graph;
+static PyObject *__pyx_n_u_EAGER_WITHOUT_SHUFFLED_COPIES;
+static PyObject *__pyx_n_s_EditCostError;
+static PyObject *__pyx_n_s_EditCostError___init;
+static PyObject *__pyx_n_s_Error;
+static PyObject *__pyx_kp_u_Finish_The_return_contains_edit;
+static PyObject *__pyx_kp_u_Finish_You_can_check_the_result;
+static PyObject *__pyx_kp_u_Format_string_allocated_too_shor;
+static PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;
+static PyObject *__pyx_n_s_GEDEnv;
+static PyObject *__pyx_n_s_Graph;
+static PyObject *__pyx_kp_u_Graphs_loaded;
+static PyObject *__pyx_n_s_ImportError;
+static PyObject *__pyx_n_s_InitError;
+static PyObject *__pyx_n_s_InitError___init;
+static PyObject *__pyx_kp_u_Initialization_in_progress;
+static PyObject *__pyx_kp_u_Initialization_terminated;
+static PyObject *__pyx_n_s_LoadLibrary;
+static PyObject *__pyx_kp_u_Loading_graphs_in_progress;
+static PyObject *__pyx_n_s_MemoryError;
+static PyObject *__pyx_n_s_MethodError;
+static PyObject *__pyx_n_s_MethodError___init;
+static PyObject *__pyx_n_s_NodeMap;
+static PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;
+static PyObject *__pyx_kp_u_Number_of_graphs;
+static PyObject *__pyx_kp_u_Please_don_t_restart_the_environ;
+static PyObject *__pyx_n_s_RuntimeError;
+static PyObject *__pyx_kp_u_This_edit_cost_function_doesn_t;
+static PyObject *__pyx_kp_u_This_init_option_doesn_t_exist_p;
+static PyObject *__pyx_kp_u_This_method_doesn_t_exist_please;
+static PyObject *__pyx_n_s_TypeError;
+static PyObject *__pyx_n_s_ValueError;
+static PyObject *__pyx_n_s__20;
+static PyObject *__pyx_n_s_add_assignment;
+static PyObject *__pyx_n_s_add_edge;
+static PyObject *__pyx_n_s_add_graph;
+static PyObject *__pyx_n_s_add_node;
+static PyObject *__pyx_n_s_add_nx_graph;
+static PyObject *__pyx_n_s_adj_lists;
+static PyObject *__pyx_n_s_adj_matrix;
+static PyObject *__pyx_n_s_as_relation;
+static PyObject *__pyx_n_s_cdll;
+static PyObject *__pyx_n_s_classe;
+static PyObject *__pyx_n_s_classes;
+static PyObject *__pyx_n_s_clear_graph;
+static PyObject *__pyx_n_s_cline_in_traceback;
+static PyObject *__pyx_n_s_ctypes;
+static PyObject *__pyx_n_s_dataset;
+static PyObject *__pyx_n_s_decode;
+static PyObject *__pyx_n_s_decode_graph_edges;
+static PyObject *__pyx_n_s_decode_your_map;
+static PyObject *__pyx_n_s_dirname;
+static PyObject *__pyx_n_s_doc;
+static PyObject *__pyx_n_s_edge_label;
+static PyObject *__pyx_n_s_edge_label_1;
+static PyObject *__pyx_n_s_edge_label_2;
+static PyObject *__pyx_n_s_edge_label_b;
+static PyObject *__pyx_n_s_edge_list;
+static PyObject *__pyx_n_s_edge_type;
+static PyObject *__pyx_n_s_edges;
+static PyObject *__pyx_n_s_edit_cost;
+static PyObject *__pyx_n_s_edit_cost_constant;
+static PyObject *__pyx_n_s_encode;
+static PyObject *__pyx_n_s_encode_your_map;
+static PyObject *__pyx_n_s_enumerate;
+static PyObject *__pyx_n_s_file;
+static PyObject *__pyx_n_s_g;
+static PyObject *__pyx_n_s_g1;
+static PyObject *__pyx_n_s_g2;
+static PyObject *__pyx_n_s_g_id;
+static PyObject *__pyx_n_s_gedlibpy;
+static PyObject *__pyx_kp_s_gedlibpy_pyx;
+static PyObject *__pyx_n_s_get_dummy_node;
+static PyObject *__pyx_n_s_get_edge_data;
+static PyObject *__pyx_n_s_get_edit_cost_options;
+static PyObject *__pyx_n_s_get_graph_edges;
+static PyObject *__pyx_n_s_get_graph_node_labels;
+static PyObject *__pyx_n_s_get_graph_num_nodes;
+static PyObject *__pyx_n_s_get_init_options;
+static PyObject *__pyx_n_s_get_method_options;
+static PyObject *__pyx_n_s_get_node_map;
+static PyObject *__pyx_n_s_get_original_node_ids;
+static PyObject *__pyx_n_s_get_upper_bound;
+static PyObject *__pyx_n_s_getstate;
+static PyObject *__pyx_n_s_gklearn_ged_env;
+static PyObject *__pyx_n_s_graph;
+static PyObject *__pyx_n_s_graph_class;
+static PyObject *__pyx_n_s_graph_id;
+static PyObject *__pyx_n_s_graph_ids;
+static PyObject *__pyx_n_s_graph_name;
+static PyObject *__pyx_n_s_h;
+static PyObject *__pyx_n_s_h_id;
+static PyObject *__pyx_n_s_head;
+static PyObject *__pyx_n_u_id;
+static PyObject *__pyx_n_s_ignore_duplicates;
+static PyObject *__pyx_n_s_import;
+static PyObject *__pyx_n_s_inf;
+static PyObject *__pyx_n_s_init;
+static PyObject *__pyx_n_s_init_2;
+static PyObject *__pyx_n_s_init_method;
+static PyObject *__pyx_n_s_init_option;
+static PyObject *__pyx_n_s_is_initialized;
+static PyObject *__pyx_n_s_items;
+static PyObject *__pyx_n_s_iteritems;
+static PyObject *__pyx_n_s_key;
+static PyObject *__pyx_n_s_lib1;
+static PyObject *__pyx_n_s_lib2;
+static PyObject *__pyx_n_s_lib3;
+static PyObject *__pyx_n_s_lib4;
+static PyObject *__pyx_kp_u_lib_fann_libdoublefann_so;
+static PyObject *__pyx_kp_u_lib_libsvm_3_22_libsvm_so;
+static PyObject *__pyx_kp_u_lib_nomad_libnomad_so;
+static PyObject *__pyx_kp_u_lib_nomad_libsgtelib_so;
+static PyObject *__pyx_n_s_list_of_edges;
+static PyObject *__pyx_n_s_list_of_edit_cost_options;
+static PyObject *__pyx_n_s_list_of_init_options;
+static PyObject *__pyx_n_s_list_of_method_options;
+static PyObject *__pyx_n_s_list_of_nodes;
+static PyObject *__pyx_n_s_load_GXL_graphs;
+static PyObject *__pyx_n_s_main;
+static PyObject *__pyx_n_s_map_b;
+static PyObject *__pyx_n_s_map_edge_b;
+static PyObject *__pyx_n_s_map_edges;
+static PyObject *__pyx_n_s_map_u;
+static PyObject *__pyx_n_s_message;
+static PyObject *__pyx_n_s_metaclass;
+static PyObject *__pyx_n_s_method;
+static PyObject *__pyx_n_s_module;
+static PyObject *__pyx_n_s_name;
+static PyObject *__pyx_n_s_name_2;
+static PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;
+static PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;
+static PyObject *__pyx_n_s_networkx;
+static PyObject *__pyx_kp_s_no_default___reduce___due_to_non;
+static PyObject *__pyx_n_s_node_id;
+static PyObject *__pyx_n_s_node_label;
+static PyObject *__pyx_n_s_node_label_1;
+static PyObject *__pyx_n_s_node_label_2;
+static PyObject *__pyx_n_s_node_map;
+static PyObject *__pyx_n_s_node_type;
+static PyObject *__pyx_n_s_nodes;
+static PyObject *__pyx_n_s_np;
+static PyObject *__pyx_n_s_numpy;
+static PyObject *__pyx_kp_u_numpy_core_multiarray_failed_to;
+static PyObject *__pyx_kp_u_numpy_core_umath_failed_to_impor;
+static PyObject *__pyx_n_s_nx;
+static PyObject *__pyx_n_s_nx_graph;
+static PyObject *__pyx_n_s_option;
+static PyObject *__pyx_n_s_options;
+static PyObject *__pyx_n_u_original_node_ids;
+static PyObject *__pyx_n_s_os;
+static PyObject *__pyx_n_s_path;
+static PyObject *__pyx_n_s_path_XML;
+static PyObject *__pyx_n_s_path_folder;
+static PyObject *__pyx_n_s_prepare;
+static PyObject *__pyx_n_s_print;
+static PyObject *__pyx_n_s_print_to_stdout;
+static PyObject *__pyx_n_s_qualname;
+static PyObject *__pyx_n_s_range;
+static PyObject *__pyx_n_s_realpath;
+static PyObject *__pyx_n_s_reduce;
+static PyObject *__pyx_n_s_reduce_cython;
+static PyObject *__pyx_n_s_reduce_ex;
+static PyObject *__pyx_n_s_res;
+static PyObject *__pyx_n_s_restart_env;
+static PyObject *__pyx_n_s_run_method;
+static PyObject *__pyx_n_s_self;
+static PyObject *__pyx_n_s_set_edit_cost;
+static PyObject *__pyx_n_s_set_induced_cost;
+static PyObject *__pyx_n_s_set_method;
+static PyObject *__pyx_n_s_setstate;
+static PyObject *__pyx_n_s_setstate_cython;
+static PyObject *__pyx_n_s_tail;
+static PyObject *__pyx_n_s_test;
+static PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;
+static PyObject *__pyx_kp_u_utf_8;
+static PyObject *__pyx_n_s_value;
+static PyObject *__pyx_kp_u_with_all_the_others_including_h;
+static PyObject *__pyx_pf_8gedlibpy_get_edit_cost_options(CYTHON_UNUSED PyObject *__pyx_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_2get_method_options(CYTHON_UNUSED PyObject *__pyx_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_4get_init_options(CYTHON_UNUSED PyObject *__pyx_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6get_dummy_node(CYTHON_UNUSED PyObject *__pyx_self); /* proto */
+static int __pyx_pf_8gedlibpy_6GEDEnv___cinit__(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static void __pyx_pf_8gedlibpy_6GEDEnv_2__dealloc__(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_4is_initialized(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_6restart_env(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_8load_GXL_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_path_folder, PyObject *__pyx_v_path_XML, PyObject *__pyx_v_node_type, PyObject *__pyx_v_edge_type); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_10graph_ids(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_12get_all_graph_ids(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_14get_graph_class(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_16get_graph_name(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_18add_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_name, PyObject *__pyx_v_classe); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_20add_node(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_node_id, PyObject *__pyx_v_node_label); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_22add_edge(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_tail, PyObject *__pyx_v_head, PyObject *__pyx_v_edge_label, PyObject *__pyx_v_ignore_duplicates); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_24add_symmetrical_edge(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_tail, PyObject *__pyx_v_head, PyObject *__pyx_v_edge_label); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_26clear_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_28get_graph_internal_id(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_30get_graph_num_nodes(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_32get_graph_num_edges(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_34get_original_node_ids(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_36get_graph_node_labels(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_38get_graph_edges(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_40get_graph_adjacence_matrix(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_42set_edit_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_edit_cost_constant); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_44set_personal_edit_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edit_cost_constant); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_46init(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_init_option, PyObject *__pyx_v_print_to_stdout); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_48set_method(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_method, PyObject *__pyx_v_options); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_50init_method(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_52get_init_time(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_54run_method(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_56get_upper_bound(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_58get_lower_bound(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_60get_forward_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_62get_backward_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_64get_node_image(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h, PyObject *__pyx_v_node_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_66get_node_pre_image(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h, PyObject *__pyx_v_node_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_68get_induced_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_70get_node_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_72get_assignment_matrix(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_74get_all_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_76get_runtime(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_78quasimetric_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_80hungarian_LSAP(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_matrix_cost); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_82hungarian_LSAPE(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_matrix_cost); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_84add_random_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_name, PyObject *__pyx_v_classe, PyObject *__pyx_v_list_of_nodes, PyObject *__pyx_v_list_of_edges, PyObject *__pyx_v_ignore_duplicates); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_86add_nx_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_classe, PyObject *__pyx_v_ignore_duplicates); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_88compute_ged_on_two_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g1, PyObject *__pyx_v_g2, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_method, PyObject *__pyx_v_options, PyObject *__pyx_v_init_option); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_90compute_edit_distance_on_nx_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_dataset, PyObject *__pyx_v_classes, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_method, PyObject *__pyx_v_options, PyObject *__pyx_v_init_option); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_92compute_edit_distance_on_GXl_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_path_folder, PyObject *__pyx_v_path_XML, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_method, PyObject *__pyx_v_options, PyObject *__pyx_v_init_option); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_94get_num_node_labels(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_96get_node_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_label_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_98get_num_edge_labels(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_100get_edge_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_label_id); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_102get_avg_num_nodes(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_104get_node_rel_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_label_1, PyObject *__pyx_v_node_label_2); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_106get_node_del_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_label); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_108get_node_ins_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_label); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_110get_median_node_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_labels); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_112get_edge_rel_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_label_1, PyObject *__pyx_v_edge_label_2); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_114get_edge_del_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_label); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_116get_edge_ins_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_label); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_118get_median_edge_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_labels); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_120get_nx_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, CYTHON_UNUSED PyObject *__pyx_v_adj_matrix, CYTHON_UNUSED PyObject *__pyx_v_adj_lists, CYTHON_UNUSED PyObject *__pyx_v_edge_list); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_122get_init_type(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_124load_nx_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_nx_graph, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_graph_name, PyObject *__pyx_v_graph_class); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_126compute_induced_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g_id, PyObject *__pyx_v_h_id, PyObject *__pyx_v_node_map); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_128__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_130__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_13EditCostError___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_message); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_11MethodError___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_message); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_9InitError___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_message); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_8encode_your_map(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_map_u); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_10decode_your_map(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_map_b); /* proto */
+static PyObject *__pyx_pf_8gedlibpy_12decode_graph_edges(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_map_edge_b); /* proto */
+static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
+static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */
+static int __pyx_pf_7cpython_5array_5array___getbuffer__(arrayobject *__pyx_v_self, Py_buffer *__pyx_v_info, CYTHON_UNUSED int __pyx_v_flags); /* proto */
+static void __pyx_pf_7cpython_5array_5array_2__releasebuffer__(CYTHON_UNUSED arrayobject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */
+static PyObject *__pyx_tp_new_8gedlibpy_GEDEnv(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
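+
+/* Naming convention for the prototypes above: every Python-visible function
+   or method is split into a __pyx_pw_* wrapper, which handles Python-level
+   argument parsing, and a __pyx_pf_* implementation holding the translated
+   body of the corresponding def in gedlibpy.pyx. For example,
+   __pyx_pw_8gedlibpy_1get_edit_cost_options (defined below) forwards to
+   __pyx_pf_8gedlibpy_get_edit_cost_options declared above; the numeric
+   prefixes 8gedlibpy and 6GEDEnv encode the lengths of the names "gedlibpy"
+   and "GEDEnv" in Cython's name mangling. */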
+static PyObject *__pyx_int_0;
+static PyObject *__pyx_int_1;
+static PyObject *__pyx_k__2;
+static PyObject *__pyx_k__3;
+static PyObject *__pyx_tuple__4;
+static PyObject *__pyx_tuple__5;
+static PyObject *__pyx_tuple__6;
+static PyObject *__pyx_tuple__7;
+static PyObject *__pyx_tuple__8;
+static PyObject *__pyx_tuple__9;
+static PyObject *__pyx_tuple__10;
+static PyObject *__pyx_tuple__11;
+static PyObject *__pyx_tuple__12;
+static PyObject *__pyx_tuple__13;
+static PyObject *__pyx_tuple__14;
+static PyObject *__pyx_tuple__15;
+static PyObject *__pyx_tuple__16;
+static PyObject *__pyx_tuple__17;
+static PyObject *__pyx_tuple__18;
+static PyObject *__pyx_tuple__19;
+static PyObject *__pyx_tuple__21;
+static PyObject *__pyx_tuple__23;
+static PyObject *__pyx_tuple__25;
+static PyObject *__pyx_tuple__28;
+static PyObject *__pyx_tuple__30;
+static PyObject *__pyx_tuple__32;
+static PyObject *__pyx_tuple__34;
+static PyObject *__pyx_tuple__36;
+static PyObject *__pyx_tuple__38;
+static PyObject *__pyx_codeobj__22;
+static PyObject *__pyx_codeobj__24;
+static PyObject *__pyx_codeobj__26;
+static PyObject *__pyx_codeobj__27;
+static PyObject *__pyx_codeobj__29;
+static PyObject *__pyx_codeobj__31;
+static PyObject *__pyx_codeobj__33;
+static PyObject *__pyx_codeobj__35;
+static PyObject *__pyx_codeobj__37;
+static PyObject *__pyx_codeobj__39;
+/* Late includes */
+
+/* "gedlibpy.pyx":129
+ *
+ *
+ * def get_edit_cost_options() : # <<<<<<<<<<<<<<
+ * """
+ * Searchs the differents edit cost functions and returns the result.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_1get_edit_cost_options(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_get_edit_cost_options[] = "\n\t\tSearchs the differents edit cost functions and returns the result.\n \n\t\t:return: The list of edit cost functions\n\t\t:rtype: list[string]\n \n\t\t.. warning:: This function is useless for an external use. Please use directly list_of_edit_cost_options. \n\t\t.. note:: Prefer the list_of_edit_cost_options attribute of this module.\n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_1get_edit_cost_options = {"get_edit_cost_options", (PyCFunction)__pyx_pw_8gedlibpy_1get_edit_cost_options, METH_NOARGS, __pyx_doc_8gedlibpy_get_edit_cost_options};
+static PyObject *__pyx_pw_8gedlibpy_1get_edit_cost_options(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_edit_cost_options (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_get_edit_cost_options(__pyx_self);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_get_edit_cost_options(CYTHON_UNUSED PyObject *__pyx_self) {
+ std::string __pyx_7genexpr__pyx_v_option;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ std::vector<std::string> __pyx_t_2;
+ std::vector<std::string> ::iterator __pyx_t_3;
+ std::vector<std::string> *__pyx_t_4;
+ std::string __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("get_edit_cost_options", 0);
+
+ /* "gedlibpy.pyx":140
+ * """
+ *
+ * return [option.decode('utf-8') for option in getEditCostStringOptions()] # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 140, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ try {
+ __pyx_t_2 = pyged::getEditCostStringOptions();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 140, __pyx_L1_error)
+ }
+ __pyx_t_4 = &__pyx_t_2;
+ __pyx_t_3 = __pyx_t_4->begin();
+ for (;;) {
+ if (!(__pyx_t_3 != __pyx_t_4->end())) break;
+ __pyx_t_5 = *__pyx_t_3;
+ ++__pyx_t_3;
+ __pyx_7genexpr__pyx_v_option = __pyx_t_5;
+ __pyx_t_6 = __Pyx_decode_cpp_string(__pyx_7genexpr__pyx_v_option, 0, PY_SSIZE_T_MAX, NULL, NULL, PyUnicode_DecodeUTF8); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 140, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 140, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":129
+ *
+ *
+ * def get_edit_cost_options() : # <<<<<<<<<<<<<<
+ * """
+ * Searchs the differents edit cost functions and returns the result.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.get_edit_cost_options", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
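+
+/* The function above is the generated expansion of the list comprehension at
+   gedlibpy.pyx:140: it calls pyged::getEditCostStringOptions(), walks the
+   returned std::vector<std::string> with an explicit iterator, decodes each
+   entry as UTF-8 via __Pyx_decode_cpp_string, and appends the result to a new
+   Python list. get_method_options() and get_init_options() below follow the
+   same template. A hedged Python-level equivalent, assuming the compiled
+   module is importable as gedlibpy:
+
+       import gedlibpy
+       options = gedlibpy.get_edit_cost_options()   # list of str
+*/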
+
+/* "gedlibpy.pyx":143
+ *
+ *
+ * def get_method_options() : # <<<<<<<<<<<<<<
+ * """
+ * Searchs the differents method for edit distance computation between graphs and returns the result.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_3get_method_options(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_2get_method_options[] = "\n\t\tSearchs the differents method for edit distance computation between graphs and returns the result.\n \n\t\t:return: The list of method to compute the edit distance between graphs\n\t\t:rtype: list[string]\n \n\t\t.. warning:: This function is useless for an external use. Please use directly list_of_method_options.\n\t\t.. note:: Prefer the list_of_method_options attribute of this module.\n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_3get_method_options = {"get_method_options", (PyCFunction)__pyx_pw_8gedlibpy_3get_method_options, METH_NOARGS, __pyx_doc_8gedlibpy_2get_method_options};
+static PyObject *__pyx_pw_8gedlibpy_3get_method_options(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_method_options (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_2get_method_options(__pyx_self);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_2get_method_options(CYTHON_UNUSED PyObject *__pyx_self) {
+ std::string __pyx_8genexpr1__pyx_v_option;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ std::vector<std::string> __pyx_t_2;
+ std::vector<std::string> ::iterator __pyx_t_3;
+ std::vector<std::string> *__pyx_t_4;
+ std::string __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("get_method_options", 0);
+
+ /* "gedlibpy.pyx":153
+ * .. note:: Prefer the list_of_method_options attribute of this module.
+ * """
+ * return [option.decode('utf-8') for option in getMethodStringOptions()] # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ try {
+ __pyx_t_2 = pyged::getMethodStringOptions();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 153, __pyx_L1_error)
+ }
+ __pyx_t_4 = &__pyx_t_2;
+ __pyx_t_3 = __pyx_t_4->begin();
+ for (;;) {
+ if (!(__pyx_t_3 != __pyx_t_4->end())) break;
+ __pyx_t_5 = *__pyx_t_3;
+ ++__pyx_t_3;
+ __pyx_8genexpr1__pyx_v_option = __pyx_t_5;
+ __pyx_t_6 = __Pyx_decode_cpp_string(__pyx_8genexpr1__pyx_v_option, 0, PY_SSIZE_T_MAX, NULL, NULL, PyUnicode_DecodeUTF8); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":143
+ *
+ *
+ * def get_method_options() : # <<<<<<<<<<<<<<
+ * """
+ * Searchs the differents method for edit distance computation between graphs and returns the result.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.get_method_options", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":156
+ *
+ *
+ * def get_init_options() : # <<<<<<<<<<<<<<
+ * """
+ * Searchs the differents initialization parameters for the environment computation for graphs and returns the result.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_5get_init_options(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_4get_init_options[] = "\n\t\tSearchs the differents initialization parameters for the environment computation for graphs and returns the result.\n \n\t\t:return: The list of options to initialize the computation environment\n\t\t:rtype: list[string]\n \n\t\t.. warning:: This function is useless for an external use. Please use directly list_of_init_options.\n\t\t.. note:: Prefer the list_of_init_options attribute of this module.\n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_5get_init_options = {"get_init_options", (PyCFunction)__pyx_pw_8gedlibpy_5get_init_options, METH_NOARGS, __pyx_doc_8gedlibpy_4get_init_options};
+static PyObject *__pyx_pw_8gedlibpy_5get_init_options(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_init_options (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_4get_init_options(__pyx_self);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_4get_init_options(CYTHON_UNUSED PyObject *__pyx_self) {
+ std::string __pyx_8genexpr2__pyx_v_option;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ std::vector<std::string> __pyx_t_2;
+ std::vector<std::string> ::iterator __pyx_t_3;
+ std::vector<std::string> *__pyx_t_4;
+ std::string __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("get_init_options", 0);
+
+ /* "gedlibpy.pyx":166
+ * .. note:: Prefer the list_of_init_options attribute of this module.
+ * """
+ * return [option.decode('utf-8') for option in getInitStringOptions()] # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 166, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ try {
+ __pyx_t_2 = pyged::getInitStringOptions();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 166, __pyx_L1_error)
+ }
+ __pyx_t_4 = &__pyx_t_2;
+ __pyx_t_3 = __pyx_t_4->begin();
+ for (;;) {
+ if (!(__pyx_t_3 != __pyx_t_4->end())) break;
+ __pyx_t_5 = *__pyx_t_3;
+ ++__pyx_t_3;
+ __pyx_8genexpr2__pyx_v_option = __pyx_t_5;
+ __pyx_t_6 = __Pyx_decode_cpp_string(__pyx_8genexpr2__pyx_v_option, 0, PY_SSIZE_T_MAX, NULL, NULL, PyUnicode_DecodeUTF8); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 166, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 166, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":156
+ *
+ *
+ * def get_init_options() : # <<<<<<<<<<<<<<
+ * """
+ * Searchs the differents initialization parameters for the environment computation for graphs and returns the result.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.get_init_options", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
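+
+/* Usage sketch for the three module-level helpers defined above. Their
+   docstrings recommend the module attributes list_of_edit_cost_options,
+   list_of_method_options and list_of_init_options instead of calling the
+   functions directly; both forms are shown here as an illustration of
+   typical use, not as the canonical API:
+
+       import gedlibpy
+       print(gedlibpy.get_edit_cost_options())
+       print(gedlibpy.get_method_options())
+       print(gedlibpy.get_init_options())
+       # or, as the docstrings suggest:
+       print(gedlibpy.list_of_edit_cost_options)
+*/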
+
+/* "gedlibpy.pyx":169
+ *
+ *
+ * def get_dummy_node() : # <<<<<<<<<<<<<<
+ * """
+ * Returns the ID of a dummy node.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_7get_dummy_node(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6get_dummy_node[] = "\n\t\tReturns the ID of a dummy node.\n\n\t\t:return: The ID of the dummy node (18446744073709551614 for my computer, the hugest number possible)\n\t\t:rtype: size_t\n\t\t\n\t\t.. note:: A dummy node is used when a node isn't associated to an other node.\t \n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_7get_dummy_node = {"get_dummy_node", (PyCFunction)__pyx_pw_8gedlibpy_7get_dummy_node, METH_NOARGS, __pyx_doc_8gedlibpy_6get_dummy_node};
+static PyObject *__pyx_pw_8gedlibpy_7get_dummy_node(PyObject *__pyx_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_dummy_node (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6get_dummy_node(__pyx_self);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6get_dummy_node(CYTHON_UNUSED PyObject *__pyx_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_dummy_node", 0);
+
+ /* "gedlibpy.pyx":178
+ * .. note:: A dummy node is used when a node isn't associated to an other node.
+ * """
+ * return getDummyNode() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = pyged::getDummyNode();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 178, __pyx_L1_error)
+ }
+ __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 178, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":169
+ *
+ *
+ * def get_dummy_node() : # <<<<<<<<<<<<<<
+ * """
+ * Returns the ID of a dummy node.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.get_dummy_node", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
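+
+/* Note on get_dummy_node(): the docstring's example value
+   18446744073709551614 equals 2**64 - 2, i.e. SIZE_MAX - 1 on a 64-bit
+   build, which is consistent with a "largest value minus one" sentinel for a
+   node that is matched to nothing (an assumption about the underlying GedLib
+   constant, not verified here). Quick check of the arithmetic:
+
+       >>> 2**64 - 2
+       18446744073709551614
+*/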
+
+/* "gedlibpy.pyx":189
+ *
+ *
+ * def __cinit__(self): # <<<<<<<<<<<<<<
+ * # self.c_env = PyGEDEnv()
+ * self.c_env = new PyGEDEnv()
+ */
+
+/* Python wrapper */
+static int __pyx_pw_8gedlibpy_6GEDEnv_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static int __pyx_pw_8gedlibpy_6GEDEnv_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
+ if (unlikely(PyTuple_GET_SIZE(__pyx_args) > 0)) {
+ __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 0, 0, PyTuple_GET_SIZE(__pyx_args)); return -1;}
+ if (unlikely(__pyx_kwds) && unlikely(PyDict_Size(__pyx_kwds) > 0) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__cinit__", 0))) return -1;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv___cinit__(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_8gedlibpy_6GEDEnv___cinit__(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ pyged::PyGEDEnv *__pyx_t_1;
+ __Pyx_RefNannySetupContext("__cinit__", 0);
+
+ /* "gedlibpy.pyx":191
+ * def __cinit__(self):
+ * # self.c_env = PyGEDEnv()
+ * self.c_env = new PyGEDEnv() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ try {
+ __pyx_t_1 = new pyged::PyGEDEnv();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 191, __pyx_L1_error)
+ }
+ __pyx_v_self->c_env = __pyx_t_1;
+
+ /* "gedlibpy.pyx":189
+ *
+ *
+ * def __cinit__(self): # <<<<<<<<<<<<<<
+ * # self.c_env = PyGEDEnv()
+ * self.c_env = new PyGEDEnv()
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = -1;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
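+
+/* GEDEnv.__cinit__ above heap-allocates the underlying C++ pyged::PyGEDEnv
+   and stores it in self->c_env; __dealloc__ below deletes it, so the C++
+   environment lives exactly as long as the Python GEDEnv object. A minimal
+   Cython sketch of this ownership pattern (illustrative; the extern block and
+   header name are assumptions about how the .pyx declares the class, not its
+   exact source):
+
+       # cdef extern from "GedLibBind.hpp" namespace "pyged":
+       #     cdef cppclass PyGEDEnv:
+       #         PyGEDEnv() except +
+       #
+       # cdef class GEDEnv:
+       #     cdef PyGEDEnv* c_env
+       #     def __cinit__(self):
+       #         self.c_env = new PyGEDEnv()
+       #     def __dealloc__(self):
+       #         del self.c_env
+*/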
+
+/* "gedlibpy.pyx":194
+ *
+ *
+ * def __dealloc__(self): # <<<<<<<<<<<<<<
+ * del self.c_env
+ *
+ */
+
+/* Python wrapper */
+static void __pyx_pw_8gedlibpy_6GEDEnv_3__dealloc__(PyObject *__pyx_v_self); /*proto*/
+static void __pyx_pw_8gedlibpy_6GEDEnv_3__dealloc__(PyObject *__pyx_v_self) {
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
+ __pyx_pf_8gedlibpy_6GEDEnv_2__dealloc__(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+static void __pyx_pf_8gedlibpy_6GEDEnv_2__dealloc__(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__dealloc__", 0);
+
+ /* "gedlibpy.pyx":195
+ *
+ * def __dealloc__(self):
+ * del self.c_env # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ delete __pyx_v_self->c_env;
+
+ /* "gedlibpy.pyx":194
+ *
+ *
+ * def __dealloc__(self): # <<<<<<<<<<<<<<
+ * del self.c_env
+ *
+ */
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+/* "gedlibpy.pyx":203
+ *
+ *
+ * def is_initialized(self) : # <<<<<<<<<<<<<<
+ * """
+ * Checks and returns if the computation environment is initialized or not.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_5is_initialized(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_4is_initialized[] = "\n\t\t\tChecks and returns if the computation environment is initialized or not.\n\t \n\t\t\t:return: True if it's initialized, False otherwise\n\t\t\t:rtype: bool\n\t\t\t\n\t\t\t.. note:: This function exists for internal verification, but you can also use it in your code. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_5is_initialized(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("is_initialized (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_4is_initialized(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_4is_initialized(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ bool __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("is_initialized", 0);
+
+ /* "gedlibpy.pyx":212
+ * .. note:: This function exists for internal verification, but you can also use it in your code.
+ * """
+ * return self.c_env.isInitialized() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->isInitialized();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 212, __pyx_L1_error)
+ }
+ __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 212, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":203
+ *
+ *
+ * def is_initialized(self) : # <<<<<<<<<<<<<<
+ * """
+ * Checks and returns if the computation environment is initialized or not.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.is_initialized", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":215
+ *
+ *
+ * def restart_env(self) : # <<<<<<<<<<<<<<
+ * """
+ * Restarts the environment variable. All data related to it will be deleted.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_7restart_env(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_6restart_env[] = "\n\t\t\tRestarts the environment variable. All data related to it will be deleted. \n\t \n\t\t\t.. warning:: This function deletes all graphs, computations and more, so make sure you no longer need your environment. \n\t\t\t.. note:: You can now delete and add some graphs after initialization, so you can avoid this function. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_7restart_env(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("restart_env (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_6restart_env(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_6restart_env(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("restart_env", 0);
+
+ /* "gedlibpy.pyx":222
+ * .. note:: You can now delete and add some graphs after initialization, so you can avoid this function.
+ * """
+ * self.c_env.restartEnv() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ try {
+ __pyx_v_self->c_env->restartEnv();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 222, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":215
+ *
+ *
+ * def restart_env(self) : # <<<<<<<<<<<<<<
+ * """
+ * Restarts the environment variable. All data related to it will be deleted.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.restart_env", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":225
+ *
+ *
+ * def load_GXL_graphs(self, path_folder, path_XML, node_type, edge_type) : # <<<<<<<<<<<<<<
+ * """
+ * Loads some GXL graphs, located in the same folder and listed in the XML file, into the environment.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_9load_GXL_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_8load_GXL_graphs[] = "\n\t\t\tLoads some GXL graphs, located in the same folder and listed in the XML file, into the environment. \n\t\t\t\n\t\t\t:param path_folder: The path of the folder which contains the GXL graphs\n\t\t\t:param path_XML: The path of the XML file which indicates which graphs you want to load\n\t\t\t:param node_type: Select whether nodes are labeled or unlabeled\n\t\t\t:param edge_type: Select whether edges are labeled or unlabeled\n\t\t\t:type path_folder: string\n\t\t\t:type path_XML: string\n\t\t\t:type node_type: bool\n\t\t\t:type edge_type: bool\n\n\t \n\t\t\t.. note:: You can call this function multiple times if you want, but not after an init call. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_9load_GXL_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_path_folder = 0;
+ PyObject *__pyx_v_path_XML = 0;
+ PyObject *__pyx_v_node_type = 0;
+ PyObject *__pyx_v_edge_type = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("load_GXL_graphs (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path_folder,&__pyx_n_s_path_XML,&__pyx_n_s_node_type,&__pyx_n_s_edge_type,0};
+ PyObject* values[4] = {0,0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_path_folder)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_path_XML)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("load_GXL_graphs", 1, 4, 4, 1); __PYX_ERR(0, 225, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_type)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("load_GXL_graphs", 1, 4, 4, 2); __PYX_ERR(0, 225, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edge_type)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("load_GXL_graphs", 1, 4, 4, 3); __PYX_ERR(0, 225, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "load_GXL_graphs") < 0)) __PYX_ERR(0, 225, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ }
+ __pyx_v_path_folder = values[0];
+ __pyx_v_path_XML = values[1];
+ __pyx_v_node_type = values[2];
+ __pyx_v_edge_type = values[3];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("load_GXL_graphs", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 225, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.load_GXL_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_8load_GXL_graphs(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_path_folder, __pyx_v_path_XML, __pyx_v_node_type, __pyx_v_edge_type);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_8load_GXL_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_path_folder, PyObject *__pyx_v_path_XML, PyObject *__pyx_v_node_type, PyObject *__pyx_v_edge_type) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::string __pyx_t_4;
+ std::string __pyx_t_5;
+ bool __pyx_t_6;
+ bool __pyx_t_7;
+ __Pyx_RefNannySetupContext("load_GXL_graphs", 0);
+
+ /* "gedlibpy.pyx":241
+ * .. note:: You can call this function multiple times if you want, but not after an init call.
+ * """
+ * self.c_env.loadGXLGraph(path_folder.encode('utf-8'), path_XML.encode('utf-8'), node_type, edge_type) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_path_folder, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 241, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 241, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_string_from_py_std__in_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 241, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_path_XML, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 241, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 241, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_5 = __pyx_convert_string_from_py_std__in_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 241, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_v_node_type); if (unlikely((__pyx_t_6 == ((bool)-1)) && PyErr_Occurred())) __PYX_ERR(0, 241, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_v_edge_type); if (unlikely((__pyx_t_7 == ((bool)-1)) && PyErr_Occurred())) __PYX_ERR(0, 241, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->loadGXLGraph(__pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 241, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":225
+ *
+ *
+ * def load_GXL_graphs(self, path_folder, path_XML, node_type, edge_type) : # <<<<<<<<<<<<<<
+ * """
+ * Loads some GXL graphs, located in the same folder and listed in the XML file, into the environment.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.load_GXL_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
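+/* A minimal usage sketch of the Python-level loading API wrapped above, mirroring the
+ * docstrings of load_GXL_graphs() and graph_ids(); the folder and XML paths are
+ * hypothetical placeholders:
+ *
+ *     env = gedlibpy.GEDEnv()
+ *     env.load_GXL_graphs('data/some_dataset/', 'data/some_dataset/collection.xml', True, True)
+ *     first_id, last_id = env.graph_ids()   # pair of the first and last graph IDs
+ */
+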
+/* "gedlibpy.pyx":244
+ *
+ *
+ * def graph_ids(self) : # <<<<<<<<<<<<<<
+ * """
+ * Searches the first and last IDs of the loaded graphs in the environment.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_11graph_ids(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_10graph_ids[] = "\n\t\t\tSearches the first and last IDs of the loaded graphs in the environment. \n\t \n\t\t\t:return: The pair of the first and the last graph IDs\n\t\t\t:rtype: tuple(size_t, size_t)\n\t\t\t\n\t\t\t.. note:: Prefer this function if you have huge structures with lots of graphs. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_11graph_ids(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("graph_ids (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_10graph_ids(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_10graph_ids(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ std::pair __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("graph_ids", 0);
+
+ /* "gedlibpy.pyx":253
+ * .. note:: Prefer this function if you have huge structures with lots of graphs.
+ * """
+ * return self.c_env.getGraphIds() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getGraphIds();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 253, __pyx_L1_error)
+ }
+ __pyx_t_2 = __pyx_convert_pair_to_py_size_t____size_t(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 253, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":244
+ *
+ *
+ * def graph_ids(self) : # <<<<<<<<<<<<<<
+ * """
+ * Searches the first and last IDs of the loaded graphs in the environment.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.graph_ids", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":256
+ *
+ *
+ * def get_all_graph_ids(self) : # <<<<<<<<<<<<<<
+ * """
+ * Searches all the IDs of the loaded graphs in the environment.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_13get_all_graph_ids(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_12get_all_graph_ids[] = "\n\t\t\tSearches all the IDs of the loaded graphs in the environment. \n\t \n\t\t\t:return: The list of all graph IDs \n\t\t\t:rtype: list[size_t]\n\t\t\t\n\t\t\t.. note:: The last ID is equal to (number of graphs - 1). The order corresponds to the loading order. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_13get_all_graph_ids(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_all_graph_ids (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_12get_all_graph_ids(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_12get_all_graph_ids(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ std::vector __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_all_graph_ids", 0);
+
+ /* "gedlibpy.pyx":265
+ * .. note:: The last ID is equal to (number of graphs - 1). The order corresponds to the loading order.
+ * """
+ * return self.c_env.getAllGraphIds() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getAllGraphIds();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 265, __pyx_L1_error)
+ }
+ __pyx_t_2 = __pyx_convert_vector_to_py_size_t(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":256
+ *
+ *
+ * def get_all_graph_ids(self) : # <<<<<<<<<<<<<<
+ * """
+ * Searches all the IDs of the loaded graphs in the environment.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_all_graph_ids", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":268
+ *
+ *
+ * def get_graph_class(self, id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the class of a graph with its ID.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_15get_graph_class(PyObject *__pyx_v_self, PyObject *__pyx_v_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_14get_graph_class[] = "\n\t\t\tReturns the class of a graph with its ID.\n\t\n\t\t\t:param id: The ID of the wanted graph\n\t\t\t:type id: size_t\n\t\t\t:return: The class of the graph which corresponds to the ID\n\t\t\t:rtype: string\n\t\t\t\n\t\t\t.. seealso:: get_graph_name()\n\t\t\t.. note:: An empty string can be a class. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_15get_graph_class(PyObject *__pyx_v_self, PyObject *__pyx_v_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_class (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_14get_graph_class(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_14get_graph_class(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ std::string __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_class", 0);
+
+ /* "gedlibpy.pyx":280
+ * .. note:: An empty string can be a class.
+ * """
+ * return self.c_env.getGraphClass(id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 280, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->getGraphClass(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 280, __pyx_L1_error)
+ }
+ __pyx_t_3 = __pyx_convert_PyBytes_string_to_py_std__in_string(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 280, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":268
+ *
+ *
+ * def get_graph_class(self, id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the class of a graph with its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_class", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":283
+ *
+ *
+ * def get_graph_name(self, id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the name of a graph with its ID.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_17get_graph_name(PyObject *__pyx_v_self, PyObject *__pyx_v_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_16get_graph_name[] = "\n\t\t\tReturns the name of a graph with its ID. \n\t\n\t\t\t:param id: The ID of the wanted graph\n\t\t\t:type id: size_t\n\t\t\t:return: The name of the graph which corresponds to the ID\n\t\t\t:rtype: string\n\t\t\t\n\t\t\t.. seealso:: get_graph_class()\n\t\t\t.. note:: An empty string can be a name. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_17get_graph_name(PyObject *__pyx_v_self, PyObject *__pyx_v_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_name (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_16get_graph_name(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_16get_graph_name(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ std::string __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_name", 0);
+
+ /* "gedlibpy.pyx":295
+ * .. note:: An empty string can be a name.
+ * """
+ * return self.c_env.getGraphName(id).decode('utf-8') # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 295, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->getGraphName(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 295, __pyx_L1_error)
+ }
+ __pyx_t_3 = __Pyx_decode_cpp_string(__pyx_t_2, 0, PY_SSIZE_T_MAX, NULL, NULL, PyUnicode_DecodeUTF8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":283
+ *
+ *
+ * def get_graph_name(self, id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the name of a graph with its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_name", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":298
+ *
+ *
+ * def add_graph(self, name="", classe="") : # <<<<<<<<<<<<<<
+ * """
+ * Adds an empty graph to the environment, with its name and its class. Nodes and edges will be added later.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_19add_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_18add_graph[] = "\n\t\t\tAdds an empty graph to the environment, with its name and its class. Nodes and edges will be added later. \n\t\n\t\t\t:param name: The name of the new graph, an empty string by default\n\t\t\t:param classe: The class of the new graph, an empty string by default\n\t\t\t:type name: string\n\t\t\t:type classe: string\n\t\t\t:return: The ID of the newly added graph\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. seealso:: add_node(), add_edge(), add_symmetrical_edge()\n\t\t\t.. note:: You can call this function without parameters. You can also use this function after initialization; call init() after you have finished your modifications. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_19add_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_name = 0;
+ PyObject *__pyx_v_classe = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("add_graph (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,&__pyx_n_s_classe,0};
+ PyObject* values[2] = {0,0};
+ values[0] = ((PyObject *)__pyx_kp_u_);
+ values[1] = ((PyObject *)__pyx_kp_u_);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name);
+ if (value) { values[0] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_classe);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add_graph") < 0)) __PYX_ERR(0, 298, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_name = values[0];
+ __pyx_v_classe = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("add_graph", 0, 0, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 298, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_18add_graph(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_name, __pyx_v_classe);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_18add_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_name, PyObject *__pyx_v_classe) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::string __pyx_t_4;
+ std::string __pyx_t_5;
+ size_t __pyx_t_6;
+ __Pyx_RefNannySetupContext("add_graph", 0);
+
+ /* "gedlibpy.pyx":312
+ * .. note:: You can call this function without parameters. You can also use this function after initialization; call init() after you have finished your modifications.
+ * """
+ * return self.c_env.addGraph(name.encode('utf-8'), classe.encode('utf-8')) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_name, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_string_from_py_std__in_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_classe, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_5 = __pyx_convert_string_from_py_std__in_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_6 = __pyx_v_self->c_env->addGraph(__pyx_t_4, __pyx_t_5);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 312, __pyx_L1_error)
+ }
+ __pyx_t_1 = __Pyx_PyInt_FromSize_t(__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 312, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":298
+ *
+ *
+ * def add_graph(self, name="", classe="") : # <<<<<<<<<<<<<<
+ * """
+ * Adds an empty graph to the environment, with its name and its class. Nodes and edges will be added later.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":315
+ *
+ *
+ * def add_node(self, graph_id, node_id, node_label): # <<<<<<<<<<<<<<
+ * """
+ * Adds a node to a graph selected by its ID. An ID and a label for the node are required.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_21add_node(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_20add_node[] = "\n\t\t\tAdds a node to a graph selected by its ID. An ID and a label for the node are required. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:param node_id: The ID of the new node\n\t\t\t:param node_label: The label of the new node\n\t\t\t:type graph_id: size_t\n\t\t\t:type node_id: string\n\t\t\t:type node_label: dict{string : string}\n\t\t\t\n\t\t\t.. seealso:: add_graph(), add_edge(), add_symmetrical_edge()\n\t\t\t.. note:: You can also use this function after initialization, but only on a newly added graph. Call init() after you have finished your modifications. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_21add_node(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_graph_id = 0;
+ PyObject *__pyx_v_node_id = 0;
+ PyObject *__pyx_v_node_label = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("add_node (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_graph_id,&__pyx_n_s_node_id,&__pyx_n_s_node_label,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_id)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_id)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_node", 1, 3, 3, 1); __PYX_ERR(0, 315, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_label)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_node", 1, 3, 3, 2); __PYX_ERR(0, 315, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add_node") < 0)) __PYX_ERR(0, 315, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_graph_id = values[0];
+ __pyx_v_node_id = values[1];
+ __pyx_v_node_label = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("add_node", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 315, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_node", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_20add_node(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_graph_id, __pyx_v_node_id, __pyx_v_node_label);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_20add_node(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_node_id, PyObject *__pyx_v_node_label) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ std::string __pyx_t_5;
+ std::map __pyx_t_6;
+ __Pyx_RefNannySetupContext("add_node", 0);
+
+ /* "gedlibpy.pyx":329
+ * .. note:: You can also use this function after initialization, but only on a newly added graph. Call init() after you have finished your modifications.
+ * """
+ * self.c_env.addNode(graph_id, node_id.encode('utf-8'), encode_your_map(node_label)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 329, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_node_id, __pyx_n_s_encode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 329, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 329, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_5 = __pyx_convert_string_from_py_std__in_string(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 329, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 329, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_v_node_label) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_node_label);
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 329, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 329, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ try {
+ __pyx_v_self->c_env->addNode(__pyx_t_1, __pyx_t_5, __pyx_t_6);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 329, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":315
+ *
+ *
+ * def add_node(self, graph_id, node_id, node_label): # <<<<<<<<<<<<<<
+ * """
+ * Adds a node to a graph selected by its ID. An ID and a label for the node are required.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_node", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":332
+ *
+ *
+ * def add_edge(self, graph_id, tail, head, edge_label, ignore_duplicates=True) : # <<<<<<<<<<<<<<
+ * """
+ * Adds an edge on a graph selected by its ID.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_23add_edge(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_22add_edge[] = "\n\t\t\tAdds an edge on a graph selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:param tail: The ID of the tail node for the new edge\n\t\t\t:param head: The ID of the head node for the new edge\n\t\t\t:param edge_label: The label of the new edge\n\t\t\t:param ignore_duplicates: If True, duplicate edges are ignored; otherwise an error is raised if an existing edge is added. True by default\n\t\t\t:type graph_id: size_t\n\t\t\t:type tail: string\n\t\t\t:type head: string\n\t\t\t:type edge_label: dict{string : string}\n\t\t\t:type ignore_duplicates: bool\n\t\t\t\n\t\t\t.. seealso:: add_graph(), add_node(), add_symmetrical_edge()\n\t\t\t.. note:: You can also use this function after initialization, but only on a newly added graph. Call init() after you have finished your modifications. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_23add_edge(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_graph_id = 0;
+ PyObject *__pyx_v_tail = 0;
+ PyObject *__pyx_v_head = 0;
+ PyObject *__pyx_v_edge_label = 0;
+ PyObject *__pyx_v_ignore_duplicates = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("add_edge (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_graph_id,&__pyx_n_s_tail,&__pyx_n_s_head,&__pyx_n_s_edge_label,&__pyx_n_s_ignore_duplicates,0};
+ PyObject* values[5] = {0,0,0,0,0};
+ values[4] = ((PyObject *)Py_True);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_id)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_tail)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_edge", 0, 4, 5, 1); __PYX_ERR(0, 332, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_head)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_edge", 0, 4, 5, 2); __PYX_ERR(0, 332, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edge_label)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_edge", 0, 4, 5, 3); __PYX_ERR(0, 332, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_ignore_duplicates);
+ if (value) { values[4] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add_edge") < 0)) __PYX_ERR(0, 332, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_graph_id = values[0];
+ __pyx_v_tail = values[1];
+ __pyx_v_head = values[2];
+ __pyx_v_edge_label = values[3];
+ __pyx_v_ignore_duplicates = values[4];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("add_edge", 0, 4, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 332, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_edge", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_22add_edge(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_graph_id, __pyx_v_tail, __pyx_v_head, __pyx_v_edge_label, __pyx_v_ignore_duplicates);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_22add_edge(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_tail, PyObject *__pyx_v_head, PyObject *__pyx_v_edge_label, PyObject *__pyx_v_ignore_duplicates) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ std::string __pyx_t_5;
+ std::string __pyx_t_6;
+ std::map __pyx_t_7;
+ bool __pyx_t_8;
+ __Pyx_RefNannySetupContext("add_edge", 0);
+
+ /* "gedlibpy.pyx":350
+ * .. note:: You can also use this function after initialization, but only on a newly added graph. Call init() after you have finished your modifications.
+ * """
+ * self.c_env.addEdge(graph_id, tail.encode('utf-8'), head.encode('utf-8'), encode_your_map(edge_label), ignore_duplicates) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 350, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_tail, __pyx_n_s_encode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_5 = __pyx_convert_string_from_py_std__in_string(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_head, __pyx_n_s_encode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __pyx_convert_string_from_py_std__in_string(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_v_edge_label) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_edge_label);
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_7 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 350, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_v_ignore_duplicates); if (unlikely((__pyx_t_8 == ((bool)-1)) && PyErr_Occurred())) __PYX_ERR(0, 350, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->addEdge(__pyx_t_1, __pyx_t_5, __pyx_t_6, __pyx_t_7, __pyx_t_8);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 350, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":332
+ *
+ *
+ * def add_edge(self, graph_id, tail, head, edge_label, ignore_duplicates=True) : # <<<<<<<<<<<<<<
+ * """
+ * Adds an edge on a graph selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_edge", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
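+/* A minimal usage sketch of the graph-building API wrapped above, following the docstrings
+ * of add_graph(), add_node() and add_edge(); the graph name and the label dictionaries are
+ * hypothetical placeholders:
+ *
+ *     env = gedlibpy.GEDEnv()
+ *     gid = env.add_graph('molecule_0')                  # returns the new graph's ID (size_t)
+ *     env.add_node(gid, 'n0', {'chem': 'C'})             # node labels: dict{string : string}
+ *     env.add_node(gid, 'n1', {'chem': 'O'})
+ *     env.add_edge(gid, 'n0', 'n1', {'bond_type': '1'})  # ignore_duplicates=True by default
+ */
+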
+/* "gedlibpy.pyx":353
+ *
+ *
+ * def add_symmetrical_edge(self, graph_id, tail, head, edge_label) : # <<<<<<<<<<<<<<
+ * """
+ * Adds a symmetrical edge on a graph selected by its ID.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_25add_symmetrical_edge(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_24add_symmetrical_edge[] = "\n\t\t\tAdds a symmetrical edge on a graph selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:param tail: The ID of the tail node for the new edge\n\t\t\t:param head: The ID of the head node for the new edge\n\t\t\t:param edge_label: The label of the new edge\n\t\t\t:type graph_id: size_t\n\t\t\t:type tail: string\n\t\t\t:type head: string\n\t\t\t:type edge_label: dict{string : string}\n\t\t\t\n\t\t\t.. seealso:: add_graph(), add_node(), add_edge()\n\t\t\t.. note:: You can also use this function after initialization, but only on a newly added graph. Call init() after you have finished your modifications. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_25add_symmetrical_edge(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_graph_id = 0;
+ PyObject *__pyx_v_tail = 0;
+ PyObject *__pyx_v_head = 0;
+ PyObject *__pyx_v_edge_label = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("add_symmetrical_edge (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_graph_id,&__pyx_n_s_tail,&__pyx_n_s_head,&__pyx_n_s_edge_label,0};
+ PyObject* values[4] = {0,0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_id)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_tail)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_symmetrical_edge", 1, 4, 4, 1); __PYX_ERR(0, 353, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_head)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_symmetrical_edge", 1, 4, 4, 2); __PYX_ERR(0, 353, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edge_label)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_symmetrical_edge", 1, 4, 4, 3); __PYX_ERR(0, 353, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add_symmetrical_edge") < 0)) __PYX_ERR(0, 353, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ }
+ __pyx_v_graph_id = values[0];
+ __pyx_v_tail = values[1];
+ __pyx_v_head = values[2];
+ __pyx_v_edge_label = values[3];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("add_symmetrical_edge", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 353, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_symmetrical_edge", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_24add_symmetrical_edge(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_graph_id, __pyx_v_tail, __pyx_v_head, __pyx_v_edge_label);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_24add_symmetrical_edge(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_tail, PyObject *__pyx_v_head, PyObject *__pyx_v_edge_label) {
+ PyObject *__pyx_v_tailB = NULL;
+ PyObject *__pyx_v_headB = NULL;
+ PyObject *__pyx_v_edgeLabelB = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ size_t __pyx_t_4;
+ std::string __pyx_t_5;
+ std::string __pyx_t_6;
+ std::map __pyx_t_7;
+ __Pyx_RefNannySetupContext("add_symmetrical_edge", 0);
+
+ /* "gedlibpy.pyx":369
+ * .. note:: You can also use this function after initialization, but only on a newly added graph. Call init() after you have finished your modifications.
+ * """
+ * tailB = tail.encode('utf-8') # <<<<<<<<<<<<<<
+ * headB = head.encode('utf-8')
+ * edgeLabelB = encode_your_map(edge_label)
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_tail, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 369, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 369, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_tailB = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":370
+ * """
+ * tailB = tail.encode('utf-8')
+ * headB = head.encode('utf-8') # <<<<<<<<<<<<<<
+ * edgeLabelB = encode_your_map(edge_label)
+ * self.c_env.addEdge(graph_id, tailB, headB, edgeLabelB, True)
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_head, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 370, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 370, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_headB = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":371
+ * tailB = tail.encode('utf-8')
+ * headB = head.encode('utf-8')
+ * edgeLabelB = encode_your_map(edge_label) # <<<<<<<<<<<<<<
+ * self.c_env.addEdge(graph_id, tailB, headB, edgeLabelB, True)
+ * self.c_env.addEdge(graph_id, headB, tailB, edgeLabelB, True)
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 371, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_edge_label) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_edge_label);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 371, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_edgeLabelB = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":372
+ * headB = head.encode('utf-8')
+ * edgeLabelB = encode_your_map(edge_label)
+ * self.c_env.addEdge(graph_id, tailB, headB, edgeLabelB, True) # <<<<<<<<<<<<<<
+ * self.c_env.addEdge(graph_id, headB, tailB, edgeLabelB, True)
+ *
+ */
+ __pyx_t_4 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_4 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 372, __pyx_L1_error)
+ __pyx_t_5 = __pyx_convert_string_from_py_std__in_string(__pyx_v_tailB); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 372, __pyx_L1_error)
+ __pyx_t_6 = __pyx_convert_string_from_py_std__in_string(__pyx_v_headB); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 372, __pyx_L1_error)
+ __pyx_t_7 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_v_edgeLabelB); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 372, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->addEdge(__pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7, 1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 372, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":373
+ * edgeLabelB = encode_your_map(edge_label)
+ * self.c_env.addEdge(graph_id, tailB, headB, edgeLabelB, True)
+ * self.c_env.addEdge(graph_id, headB, tailB, edgeLabelB, True) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_4 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_4 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 373, __pyx_L1_error)
+ __pyx_t_6 = __pyx_convert_string_from_py_std__in_string(__pyx_v_headB); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 373, __pyx_L1_error)
+ __pyx_t_5 = __pyx_convert_string_from_py_std__in_string(__pyx_v_tailB); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 373, __pyx_L1_error)
+ __pyx_t_7 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_v_edgeLabelB); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 373, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->addEdge(__pyx_t_4, __pyx_t_6, __pyx_t_5, __pyx_t_7, 1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 373, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":353
+ *
+ *
+ * def add_symmetrical_edge(self, graph_id, tail, head, edge_label) : # <<<<<<<<<<<<<<
+ * """
+ * Adds a symmetrical edge on a graph selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_symmetrical_edge", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_tailB);
+ __Pyx_XDECREF(__pyx_v_headB);
+ __Pyx_XDECREF(__pyx_v_edgeLabelB);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":376
+ *
+ *
+ * def clear_graph(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Deletes a graph, selected by its ID, from the environment.
+ */
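+/* Generated wrapper/impl pair for clear_graph(): the wrapper only casts `self`
+ * to the GEDEnv struct; the impl converts the Python graph_id to size_t with
+ * __Pyx_PyInt_As_size_t, calls c_env->clearGraph(), and translates any C++
+ * exception into a Python exception via __Pyx_CppExn2PyErr. */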
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_27clear_graph(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_26clear_graph[] = "\n\t\t\tDeletes a graph, selected by its ID, from the environment.\n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t\n\t\t\t.. note:: Call init() after you have finished your modifications. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_27clear_graph(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("clear_graph (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_26clear_graph(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_26clear_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ __Pyx_RefNannySetupContext("clear_graph", 0);
+
+ /* "gedlibpy.pyx":385
+ * .. note:: Call init() after you have finished your modifications.
+ * """
+ * self.c_env.clearGraph(graph_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 385, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->clearGraph(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 385, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":376
+ *
+ *
+ * def clear_graph(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Deletes a graph, selected by its ID, from the environment.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.clear_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":388
+ *
+ *
+ * def get_graph_internal_id(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the internal ID of a graph, selected by its ID.
+ */
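+/* The scalar getters below (get_graph_internal_id, get_graph_num_nodes,
+ * get_graph_num_edges) all follow the same generated shape: convert the
+ * Python graph_id to size_t, call the C++ getter inside a try/catch that
+ * re-raises C++ exceptions as Python ones, then box the size_t result back
+ * into a Python int with __Pyx_PyInt_FromSize_t. */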
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_29get_graph_internal_id(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_28get_graph_internal_id[] = "\n\t\t\tSearches and returns the internal ID of a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The internal ID of the selected graph\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. seealso:: get_graph_num_nodes(), get_graph_num_edges(), get_original_node_ids(), get_graph_node_labels(), get_graph_edges(), get_graph_adjacence_matrix()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_29get_graph_internal_id(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_internal_id (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_28get_graph_internal_id(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_28get_graph_internal_id(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_internal_id", 0);
+
+ /* "gedlibpy.pyx":400
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return self.c_env.getGraphInternalId(graph_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 400, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->getGraphInternalId(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 400, __pyx_L1_error)
+ }
+ __pyx_t_3 = __Pyx_PyInt_FromSize_t(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 400, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":388
+ *
+ *
+ * def get_graph_internal_id(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the internal ID of a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_internal_id", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":403
+ *
+ *
+ * def get_graph_num_nodes(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the number of nodes on a graph, selected by its ID.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_31get_graph_num_nodes(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_30get_graph_num_nodes[] = "\n\t\t\tSearches and returns the number of nodes on a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The number of nodes on the selected graph\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. seealso:: get_graph_internal_id(), get_graph_num_edges(), get_original_node_ids(), get_graph_node_labels(), get_graph_edges(), get_graph_adjacence_matrix()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_31get_graph_num_nodes(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_num_nodes (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_30get_graph_num_nodes(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_30get_graph_num_nodes(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_num_nodes", 0);
+
+ /* "gedlibpy.pyx":415
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return self.c_env.getGraphNumNodes(graph_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 415, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->getGraphNumNodes(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 415, __pyx_L1_error)
+ }
+ __pyx_t_3 = __Pyx_PyInt_FromSize_t(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 415, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":403
+ *
+ *
+ * def get_graph_num_nodes(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the number of nodes on a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_num_nodes", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":418
+ *
+ *
+ * def get_graph_num_edges(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the number of edges on a graph, selected by its ID.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_33get_graph_num_edges(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_32get_graph_num_edges[] = "\n\t\t\tSearches and returns the number of edges on a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The number of edges on the selected graph\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_original_node_ids(), get_graph_node_labels(), get_graph_edges(), get_graph_adjacence_matrix()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_33get_graph_num_edges(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_num_edges (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_32get_graph_num_edges(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_32get_graph_num_edges(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_num_edges", 0);
+
+ /* "gedlibpy.pyx":430
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return self.c_env.getGraphNumEdges(graph_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 430, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->getGraphNumEdges(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 430, __pyx_L1_error)
+ }
+ __pyx_t_3 = __Pyx_PyInt_FromSize_t(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 430, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":418
+ *
+ *
+ * def get_graph_num_edges(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the number of edges on a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_num_edges", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":433
+ *
+ *
+ * def get_original_node_ids(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns all the IDs of nodes on a graph, selected by its ID.
+ */
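+/* The list comprehension in the .pyx source is compiled into an explicit loop
+ * over the std::vector<std::string> returned by getGraphOriginalNodeIds():
+ * each id is decoded to a Python str with __Pyx_decode_cpp_string (UTF-8) and
+ * appended to the result list with __Pyx_ListComp_Append. */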
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_35get_original_node_ids(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_34get_original_node_ids[] = "\n\t\t\tSearches and returns all the IDs of nodes on a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The list of node IDs on the selected graph\n\t\t\t:rtype: list[string]\n\t\t\t\n\t\t\t.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_graph_node_labels(), get_graph_edges(), get_graph_adjacence_matrix()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_35get_original_node_ids(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_original_node_ids (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_34get_original_node_ids(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_34get_original_node_ids(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ std::string __pyx_8genexpr3__pyx_v_gid;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ size_t __pyx_t_2;
+ std::vector __pyx_t_3;
+ std::vector ::iterator __pyx_t_4;
+ std::vector *__pyx_t_5;
+ std::string __pyx_t_6;
+ PyObject *__pyx_t_7 = NULL;
+ __Pyx_RefNannySetupContext("get_original_node_ids", 0);
+
+ /* "gedlibpy.pyx":445
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return [gid.decode('utf-8') for gid in self.c_env.getGraphOriginalNodeIds(graph_id)] # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 445, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 445, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getGraphOriginalNodeIds(__pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 445, __pyx_L1_error)
+ }
+ __pyx_t_5 = &__pyx_t_3;
+ __pyx_t_4 = __pyx_t_5->begin();
+ for (;;) {
+ if (!(__pyx_t_4 != __pyx_t_5->end())) break;
+ __pyx_t_6 = *__pyx_t_4;
+ ++__pyx_t_4;
+ __pyx_8genexpr3__pyx_v_gid = __pyx_t_6;
+ __pyx_t_7 = __Pyx_decode_cpp_string(__pyx_8genexpr3__pyx_v_gid, 0, PY_SSIZE_T_MAX, NULL, NULL, PyUnicode_DecodeUTF8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 445, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_7))) __PYX_ERR(0, 445, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":433
+ *
+ *
+ * def get_original_node_ids(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns all the IDs of nodes on a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_original_node_ids", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":448
+ *
+ *
+ * def get_graph_node_labels(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns all the labels of nodes on a graph, selected by its ID.
+ */
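+/* For get_graph_node_labels, each std::map<std::string, std::string> returned
+ * by the C++ side is first converted to a Python dict and then passed through
+ * the module-level decode_your_map() helper, so the resulting list holds
+ * decoded dicts rather than raw byte mappings. */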
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_37get_graph_node_labels(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_36get_graph_node_labels[] = "\n\t\t\tSearches and returns all the labels of nodes on a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The list of nodes' labels on the selected graph\n\t\t\t:rtype: list[dict{string : string}]\n\t\t\t\n\t\t\t.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_original_node_ids(), get_graph_edges(), get_graph_adjacence_matrix()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_37get_graph_node_labels(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_node_labels (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_36get_graph_node_labels(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_36get_graph_node_labels(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ std::map __pyx_8genexpr4__pyx_v_node_label;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ size_t __pyx_t_2;
+ std::vector > __pyx_t_3;
+ std::vector > ::iterator __pyx_t_4;
+ std::vector > *__pyx_t_5;
+ std::map __pyx_t_6;
+ PyObject *__pyx_t_7 = NULL;
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ PyObject *__pyx_t_10 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_node_labels", 0);
+
+ /* "gedlibpy.pyx":460
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return [decode_your_map(node_label) for node_label in self.c_env.getGraphNodeLabels(graph_id)] # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 460, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 460, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getGraphNodeLabels(__pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 460, __pyx_L1_error)
+ }
+ __pyx_t_5 = &__pyx_t_3;
+ __pyx_t_4 = __pyx_t_5->begin();
+ for (;;) {
+ if (!(__pyx_t_4 != __pyx_t_5->end())) break;
+ __pyx_t_6 = *__pyx_t_4;
+ ++__pyx_t_4;
+ __pyx_8genexpr4__pyx_v_node_label = __pyx_t_6;
+ __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_decode_your_map); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 460, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_9 = __pyx_convert_map_to_py_std_3a__3a_string____std_3a__3a_string(__pyx_8genexpr4__pyx_v_node_label); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 460, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_10 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_10)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_10);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ }
+ }
+ __pyx_t_7 = (__pyx_t_10) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_10, __pyx_t_9) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 460, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_7))) __PYX_ERR(0, 460, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":448
+ *
+ *
+ * def get_graph_node_labels(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns all the labels of nodes on a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_node_labels", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":463
+ *
+ *
+ * def get_graph_edges(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns all the edges on a graph, selected by its ID.
+ */
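+/* For get_graph_edges, the whole std::map<std::pair<size_t, size_t>,
+ * std::map<std::string, std::string>> returned by getGraphEdges() is converted
+ * to a Python dict in one step and then handed to the module-level
+ * decode_graph_edges() helper. */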
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_39get_graph_edges(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_38get_graph_edges[] = "\n\t\t\tSearches and returns all the edges on a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The edges on the selected graph\n\t\t\t:rtype: dict{tuple(size_t, size_t) : dict{string : string}}\n\t\t\t\n\t\t\t.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_original_node_ids(), get_graph_node_labels(), get_graph_adjacence_matrix()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_39get_graph_edges(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_edges (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_38get_graph_edges(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_38get_graph_edges(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ size_t __pyx_t_3;
+ std::map ,std::map > __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_edges", 0);
+
+ /* "gedlibpy.pyx":475
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return decode_graph_edges(self.c_env.getGraphEdges(graph_id)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_decode_graph_edges); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 475, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_3 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 475, __pyx_L1_error)
+ try {
+ __pyx_t_4 = __pyx_v_self->c_env->getGraphEdges(__pyx_t_3);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 475, __pyx_L1_error)
+ }
+ __pyx_t_5 = __pyx_convert_map_to_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e_______std_3a__3a_map_3c_std_3a__3a_string_2c_std_3a__3a_string_3e___(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 475, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 475, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":463
+ *
+ *
+ * def get_graph_edges(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns all the edges on a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_edges", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":478
+ *
+ *
+ * def get_graph_adjacence_matrix(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the adjacency matrix of a graph, selected by its ID.
+ */
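+/* For get_graph_adjacence_matrix, the std::vector<std::vector<size_t>>
+ * returned by the C++ side is converted directly into a Python list of lists
+ * of ints; no string decoding is needed here. */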
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_41get_graph_adjacence_matrix(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_40get_graph_adjacence_matrix[] = "\n\t\t\tSearches and returns the adjacency matrix of a graph, selected by its ID. \n\t\n\t\t\t:param graph_id: The ID of the wanted graph\n\t\t\t:type graph_id: size_t\n\t\t\t:return: The adjacency matrix of the selected graph\n\t\t\t:rtype: list[list[size_t]]\n\t\t\t\n\t\t\t.. seealso:: get_graph_internal_id(), get_graph_num_nodes(), get_graph_num_edges(), get_original_node_ids(), get_graph_node_labels(), get_graph_edges()\n\t\t\t.. note:: These functions allow you to collect all the graph's information.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_41get_graph_adjacence_matrix(PyObject *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_graph_adjacence_matrix (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_40get_graph_adjacence_matrix(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_graph_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_40get_graph_adjacence_matrix(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ std::vector > __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("get_graph_adjacence_matrix", 0);
+
+ /* "gedlibpy.pyx":490
+ * .. note:: These functions allow you to collect all the graph's information.
+ * """
+ * return self.c_env.getGraphAdjacenceMatrix(graph_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_graph_id); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 490, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->getGraphAdjacenceMatrix(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 490, __pyx_L1_error)
+ }
+ __pyx_t_3 = __pyx_convert_vector_to_py_std_3a__3a_vector_3c_size_t_3e___(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 490, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":478
+ *
+ *
+ * def get_graph_adjacence_matrix(self, graph_id) : # <<<<<<<<<<<<<<
+ * """
+ * Searches and returns the adjacency matrix of a graph, selected by its ID.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_graph_adjacence_matrix", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":493
+ *
+ *
+ * def set_edit_cost(self, edit_cost, edit_cost_constant = []) : # <<<<<<<<<<<<<<
+ * """
+ * Sets an edit cost function to the environment, if it exists.
+ */
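+/* set_edit_cost takes an optional edit_cost_constant list whose compiled
+ * default (__pyx_k__2) is the shared object built from the `= []` default in
+ * the .pyx signature. The edit cost name is checked against
+ * list_of_edit_cost_options, encoded as UTF-8 into a std::string, and the
+ * constants are converted to std::vector<double> before calling setEditCost().
+ * A minimal usage sketch from Python, assuming "CONSTANT" is one of the
+ * entries in list_of_edit_cost_options: env.set_edit_cost("CONSTANT"). */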
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_43set_edit_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_42set_edit_cost[] = "\n\t\t\tSets an edit cost function to the environment, if it exists. \n\t\n\t\t\t:param edit_cost: The name of the edit cost function\n\t\t\t:type edit_cost: string\n\t\t\t:param edit_cost_constant: The parameters to pass to the edit cost function, empty by default\n\t\t\t:type edit_cost_constant: list\n\t\t\t\n\t\t\t.. seealso:: list_of_edit_cost_options\n\t\t\t.. note:: Make sure the edit cost function exists with list_of_edit_cost_options; an error is raised otherwise. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_43set_edit_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_edit_cost = 0;
+ PyObject *__pyx_v_edit_cost_constant = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("set_edit_cost (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_edit_cost,&__pyx_n_s_edit_cost_constant,0};
+ PyObject* values[2] = {0,0};
+ values[1] = __pyx_k__2;
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edit_cost)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edit_cost_constant);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "set_edit_cost") < 0)) __PYX_ERR(0, 493, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_edit_cost = values[0];
+ __pyx_v_edit_cost_constant = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("set_edit_cost", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 493, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.set_edit_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_42set_edit_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_edit_cost, __pyx_v_edit_cost_constant);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_42set_edit_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_edit_cost_constant) {
+ PyObject *__pyx_v_edit_cost_b = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ int __pyx_t_2;
+ int __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ std::string __pyx_t_6;
+ std::vector __pyx_t_7;
+ __Pyx_RefNannySetupContext("set_edit_cost", 0);
+
+ /* "gedlibpy.pyx":505
+ * .. note:: Make sure the edit cost function exists with list_of_edit_cost_options; an error is raised otherwise.
+ * """
+ * if edit_cost in list_of_edit_cost_options: # <<<<<<<<<<<<<<
+ * edit_cost_b = edit_cost.encode('utf-8')
+ * self.c_env.setEditCost(edit_cost_b, edit_cost_constant)
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_list_of_edit_cost_options); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 505, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_v_edit_cost, __pyx_t_1, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 505, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_3 = (__pyx_t_2 != 0);
+ if (likely(__pyx_t_3)) {
+
+ /* "gedlibpy.pyx":506
+ * """
+ * if edit_cost in list_of_edit_cost_options:
+ * edit_cost_b = edit_cost.encode('utf-8') # <<<<<<<<<<<<<<
+ * self.c_env.setEditCost(edit_cost_b, edit_cost_constant)
+ * else:
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_edit_cost, __pyx_n_s_encode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 506, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 506, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_edit_cost_b = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":507
+ * if edit_cost in list_of_edit_cost_options:
+ * edit_cost_b = edit_cost.encode('utf-8')
+ * self.c_env.setEditCost(edit_cost_b, edit_cost_constant) # <<<<<<<<<<<<<<
+ * else:
+ * raise EditCostError("This edit cost function doesn't exist, please see list_of_edit_cost_options for selecting a edit cost function")
+ */
+ __pyx_t_6 = __pyx_convert_string_from_py_std__in_string(__pyx_v_edit_cost_b); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L1_error)
+ __pyx_t_7 = __pyx_convert_vector_from_py_double(__pyx_v_edit_cost_constant); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->setEditCost(__pyx_t_6, __pyx_t_7);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 507, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":505
+ * .. note:: Make sure the edit cost function exists with list_of_edit_cost_options; an error is raised otherwise.
+ * """
+ * if edit_cost in list_of_edit_cost_options: # <<<<<<<<<<<<<<
+ * edit_cost_b = edit_cost.encode('utf-8')
+ * self.c_env.setEditCost(edit_cost_b, edit_cost_constant)
+ */
+ goto __pyx_L3;
+ }
+
+ /* "gedlibpy.pyx":509
+ * self.c_env.setEditCost(edit_cost_b, edit_cost_constant)
+ * else:
+ * raise EditCostError("This edit cost function doesn't exist, please see list_of_edit_cost_options for selecting a edit cost function") # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ /*else*/ {
+ __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_EditCostError); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 509, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_This_edit_cost_function_doesn_t) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_This_edit_cost_function_doesn_t);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 509, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 509, __pyx_L1_error)
+ }
+ __pyx_L3:;
+
+ /* "gedlibpy.pyx":493
+ *
+ *
+ * def set_edit_cost(self, edit_cost, edit_cost_constant = []) : # <<<<<<<<<<<<<<
+ * """
+ * Sets an edit cost function to the environment, if it exists.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.set_edit_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_edit_cost_b);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":512
+ *
+ *
+ * def set_personal_edit_cost(self, edit_cost_constant = []) : # <<<<<<<<<<<<<<
+ * """
+ * Sets a personal edit cost function to the environment.
+ */
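+/* set_personal_edit_cost only converts edit_cost_constant to a
+ * std::vector<double> and forwards it to c_env->setPersonalEditCost(); no name
+ * validation is performed here. */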
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_45set_personal_edit_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_44set_personal_edit_cost[] = "\n\t\t\tSets a personal edit cost function to the environment.\n\t\n\t\t\t:param edit_cost_constant: The parameters to pass to the edit cost function, empty by default\n\t\t\t:type edit_cost_constant: list\n\t\n\t\t\t.. seealso:: list_of_edit_cost_options, set_edit_cost()\n\t\t\t.. note:: You have to modify the C++ function to use it. Please see the documentation to add your Edit Cost function. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_45set_personal_edit_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_edit_cost_constant = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("set_personal_edit_cost (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_edit_cost_constant,0};
+ PyObject* values[1] = {0};
+ values[0] = __pyx_k__3;
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edit_cost_constant);
+ if (value) { values[0] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "set_personal_edit_cost") < 0)) __PYX_ERR(0, 512, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_edit_cost_constant = values[0];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("set_personal_edit_cost", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 512, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.set_personal_edit_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_44set_personal_edit_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_edit_cost_constant);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_44set_personal_edit_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edit_cost_constant) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ std::vector __pyx_t_1;
+ __Pyx_RefNannySetupContext("set_personal_edit_cost", 0);
+
+ /* "gedlibpy.pyx":522
+ * .. note:: You have to modify the C++ function to use it. Please see the documentation to add your Edit Cost function.
+ * """
+ * self.c_env.setPersonalEditCost(edit_cost_constant) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_1 = __pyx_convert_vector_from_py_double(__pyx_v_edit_cost_constant); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 522, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->setPersonalEditCost(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 522, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":512
+ *
+ *
+ * def set_personal_edit_cost(self, edit_cost_constant = []) : # <<<<<<<<<<<<<<
+ * """
+ * Sets a personal edit cost function to the environment.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.set_personal_edit_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":525
+ *
+ *
+ * def init(self, init_option='EAGER_WITHOUT_SHUFFLED_COPIES', print_to_stdout=False) : # <<<<<<<<<<<<<<
+ * """
+ * Initializes the environment with the chosen edit cost function and graphs.
+ */
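+/* init() defaults to the interned unicode constant EAGER_WITHOUT_SHUFFLED_COPIES
+ * and Py_False. The chosen option is validated against list_of_init_options and
+ * encoded to a std::string, while print_to_stdout is coerced to a C++ bool via
+ * __Pyx_PyObject_IsTrue before c_env->initEnv() is called. */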
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_47init(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_46init[] = "\n\t\t\tInitializes the environment with the chosen edit cost function and graphs.\n\t\n\t\t\t:param init_option: The name of the init option, \"EAGER_WITHOUT_SHUFFLED_COPIES\" by default\n\t\t\t:type init_option: string\n\t\t\t\n\t\t\t.. seealso:: list_of_init_options\n\t\t\t.. warning:: No modifications are allowed after initialization. Make sure your choices are correct. You can still clear or add a graph, but call init() again after that. \n\t\t\t.. note:: Make sure the option exists with list_of_init_options, or choose no option; an error is raised otherwise.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_47init(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_init_option = 0;
+ PyObject *__pyx_v_print_to_stdout = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("init (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_init_option,&__pyx_n_s_print_to_stdout,0};
+ PyObject* values[2] = {0,0};
+ values[0] = ((PyObject *)__pyx_n_u_EAGER_WITHOUT_SHUFFLED_COPIES);
+ values[1] = ((PyObject *)Py_False);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_init_option);
+ if (value) { values[0] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_print_to_stdout);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "init") < 0)) __PYX_ERR(0, 525, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_init_option = values[0];
+ __pyx_v_print_to_stdout = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("init", 0, 0, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 525, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.init", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_46init(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_init_option, __pyx_v_print_to_stdout);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_46init(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_init_option, PyObject *__pyx_v_print_to_stdout) {
+ PyObject *__pyx_v_init_option_b = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ int __pyx_t_2;
+ int __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ std::string __pyx_t_6;
+ bool __pyx_t_7;
+ __Pyx_RefNannySetupContext("init", 0);
+
+ /* "gedlibpy.pyx":536
+ * .. note:: Make sure the option exists with list_of_init_options, or choose no option; an error is raised otherwise.
+ * """
+ * if init_option in list_of_init_options: # <<<<<<<<<<<<<<
+ * init_option_b = init_option.encode('utf-8')
+ * self.c_env.initEnv(init_option_b, print_to_stdout)
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_list_of_init_options); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 536, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_v_init_option, __pyx_t_1, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 536, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_3 = (__pyx_t_2 != 0);
+ if (likely(__pyx_t_3)) {
+
+ /* "gedlibpy.pyx":537
+ * """
+ * if init_option in list_of_init_options:
+ * init_option_b = init_option.encode('utf-8') # <<<<<<<<<<<<<<
+ * self.c_env.initEnv(init_option_b, print_to_stdout)
+ * else:
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_init_option, __pyx_n_s_encode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 537, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 537, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_init_option_b = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":538
+ * if init_option in list_of_init_options:
+ * init_option_b = init_option.encode('utf-8')
+ * self.c_env.initEnv(init_option_b, print_to_stdout) # <<<<<<<<<<<<<<
+ * else:
+ * raise InitError("This init option doesn't exist, please see list_of_init_options for selecting an option. You can choose any options.")
+ */
+ __pyx_t_6 = __pyx_convert_string_from_py_std__in_string(__pyx_v_init_option_b); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 538, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_v_print_to_stdout); if (unlikely((__pyx_t_7 == ((bool)-1)) && PyErr_Occurred())) __PYX_ERR(0, 538, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->initEnv(__pyx_t_6, __pyx_t_7);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 538, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":536
+ * .. note:: Make sure the option exists with list_of_init_options, or choose no option; an error is raised otherwise.
+ * """
+ * if init_option in list_of_init_options: # <<<<<<<<<<<<<<
+ * init_option_b = init_option.encode('utf-8')
+ * self.c_env.initEnv(init_option_b, print_to_stdout)
+ */
+ goto __pyx_L3;
+ }
+
+ /* "gedlibpy.pyx":540
+ * self.c_env.initEnv(init_option_b, print_to_stdout)
+ * else:
+ * raise InitError("This init option doesn't exist, please see list_of_init_options for selecting an option. You can choose any options.") # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ /*else*/ {
+ __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_InitError); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 540, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_This_init_option_doesn_t_exist_p) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_This_init_option_doesn_t_exist_p);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 540, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 540, __pyx_L1_error)
+ }
+ __pyx_L3:;
+
+ /* "gedlibpy.pyx":525
+ *
+ *
+ * def init(self, init_option='EAGER_WITHOUT_SHUFFLED_COPIES', print_to_stdout=False) : # <<<<<<<<<<<<<<
+ * """
+ * Initializes the environment with the chosen edit cost function and graphs.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.init", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_init_option_b);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":543
+ *
+ *
+ * def set_method(self, method, options="") : # <<<<<<<<<<<<<<
+ * """
+ * Sets a computation method to the environment, if it exists.
+ */
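+/* set_method validates only the method name against list_of_method_options;
+ * both the method name and the options string are encoded as UTF-8 and
+ * converted to std::string before c_env->setMethod() is called. */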
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_49set_method(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_48set_method[] = "\n\t\t\tSets a computation method to the environment, if it exists. \n\t\n\t\t\t:param method: The name of the computation method\n\t\t\t:param options: The options of the method (like bash options), an empty string by default\n\t\t\t:type method: string\n\t\t\t:type options: string\n\t\t\t\n\t\t\t.. seealso:: init_method(), list_of_method_options\n\t\t\t.. note:: Make sure the method exists with list_of_method_options; an error is raised otherwise. Call init_method() after setting the method. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_49set_method(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_method = 0;
+ PyObject *__pyx_v_options = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("set_method (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_method,&__pyx_n_s_options,0};
+ PyObject* values[2] = {0,0};
+ values[1] = ((PyObject *)__pyx_kp_u_);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_method)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_options);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "set_method") < 0)) __PYX_ERR(0, 543, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_method = values[0];
+ __pyx_v_options = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("set_method", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 543, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.set_method", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_48set_method(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_method, __pyx_v_options);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_48set_method(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_method, PyObject *__pyx_v_options) {
+ PyObject *__pyx_v_method_b = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ int __pyx_t_2;
+ int __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ std::string __pyx_t_6;
+ std::string __pyx_t_7;
+ __Pyx_RefNannySetupContext("set_method", 0);
+
+ /* "gedlibpy.pyx":555
+ * .. note:: Make sure the method exists with list_of_method_options; an error is raised otherwise. Call init_method() after setting the method.
+ * """
+ * if method in list_of_method_options: # <<<<<<<<<<<<<<
+ * method_b = method.encode('utf-8')
+ * self.c_env.setMethod(method_b, options.encode('utf-8'))
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_list_of_method_options); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 555, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_v_method, __pyx_t_1, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 555, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_3 = (__pyx_t_2 != 0);
+ if (likely(__pyx_t_3)) {
+
+ /* "gedlibpy.pyx":556
+ * """
+ * if method in list_of_method_options:
+ * method_b = method.encode('utf-8') # <<<<<<<<<<<<<<
+ * self.c_env.setMethod(method_b, options.encode('utf-8'))
+ * else:
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_method, __pyx_n_s_encode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 556, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 556, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_method_b = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":557
+ * if method in list_of_method_options:
+ * method_b = method.encode('utf-8')
+ * self.c_env.setMethod(method_b, options.encode('utf-8')) # <<<<<<<<<<<<<<
+ * else:
+ * raise MethodError("This method doesn't exist, please see list_of_method_options for selecting a method")
+ */
+ __pyx_t_6 = __pyx_convert_string_from_py_std__in_string(__pyx_v_method_b); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 557, __pyx_L1_error)
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_options, __pyx_n_s_encode); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 557, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 557, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_7 = __pyx_convert_string_from_py_std__in_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 557, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_v_self->c_env->setMethod(__pyx_t_6, __pyx_t_7);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 557, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":555
+ * .. note:: Make sure the method exists in list_of_method_options; an error is raised otherwise. Call init_method() after setting it.
+ * """
+ * if method in list_of_method_options: # <<<<<<<<<<<<<<
+ * method_b = method.encode('utf-8')
+ * self.c_env.setMethod(method_b, options.encode('utf-8'))
+ */
+ goto __pyx_L3;
+ }
+
+ /* "gedlibpy.pyx":559
+ * self.c_env.setMethod(method_b, options.encode('utf-8'))
+ * else:
+ * raise MethodError("This method doesn't exist, please see list_of_method_options for selecting a method") # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ /*else*/ {
+ __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_MethodError); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 559, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_kp_u_This_method_doesn_t_exist_please) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_kp_u_This_method_doesn_t_exist_please);
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 559, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 559, __pyx_L1_error)
+ }
+ __pyx_L3:;
+
+ /* "gedlibpy.pyx":543
+ *
+ *
+ * def set_method(self, method, options="") : # <<<<<<<<<<<<<<
+ * """
+ * Sets a computation method for the environment, if it exists.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.set_method", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_method_b);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
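+/*
+ * Illustrative usage sketch (hand-written editor comment, not Cython output).
+ * A minimal, hypothetical Python-level call of set_method(), based only on the
+ * docstring quoted above; `env` and `method_name` are assumed names for an
+ * existing gedlibpy.GEDEnv instance and a method identifier.
+ *
+ *     env.set_method(method_name, "")   # method_name must appear in list_of_method_options
+ *     env.init_method()                 # required before launching a computation
+ *
+ * A name that is not in list_of_method_options raises MethodError.
+ */
+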
+/* "gedlibpy.pyx":562
+ *
+ *
+ * def init_method(self) : # <<<<<<<<<<<<<<
+ * """
+ * Initializes the environment with the selected method.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_51init_method(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_50init_method[] = "\n\t\t\tInitializes the environment with the selected method.\n\t\n\t\t\t.. seealso:: set_method(), list_of_method_options\n\t\t\t.. note:: Call this function after setting the method. You can't launch a computation or change the method afterwards. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_51init_method(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("init_method (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_50init_method(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_50init_method(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("init_method", 0);
+
+ /* "gedlibpy.pyx":569
+ * .. note:: Call this function after setting the method. You can't launch a computation or change the method afterwards.
+ * """
+ * self.c_env.initMethod() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ try {
+ __pyx_v_self->c_env->initMethod();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 569, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":562
+ *
+ *
+ * def init_method(self) : # <<<<<<<<<<<<<<
+ * """
+ * Initializes the environment with the selected method.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.init_method", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":572
+ *
+ *
+ * def get_init_time(self) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the initialization time.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_53get_init_time(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_52get_init_time[] = "\n\t\t\tReturns the initialization time.\n\t\n\t\t\t:return: The initialization time\n\t\t\t:rtype: double\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_53get_init_time(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_init_time (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_52get_init_time(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_52get_init_time(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ double __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_init_time", 0);
+
+ /* "gedlibpy.pyx":579
+ * :rtype: double
+ * """
+ * return self.c_env.getInitime() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getInitime();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 579, __pyx_L1_error)
+ }
+ __pyx_t_2 = PyFloat_FromDouble(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 579, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":572
+ *
+ *
+ * def get_init_time(self) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the initialization time.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_init_time", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":582
+ *
+ *
+ * def run_method(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Computes the edit distance between two graphs g and h, with the edit cost function and method computation selected.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_55run_method(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_54run_method[] = "\n\t\t\tComputes the edit distance between two graphs g and h, with the edit cost function and method computation selected. \n\t\n\t\t\t:param g: The Id of the first graph to compare\n\t\t\t:param h: The Id of the second graph to compare\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t\n\t\t\t.. seealso:: get_upper_bound(), get_lower_bound(), get_forward_map(), get_backward_map(), get_runtime(), quasimetric_cost()\n\t\t\t.. note:: This function only computes the distance between the two graphs, without returning a result. Use the other functions to retrieve the results for the two graphs. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_55run_method(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("run_method (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("run_method", 1, 2, 2, 1); __PYX_ERR(0, 582, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "run_method") < 0)) __PYX_ERR(0, 582, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("run_method", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 582, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.run_method", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_54run_method(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_54run_method(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ __Pyx_RefNannySetupContext("run_method", 0);
+
+ /* "gedlibpy.pyx":594
+ * .. note:: This function only computes the distance between the two graphs, without returning a result. Use the other functions to retrieve the results for the two graphs.
+ * """
+ * self.c_env.runMethod(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 594, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 594, __pyx_L1_error)
+ try {
+ __pyx_v_self->c_env->runMethod(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 594, __pyx_L1_error)
+ }
+
+ /* "gedlibpy.pyx":582
+ *
+ *
+ * def run_method(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Computes the edit distance between two graphs g and h, with the edit cost function and method computation selected.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.run_method", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
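+/*
+ * Illustrative usage sketch (hand-written editor comment, not Cython output).
+ * A hypothetical Python-level sequence built only from the functions documented
+ * in this file; `env` is an assumed gedlibpy.GEDEnv instance and `g`, `h` are
+ * graph ids (size_t), as in the docstrings above.
+ *
+ *     env.run_method(g, h)               # computes the edit distance, returns nothing
+ *     ub  = env.get_upper_bound(g, h)    # pessimistic (heuristic) edit cost
+ *     lb  = env.get_lower_bound(g, h)
+ *     fwd = env.get_forward_map(g, h)    # see also get_node_map() further below
+ *     bwd = env.get_backward_map(g, h)
+ */
+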
+/* "gedlibpy.pyx":597
+ *
+ *
+ * def get_upper_bound(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the upper bound of the edit distance cost between two graphs g and h.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_57get_upper_bound(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_56get_upper_bound[] = "\n\t\t\tReturns the upper bound of the edit distance cost between two graphs g and h. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The upper bound of the edit distance cost\n\t\t\t:rtype: double\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_lower_bound(), get_forward_map(), get_backward_map(), get_runtime(), quasimetric_cost()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: The upper bound corresponds to the pessimistic edit distance cost. Methods are heuristics, so the library can't compute the exact result because the problem is NP-hard.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_57get_upper_bound(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_upper_bound (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_upper_bound", 1, 2, 2, 1); __PYX_ERR(0, 597, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_upper_bound") < 0)) __PYX_ERR(0, 597, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_upper_bound", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 597, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_upper_bound", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_56get_upper_bound(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_56get_upper_bound(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ double __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_upper_bound", 0);
+
+ /* "gedlibpy.pyx":612
+ * .. note:: The upper bound corresponds to the pessimistic edit distance cost. Methods are heuristics, so the library can't compute the exact result because the problem is NP-hard.
+ * """
+ * return self.c_env.getUpperBound(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 612, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 612, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getUpperBound(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 612, __pyx_L1_error)
+ }
+ __pyx_t_4 = PyFloat_FromDouble(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 612, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":597
+ *
+ *
+ * def get_upper_bound(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the upper bound of the edit distance cost between two graphs g and h.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_upper_bound", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":615
+ *
+ *
+ * def get_lower_bound(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the lower bound of the edit distance cost between two graphs g and h.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_59get_lower_bound(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_58get_lower_bound[] = "\n\t\t\t Returns the lower bound of the edit distance cost between two graphs g and h. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The lower bound of the edit distance cost\n\t\t\t:rtype: double\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_upper_bound(), get_forward_map(), get_backward_map(), get_runtime(), quasimetric_cost()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: This function can be ignored, because the lower bound is of limited practical use.\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_59get_lower_bound(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_lower_bound (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_lower_bound", 1, 2, 2, 1); __PYX_ERR(0, 615, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_lower_bound") < 0)) __PYX_ERR(0, 615, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_lower_bound", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 615, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_lower_bound", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_58get_lower_bound(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_58get_lower_bound(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ double __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_lower_bound", 0);
+
+ /* "gedlibpy.pyx":630
+ * .. note:: This function can be ignored, because the lower bound is of limited practical use.
+ * """
+ * return self.c_env.getLowerBound(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 630, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 630, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getLowerBound(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 630, __pyx_L1_error)
+ }
+ __pyx_t_4 = PyFloat_FromDouble(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 630, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":615
+ *
+ *
+ * def get_lower_bound(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the lower bound of the edit distance cost between two graphs g and h.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_lower_bound", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":633
+ *
+ *
+ * def get_forward_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the forward map (or the half of the adjacency matrix) between nodes of the two indicated graphs.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_61get_forward_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_60get_forward_map[] = "\n\t\t\tReturns the forward map (or the half of the adjacency matrix) between nodes of the two indicated graphs. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The forward map to the adjacency matrix between nodes of the two graphs\n\t\t\t:rtype: list[npy_uint32]\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_backward_map(), get_runtime(), quasimetric_cost(), get_node_map(), get_assignment_matrix()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: I don't know yet how to combine the two maps to reconstruct the adjacency matrix. Please come back when I know how it works! \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_61get_forward_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_forward_map (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_forward_map", 1, 2, 2, 1); __PYX_ERR(0, 633, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_forward_map") < 0)) __PYX_ERR(0, 633, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_forward_map", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 633, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_forward_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_60get_forward_map(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_60get_forward_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ std::vector<npy_uint64> __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_forward_map", 0);
+
+ /* "gedlibpy.pyx":648
+ * .. note:: I don't know yet how to combine the two maps to reconstruct the adjacency matrix. Please come back when I know how it works!
+ * """
+ * return self.c_env.getForwardMap(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 648, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 648, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getForwardMap(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 648, __pyx_L1_error)
+ }
+ __pyx_t_4 = __pyx_convert_vector_to_py_npy_uint64(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 648, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":633
+ *
+ *
+ * def get_forward_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the forward map (or the half of the adjacency matrix) between nodes of the two indicated graphs.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_forward_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":651
+ *
+ *
+ * def get_backward_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the backward map (or the half of the adjacency matrix) between nodes of the two indicated graphs.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_63get_backward_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_62get_backward_map[] = "\n\t\t\tReturns the backward map (or the half of the adjacency matrix) between nodes of the two indicated graphs. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The backward map to the adjacency matrix between nodes of the two graphs\n\t\t\t:rtype: list[npy_uint32]\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_forward_map(), get_runtime(), quasimetric_cost(), get_node_map(), get_assignment_matrix()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: I don't know yet how to combine the two maps to reconstruct the adjacency matrix. Please come back when I know how it works! \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_63get_backward_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_backward_map (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_backward_map", 1, 2, 2, 1); __PYX_ERR(0, 651, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_backward_map") < 0)) __PYX_ERR(0, 651, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_backward_map", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 651, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_backward_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_62get_backward_map(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_62get_backward_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ std::vector<npy_uint64> __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_backward_map", 0);
+
+ /* "gedlibpy.pyx":666
+ * .. note:: I don't know yet how to combine the two maps to reconstruct the adjacency matrix. Please come back when I know how it works!
+ * """
+ * return self.c_env.getBackwardMap(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 666, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 666, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getBackwardMap(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 666, __pyx_L1_error)
+ }
+ __pyx_t_4 = __pyx_convert_vector_to_py_npy_uint64(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 666, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":651
+ *
+ *
+ * def get_backward_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the backward map (or the half of the adjacency matrix) between nodes of the two indicated graphs.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_backward_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
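+/*
+ * Illustrative sketch (hand-written editor comment, not Cython output), assuming
+ * `env`, `g` and `h` as in the earlier sketch; the precise interpretation of the
+ * two maps is left open by the docstrings themselves.
+ *
+ *     fwd = env.get_forward_map(g, h)    # a plain Python list (rtype list[npy_uint32] per the docstring)
+ *     bwd = env.get_backward_map(g, h)
+ */
+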
+/* "gedlibpy.pyx":669
+ *
+ *
+ * def get_node_image(self, g, h, node_id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the node's image in the adjacency matrix, if it exists.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_65get_node_image(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_64get_node_image[] = "\n\t\t\tReturns the node's image in the adjacency matrix, if it exists. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:param node_id: The ID of the node whose image you want to see\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:type node_id: size_t\n\t\t\t:return: The ID of the image node\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_forward_map(), get_backward_map(), get_node_pre_image(), get_node_map(), get_assignment_matrix()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: Use BackwardMap's Node to find its images! You can also use get_forward_map() and get_backward_map().\t \n\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_65get_node_image(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_v_node_id = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_image (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,&__pyx_n_s_node_id,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_node_image", 1, 3, 3, 1); __PYX_ERR(0, 669, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_id)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_node_image", 1, 3, 3, 2); __PYX_ERR(0, 669, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_node_image") < 0)) __PYX_ERR(0, 669, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ __pyx_v_node_id = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_node_image", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 669, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_image", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_64get_node_image(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h, __pyx_v_node_id);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_64get_node_image(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h, PyObject *__pyx_v_node_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ size_t __pyx_t_3;
+ size_t __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ __Pyx_RefNannySetupContext("get_node_image", 0);
+
+ /* "gedlibpy.pyx":687
+ *
+ * """
+ * return self.c_env.getNodeImage(g, h, node_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 687, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 687, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyInt_As_size_t(__pyx_v_node_id); if (unlikely((__pyx_t_3 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 687, __pyx_L1_error)
+ try {
+ __pyx_t_4 = __pyx_v_self->c_env->getNodeImage(__pyx_t_1, __pyx_t_2, __pyx_t_3);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 687, __pyx_L1_error)
+ }
+ __pyx_t_5 = __Pyx_PyInt_FromSize_t(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 687, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_r = __pyx_t_5;
+ __pyx_t_5 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":669
+ *
+ *
+ * def get_node_image(self, g, h, node_id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the node's image in the adjacency matrix, if it exists.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_image", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":690
+ *
+ *
+ * def get_node_pre_image(self, g, h, node_id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the node's preimage in the adjacency matrix, if it exists.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_67get_node_pre_image(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_66get_node_pre_image[] = "\n\t\t\tReturns the node's preimage in the adjacency matrix, if it exists. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:param node_id: The ID of the node whose preimage you want to see\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:type node_id: size_t\n\t\t\t:return: The ID of the preimage node\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_forward_map(), get_backward_map(), get_node_image(), get_node_map(), get_assignment_matrix()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: Use ForwardMap's Node to find its images! You can also use get_forward_map() and get_backward_map().\t \n\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_67get_node_pre_image(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_v_node_id = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_pre_image (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,&__pyx_n_s_node_id,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_node_pre_image", 1, 3, 3, 1); __PYX_ERR(0, 690, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_id)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_node_pre_image", 1, 3, 3, 2); __PYX_ERR(0, 690, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_node_pre_image") < 0)) __PYX_ERR(0, 690, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ __pyx_v_node_id = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_node_pre_image", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 690, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_pre_image", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_66get_node_pre_image(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h, __pyx_v_node_id);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_66get_node_pre_image(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h, PyObject *__pyx_v_node_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ size_t __pyx_t_3;
+ size_t __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ __Pyx_RefNannySetupContext("get_node_pre_image", 0);
+
+ /* "gedlibpy.pyx":708
+ *
+ * """
+ * return self.c_env.getNodePreImage(g, h, node_id) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 708, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 708, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyInt_As_size_t(__pyx_v_node_id); if (unlikely((__pyx_t_3 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 708, __pyx_L1_error)
+ try {
+ __pyx_t_4 = __pyx_v_self->c_env->getNodePreImage(__pyx_t_1, __pyx_t_2, __pyx_t_3);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 708, __pyx_L1_error)
+ }
+ __pyx_t_5 = __Pyx_PyInt_FromSize_t(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 708, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_r = __pyx_t_5;
+ __pyx_t_5 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":690
+ *
+ *
+ * def get_node_pre_image(self, g, h, node_id) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the node's preimage in the adjacency matrix, if it exists.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_pre_image", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
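+/*
+ * Illustrative sketch (hand-written editor comment, not Cython output), assuming
+ * `env`, `g`, `h` as before and `node_id` as a node index (size_t) of one of the
+ * compared graphs; both calls require a prior run_method(g, h), per the warnings above.
+ *
+ *     img = env.get_node_image(g, h, node_id)      # ID of the image node
+ *     pre = env.get_node_pre_image(g, h, node_id)  # ID of the preimage node
+ */
+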
+/* "gedlibpy.pyx":711
+ *
+ *
+ * def get_induced_cost(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the induced cost between the two indicated graphs.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_69get_induced_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_68get_induced_cost[] = "\n\t\t\tReturns the induced cost between the two indicated graphs.\t\n\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The induced cost between the two indicated graphs\n\t\t\t:rtype: double\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_forward_map(), get_backward_map(), get_node_image(), get_node_map(), get_assignment_matrix()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: Use ForwardMap's Node to find its images! You can also use get_forward_map() and get_backward_map().\t \n\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_69get_induced_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_induced_cost (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_induced_cost", 1, 2, 2, 1); __PYX_ERR(0, 711, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_induced_cost") < 0)) __PYX_ERR(0, 711, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_induced_cost", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 711, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_induced_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_68get_induced_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_68get_induced_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ double __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_induced_cost", 0);
+
+ /* "gedlibpy.pyx":727
+ *
+ * """
+ * return self.c_env.getInducedCost(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 727, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 727, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getInducedCost(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 727, __pyx_L1_error)
+ }
+ __pyx_t_4 = PyFloat_FromDouble(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 727, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":711
+ *
+ *
+ * def get_induced_cost(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the induced cost between the two indicated graphs.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_induced_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
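+/*
+ * Illustrative sketch (hand-written editor comment, not Cython output), assuming
+ * `env`, `g`, `h` as before:
+ *
+ *     env.run_method(g, h)
+ *     cost = env.get_induced_cost(g, h)   # induced edit cost; requires the prior run_method() call
+ */
+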
+/* "gedlibpy.pyx":730
+ *
+ *
+ * def get_node_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the Node Map, like C++ NodeMap.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_71get_node_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_70get_node_map[] = "\n\t\t\tReturns the Node Map, like C++ NodeMap. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The Node Map between the two selected graphs. \n\t\t\t:rtype: gklearn.ged.env.NodeMap.\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_forward_map(), get_backward_map(), get_node_image(), get_node_pre_image(), get_assignment_matrix()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: This function creates data, so use it only if necessary; it also helps to understand how the assignment works.\t \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_71get_node_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_map (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_node_map", 1, 2, 2, 1); __PYX_ERR(0, 730, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_node_map") < 0)) __PYX_ERR(0, 730, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_node_map", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 730, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_70get_node_map(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_70get_node_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ std::vector<std::pair<size_t,size_t> > __pyx_v_map_as_relation;
+ double __pyx_v_induced_cost;
+ PyObject *__pyx_v_source_map = NULL;
+ PyObject *__pyx_v_target_map = NULL;
+ Py_ssize_t __pyx_v_num_node_source;
+ Py_ssize_t __pyx_v_num_node_target;
+ PyObject *__pyx_v_node_map = NULL;
+ Py_ssize_t __pyx_v_i;
+ std::pair<size_t,size_t> __pyx_8genexpr5__pyx_v_item;
+ std::pair<size_t,size_t> __pyx_8genexpr6__pyx_v_item;
+ PyObject *__pyx_8genexpr7__pyx_v_item = NULL;
+ PyObject *__pyx_8genexpr8__pyx_v_item = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ std::vector<std::pair<size_t,size_t> > __pyx_t_3;
+ double __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ std::vector<std::pair<size_t,size_t> > ::iterator __pyx_t_6;
+ std::pair<size_t,size_t> __pyx_t_7;
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ Py_ssize_t __pyx_t_10;
+ PyObject *__pyx_t_11 = NULL;
+ int __pyx_t_12;
+ PyObject *__pyx_t_13 = NULL;
+ int __pyx_t_14;
+ PyObject *__pyx_t_15 = NULL;
+ Py_ssize_t __pyx_t_16;
+ Py_ssize_t __pyx_t_17;
+ __Pyx_RefNannySetupContext("get_node_map", 0);
+
+ /* "gedlibpy.pyx":745
+ * .. note:: This function creates data, so use it only if necessary; it also helps to understand how the assignment works.
+ * """
+ * map_as_relation = self.c_env.getNodeMap(g, h) # <<<<<<<<<<<<<<
+ * induced_cost = self.c_env.getInducedCost(g, h) # @todo: the C++ implementation for this function in GedLibBind.ipp re-call get_node_map() once more, this is not neccessary.
+ * source_map = [item.first if item.first < len(map_as_relation) else np.inf for item in map_as_relation] # item.first < len(map_as_relation) is not exactly correct.
+ */
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 745, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 745, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getNodeMap(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 745, __pyx_L1_error)
+ }
+ __pyx_v_map_as_relation = __pyx_t_3;
+
+ /* "gedlibpy.pyx":746
+ * """
+ * map_as_relation = self.c_env.getNodeMap(g, h)
+ * induced_cost = self.c_env.getInducedCost(g, h) # @todo: the C++ implementation for this function in GedLibBind.ipp re-call get_node_map() once more, this is not neccessary. # <<<<<<<<<<<<<<
+ * source_map = [item.first if item.first < len(map_as_relation) else np.inf for item in map_as_relation] # item.first < len(map_as_relation) is not exactly correct.
+ * # print(source_map)
+ */
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 746, __pyx_L1_error)
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 746, __pyx_L1_error)
+ try {
+ __pyx_t_4 = __pyx_v_self->c_env->getInducedCost(__pyx_t_2, __pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 746, __pyx_L1_error)
+ }
+ __pyx_v_induced_cost = __pyx_t_4;
+
+ /* "gedlibpy.pyx":747
+ * map_as_relation = self.c_env.getNodeMap(g, h)
+ * induced_cost = self.c_env.getInducedCost(g, h) # @todo: the C++ implementation for this function in GedLibBind.ipp re-call get_node_map() once more, this is not neccessary.
+ * source_map = [item.first if item.first < len(map_as_relation) else np.inf for item in map_as_relation] # item.first < len(map_as_relation) is not exactly correct. # <<<<<<<<<<<<<<
+ * # print(source_map)
+ * target_map = [item.second if item.second < len(map_as_relation) else np.inf for item in map_as_relation]
+ */
+ { /* enter inner scope */
+ __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __pyx_v_map_as_relation.begin();
+ for (;;) {
+ if (!(__pyx_t_6 != __pyx_v_map_as_relation.end())) break;
+ __pyx_t_7 = *__pyx_t_6;
+ ++__pyx_t_6;
+ __pyx_8genexpr5__pyx_v_item = __pyx_t_7;
+ __pyx_t_9 = __pyx_convert_vector_to_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e___(__pyx_v_map_as_relation); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_10 = PyObject_Length(__pyx_t_9); if (unlikely(__pyx_t_10 == ((Py_ssize_t)-1))) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (((__pyx_8genexpr5__pyx_v_item.first < __pyx_t_10) != 0)) {
+ __pyx_t_9 = __Pyx_PyInt_FromSize_t(__pyx_8genexpr5__pyx_v_item.first); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_8 = __pyx_t_9;
+ __pyx_t_9 = 0;
+ } else {
+ __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_np); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_inf); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __pyx_t_8 = __pyx_t_11;
+ __pyx_t_11 = 0;
+ }
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_5, (PyObject*)__pyx_t_8))) __PYX_ERR(0, 747, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_v_source_map = ((PyObject*)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "gedlibpy.pyx":749
+ * source_map = [item.first if item.first < len(map_as_relation) else np.inf for item in map_as_relation] # item.first < len(map_as_relation) is not exactly correct.
+ * # print(source_map)
+ * target_map = [item.second if item.second < len(map_as_relation) else np.inf for item in map_as_relation] # <<<<<<<<<<<<<<
+ * # print(target_map)
+ * num_node_source = len([item for item in source_map if item != np.inf])
+ */
+ { /* enter inner scope */
+ __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __pyx_v_map_as_relation.begin();
+ for (;;) {
+ if (!(__pyx_t_6 != __pyx_v_map_as_relation.end())) break;
+ __pyx_t_7 = *__pyx_t_6;
+ ++__pyx_t_6;
+ __pyx_8genexpr6__pyx_v_item = __pyx_t_7;
+ __pyx_t_11 = __pyx_convert_vector_to_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e___(__pyx_v_map_as_relation); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_10 = PyObject_Length(__pyx_t_11); if (unlikely(__pyx_t_10 == ((Py_ssize_t)-1))) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ if (((__pyx_8genexpr6__pyx_v_item.second < __pyx_t_10) != 0)) {
+ __pyx_t_11 = __Pyx_PyInt_FromSize_t(__pyx_8genexpr6__pyx_v_item.second); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_8 = __pyx_t_11;
+ __pyx_t_11 = 0;
+ } else {
+ __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_np); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_11, __pyx_n_s_inf); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __pyx_t_8 = __pyx_t_9;
+ __pyx_t_9 = 0;
+ }
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_5, (PyObject*)__pyx_t_8))) __PYX_ERR(0, 749, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ }
+ } /* exit inner scope */
+ __pyx_v_target_map = ((PyObject*)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "gedlibpy.pyx":751
+ * target_map = [item.second if item.second < len(map_as_relation) else np.inf for item in map_as_relation]
+ * # print(target_map)
+ * num_node_source = len([item for item in source_map if item != np.inf]) # <<<<<<<<<<<<<<
+ * # print(num_node_source)
+ * num_node_target = len([item for item in target_map if item != np.inf])
+ */
+ { /* enter inner scope */
+ __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 751, __pyx_L9_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = __pyx_v_source_map; __Pyx_INCREF(__pyx_t_8); __pyx_t_10 = 0;
+ for (;;) {
+ if (__pyx_t_10 >= PyList_GET_SIZE(__pyx_t_8)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_9 = PyList_GET_ITEM(__pyx_t_8, __pyx_t_10); __Pyx_INCREF(__pyx_t_9); __pyx_t_10++; if (unlikely(0 < 0)) __PYX_ERR(0, 751, __pyx_L9_error)
+ #else
+ __pyx_t_9 = PySequence_ITEM(__pyx_t_8, __pyx_t_10); __pyx_t_10++; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 751, __pyx_L9_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ #endif
+ __Pyx_XDECREF_SET(__pyx_8genexpr7__pyx_v_item, __pyx_t_9);
+ __pyx_t_9 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_np); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 751, __pyx_L9_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_inf); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 751, __pyx_L9_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __pyx_t_9 = PyObject_RichCompare(__pyx_8genexpr7__pyx_v_item, __pyx_t_11, Py_NE); __Pyx_XGOTREF(__pyx_t_9); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 751, __pyx_L9_error)
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 751, __pyx_L9_error)
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (__pyx_t_12) {
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_5, (PyObject*)__pyx_8genexpr7__pyx_v_item))) __PYX_ERR(0, 751, __pyx_L9_error)
+ }
+ }
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_item); __pyx_8genexpr7__pyx_v_item = 0;
+ goto __pyx_L13_exit_scope;
+ __pyx_L9_error:;
+ __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_item); __pyx_8genexpr7__pyx_v_item = 0;
+ goto __pyx_L1_error;
+ __pyx_L13_exit_scope:;
+ } /* exit inner scope */
+ __pyx_t_10 = PyList_GET_SIZE(__pyx_t_5); if (unlikely(__pyx_t_10 == ((Py_ssize_t)-1))) __PYX_ERR(0, 751, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_num_node_source = __pyx_t_10;
+
+ /* "gedlibpy.pyx":753
+ * num_node_source = len([item for item in source_map if item != np.inf])
+ * # print(num_node_source)
+ * num_node_target = len([item for item in target_map if item != np.inf]) # <<<<<<<<<<<<<<
+ * # print(num_node_target)
+ *
+ */
+ { /* enter inner scope */
+ __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 753, __pyx_L16_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = __pyx_v_target_map; __Pyx_INCREF(__pyx_t_8); __pyx_t_10 = 0;
+ for (;;) {
+ if (__pyx_t_10 >= PyList_GET_SIZE(__pyx_t_8)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_9 = PyList_GET_ITEM(__pyx_t_8, __pyx_t_10); __Pyx_INCREF(__pyx_t_9); __pyx_t_10++; if (unlikely(0 < 0)) __PYX_ERR(0, 753, __pyx_L16_error)
+ #else
+ __pyx_t_9 = PySequence_ITEM(__pyx_t_8, __pyx_t_10); __pyx_t_10++; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 753, __pyx_L16_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ #endif
+ __Pyx_XDECREF_SET(__pyx_8genexpr8__pyx_v_item, __pyx_t_9);
+ __pyx_t_9 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_np); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 753, __pyx_L16_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_11 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_inf); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 753, __pyx_L16_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __pyx_t_9 = PyObject_RichCompare(__pyx_8genexpr8__pyx_v_item, __pyx_t_11, Py_NE); __Pyx_XGOTREF(__pyx_t_9); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 753, __pyx_L16_error)
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 753, __pyx_L16_error)
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (__pyx_t_12) {
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_5, (PyObject*)__pyx_8genexpr8__pyx_v_item))) __PYX_ERR(0, 753, __pyx_L16_error)
+ }
+ }
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_item); __pyx_8genexpr8__pyx_v_item = 0;
+ goto __pyx_L20_exit_scope;
+ __pyx_L16_error:;
+ __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_item); __pyx_8genexpr8__pyx_v_item = 0;
+ goto __pyx_L1_error;
+ __pyx_L20_exit_scope:;
+ } /* exit inner scope */
+ __pyx_t_10 = PyList_GET_SIZE(__pyx_t_5); if (unlikely(__pyx_t_10 == ((Py_ssize_t)-1))) __PYX_ERR(0, 753, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_num_node_target = __pyx_t_10;
+
+ /* "gedlibpy.pyx":756
+ * # print(num_node_target)
+ *
+ * node_map = NodeMap(num_node_source, num_node_target) # <<<<<<<<<<<<<<
+ * # print(node_map.get_forward_map(), node_map.get_backward_map())
+ * for i in range(len(source_map)):
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_NodeMap); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_9 = PyInt_FromSsize_t(__pyx_v_num_node_source); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_11 = PyInt_FromSsize_t(__pyx_v_num_node_target); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_13 = NULL;
+ __pyx_t_14 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_13)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_13);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ __pyx_t_14 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_8)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_13, __pyx_t_9, __pyx_t_11};
+ __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_8, __pyx_temp+1-__pyx_t_14, 2+__pyx_t_14); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_8)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_13, __pyx_t_9, __pyx_t_11};
+ __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_8, __pyx_temp+1-__pyx_t_14, 2+__pyx_t_14); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_15 = PyTuple_New(2+__pyx_t_14); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_15);
+ if (__pyx_t_13) {
+ __Pyx_GIVEREF(__pyx_t_13); PyTuple_SET_ITEM(__pyx_t_15, 0, __pyx_t_13); __pyx_t_13 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_9);
+ PyTuple_SET_ITEM(__pyx_t_15, 0+__pyx_t_14, __pyx_t_9);
+ __Pyx_GIVEREF(__pyx_t_11);
+ PyTuple_SET_ITEM(__pyx_t_15, 1+__pyx_t_14, __pyx_t_11);
+ __pyx_t_9 = 0;
+ __pyx_t_11 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_8, __pyx_t_15, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 756, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_v_node_map = __pyx_t_5;
+ __pyx_t_5 = 0;
+
+ /* "gedlibpy.pyx":758
+ * node_map = NodeMap(num_node_source, num_node_target)
+ * # print(node_map.get_forward_map(), node_map.get_backward_map())
+ * for i in range(len(source_map)): # <<<<<<<<<<<<<<
+ * node_map.add_assignment(source_map[i], target_map[i])
+ * node_map.set_induced_cost(induced_cost)
+ */
+ __pyx_t_10 = PyList_GET_SIZE(__pyx_v_source_map); if (unlikely(__pyx_t_10 == ((Py_ssize_t)-1))) __PYX_ERR(0, 758, __pyx_L1_error)
+ __pyx_t_16 = __pyx_t_10;
+ for (__pyx_t_17 = 0; __pyx_t_17 < __pyx_t_16; __pyx_t_17+=1) {
+ __pyx_v_i = __pyx_t_17;
+
+ /* "gedlibpy.pyx":759
+ * # print(node_map.get_forward_map(), node_map.get_backward_map())
+ * for i in range(len(source_map)):
+ * node_map.add_assignment(source_map[i], target_map[i]) # <<<<<<<<<<<<<<
+ * node_map.set_induced_cost(induced_cost)
+ *
+ */
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_node_map, __pyx_n_s_add_assignment); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_15 = __Pyx_GetItemInt_List(__pyx_v_source_map, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_15);
+ __pyx_t_11 = __Pyx_GetItemInt_List(__pyx_v_target_map, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 1, 1, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_9 = NULL;
+ __pyx_t_14 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ __pyx_t_14 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_8)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_9, __pyx_t_15, __pyx_t_11};
+ __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_8, __pyx_temp+1-__pyx_t_14, 2+__pyx_t_14); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_8)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_9, __pyx_t_15, __pyx_t_11};
+ __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_8, __pyx_temp+1-__pyx_t_14, 2+__pyx_t_14); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_13 = PyTuple_New(2+__pyx_t_14); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_13);
+ if (__pyx_t_9) {
+ __Pyx_GIVEREF(__pyx_t_9); PyTuple_SET_ITEM(__pyx_t_13, 0, __pyx_t_9); __pyx_t_9 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_15);
+ PyTuple_SET_ITEM(__pyx_t_13, 0+__pyx_t_14, __pyx_t_15);
+ __Pyx_GIVEREF(__pyx_t_11);
+ PyTuple_SET_ITEM(__pyx_t_13, 1+__pyx_t_14, __pyx_t_11);
+ __pyx_t_15 = 0;
+ __pyx_t_11 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_8, __pyx_t_13, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 759, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+
+ /* "gedlibpy.pyx":760
+ * for i in range(len(source_map)):
+ * node_map.add_assignment(source_map[i], target_map[i])
+ * node_map.set_induced_cost(induced_cost) # <<<<<<<<<<<<<<
+ *
+ * return node_map
+ */
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_node_map, __pyx_n_s_set_induced_cost); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 760, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_13 = PyFloat_FromDouble(__pyx_v_induced_cost); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 760, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_13);
+ __pyx_t_11 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_11)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_11);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ }
+ }
+ __pyx_t_5 = (__pyx_t_11) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_11, __pyx_t_13) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_13);
+ __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0;
+ if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 760, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+
+ /* "gedlibpy.pyx":762
+ * node_map.set_induced_cost(induced_cost)
+ *
+ * return node_map # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_node_map);
+ __pyx_r = __pyx_v_node_map;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":730
+ *
+ *
+ * def get_node_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the Node Map, like C++ NodeMap.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_XDECREF(__pyx_t_13);
+ __Pyx_XDECREF(__pyx_t_15);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_source_map);
+ __Pyx_XDECREF(__pyx_v_target_map);
+ __Pyx_XDECREF(__pyx_v_node_map);
+ __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_item);
+ __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_item);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
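+
+/* Usage sketch (hypothetical names; assumes a GEDEnv instance `env` on which
+ * run_method(g, h) has already been called):
+ *
+ *     node_map = env.get_node_map(g, h)
+ *
+ * The C++ relation of (source, target) index pairs is turned into forward and
+ * backward maps, with indices past the end of the relation replaced by np.inf. */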
+
+/* "gedlibpy.pyx":765
+ *
+ *
+ * def get_assignment_matrix(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the Assignment Matrix between two selected graphs g and h.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_73get_assignment_matrix(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_72get_assignment_matrix[] = "\n\t\t\tReturns the Assignment Matrix between two selected graphs g and h. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The Assignment Matrix between the two selected graphs. \n\t\t\t:rtype: list[list[int]]\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_forward_map(), get_backward_map(), get_node_image(), get_node_pre_image(), get_node_map()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: This function creates data, so use it only if necessary.\t \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_73get_assignment_matrix(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_assignment_matrix (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_assignment_matrix", 1, 2, 2, 1); __PYX_ERR(0, 765, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_assignment_matrix") < 0)) __PYX_ERR(0, 765, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_assignment_matrix", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 765, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_assignment_matrix", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_72get_assignment_matrix(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_72get_assignment_matrix(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+  std::vector<std::vector<int> > __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_assignment_matrix", 0);
+
+ /* "gedlibpy.pyx":780
+ * .. note:: This function creates datas so use it if necessary.
+ * """
+ * return self.c_env.getAssignmentMatrix(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 780, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 780, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getAssignmentMatrix(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 780, __pyx_L1_error)
+ }
+ __pyx_t_4 = __pyx_convert_vector_to_py_std_3a__3a_vector_3c_int_3e___(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 780, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":765
+ *
+ *
+ * def get_assignment_matrix(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the Assignment Matrix between two selected graphs g and h.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_assignment_matrix", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
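+
+/* Usage sketch (hypothetical names; assumes env.run_method(g, h) has already
+ * been called on a GEDEnv instance `env`):
+ *
+ *     matrix = env.get_assignment_matrix(g, h)   # list of lists of int
+ */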
+
+/* "gedlibpy.pyx":783
+ *
+ *
+ * def get_all_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns a vector which contains the forward and the backward maps between nodes of the two indicated graphs.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_75get_all_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_74get_all_map[] = "\n\t\t\tReturns a vector which contains the forward and the backward maps between nodes of the two indicated graphs. \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The forward and backward maps to the adjacency matrix between nodes of the two graphs\n\t\t\t:rtype: list[list[npy_uint32]]\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_forward_map(), get_backward_map(), get_runtime(), quasimetric_cost()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: This function duplicates data, so please don't use it. I also don't know how to connect the two maps to reconstruct the adjacency matrix. Please come back when I know how it works! \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_75get_all_map(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_all_map (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_all_map", 1, 2, 2, 1); __PYX_ERR(0, 783, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_all_map") < 0)) __PYX_ERR(0, 783, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_all_map", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 783, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_all_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_74get_all_map(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_74get_all_map(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+  std::vector<std::vector<npy_uint64> > __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_all_map", 0);
+
+ /* "gedlibpy.pyx":798
+ * .. note:: This function duplicates data so please don't use it. I also don't know how to connect the two map to reconstruct the adjacence matrix. Please come back when I know how it's work !
+ * """
+ * return self.c_env.getAllMap(g, h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 798, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 798, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getAllMap(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 798, __pyx_L1_error)
+ }
+ __pyx_t_4 = __pyx_convert_vector_to_py_std_3a__3a_vector_3c_npy_uint64_3e___(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 798, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":783
+ *
+ *
+ * def get_all_map(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns a vector which contains the forward and the backward maps between nodes of the two indicated graphs.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_all_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
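+
+/* Usage sketch (hypothetical names; assumes env.run_method(g, h) was called first):
+ *
+ *     forward_map, backward_map = env.get_all_map(g, h)
+ */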
+
+/* "gedlibpy.pyx":801
+ *
+ *
+ * def get_runtime(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the runtime to compute the edit distance cost between two graphs g and h
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_77get_runtime(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_76get_runtime[] = "\n\t\t\tReturns the runtime to compute the edit distance cost between two graphs g and h \n\t\n\t\t\t:param g: The Id of the first compared graph \n\t\t\t:param h: The Id of the second compared graph\n\t\t\t:type g: size_t\n\t\t\t:type h: size_t\n\t\t\t:return: The runtime of the computation of edit distance cost between the two selected graphs\n\t\t\t:rtype: double\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_forward_map(), get_backward_map(), quasimetric_cost()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t\t.. note:: Python is a bit slower than C++ due to the function encapsulation.\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_77get_runtime(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_runtime (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_h,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_runtime", 1, 2, 2, 1); __PYX_ERR(0, 801, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_runtime") < 0)) __PYX_ERR(0, 801, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_h = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_runtime", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 801, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_runtime", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_76get_runtime(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_h);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_76get_runtime(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_h) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ size_t __pyx_t_2;
+ double __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("get_runtime", 0);
+
+ /* "gedlibpy.pyx":816
+ * .. note:: Python is a bit longer than C++ due to the functions's encapsulate.
+ * """
+ * return self.c_env.getRuntime(g,h) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_As_size_t(__pyx_v_g); if (unlikely((__pyx_t_1 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 816, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyInt_As_size_t(__pyx_v_h); if (unlikely((__pyx_t_2 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 816, __pyx_L1_error)
+ try {
+ __pyx_t_3 = __pyx_v_self->c_env->getRuntime(__pyx_t_1, __pyx_t_2);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 816, __pyx_L1_error)
+ }
+ __pyx_t_4 = PyFloat_FromDouble(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 816, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":801
+ *
+ *
+ * def get_runtime(self, g, h) : # <<<<<<<<<<<<<<
+ * """
+ * Returns the runtime to compute the edit distance cost between two graphs g and h
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_runtime", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
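+
+/* Usage sketch (hypothetical names; assumes env.run_method(g, h) was called first):
+ *
+ *     runtime = env.get_runtime(g, h)   # float, runtime of the g-h computation
+ */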
+
+/* "gedlibpy.pyx":819
+ *
+ *
+ * def quasimetric_cost(self) : # <<<<<<<<<<<<<<
+ * """
+ * Checks and returns if the edit costs are quasimetric.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_79quasimetric_cost(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_78quasimetric_cost[] = "\n\t\t\tChecks and returns whether the edit costs are quasimetric. \n\t\n\t\t\t:return: True if the edit costs are quasimetric, False otherwise\n\t\t\t:rtype: bool\n\t\t\t\n\t\t\t.. seealso:: run_method(), get_upper_bound(), get_lower_bound(), get_forward_map(), get_backward_map(), get_runtime()\n\t\t\t.. warning:: run_method() between the same two graphs must be called before this function. \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_79quasimetric_cost(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("quasimetric_cost (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_78quasimetric_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_78quasimetric_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ bool __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("quasimetric_cost", 0);
+
+ /* "gedlibpy.pyx":833
+ * .. warning:: run_method() between the same two graph must be called before this function.
+ * """
+ * return self.c_env.quasimetricCosts() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->quasimetricCosts();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 833, __pyx_L1_error)
+ }
+ __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 833, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":819
+ *
+ *
+ * def quasimetric_cost(self) : # <<<<<<<<<<<<<<
+ * """
+ * Checks and returns if the edit costs are quasimetric.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.quasimetric_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
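+
+/* Usage sketch (hypothetical names; assumes env.run_method(g, h) was called first):
+ *
+ *     if env.quasimetric_cost():
+ *         print('the edit costs are quasimetric')
+ */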
+
+/* "gedlibpy.pyx":836
+ *
+ *
+ * def hungarian_LSAP(self, matrix_cost) : # <<<<<<<<<<<<<<
+ * """
+ * Applies the hungarian algorithm (LSAP) on a matrix Cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_81hungarian_LSAP(PyObject *__pyx_v_self, PyObject *__pyx_v_matrix_cost); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_80hungarian_LSAP[] = "\n\t\t\tApplies the Hungarian algorithm (LSAP) to a cost matrix. \n\t\n\t\t\t:param matrix_cost: The cost matrix \n\t\t\t:type matrix_cost: vector[vector[size_t]]\n\t\t\t:return: The values of rho, varrho, u and v, in this order\n\t\t\t:rtype: vector[vector[size_t]]\n\t\t\t\n\t\t\t.. seealso:: hungarian_LSAPE() \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_81hungarian_LSAP(PyObject *__pyx_v_self, PyObject *__pyx_v_matrix_cost) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("hungarian_LSAP (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_80hungarian_LSAP(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_matrix_cost));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_80hungarian_LSAP(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_matrix_cost) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+  std::vector<std::vector<size_t> > __pyx_t_1;
+  std::vector<std::vector<size_t> > __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("hungarian_LSAP", 0);
+
+ /* "gedlibpy.pyx":847
+ * .. seealso:: hungarian_LSAPE()
+ * """
+ * return self.c_env.hungarianLSAP(matrix_cost) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __pyx_convert_vector_from_py_std_3a__3a_vector_3c_size_t_3e___(__pyx_v_matrix_cost); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 847, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->hungarianLSAP(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 847, __pyx_L1_error)
+ }
+ __pyx_t_3 = __pyx_convert_vector_to_py_std_3a__3a_vector_3c_size_t_3e___(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 847, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":836
+ *
+ *
+ * def hungarian_LSAP(self, matrix_cost) : # <<<<<<<<<<<<<<
+ * """
+ * Applies the hungarian algorithm (LSAP) on a matrix Cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.hungarian_LSAP", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
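+
+/* Usage sketch (hypothetical cost values; `env` is a GEDEnv instance, LSAP
+ * takes an integer-valued cost matrix):
+ *
+ *     cost = [[2, 3], [4, 1]]
+ *     rho, varrho, u, v = env.hungarian_LSAP(cost)
+ */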
+
+/* "gedlibpy.pyx":850
+ *
+ *
+ * def hungarian_LSAPE(self, matrix_cost) : # <<<<<<<<<<<<<<
+ * """
+ * Applies the hungarian algorithm (LSAPE) on a matrix Cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_83hungarian_LSAPE(PyObject *__pyx_v_self, PyObject *__pyx_v_matrix_cost); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_82hungarian_LSAPE[] = "\n\t\t\tApplies the Hungarian algorithm (LSAPE) to a cost matrix. \n\t\n\t\t\t:param matrix_cost: The cost matrix \n\t\t\t:type matrix_cost: vector[vector[double]]\n\t\t\t:return: The values of rho, varrho, u and v, in this order\n\t\t\t:rtype: vector[vector[double]]\n\t\t\t\n\t\t\t.. seealso:: hungarian_LSAP() \n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_83hungarian_LSAPE(PyObject *__pyx_v_self, PyObject *__pyx_v_matrix_cost) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("hungarian_LSAPE (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_82hungarian_LSAPE(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_matrix_cost));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_82hungarian_LSAPE(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_matrix_cost) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+  std::vector<std::vector<double> > __pyx_t_1;
+  std::vector<std::vector<double> > __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ __Pyx_RefNannySetupContext("hungarian_LSAPE", 0);
+
+ /* "gedlibpy.pyx":861
+ * .. seealso:: hungarian_LSAP()
+ * """
+ * return self.c_env.hungarianLSAPE(matrix_cost) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __pyx_convert_vector_from_py_std_3a__3a_vector_3c_double_3e___(__pyx_v_matrix_cost); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 861, __pyx_L1_error)
+ try {
+ __pyx_t_2 = __pyx_v_self->c_env->hungarianLSAPE(__pyx_t_1);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 861, __pyx_L1_error)
+ }
+ __pyx_t_3 = __pyx_convert_vector_to_py_std_3a__3a_vector_3c_double_3e___(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 861, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":850
+ *
+ *
+ * def hungarian_LSAPE(self, matrix_cost) : # <<<<<<<<<<<<<<
+ * """
+ * Applies the hungarian algorithm (LSAPE) on a matrix Cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.hungarian_LSAPE", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
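+
+/* Usage sketch (hypothetical cost values; LSAPE works on a double-valued cost
+ * matrix, see the docstring above):
+ *
+ *     cost = [[0.0, 1.5], [2.0, 0.5]]
+ *     rho, varrho, u, v = env.hungarian_LSAPE(cost)
+ */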
+
+/* "gedlibpy.pyx":864
+ *
+ *
+ * def add_random_graph(self, name, classe, list_of_nodes, list_of_edges, ignore_duplicates=True) : # <<<<<<<<<<<<<<
+ * """
+ * Add a Graph (not GXL) on the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_85add_random_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_84add_random_graph[] = "\n\t\t\tAdds a graph (not GXL) to the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges. \n\t\n\t\t\t:param name: The name of the graph to add, can be an empty string\n\t\t\t:param classe: The class of the graph to add, can be an empty string\n\t\t\t:param list_of_nodes: The list of nodes to add\n\t\t\t:param list_of_edges: The list of edges to add\n\t\t\t:param ignore_duplicates: If True, duplicate edges are ignored, otherwise an error is raised if an existing edge is added. True by default\n\t\t\t:type name: string\n\t\t\t:type classe: string\n\t\t\t:type list_of_nodes: list[tuple(size_t, dict{string : string})]\n\t\t\t:type list_of_edges: list[tuple(size_t, size_t, dict{string : string})]\n\t\t\t:type ignore_duplicates: bool\n\t\t\t:return: The ID of the newly added graph\n\t\t\t:rtype: size_t\n\t\n\t\t\t.. note:: The graph must respect the GXL structure. Please see how a GXL graph is constructed. \n\t\t\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_85add_random_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_name = 0;
+ PyObject *__pyx_v_classe = 0;
+ PyObject *__pyx_v_list_of_nodes = 0;
+ PyObject *__pyx_v_list_of_edges = 0;
+ PyObject *__pyx_v_ignore_duplicates = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("add_random_graph (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,&__pyx_n_s_classe,&__pyx_n_s_list_of_nodes,&__pyx_n_s_list_of_edges,&__pyx_n_s_ignore_duplicates,0};
+ PyObject* values[5] = {0,0,0,0,0};
+ values[4] = ((PyObject *)Py_True);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_classe)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_random_graph", 0, 4, 5, 1); __PYX_ERR(0, 864, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_list_of_nodes)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_random_graph", 0, 4, 5, 2); __PYX_ERR(0, 864, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_list_of_edges)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_random_graph", 0, 4, 5, 3); __PYX_ERR(0, 864, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_ignore_duplicates);
+ if (value) { values[4] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add_random_graph") < 0)) __PYX_ERR(0, 864, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_name = values[0];
+ __pyx_v_classe = values[1];
+ __pyx_v_list_of_nodes = values[2];
+ __pyx_v_list_of_edges = values[3];
+ __pyx_v_ignore_duplicates = values[4];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("add_random_graph", 0, 4, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 864, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_random_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_84add_random_graph(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_name, __pyx_v_classe, __pyx_v_list_of_nodes, __pyx_v_list_of_edges, __pyx_v_ignore_duplicates);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_84add_random_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_name, PyObject *__pyx_v_classe, PyObject *__pyx_v_list_of_nodes, PyObject *__pyx_v_list_of_edges, PyObject *__pyx_v_ignore_duplicates) {
+ PyObject *__pyx_v_id = NULL;
+ PyObject *__pyx_v_node = NULL;
+ PyObject *__pyx_v_edge = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ Py_ssize_t __pyx_t_6;
+ PyObject *(*__pyx_t_7)(PyObject *);
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ PyObject *__pyx_t_10 = NULL;
+ PyObject *__pyx_t_11 = NULL;
+ __Pyx_RefNannySetupContext("add_random_graph", 0);
+
+ /* "gedlibpy.pyx":884
+ *
+ * """
+ * id = self.add_graph(name, classe) # <<<<<<<<<<<<<<
+ * for node in list_of_nodes:
+ * self.add_node(id, node[0], node[1])
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_graph); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 884, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ __pyx_t_4 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_4 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_name, __pyx_v_classe};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 884, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_name, __pyx_v_classe};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 884, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_5 = PyTuple_New(2+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 884, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ if (__pyx_t_3) {
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_name);
+ __Pyx_GIVEREF(__pyx_v_name);
+ PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_4, __pyx_v_name);
+ __Pyx_INCREF(__pyx_v_classe);
+ __Pyx_GIVEREF(__pyx_v_classe);
+ PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_classe);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 884, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_id = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":885
+ * """
+ * id = self.add_graph(name, classe)
+ * for node in list_of_nodes: # <<<<<<<<<<<<<<
+ * self.add_node(id, node[0], node[1])
+ * for edge in list_of_edges:
+ */
+ if (likely(PyList_CheckExact(__pyx_v_list_of_nodes)) || PyTuple_CheckExact(__pyx_v_list_of_nodes)) {
+ __pyx_t_1 = __pyx_v_list_of_nodes; __Pyx_INCREF(__pyx_t_1); __pyx_t_6 = 0;
+ __pyx_t_7 = NULL;
+ } else {
+ __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_list_of_nodes); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 885, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_7 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 885, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_7)) {
+ if (likely(PyList_CheckExact(__pyx_t_1))) {
+ if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 885, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 885, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ } else {
+ if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 885, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 885, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ }
+ } else {
+ __pyx_t_2 = __pyx_t_7(__pyx_t_1);
+ if (unlikely(!__pyx_t_2)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 885, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_2);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_node, __pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":886
+ * id = self.add_graph(name, classe)
+ * for node in list_of_nodes:
+ * self.add_node(id, node[0], node[1]) # <<<<<<<<<<<<<<
+ * for edge in list_of_edges:
+ * self.add_edge(id, edge[0], edge[1], edge[2], ignore_duplicates)
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_node); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_node, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_node, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_9 = NULL;
+ __pyx_t_4 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ __pyx_t_4 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_9, __pyx_v_id, __pyx_t_3, __pyx_t_8};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_4, 3+__pyx_t_4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_9, __pyx_v_id, __pyx_t_3, __pyx_t_8};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_4, 3+__pyx_t_4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_10 = PyTuple_New(3+__pyx_t_4); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ if (__pyx_t_9) {
+ __Pyx_GIVEREF(__pyx_t_9); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_9); __pyx_t_9 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_id);
+ __Pyx_GIVEREF(__pyx_v_id);
+ PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_4, __pyx_v_id);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_4, __pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_8);
+ PyTuple_SET_ITEM(__pyx_t_10, 2+__pyx_t_4, __pyx_t_8);
+ __pyx_t_3 = 0;
+ __pyx_t_8 = 0;
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_10, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 886, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":885
+ * """
+ * id = self.add_graph(name, classe)
+ * for node in list_of_nodes: # <<<<<<<<<<<<<<
+ * self.add_node(id, node[0], node[1])
+ * for edge in list_of_edges:
+ */
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":887
+ * for node in list_of_nodes:
+ * self.add_node(id, node[0], node[1])
+ * for edge in list_of_edges: # <<<<<<<<<<<<<<
+ * self.add_edge(id, edge[0], edge[1], edge[2], ignore_duplicates)
+ * return id
+ */
+ if (likely(PyList_CheckExact(__pyx_v_list_of_edges)) || PyTuple_CheckExact(__pyx_v_list_of_edges)) {
+ __pyx_t_1 = __pyx_v_list_of_edges; __Pyx_INCREF(__pyx_t_1); __pyx_t_6 = 0;
+ __pyx_t_7 = NULL;
+ } else {
+ __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_list_of_edges); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 887, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_7 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 887, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_7)) {
+ if (likely(PyList_CheckExact(__pyx_t_1))) {
+ if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 887, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 887, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ } else {
+ if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 887, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 887, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ }
+ } else {
+ __pyx_t_2 = __pyx_t_7(__pyx_t_1);
+ if (unlikely(!__pyx_t_2)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 887, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_2);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_edge, __pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":888
+ * self.add_node(id, node[0], node[1])
+ * for edge in list_of_edges:
+ * self.add_edge(id, edge[0], edge[1], edge[2], ignore_duplicates) # <<<<<<<<<<<<<<
+ * return id
+ *
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_edge); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_10 = __Pyx_GetItemInt(__pyx_v_edge, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_edge, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_edge, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_9 = NULL;
+ __pyx_t_4 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ __pyx_t_4 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[6] = {__pyx_t_9, __pyx_v_id, __pyx_t_10, __pyx_t_8, __pyx_t_3, __pyx_v_ignore_duplicates};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_4, 5+__pyx_t_4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[6] = {__pyx_t_9, __pyx_v_id, __pyx_t_10, __pyx_t_8, __pyx_t_3, __pyx_v_ignore_duplicates};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_4, 5+__pyx_t_4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_11 = PyTuple_New(5+__pyx_t_4); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ if (__pyx_t_9) {
+ __Pyx_GIVEREF(__pyx_t_9); PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_9); __pyx_t_9 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_id);
+ __Pyx_GIVEREF(__pyx_v_id);
+ PyTuple_SET_ITEM(__pyx_t_11, 0+__pyx_t_4, __pyx_v_id);
+ __Pyx_GIVEREF(__pyx_t_10);
+ PyTuple_SET_ITEM(__pyx_t_11, 1+__pyx_t_4, __pyx_t_10);
+ __Pyx_GIVEREF(__pyx_t_8);
+ PyTuple_SET_ITEM(__pyx_t_11, 2+__pyx_t_4, __pyx_t_8);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_11, 3+__pyx_t_4, __pyx_t_3);
+ __Pyx_INCREF(__pyx_v_ignore_duplicates);
+ __Pyx_GIVEREF(__pyx_v_ignore_duplicates);
+ PyTuple_SET_ITEM(__pyx_t_11, 4+__pyx_t_4, __pyx_v_ignore_duplicates);
+ __pyx_t_10 = 0;
+ __pyx_t_8 = 0;
+ __pyx_t_3 = 0;
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_11, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 888, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":887
+ * for node in list_of_nodes:
+ * self.add_node(id, node[0], node[1])
+ * for edge in list_of_edges: # <<<<<<<<<<<<<<
+ * self.add_edge(id, edge[0], edge[1], edge[2], ignore_duplicates)
+ * return id
+ */
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":889
+ * for edge in list_of_edges:
+ * self.add_edge(id, edge[0], edge[1], edge[2], ignore_duplicates)
+ * return id # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_id);
+ __pyx_r = __pyx_v_id;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":864
+ *
+ *
+ * def add_random_graph(self, name, classe, list_of_nodes, list_of_edges, ignore_duplicates=True) : # <<<<<<<<<<<<<<
+ * """
+ * Add a Graph (not GXL) on the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_random_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_id);
+ __Pyx_XDECREF(__pyx_v_node);
+ __Pyx_XDECREF(__pyx_v_edge);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":892
+ *
+ *
+ * def add_nx_graph(self, g, classe, ignore_duplicates=True) : # <<<<<<<<<<<<<<
+ * """
+ * Add a Graph (made by networkx) on the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_87add_nx_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_86add_nx_graph[] = "\n\t\t\tAdd a Graph (made by networkx) on the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges. \n\t\n\t\t\t:param g: The graph to add (networkx graph)\n\t\t\t:param classe: The class of the graph, can be an empty string\n\t\t\t:param ignore_duplicates: If True, duplicate edges are ignored; otherwise an error is raised if an existing edge is added. True by default\n\t\t\t:type g: networkx.graph\n\t\t\t:type classe: string\n\t\t\t:type ignore_duplicates: bool\n\t\t\t:return: The ID of the newly added graph\n\t\t\t:rtype: size_t\n\t\n\t\t\t.. note:: The NX graph must respect the GXL structure. Please see how a GXL graph is constructed. \n\t\t\t\n\t\t";
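+/* Usage sketch: a minimal example of how the add_nx_graph method documented
+ * above could be called from Python. The import path, the no-argument GEDEnv()
+ * constructor and the node/edge attribute names are assumptions; the graph must
+ * carry GXL-style string labels, as the docstring warns.
+ *
+ *     import networkx as nx
+ *     import gedlibpy  # import path assumed; adjust to your build
+ *
+ *     g = nx.Graph(name='triangle')              # labelled toy graph
+ *     g.add_node(0, label='C'); g.add_node(1, label='C'); g.add_node(2, label='O')
+ *     g.add_edge(0, 1, bond='1'); g.add_edge(1, 2, bond='2')
+ *
+ *     env = gedlibpy.GEDEnv()                    # constructor signature assumed
+ *     gid = env.add_nx_graph(g, classe='', ignore_duplicates=True)
+ */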
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_87add_nx_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g = 0;
+ PyObject *__pyx_v_classe = 0;
+ PyObject *__pyx_v_ignore_duplicates = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("add_nx_graph (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g,&__pyx_n_s_classe,&__pyx_n_s_ignore_duplicates,0};
+ PyObject* values[3] = {0,0,0};
+ values[2] = ((PyObject *)Py_True);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_classe)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("add_nx_graph", 0, 2, 3, 1); __PYX_ERR(0, 892, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_ignore_duplicates);
+ if (value) { values[2] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add_nx_graph") < 0)) __PYX_ERR(0, 892, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_g = values[0];
+ __pyx_v_classe = values[1];
+ __pyx_v_ignore_duplicates = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("add_nx_graph", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 892, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_nx_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_86add_nx_graph(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g, __pyx_v_classe, __pyx_v_ignore_duplicates);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_86add_nx_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g, PyObject *__pyx_v_classe, PyObject *__pyx_v_ignore_duplicates) {
+ PyObject *__pyx_v_id = NULL;
+ PyObject *__pyx_v_node = NULL;
+ PyObject *__pyx_v_edge = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ int __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ Py_ssize_t __pyx_t_7;
+ PyObject *(*__pyx_t_8)(PyObject *);
+ PyObject *__pyx_t_9 = NULL;
+ PyObject *__pyx_t_10 = NULL;
+ PyObject *__pyx_t_11 = NULL;
+ PyObject *__pyx_t_12 = NULL;
+ PyObject *__pyx_t_13 = NULL;
+ PyObject *__pyx_t_14 = NULL;
+ __Pyx_RefNannySetupContext("add_nx_graph", 0);
+
+ /* "gedlibpy.pyx":906
+ *
+ * """
+ * id = self.add_graph(g.name, classe) # <<<<<<<<<<<<<<
+ * for node in g.nodes:
+ * self.add_node(id, str(node), g.nodes[node])
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_graph); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 906, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_g, __pyx_n_s_name); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 906, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_t_3, __pyx_v_classe};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 906, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_4, __pyx_t_3, __pyx_v_classe};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 906, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 906, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_4) {
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_t_3);
+ __Pyx_INCREF(__pyx_v_classe);
+ __Pyx_GIVEREF(__pyx_v_classe);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_classe);
+ __pyx_t_3 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 906, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_id = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":907
+ * """
+ * id = self.add_graph(g.name, classe)
+ * for node in g.nodes: # <<<<<<<<<<<<<<
+ * self.add_node(id, str(node), g.nodes[node])
+ * for edge in g.edges:
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_g, __pyx_n_s_nodes); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 907, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) {
+ __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); __pyx_t_7 = 0;
+ __pyx_t_8 = NULL;
+ } else {
+ __pyx_t_7 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 907, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_8 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 907, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_8)) {
+ if (likely(PyList_CheckExact(__pyx_t_2))) {
+ if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_7); __Pyx_INCREF(__pyx_t_1); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(0, 907, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 907, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ } else {
+ if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_7); __Pyx_INCREF(__pyx_t_1); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(0, 907, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_2, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 907, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ }
+ } else {
+ __pyx_t_1 = __pyx_t_8(__pyx_t_2);
+ if (unlikely(!__pyx_t_1)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 907, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_1);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_node, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":908
+ * id = self.add_graph(g.name, classe)
+ * for node in g.nodes:
+ * self.add_node(id, str(node), g.nodes[node]) # <<<<<<<<<<<<<<
+ * for edge in g.edges:
+ * self.add_edge(id, str(edge[0]), str(edge[1]), g.get_edge_data(edge[0], edge[1]), ignore_duplicates)
+ */
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_node); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_v_node); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_g, __pyx_n_s_nodes); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_9 = __Pyx_PyObject_GetItem(__pyx_t_4, __pyx_v_node); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_6);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_6, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_6)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_id, __pyx_t_3, __pyx_t_9};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_id, __pyx_t_3, __pyx_t_9};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_10 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ if (__pyx_t_4) {
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_id);
+ __Pyx_GIVEREF(__pyx_v_id);
+ PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_5, __pyx_v_id);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_5, __pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_9);
+ PyTuple_SET_ITEM(__pyx_t_10, 2+__pyx_t_5, __pyx_t_9);
+ __pyx_t_3 = 0;
+ __pyx_t_9 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_10, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 908, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":907
+ * """
+ * id = self.add_graph(g.name, classe)
+ * for node in g.nodes: # <<<<<<<<<<<<<<
+ * self.add_node(id, str(node), g.nodes[node])
+ * for edge in g.edges:
+ */
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":909
+ * for node in g.nodes:
+ * self.add_node(id, str(node), g.nodes[node])
+ * for edge in g.edges: # <<<<<<<<<<<<<<
+ * self.add_edge(id, str(edge[0]), str(edge[1]), g.get_edge_data(edge[0], edge[1]), ignore_duplicates)
+ * return id
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_g, __pyx_n_s_edges); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 909, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) {
+ __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); __pyx_t_7 = 0;
+ __pyx_t_8 = NULL;
+ } else {
+ __pyx_t_7 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 909, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_8 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 909, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_8)) {
+ if (likely(PyList_CheckExact(__pyx_t_1))) {
+ if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(0, 909, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 909, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ } else {
+ if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(0, 909, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 909, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ }
+ } else {
+ __pyx_t_2 = __pyx_t_8(__pyx_t_1);
+ if (unlikely(!__pyx_t_2)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 909, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_2);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_edge, __pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":910
+ * self.add_node(id, str(node), g.nodes[node])
+ * for edge in g.edges:
+ * self.add_edge(id, str(edge[0]), str(edge[1]), g.get_edge_data(edge[0], edge[1]), ignore_duplicates) # <<<<<<<<<<<<<<
+ * return id
+ *
+ */
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_edge); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_10 = __Pyx_GetItemInt(__pyx_v_edge, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_9 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_t_10); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __pyx_t_10 = __Pyx_GetItemInt(__pyx_v_edge, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_t_10); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_g, __pyx_n_s_get_edge_data); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_11 = __Pyx_GetItemInt(__pyx_v_edge, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_12 = __Pyx_GetItemInt(__pyx_v_edge, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __pyx_t_13 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_13 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_13)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_13);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_13, __pyx_t_11, __pyx_t_12};
+ __pyx_t_10 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0;
+ __Pyx_GOTREF(__pyx_t_10);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_13, __pyx_t_11, __pyx_t_12};
+ __pyx_t_10 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0;
+ __Pyx_GOTREF(__pyx_t_10);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_14 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_14);
+ if (__pyx_t_13) {
+ __Pyx_GIVEREF(__pyx_t_13); PyTuple_SET_ITEM(__pyx_t_14, 0, __pyx_t_13); __pyx_t_13 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_11);
+ PyTuple_SET_ITEM(__pyx_t_14, 0+__pyx_t_5, __pyx_t_11);
+ __Pyx_GIVEREF(__pyx_t_12);
+ PyTuple_SET_ITEM(__pyx_t_14, 1+__pyx_t_5, __pyx_t_12);
+ __pyx_t_11 = 0;
+ __pyx_t_12 = 0;
+ __pyx_t_10 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_14, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_6);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_6, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_6)) {
+ PyObject *__pyx_temp[6] = {__pyx_t_4, __pyx_v_id, __pyx_t_9, __pyx_t_3, __pyx_t_10, __pyx_v_ignore_duplicates};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_5, 5+__pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) {
+ PyObject *__pyx_temp[6] = {__pyx_t_4, __pyx_v_id, __pyx_t_9, __pyx_t_3, __pyx_t_10, __pyx_v_ignore_duplicates};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_5, 5+__pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_14 = PyTuple_New(5+__pyx_t_5); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_14);
+ if (__pyx_t_4) {
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_14, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_id);
+ __Pyx_GIVEREF(__pyx_v_id);
+ PyTuple_SET_ITEM(__pyx_t_14, 0+__pyx_t_5, __pyx_v_id);
+ __Pyx_GIVEREF(__pyx_t_9);
+ PyTuple_SET_ITEM(__pyx_t_14, 1+__pyx_t_5, __pyx_t_9);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_14, 2+__pyx_t_5, __pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_10);
+ PyTuple_SET_ITEM(__pyx_t_14, 3+__pyx_t_5, __pyx_t_10);
+ __Pyx_INCREF(__pyx_v_ignore_duplicates);
+ __Pyx_GIVEREF(__pyx_v_ignore_duplicates);
+ PyTuple_SET_ITEM(__pyx_t_14, 4+__pyx_t_5, __pyx_v_ignore_duplicates);
+ __pyx_t_9 = 0;
+ __pyx_t_3 = 0;
+ __pyx_t_10 = 0;
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_14, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 910, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":909
+ * for node in g.nodes:
+ * self.add_node(id, str(node), g.nodes[node])
+ * for edge in g.edges: # <<<<<<<<<<<<<<
+ * self.add_edge(id, str(edge[0]), str(edge[1]), g.get_edge_data(edge[0], edge[1]), ignore_duplicates)
+ * return id
+ */
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":911
+ * for edge in g.edges:
+ * self.add_edge(id, str(edge[0]), str(edge[1]), g.get_edge_data(edge[0], edge[1]), ignore_duplicates)
+ * return id # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_id);
+ __pyx_r = __pyx_v_id;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":892
+ *
+ *
+ * def add_nx_graph(self, g, classe, ignore_duplicates=True) : # <<<<<<<<<<<<<<
+ * """
+ * Add a Graph (made by networkx) on the environment. Be careful to respect the same format as GXL graphs for labelling nodes and edges.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_XDECREF(__pyx_t_12);
+ __Pyx_XDECREF(__pyx_t_13);
+ __Pyx_XDECREF(__pyx_t_14);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.add_nx_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_id);
+ __Pyx_XDECREF(__pyx_v_node);
+ __Pyx_XDECREF(__pyx_v_edge);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":914
+ *
+ *
+ * def compute_ged_on_two_graphs(self, g1, g2, edit_cost, method, options, init_option="EAGER_WITHOUT_SHUFFLED_COPIES") : # <<<<<<<<<<<<<<
+ * """
+ * Computes the edit distance between two NX graphs.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_89compute_ged_on_two_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_88compute_ged_on_two_graphs[] = "\n\t\t\tComputes the edit distance between two NX graphs. \n\t\t\t\n\t\t\t:param g1: The first graph to add and compute\n\t\t\t:param g2: The second graph to add and compute\n\t\t\t:param edit_cost: The name of the edit cost function\n\t\t\t:param method: The name of the computation method\n\t\t\t:param options: The options of the method (like bash options); can be an empty string\n\t\t\t:param init_option: The name of the init option, \"EAGER_WITHOUT_SHUFFLED_COPIES\" by default\n\t\t\t:type g1: networkx.graph\n\t\t\t:type g2: networkx.graph\n\t\t\t:type edit_cost: string\n\t\t\t:type method: string\n\t\t\t:type options: string\n\t\t\t:type init_option: string\n\t\t\t:return: The edit distance between the two graphs and the node map between them. \n\t\t\t:rtype: double, list[tuple(size_t, size_t)]\n\t\n\t\t\t.. seealso:: list_of_edit_cost_options, list_of_method_options, list_of_init_options \n\t\t\t.. note:: Make sure each parameter exists for your architecture and appears in these lists: list_of_edit_cost_options, list_of_method_options, list_of_init_options. The structure of graphs must be similar to GXL. \n\t\t\t\n\t\t";
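+/* Usage sketch: a minimal example of how compute_ged_on_two_graphs, as documented
+ * above, could be driven from Python. The import path, the no-argument GEDEnv()
+ * constructor and the "CONSTANT"/"BIPARTITE" edit-cost and method names are
+ * assumptions; check list_of_edit_cost_options and list_of_method_options for the
+ * values supported by your build.
+ *
+ *     import gedlibpy  # import path assumed
+ *
+ *     env = gedlibpy.GEDEnv()
+ *     dist, node_map = env.compute_ged_on_two_graphs(
+ *         g1, g2,                      # two labelled networkx graphs
+ *         edit_cost='CONSTANT',        # assumed cost name
+ *         method='BIPARTITE',          # assumed method name
+ *         options='',
+ *         init_option='EAGER_WITHOUT_SHUFFLED_COPIES')
+ */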
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_89compute_ged_on_two_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g1 = 0;
+ PyObject *__pyx_v_g2 = 0;
+ PyObject *__pyx_v_edit_cost = 0;
+ PyObject *__pyx_v_method = 0;
+ PyObject *__pyx_v_options = 0;
+ PyObject *__pyx_v_init_option = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("compute_ged_on_two_graphs (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g1,&__pyx_n_s_g2,&__pyx_n_s_edit_cost,&__pyx_n_s_method,&__pyx_n_s_options,&__pyx_n_s_init_option,0};
+ PyObject* values[6] = {0,0,0,0,0,0};
+ values[5] = ((PyObject *)__pyx_n_u_EAGER_WITHOUT_SHUFFLED_COPIES);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g1)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g2)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_ged_on_two_graphs", 0, 5, 6, 1); __PYX_ERR(0, 914, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edit_cost)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_ged_on_two_graphs", 0, 5, 6, 2); __PYX_ERR(0, 914, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_method)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_ged_on_two_graphs", 0, 5, 6, 3); __PYX_ERR(0, 914, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (likely((values[4] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_options)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_ged_on_two_graphs", 0, 5, 6, 4); __PYX_ERR(0, 914, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 5:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_init_option);
+ if (value) { values[5] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "compute_ged_on_two_graphs") < 0)) __PYX_ERR(0, 914, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_g1 = values[0];
+ __pyx_v_g2 = values[1];
+ __pyx_v_edit_cost = values[2];
+ __pyx_v_method = values[3];
+ __pyx_v_options = values[4];
+ __pyx_v_init_option = values[5];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("compute_ged_on_two_graphs", 0, 5, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 914, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_ged_on_two_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_88compute_ged_on_two_graphs(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g1, __pyx_v_g2, __pyx_v_edit_cost, __pyx_v_method, __pyx_v_options, __pyx_v_init_option);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_88compute_ged_on_two_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g1, PyObject *__pyx_v_g2, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_method, PyObject *__pyx_v_options, PyObject *__pyx_v_init_option) {
+ PyObject *__pyx_v_g = NULL;
+ PyObject *__pyx_v_h = NULL;
+ PyObject *__pyx_v_resDistance = NULL;
+ PyObject *__pyx_v_resMapping = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ int __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("compute_ged_on_two_graphs", 0);
+
+ /* "gedlibpy.pyx":937
+ *
+ * """
+ * if self.is_initialized() : # <<<<<<<<<<<<<<
+ * self.restart_env()
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_is_initialized); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 937, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 937, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 937, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_4) {
+
+ /* "gedlibpy.pyx":938
+ * """
+ * if self.is_initialized() :
+ * self.restart_env() # <<<<<<<<<<<<<<
+ *
+ * g = self.add_nx_graph(g1, "")
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_restart_env); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 938, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 938, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":937
+ *
+ * """
+ * if self.is_initialized() : # <<<<<<<<<<<<<<
+ * self.restart_env()
+ *
+ */
+ }
+
+ /* "gedlibpy.pyx":940
+ * self.restart_env()
+ *
+ * g = self.add_nx_graph(g1, "") # <<<<<<<<<<<<<<
+ * h = self.add_nx_graph(g2, "")
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_nx_graph); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 940, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_g1, __pyx_kp_u_};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 940, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_g1, __pyx_kp_u_};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 940, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 940, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_3) {
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g1);
+ __Pyx_GIVEREF(__pyx_v_g1);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_g1);
+ __Pyx_INCREF(__pyx_kp_u_);
+ __Pyx_GIVEREF(__pyx_kp_u_);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_kp_u_);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 940, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_g = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":941
+ *
+ * g = self.add_nx_graph(g1, "")
+ * h = self.add_nx_graph(g2, "") # <<<<<<<<<<<<<<
+ *
+ * self.set_edit_cost(edit_cost)
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_nx_graph); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 941, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_g2, __pyx_kp_u_};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 941, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_g2, __pyx_kp_u_};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 941, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 941, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g2);
+ __Pyx_GIVEREF(__pyx_v_g2);
+ PyTuple_SET_ITEM(__pyx_t_3, 0+__pyx_t_5, __pyx_v_g2);
+ __Pyx_INCREF(__pyx_kp_u_);
+ __Pyx_GIVEREF(__pyx_kp_u_);
+ PyTuple_SET_ITEM(__pyx_t_3, 1+__pyx_t_5, __pyx_kp_u_);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 941, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_h = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":943
+ * h = self.add_nx_graph(g2, "")
+ *
+ * self.set_edit_cost(edit_cost) # <<<<<<<<<<<<<<
+ * self.init(init_option)
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set_edit_cost); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 943, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_edit_cost) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_edit_cost);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 943, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":944
+ *
+ * self.set_edit_cost(edit_cost)
+ * self.init(init_option) # <<<<<<<<<<<<<<
+ *
+ * self.set_method(method, options)
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_init); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 944, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_init_option) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_init_option);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 944, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":946
+ * self.init(init_option)
+ *
+ * self.set_method(method, options) # <<<<<<<<<<<<<<
+ * self.init_method()
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set_method); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 946, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_method, __pyx_v_options};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 946, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_method, __pyx_v_options};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 946, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 946, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_3) {
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_method);
+ __Pyx_GIVEREF(__pyx_v_method);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_method);
+ __Pyx_INCREF(__pyx_v_options);
+ __Pyx_GIVEREF(__pyx_v_options);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_options);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 946, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":947
+ *
+ * self.set_method(method, options)
+ * self.init_method() # <<<<<<<<<<<<<<
+ *
+ * resDistance = 0
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_init_method); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 947, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_6) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 947, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":949
+ * self.init_method()
+ *
+ * resDistance = 0 # <<<<<<<<<<<<<<
+ * resMapping = []
+ * self.run_method(g, h)
+ */
+ __Pyx_INCREF(__pyx_int_0);
+ __pyx_v_resDistance = __pyx_int_0;
+
+ /* "gedlibpy.pyx":950
+ *
+ * resDistance = 0
+ * resMapping = [] # <<<<<<<<<<<<<<
+ * self.run_method(g, h)
+ * resDistance = self.get_upper_bound(g, h)
+ */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 950, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_resMapping = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":951
+ * resDistance = 0
+ * resMapping = []
+ * self.run_method(g, h) # <<<<<<<<<<<<<<
+ * resDistance = self.get_upper_bound(g, h)
+ * resMapping = self.get_node_map(g, h)
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_run_method); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 951, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 951, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 951, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 951, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_3, 0+__pyx_t_5, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_3, 1+__pyx_t_5, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 951, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":952
+ * resMapping = []
+ * self.run_method(g, h)
+ * resDistance = self.get_upper_bound(g, h) # <<<<<<<<<<<<<<
+ * resMapping = self.get_node_map(g, h)
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_upper_bound); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 952, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 952, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 952, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 952, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_3) {
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 952, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF_SET(__pyx_v_resDistance, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":953
+ * self.run_method(g, h)
+ * resDistance = self.get_upper_bound(g, h)
+ * resMapping = self.get_node_map(g, h) # <<<<<<<<<<<<<<
+ *
+ * return resDistance, resMapping
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_node_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 953, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 953, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 953, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 953, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_3, 0+__pyx_t_5, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_3, 1+__pyx_t_5, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 953, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF_SET(__pyx_v_resMapping, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":955
+ * resMapping = self.get_node_map(g, h)
+ *
+ * return resDistance, resMapping # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 955, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_INCREF(__pyx_v_resDistance);
+ __Pyx_GIVEREF(__pyx_v_resDistance);
+ PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_resDistance);
+ __Pyx_INCREF(__pyx_v_resMapping);
+ __Pyx_GIVEREF(__pyx_v_resMapping);
+ PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_resMapping);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":914
+ *
+ *
+ * def compute_ged_on_two_graphs(self, g1, g2, edit_cost, method, options, init_option="EAGER_WITHOUT_SHUFFLED_COPIES") : # <<<<<<<<<<<<<<
+ * """
+ * Computes the edit distance between two NX graphs.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_ged_on_two_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_g);
+ __Pyx_XDECREF(__pyx_v_h);
+ __Pyx_XDECREF(__pyx_v_resDistance);
+ __Pyx_XDECREF(__pyx_v_resMapping);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":958
+ *
+ *
+ * def compute_edit_distance_on_nx_graphs(self, dataset, classes, edit_cost, method, options, init_option="EAGER_WITHOUT_SHUFFLED_COPIES") : # <<<<<<<<<<<<<<
+ * """
+ *
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_91compute_edit_distance_on_nx_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_90compute_edit_distance_on_nx_graphs[] = "\n\t\n\t\t\tComputes all the edit distances between the NX graphs in the dataset. \n\t\t\t\n\t\t\t:param dataset: The list of graphs to add and compute\n\t\t\t:param classes: The class of all the graphs, can be an empty string\n\t\t\t:param edit_cost: The name of the edit cost function\n\t\t\t:param method: The name of the computation method\n\t\t\t:param options: The options of the method (like bash options); can be an empty string\n\t\t\t:param init_option: The name of the init option, \"EAGER_WITHOUT_SHUFFLED_COPIES\" by default\n\t\t\t:type dataset: list[networkx.graph]\n\t\t\t:type classes: string\n\t\t\t:type edit_cost: string\n\t\t\t:type method: string\n\t\t\t:type options: string\n\t\t\t:type init_option: string\n\t\t\t:return: Two matrices, the first with the edit distances between graphs and the second with the node maps between graphs. The result between g and h is at the [g][h] coordinates.\n\t\t\t:rtype: list[list[double]], list[list[list[tuple(size_t, size_t)]]]\n\t\n\t\t\t.. seealso:: list_of_edit_cost_options, list_of_method_options, list_of_init_options\n\t\t\t.. note:: Make sure each parameter exists for your architecture and appears in these lists: list_of_edit_cost_options, list_of_method_options, list_of_init_options. The structure of graphs must be similar to GXL. \n\t\t\t\n\t\t";
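+/* Usage sketch: a minimal example of how compute_edit_distance_on_nx_graphs, as
+ * documented above, could be called on a small dataset. The import path, the
+ * no-argument GEDEnv() constructor and the edit-cost/method names are assumptions;
+ * `dataset` is a list of labelled networkx graphs.
+ *
+ *     import gedlibpy  # import path assumed
+ *
+ *     env = gedlibpy.GEDEnv()
+ *     distances, node_maps = env.compute_edit_distance_on_nx_graphs(
+ *         dataset, classes='', edit_cost='CONSTANT', method='BIPARTITE', options='')
+ *     # distances[g][h] holds the edit distance between graphs g and h
+ */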
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_91compute_edit_distance_on_nx_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_dataset = 0;
+ PyObject *__pyx_v_classes = 0;
+ PyObject *__pyx_v_edit_cost = 0;
+ PyObject *__pyx_v_method = 0;
+ PyObject *__pyx_v_options = 0;
+ PyObject *__pyx_v_init_option = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("compute_edit_distance_on_nx_graphs (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dataset,&__pyx_n_s_classes,&__pyx_n_s_edit_cost,&__pyx_n_s_method,&__pyx_n_s_options,&__pyx_n_s_init_option,0};
+ PyObject* values[6] = {0,0,0,0,0,0};
+ values[5] = ((PyObject *)__pyx_n_u_EAGER_WITHOUT_SHUFFLED_COPIES);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dataset)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_classes)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_nx_graphs", 0, 5, 6, 1); __PYX_ERR(0, 958, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edit_cost)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_nx_graphs", 0, 5, 6, 2); __PYX_ERR(0, 958, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_method)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_nx_graphs", 0, 5, 6, 3); __PYX_ERR(0, 958, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (likely((values[4] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_options)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_nx_graphs", 0, 5, 6, 4); __PYX_ERR(0, 958, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 5:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_init_option);
+ if (value) { values[5] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "compute_edit_distance_on_nx_graphs") < 0)) __PYX_ERR(0, 958, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_dataset = values[0];
+ __pyx_v_classes = values[1];
+ __pyx_v_edit_cost = values[2];
+ __pyx_v_method = values[3];
+ __pyx_v_options = values[4];
+ __pyx_v_init_option = values[5];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_nx_graphs", 0, 5, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 958, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_edit_distance_on_nx_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_90compute_edit_distance_on_nx_graphs(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_dataset, __pyx_v_classes, __pyx_v_edit_cost, __pyx_v_method, __pyx_v_options, __pyx_v_init_option);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_90compute_edit_distance_on_nx_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_dataset, PyObject *__pyx_v_classes, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_method, PyObject *__pyx_v_options, PyObject *__pyx_v_init_option) {
+ PyObject *__pyx_v_graph = NULL;
+ PyObject *__pyx_v_listID = NULL;
+ PyObject *__pyx_v_resDistance = NULL;
+ PyObject *__pyx_v_resMapping = NULL;
+ PyObject *__pyx_v_g = NULL;
+ PyObject *__pyx_v_h = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ Py_ssize_t __pyx_t_5;
+ PyObject *(*__pyx_t_6)(PyObject *);
+ PyObject *__pyx_t_7 = NULL;
+ int __pyx_t_8;
+ PyObject *__pyx_t_9 = NULL;
+ Py_ssize_t __pyx_t_10;
+ PyObject *(*__pyx_t_11)(PyObject *);
+ PyObject *__pyx_t_12 = NULL;
+ __Pyx_RefNannySetupContext("compute_edit_distance_on_nx_graphs", 0);
+
+ /* "gedlibpy.pyx":982
+ *
+ * """
+ * if self.is_initialized() : # <<<<<<<<<<<<<<
+ * self.restart_env()
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_is_initialized); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 982, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 982, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 982, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_4) {
+
+ /* "gedlibpy.pyx":983
+ * """
+ * if self.is_initialized() :
+ * self.restart_env() # <<<<<<<<<<<<<<
+ *
+ * print("Loading graphs in progress...")
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_restart_env); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 983, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 983, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":982
+ *
+ * """
+ * if self.is_initialized() : # <<<<<<<<<<<<<<
+ * self.restart_env()
+ *
+ */
+ }
+
+ /* "gedlibpy.pyx":985
+ * self.restart_env()
+ *
+ * print("Loading graphs in progress...") # <<<<<<<<<<<<<<
+ * for graph in dataset :
+ * self.add_nx_graph(graph, classes)
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 985, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":986
+ *
+ * print("Loading graphs in progress...")
+ * for graph in dataset : # <<<<<<<<<<<<<<
+ * self.add_nx_graph(graph, classes)
+ * listID = self.graph_ids()
+ */
+ if (likely(PyList_CheckExact(__pyx_v_dataset)) || PyTuple_CheckExact(__pyx_v_dataset)) {
+ __pyx_t_1 = __pyx_v_dataset; __Pyx_INCREF(__pyx_t_1); __pyx_t_5 = 0;
+ __pyx_t_6 = NULL;
+ } else {
+ __pyx_t_5 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_dataset); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 986, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 986, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_6)) {
+ if (likely(PyList_CheckExact(__pyx_t_1))) {
+ if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_5); __Pyx_INCREF(__pyx_t_2); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(0, 986, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 986, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ } else {
+ if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_5); __Pyx_INCREF(__pyx_t_2); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(0, 986, __pyx_L1_error)
+ #else
+ __pyx_t_2 = PySequence_ITEM(__pyx_t_1, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 986, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ #endif
+ }
+ } else {
+ __pyx_t_2 = __pyx_t_6(__pyx_t_1);
+ if (unlikely(!__pyx_t_2)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 986, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_2);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_graph, __pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":987
+ * print("Loading graphs in progress...")
+ * for graph in dataset :
+ * self.add_nx_graph(graph, classes) # <<<<<<<<<<<<<<
+ * listID = self.graph_ids()
+ * print("Graphs loaded ! ")
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_nx_graph); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 987, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_7 = NULL;
+ __pyx_t_8 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ __pyx_t_8 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_graph, __pyx_v_classes};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 987, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_graph, __pyx_v_classes};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 987, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ {
+ __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 987, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ if (__pyx_t_7) {
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_graph);
+ __Pyx_GIVEREF(__pyx_v_graph);
+ PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_v_graph);
+ __Pyx_INCREF(__pyx_v_classes);
+ __Pyx_GIVEREF(__pyx_v_classes);
+ PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_classes);
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 987, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":986
+ *
+ * print("Loading graphs in progress...")
+ * for graph in dataset : # <<<<<<<<<<<<<<
+ * self.add_nx_graph(graph, classes)
+ * listID = self.graph_ids()
+ */
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":988
+ * for graph in dataset :
+ * self.add_nx_graph(graph, classes)
+ * listID = self.graph_ids() # <<<<<<<<<<<<<<
+ * print("Graphs loaded ! ")
+ * print("Number of graphs = " + str(listID[1]))
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_graph_ids); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 988, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 988, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_listID = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":989
+ * self.add_nx_graph(graph, classes)
+ * listID = self.graph_ids()
+ * print("Graphs loaded ! ") # <<<<<<<<<<<<<<
+ * print("Number of graphs = " + str(listID[1]))
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 989, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":990
+ * listID = self.graph_ids()
+ * print("Graphs loaded ! ")
+ * print("Number of graphs = " + str(listID[1])) # <<<<<<<<<<<<<<
+ *
+ * self.set_edit_cost(edit_cost)
+ */
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_listID, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 990, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 990, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Number_of_graphs, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 990, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 990, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":992
+ * print("Number of graphs = " + str(listID[1]))
+ *
+ * self.set_edit_cost(edit_cost) # <<<<<<<<<<<<<<
+ * print("Initialization in progress...")
+ * self.init(init_option)
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set_edit_cost); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 992, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, __pyx_v_edit_cost) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_v_edit_cost);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 992, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":993
+ *
+ * self.set_edit_cost(edit_cost)
+ * print("Initialization in progress...") # <<<<<<<<<<<<<<
+ * self.init(init_option)
+ * print("Initialization terminated !")
+ */
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 993, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":994
+ * self.set_edit_cost(edit_cost)
+ * print("Initialization in progress...")
+ * self.init(init_option) # <<<<<<<<<<<<<<
+ * print("Initialization terminated !")
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_init); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 994, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, __pyx_v_init_option) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_v_init_option);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 994, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":995
+ * print("Initialization in progress...")
+ * self.init(init_option)
+ * print("Initialization terminated !") # <<<<<<<<<<<<<<
+ *
+ * self.set_method(method, options)
+ */
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 995, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":997
+ * print("Initialization terminated !")
+ *
+ * self.set_method(method, options) # <<<<<<<<<<<<<<
+ * self.init_method()
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set_method); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 997, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = NULL;
+ __pyx_t_8 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_8 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_method, __pyx_v_options};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 997, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_method, __pyx_v_options};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 997, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ {
+ __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 997, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ if (__pyx_t_3) {
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_method);
+ __Pyx_GIVEREF(__pyx_v_method);
+ PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_v_method);
+ __Pyx_INCREF(__pyx_v_options);
+ __Pyx_GIVEREF(__pyx_v_options);
+ PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_options);
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 997, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":998
+ *
+ * self.set_method(method, options)
+ * self.init_method() # <<<<<<<<<<<<<<
+ *
+ * resDistance = [[]]
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_init_method); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 998, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_9 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_9) ? __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_9) : __Pyx_PyObject_CallNoArg(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 998, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1000
+ * self.init_method()
+ *
+ * resDistance = [[]] # <<<<<<<<<<<<<<
+ * resMapping = [[]]
+ * for g in range(listID[0], listID[1]) :
+ */
+ __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1000, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1000, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyList_SET_ITEM(__pyx_t_1, 0, __pyx_t_2);
+ __pyx_t_2 = 0;
+ __pyx_v_resDistance = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1001
+ *
+ * resDistance = [[]]
+ * resMapping = [[]] # <<<<<<<<<<<<<<
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1001, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1001, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
+ __pyx_t_1 = 0;
+ __pyx_v_resMapping = ((PyObject*)__pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1002
+ * resDistance = [[]]
+ * resMapping = [[]]
+ * for g in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) :
+ */
+ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_listID, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_listID, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_1);
+ __pyx_t_2 = 0;
+ __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_range, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) {
+ __pyx_t_9 = __pyx_t_1; __Pyx_INCREF(__pyx_t_9); __pyx_t_5 = 0;
+ __pyx_t_6 = NULL;
+ } else {
+ __pyx_t_5 = -1; __pyx_t_9 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_6 = Py_TYPE(__pyx_t_9)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_6)) {
+ if (likely(PyList_CheckExact(__pyx_t_9))) {
+ if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_9)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyList_GET_ITEM(__pyx_t_9, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_9, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ } else {
+ if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_9)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_9, __pyx_t_5); __Pyx_INCREF(__pyx_t_1); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_9, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1002, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ }
+ } else {
+ __pyx_t_1 = __pyx_t_6(__pyx_t_9);
+ if (unlikely(!__pyx_t_1)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1002, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_1);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_g, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1003
+ * resMapping = [[]]
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.") # <<<<<<<<<<<<<<
+ * for h in range(listID[0], listID[1]) :
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ */
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_v_g); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1003, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Computation_between_graph, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1003, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyUnicode_Concat(__pyx_t_2, __pyx_kp_u_with_all_the_others_including_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1003, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1003, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1004
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g, h)
+ */
+ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_listID, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_listID, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
+ __pyx_t_2 = 0;
+ __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_range, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) {
+ __pyx_t_3 = __pyx_t_1; __Pyx_INCREF(__pyx_t_3); __pyx_t_10 = 0;
+ __pyx_t_11 = NULL;
+ } else {
+ __pyx_t_10 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_11 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_11)) {
+ if (likely(PyList_CheckExact(__pyx_t_3))) {
+ if (__pyx_t_10 >= PyList_GET_SIZE(__pyx_t_3)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_10); __Pyx_INCREF(__pyx_t_1); __pyx_t_10++; if (unlikely(0 < 0)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_3, __pyx_t_10); __pyx_t_10++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ } else {
+ if (__pyx_t_10 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_10); __Pyx_INCREF(__pyx_t_1); __pyx_t_10++; if (unlikely(0 < 0)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_3, __pyx_t_10); __pyx_t_10++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1004, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ }
+ } else {
+ __pyx_t_1 = __pyx_t_11(__pyx_t_3);
+ if (unlikely(!__pyx_t_1)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1004, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_1);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_h, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1006
+ * for h in range(listID[0], listID[1]) :
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g, h) # <<<<<<<<<<<<<<
+ * resDistance[g][h] = self.get_upper_bound(g, h)
+ * resMapping[g][h] = self.get_node_map(g, h)
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_run_method); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1006, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_7 = NULL;
+ __pyx_t_8 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_8 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1006, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1006, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_12 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1006, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ if (__pyx_t_7) {
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_8, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_8, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_12, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1006, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1007
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g, h)
+ * resDistance[g][h] = self.get_upper_bound(g, h) # <<<<<<<<<<<<<<
+ * resMapping[g][h] = self.get_node_map(g, h)
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_upper_bound); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_12 = NULL;
+ __pyx_t_8 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_12)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_12);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_8 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_12, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_12, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ if (__pyx_t_12) {
+ __Pyx_GIVEREF(__pyx_t_12); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_12); __pyx_t_12 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_8, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_8, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_resDistance, __pyx_v_g); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ if (unlikely(PyObject_SetItem(__pyx_t_2, __pyx_v_h, __pyx_t_1) < 0)) __PYX_ERR(0, 1007, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1008
+ * self.run_method(g, h)
+ * resDistance[g][h] = self.get_upper_bound(g, h)
+ * resMapping[g][h] = self.get_node_map(g, h) # <<<<<<<<<<<<<<
+ *
+ * print("Finish ! The return contains edit distances and NodeMap but you can check the result with graphs'ID until you restart the environment")
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_node_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_7 = NULL;
+ __pyx_t_8 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_8 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_12 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ if (__pyx_t_7) {
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_8, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_8, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_12, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_resMapping, __pyx_v_g); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ if (unlikely(PyObject_SetItem(__pyx_t_2, __pyx_v_h, __pyx_t_1) < 0)) __PYX_ERR(0, 1008, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1004
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g, h)
+ */
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1002
+ * resDistance = [[]]
+ * resMapping = [[]]
+ * for g in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) :
+ */
+ }
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+
+ /* "gedlibpy.pyx":1010
+ * resMapping[g][h] = self.get_node_map(g, h)
+ *
+ * print("Finish ! The return contains edit distances and NodeMap but you can check the result with graphs'ID until you restart the environment") # <<<<<<<<<<<<<<
+ * return resDistance, resMapping
+ *
+ */
+ __pyx_t_9 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1010, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+
+ /* "gedlibpy.pyx":1011
+ *
+ * print("Finish ! The return contains edit distances and NodeMap but you can check the result with graphs'ID until you restart the environment")
+ * return resDistance, resMapping # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1011, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_INCREF(__pyx_v_resDistance);
+ __Pyx_GIVEREF(__pyx_v_resDistance);
+ PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_v_resDistance);
+ __Pyx_INCREF(__pyx_v_resMapping);
+ __Pyx_GIVEREF(__pyx_v_resMapping);
+ PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_v_resMapping);
+ __pyx_r = __pyx_t_9;
+ __pyx_t_9 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":958
+ *
+ *
+ * def compute_edit_distance_on_nx_graphs(self, dataset, classes, edit_cost, method, options, init_option="EAGER_WITHOUT_SHUFFLED_COPIES") : # <<<<<<<<<<<<<<
+ * """
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_12);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_edit_distance_on_nx_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_graph);
+ __Pyx_XDECREF(__pyx_v_listID);
+ __Pyx_XDECREF(__pyx_v_resDistance);
+ __Pyx_XDECREF(__pyx_v_resMapping);
+ __Pyx_XDECREF(__pyx_v_g);
+ __Pyx_XDECREF(__pyx_v_h);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1014
+ *
+ *
+ * def compute_edit_distance_on_GXl_graphs(self, path_folder, path_XML, edit_cost, method, options="", init_option="EAGER_WITHOUT_SHUFFLED_COPIES") : # <<<<<<<<<<<<<<
+ * """
+ * Computes all the edit distances between the GXL graphs in the folder listed by the XML file.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_93compute_edit_distance_on_GXl_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_92compute_edit_distance_on_GXl_graphs[] = "\n\t\t\tComputes all the edit distances between the GXL graphs in the folder listed by the XML file. \n\t\t\t\n\t\t\t:param path_folder: The path of the folder which contains the GXL graphs\n\t\t\t:param path_XML: The path of the XML file which indicates which graphs you want to load\n\t\t\t:param edit_cost: The name of the edit cost function\n\t\t\t:param method: The name of the computation method\n\t\t\t:param options: The options of the method (like bash options), an empty string by default\n\t\t\t:param init_option: The name of the init option, \"EAGER_WITHOUT_SHUFFLED_COPIES\" by default\n\t\t\t:type path_folder: string\n\t\t\t:type path_XML: string\n\t\t\t:type edit_cost: string\n\t\t\t:type method: string\n\t\t\t:type options: string\n\t\t\t:type init_option: string\n\t\t\t:return: The first and last-1 IDs of the loaded graphs\n\t\t\t:rtype: tuple(size_t, size_t)\n\t\n\t\t\t.. seealso:: list_of_edit_cost_options, list_of_method_options, list_of_init_options\n\t\t\t.. note:: Make sure each parameter exists for your architecture and appears in these lists: list_of_edit_cost_options, list_of_method_options, list_of_init_options. \n\t\t\t\n\t\t";
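+/* A minimal, hedged usage sketch of the Python-level method wrapped below,
+ * kept as a comment so the generated C stays valid. The paths and the
+ * "CHEM_1"/"IPFP" option strings are illustrative assumptions; valid values
+ * should be taken from list_of_edit_cost_options and list_of_method_options.
+ *
+ *     from gedlibpy import GEDEnv
+ *
+ *     env = GEDEnv()
+ *     env.compute_edit_distance_on_GXl_graphs(
+ *         "path/to/gxl_folder/", "path/to/collection.xml", "CHEM_1", "IPFP")
+ *     # Pairwise results can then be queried by graph ID, e.g. with
+ *     # env.get_upper_bound(g, h) and env.get_node_map(g, h).
+ */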
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_93compute_edit_distance_on_GXl_graphs(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_path_folder = 0;
+ PyObject *__pyx_v_path_XML = 0;
+ PyObject *__pyx_v_edit_cost = 0;
+ PyObject *__pyx_v_method = 0;
+ PyObject *__pyx_v_options = 0;
+ PyObject *__pyx_v_init_option = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("compute_edit_distance_on_GXl_graphs (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path_folder,&__pyx_n_s_path_XML,&__pyx_n_s_edit_cost,&__pyx_n_s_method,&__pyx_n_s_options,&__pyx_n_s_init_option,0};
+ PyObject* values[6] = {0,0,0,0,0,0};
+ values[4] = ((PyObject *)__pyx_kp_u_);
+ values[5] = ((PyObject *)__pyx_n_u_EAGER_WITHOUT_SHUFFLED_COPIES);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_path_folder)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_path_XML)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_GXl_graphs", 0, 4, 6, 1); __PYX_ERR(0, 1014, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edit_cost)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_GXl_graphs", 0, 4, 6, 2); __PYX_ERR(0, 1014, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_method)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_GXl_graphs", 0, 4, 6, 3); __PYX_ERR(0, 1014, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_options);
+ if (value) { values[4] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 5:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_init_option);
+ if (value) { values[5] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "compute_edit_distance_on_GXl_graphs") < 0)) __PYX_ERR(0, 1014, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_path_folder = values[0];
+ __pyx_v_path_XML = values[1];
+ __pyx_v_edit_cost = values[2];
+ __pyx_v_method = values[3];
+ __pyx_v_options = values[4];
+ __pyx_v_init_option = values[5];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("compute_edit_distance_on_GXl_graphs", 0, 4, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1014, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_edit_distance_on_GXl_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_92compute_edit_distance_on_GXl_graphs(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_path_folder, __pyx_v_path_XML, __pyx_v_edit_cost, __pyx_v_method, __pyx_v_options, __pyx_v_init_option);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_92compute_edit_distance_on_GXl_graphs(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_path_folder, PyObject *__pyx_v_path_XML, PyObject *__pyx_v_edit_cost, PyObject *__pyx_v_method, PyObject *__pyx_v_options, PyObject *__pyx_v_init_option) {
+ PyObject *__pyx_v_listID = NULL;
+ PyObject *__pyx_v_g = NULL;
+ PyObject *__pyx_v_h = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ int __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ Py_ssize_t __pyx_t_7;
+ PyObject *(*__pyx_t_8)(PyObject *);
+ Py_ssize_t __pyx_t_9;
+ PyObject *(*__pyx_t_10)(PyObject *);
+ PyObject *__pyx_t_11 = NULL;
+ PyObject *__pyx_t_12 = NULL;
+ __Pyx_RefNannySetupContext("compute_edit_distance_on_GXl_graphs", 0);
+
+ /* "gedlibpy.pyx":1038
+ * """
+ *
+ * if self.is_initialized() : # <<<<<<<<<<<<<<
+ * self.restart_env()
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_is_initialized); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1038, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1038, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 1038, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_4) {
+
+ /* "gedlibpy.pyx":1039
+ *
+ * if self.is_initialized() :
+ * self.restart_env() # <<<<<<<<<<<<<<
+ *
+ * print("Loading graphs in progress...")
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_restart_env); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1039, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1039, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1038
+ * """
+ *
+ * if self.is_initialized() : # <<<<<<<<<<<<<<
+ * self.restart_env()
+ *
+ */
+ }
+
+ /* "gedlibpy.pyx":1041
+ * self.restart_env()
+ *
+ * print("Loading graphs in progress...") # <<<<<<<<<<<<<<
+ * self.load_GXL_graphs(path_folder, path_XML)
+ * listID = self.graph_ids()
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1041, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1042
+ *
+ * print("Loading graphs in progress...")
+ * self.load_GXL_graphs(path_folder, path_XML) # <<<<<<<<<<<<<<
+ * listID = self.graph_ids()
+ * print("Graphs loaded ! ")
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_load_GXL_graphs); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1042, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_path_folder, __pyx_v_path_XML};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1042, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_3, __pyx_v_path_folder, __pyx_v_path_XML};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1042, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1042, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_3) {
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_path_folder);
+ __Pyx_GIVEREF(__pyx_v_path_folder);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_path_folder);
+ __Pyx_INCREF(__pyx_v_path_XML);
+ __Pyx_GIVEREF(__pyx_v_path_XML);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_path_XML);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1042, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1043
+ * print("Loading graphs in progress...")
+ * self.load_GXL_graphs(path_folder, path_XML)
+ * listID = self.graph_ids() # <<<<<<<<<<<<<<
+ * print("Graphs loaded ! ")
+ * print("Number of graphs = " + str(listID[1]))
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_graph_ids); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1043, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_6) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1043, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_listID = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1044
+ * self.load_GXL_graphs(path_folder, path_XML)
+ * listID = self.graph_ids()
+ * print("Graphs loaded ! ") # <<<<<<<<<<<<<<
+ * print("Number of graphs = " + str(listID[1]))
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1044, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1045
+ * listID = self.graph_ids()
+ * print("Graphs loaded ! ")
+ * print("Number of graphs = " + str(listID[1])) # <<<<<<<<<<<<<<
+ *
+ * self.set_edit_cost(edit_cost)
+ */
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_listID, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1045, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1045, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Number_of_graphs, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1045, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1045, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1047
+ * print("Number of graphs = " + str(listID[1]))
+ *
+ * self.set_edit_cost(edit_cost) # <<<<<<<<<<<<<<
+ * print("Initialization in progress...")
+ * self.init(init_option)
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set_edit_cost); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1047, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_6, __pyx_v_edit_cost) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_v_edit_cost);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1047, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1048
+ *
+ * self.set_edit_cost(edit_cost)
+ * print("Initialization in progress...") # <<<<<<<<<<<<<<
+ * self.init(init_option)
+ * print("Initialization terminated !")
+ */
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1048, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1049
+ * self.set_edit_cost(edit_cost)
+ * print("Initialization in progress...")
+ * self.init(init_option) # <<<<<<<<<<<<<<
+ * print("Initialization terminated !")
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_init); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1049, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_6, __pyx_v_init_option) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_v_init_option);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1049, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1050
+ * print("Initialization in progress...")
+ * self.init(init_option)
+ * print("Initialization terminated !") # <<<<<<<<<<<<<<
+ *
+ * self.set_method(method, options)
+ */
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1050, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1052
+ * print("Initialization terminated !")
+ *
+ * self.set_method(method, options) # <<<<<<<<<<<<<<
+ * self.init_method()
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set_method); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1052, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_method, __pyx_v_options};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1052, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_v_method, __pyx_v_options};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1052, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1052, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_method);
+ __Pyx_GIVEREF(__pyx_v_method);
+ PyTuple_SET_ITEM(__pyx_t_3, 0+__pyx_t_5, __pyx_v_method);
+ __Pyx_INCREF(__pyx_v_options);
+ __Pyx_GIVEREF(__pyx_v_options);
+ PyTuple_SET_ITEM(__pyx_t_3, 1+__pyx_t_5, __pyx_v_options);
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1052, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1053
+ *
+ * self.set_method(method, options)
+ * self.init_method() # <<<<<<<<<<<<<<
+ *
+ * #res = []
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_init_method); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1053, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ __pyx_t_2 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1053, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1056
+ *
+ * #res = []
+ * for g in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) :
+ */
+ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_listID, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_listID, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
+ __pyx_t_2 = 0;
+ __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_range, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) {
+ __pyx_t_3 = __pyx_t_1; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0;
+ __pyx_t_8 = NULL;
+ } else {
+ __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_8)) {
+ if (likely(PyList_CheckExact(__pyx_t_3))) {
+ if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_1); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ } else {
+ if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_1); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ }
+ } else {
+ __pyx_t_1 = __pyx_t_8(__pyx_t_3);
+ if (unlikely(!__pyx_t_1)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1056, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_1);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_g, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1057
+ * #res = []
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.") # <<<<<<<<<<<<<<
+ * for h in range(listID[0], listID[1]) :
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ */
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_v_g); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1057, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyUnicode_Concat(__pyx_kp_u_Computation_between_graph, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyUnicode_Concat(__pyx_t_2, __pyx_kp_u_with_all_the_others_including_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1057, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+
+ /* "gedlibpy.pyx":1058
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g,h)
+ */
+ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_listID, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_listID, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_1);
+ __pyx_t_2 = 0;
+ __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_range, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) {
+ __pyx_t_6 = __pyx_t_1; __Pyx_INCREF(__pyx_t_6); __pyx_t_9 = 0;
+ __pyx_t_10 = NULL;
+ } else {
+ __pyx_t_9 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_10 = Py_TYPE(__pyx_t_6)->tp_iternext; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_10)) {
+ if (likely(PyList_CheckExact(__pyx_t_6))) {
+ if (__pyx_t_9 >= PyList_GET_SIZE(__pyx_t_6)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_9); __Pyx_INCREF(__pyx_t_1); __pyx_t_9++; if (unlikely(0 < 0)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_6, __pyx_t_9); __pyx_t_9++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ } else {
+ if (__pyx_t_9 >= PyTuple_GET_SIZE(__pyx_t_6)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_9); __Pyx_INCREF(__pyx_t_1); __pyx_t_9++; if (unlikely(0 < 0)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ #else
+ __pyx_t_1 = PySequence_ITEM(__pyx_t_6, __pyx_t_9); __pyx_t_9++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1058, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ #endif
+ }
+ } else {
+ __pyx_t_1 = __pyx_t_10(__pyx_t_6);
+ if (unlikely(!__pyx_t_1)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1058, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_1);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_h, __pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1060
+ * for h in range(listID[0], listID[1]) :
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g,h) # <<<<<<<<<<<<<<
+ * #res.append((get_upper_bound(g,h), get_node_map(g,h), get_runtime(g,h)))
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_run_method); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1060, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_11 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_11)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_11);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_11, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1060, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_11, __pyx_v_g, __pyx_v_h};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_5, 2+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1060, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_12 = PyTuple_New(2+__pyx_t_5); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1060, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ if (__pyx_t_11) {
+ __Pyx_GIVEREF(__pyx_t_11); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_11); __pyx_t_11 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_g);
+ __Pyx_GIVEREF(__pyx_v_g);
+ PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_5, __pyx_v_g);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_5, __pyx_v_h);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_12, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1060, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1058
+ * for g in range(listID[0], listID[1]) :
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * #print("Computation between graph " + str(g) + " and graph " + str(h))
+ * self.run_method(g,h)
+ */
+ }
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+
+ /* "gedlibpy.pyx":1056
+ *
+ * #res = []
+ * for g in range(listID[0], listID[1]) : # <<<<<<<<<<<<<<
+ * print("Computation between graph " + str(g) + " with all the others including himself.")
+ * for h in range(listID[0], listID[1]) :
+ */
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1065
+ * #return res
+ *
+ * print ("Finish ! You can check the result with each ID of graphs ! There are in the return") # <<<<<<<<<<<<<<
+ * print ("Please don't restart the environment or recall this function, you will lose your results !")
+ * return listID
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1065, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1066
+ *
+ * print ("Finish ! You can check the result with each ID of graphs ! There are in the return")
+ * print ("Please don't restart the environment or recall this function, you will lose your results !") # <<<<<<<<<<<<<<
+ * return listID
+ *
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_print, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1066, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1067
+ * print ("Finish ! You can check the result with each ID of graphs ! There are in the return")
+ * print ("Please don't restart the environment or recall this function, you will lose your results !")
+ * return listID # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_listID);
+ __pyx_r = __pyx_v_listID;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1014
+ *
+ *
+ * def compute_edit_distance_on_GXl_graphs(self, path_folder, path_XML, edit_cost, method, options="", init_option="EAGER_WITHOUT_SHUFFLED_COPIES") : # <<<<<<<<<<<<<<
+ * """
+ * Computes all the edit distance between each GXL graphs on the folder and the XMl file.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_XDECREF(__pyx_t_12);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_edit_distance_on_GXl_graphs", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_listID);
+ __Pyx_XDECREF(__pyx_v_g);
+ __Pyx_XDECREF(__pyx_v_h);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
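+
+/* Descriptive sketch of the generated body above (assumed reading of the quoted
+ * gedlibpy.pyx lines 1049-1067): after self.init(init_option), self.set_method(method,
+ * options) and self.init_method(), it calls self.run_method(g, h) for every ordered
+ * pair (g, h) with g, h in range(listID[0], listID[1]), then returns listID so the
+ * pairwise results can be queried by graph ID afterwards. */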
+
+/* "gedlibpy.pyx":1070
+ *
+ *
+ * def get_num_node_labels(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns the number of node labels.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_95get_num_node_labels(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_94get_num_node_labels[] = "\n\t\t\tReturns the number of node labels.\n\t\t\t\n\t\t\t:return: Number of pairwise different node labels contained in the environment.\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. note:: If 1 is returned, the nodes are unlabeled.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_95get_num_node_labels(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_num_node_labels (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_94get_num_node_labels(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_94get_num_node_labels(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_num_node_labels", 0);
+
+ /* "gedlibpy.pyx":1079
+ * .. note:: If 1 is returned, the nodes are unlabeled.
+ * """
+ * return self.c_env.getNumNodeLabels() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getNumNodeLabels();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1079, __pyx_L1_error)
+ }
+ __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1079, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1070
+ *
+ *
+ * def get_num_node_labels(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns the number of node labels.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_num_node_labels", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
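+
+/* Usage sketch, assuming a GEDEnv instance `env` with graphs already loaded
+ * (the name `env` is illustrative, not taken from this file):
+ *     n = env.get_num_node_labels()   # 1 means the nodes are unlabeled
+ * The C++ call c_env->getNumNodeLabels() runs inside try/catch so any C++
+ * exception is translated into a Python exception via __Pyx_CppExn2PyErr(). */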
+
+/* "gedlibpy.pyx":1082
+ *
+ *
+ * def get_node_label(self, label_id): # <<<<<<<<<<<<<<
+ * """
+ * Returns node label.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_97get_node_label(PyObject *__pyx_v_self, PyObject *__pyx_v_label_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_96get_node_label[] = "\n\t\t\tReturns node label.\n\t\t\t\n\t\t\t:param label_id: ID of node label that should be returned. Must be between 1 and get_num_node_labels().\n\t\t\t:type label_id: size_t\n\t\t\t:return: Node label for selected label ID.\n\t\t\t:rtype: dict{string : string}\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_97get_node_label(PyObject *__pyx_v_self, PyObject *__pyx_v_label_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_label (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_96get_node_label(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_label_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_96get_node_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_label_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ size_t __pyx_t_3;
+ std::map<std::string,std::string> __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("get_node_label", 0);
+
+ /* "gedlibpy.pyx":1091
+ * :rtype: dict{string : string}
+ * """
+ * return decode_your_map(self.c_env.getNodeLabel(label_id)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_decode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1091, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyInt_As_size_t(__pyx_v_label_id); if (unlikely((__pyx_t_3 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 1091, __pyx_L1_error)
+ try {
+ __pyx_t_4 = __pyx_v_self->c_env->getNodeLabel(__pyx_t_3);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1091, __pyx_L1_error)
+ }
+ __pyx_t_5 = __pyx_convert_map_to_py_std_3a__3a_string____std_3a__3a_string(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1091, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1091, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1082
+ *
+ *
+ * def get_node_label(self, label_id): # <<<<<<<<<<<<<<
+ * """
+ * Returns node label.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_label", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
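+
+/* Usage sketch, assuming the same illustrative `env` instance:
+ *     label = env.get_node_label(label_id)   # label_id between 1 and get_num_node_labels()
+ * label_id is coerced to size_t, c_env->getNodeLabel() yields a
+ * std::map<std::string,std::string>, and the result is converted to a Python
+ * dict and post-processed by decode_your_map() before being returned. */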
+
+/* "gedlibpy.pyx":1094
+ *
+ *
+ * def get_num_edge_labels(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns the number of edge labels.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_99get_num_edge_labels(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_98get_num_edge_labels[] = "\n\t\t\tReturns the number of edge labels.\n\t\t\t\n\t\t\t:return: Number of pairwise different edge labels contained in the environment.\n\t\t\t:rtype: size_t\n\t\t\t\n\t\t\t.. note:: If 1 is returned, the edges are unlabeled.\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_99get_num_edge_labels(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_num_edge_labels (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_98get_num_edge_labels(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_98get_num_edge_labels(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ size_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_num_edge_labels", 0);
+
+ /* "gedlibpy.pyx":1103
+ * .. note:: If 1 is returned, the edges are unlabeled.
+ * """
+ * return self.c_env.getNumEdgeLabels() # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getNumEdgeLabels();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1103, __pyx_L1_error)
+ }
+ __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1103, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1094
+ *
+ *
+ * def get_num_edge_labels(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns the number of edge labels.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_num_edge_labels", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1106
+ *
+ *
+ * def get_edge_label(self, label_id): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge label.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_101get_edge_label(PyObject *__pyx_v_self, PyObject *__pyx_v_label_id); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_100get_edge_label[] = "\n\t\t\tReturns edge label.\n\t\t\t\n\t\t\t:param label_id: ID of edge label that should be returned. Must be between 1 and get_num_edge_labels().\n\t\t\t:type label_id: size_t\n\t\t\t:return: Edge label for selected label ID.\n\t\t\t:rtype: dict{string : string}\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_101get_edge_label(PyObject *__pyx_v_self, PyObject *__pyx_v_label_id) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_edge_label (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_100get_edge_label(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_label_id));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_100get_edge_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_label_id) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ size_t __pyx_t_3;
+ std::map<std::string,std::string> __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("get_edge_label", 0);
+
+ /* "gedlibpy.pyx":1115
+ * :rtype: dict{string : string}
+ * """
+ * return decode_your_map(self.c_env.getEdgeLabel(label_id)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_decode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1115, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyInt_As_size_t(__pyx_v_label_id); if (unlikely((__pyx_t_3 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 1115, __pyx_L1_error)
+ try {
+ __pyx_t_4 = __pyx_v_self->c_env->getEdgeLabel(__pyx_t_3);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1115, __pyx_L1_error)
+ }
+ __pyx_t_5 = __pyx_convert_map_to_py_std_3a__3a_string____std_3a__3a_string(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1115, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1115, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1106
+ *
+ *
+ * def get_edge_label(self, label_id): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge label.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_edge_label", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1129
+ * # return self.c_env.getNumNodes(graph_id)
+ *
+ * def get_avg_num_nodes(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns average number of nodes.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_103get_avg_num_nodes(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_102get_avg_num_nodes[] = "\n\t\t\tReturns average number of nodes.\n\t\t\t \n\t\t\t:return: Average number of nodes of the graphs contained in the environment.\n\t\t\t:rtype: double\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_103get_avg_num_nodes(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_avg_num_nodes (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_102get_avg_num_nodes(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_102get_avg_num_nodes(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ double __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_avg_num_nodes", 0);
+
+ /* "gedlibpy.pyx":1136
+ * :rtype: double
+ * """
+ * return self.c_env.getAvgNumNodes() # <<<<<<<<<<<<<<
+ *
+ * def get_node_rel_cost(self, node_label_1, node_label_2):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getAvgNumNodes();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1136, __pyx_L1_error)
+ }
+ __pyx_t_2 = PyFloat_FromDouble(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1136, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1129
+ * # return self.c_env.getNumNodes(graph_id)
+ *
+ * def get_avg_num_nodes(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns average number of nodes.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_avg_num_nodes", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1138
+ * return self.c_env.getAvgNumNodes()
+ *
+ * def get_node_rel_cost(self, node_label_1, node_label_2): # <<<<<<<<<<<<<<
+ * """
+ * Returns node relabeling cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_105get_node_rel_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_104get_node_rel_cost[] = "\n\t\t\tReturns node relabeling cost.\n\t\t\t\n\t\t\t:param node_label_1: First node label.\n\t\t\t:param node_label_2: Second node label.\n\t\t\t:type node_label_1: dict{string : string}\n\t\t\t:type node_label_2: dict{string : string}\n\t\t\t:return: Node relabeling cost for the given node labels.\n\t\t\t:rtype: double\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_105get_node_rel_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_node_label_1 = 0;
+ PyObject *__pyx_v_node_label_2 = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_rel_cost (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_node_label_1,&__pyx_n_s_node_label_2,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_label_1)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_label_2)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_node_rel_cost", 1, 2, 2, 1); __PYX_ERR(0, 1138, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_node_rel_cost") < 0)) __PYX_ERR(0, 1138, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_node_label_1 = values[0];
+ __pyx_v_node_label_2 = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_node_rel_cost", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1138, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_rel_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_104get_node_rel_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_node_label_1, __pyx_v_node_label_2);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_104get_node_rel_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_label_1, PyObject *__pyx_v_node_label_2) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::map<std::string,std::string> __pyx_t_4;
+ std::map<std::string,std::string> __pyx_t_5;
+ double __pyx_t_6;
+ __Pyx_RefNannySetupContext("get_node_rel_cost", 0);
+
+ /* "gedlibpy.pyx":1149
+ * :rtype: double
+ * """
+ * return self.c_env.getNodeRelCost(encode_your_map(node_label_1), encode_your_map(node_label_2)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_node_label_1) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_node_label_1);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_node_label_2) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_node_label_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_5 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_6 = __pyx_v_self->c_env->getNodeRelCost(__pyx_t_4, __pyx_t_5);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1149, __pyx_L1_error)
+ }
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1149, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1138
+ * return self.c_env.getAvgNumNodes()
+ *
+ * def get_node_rel_cost(self, node_label_1, node_label_2): # <<<<<<<<<<<<<<
+ * """
+ * Returns node relabeling cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_rel_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
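+
+/* Usage sketch, assuming the same illustrative `env` instance and purely
+ * illustrative label dicts:
+ *     cost = env.get_node_rel_cost({'chem': '6'}, {'chem': '8'})
+ * Both label dicts go through encode_your_map(), are converted to
+ * std::map<std::string,std::string>, and are passed to c_env->getNodeRelCost();
+ * the double result comes back as a Python float. */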
+
+/* "gedlibpy.pyx":1152
+ *
+ *
+ * def get_node_del_cost(self, node_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns node deletion cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_107get_node_del_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_node_label); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_106get_node_del_cost[] = "\n\t\t\tReturns node deletion cost.\n\t\t\t\n\t\t\t:param node_label: Node label.\n\t\t\t:type node_label: dict{string : string}\n\t\t\t:return: Cost of deleting node with given label.\n\t\t\t:rtype: double\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_107get_node_del_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_node_label) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_del_cost (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_106get_node_del_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_node_label));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_106get_node_del_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_label) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::map<std::string,std::string> __pyx_t_4;
+ double __pyx_t_5;
+ __Pyx_RefNannySetupContext("get_node_del_cost", 0);
+
+ /* "gedlibpy.pyx":1161
+ * :rtype: double
+ * """
+ * return self.c_env.getNodeDelCost(encode_your_map(node_label)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1161, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_node_label) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_node_label);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1161, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1161, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_5 = __pyx_v_self->c_env->getNodeDelCost(__pyx_t_4);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1161, __pyx_L1_error)
+ }
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1161, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1152
+ *
+ *
+ * def get_node_del_cost(self, node_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns node deletion cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_del_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1164
+ *
+ *
+ * def get_node_ins_cost(self, node_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns node insertion cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_109get_node_ins_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_node_label); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_108get_node_ins_cost[] = "\n\t\t\tReturns node insertion cost.\n\t\t\t\n\t\t\t:param node_label: Node label.\n\t\t\t:type node_label: dict{string : string}\n\t\t\t:return: Cost of inserting node with given label.\n\t\t\t:rtype: double\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_109get_node_ins_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_node_label) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_node_ins_cost (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_108get_node_ins_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_node_label));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_108get_node_ins_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_label) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::map<std::string,std::string> __pyx_t_4;
+ double __pyx_t_5;
+ __Pyx_RefNannySetupContext("get_node_ins_cost", 0);
+
+ /* "gedlibpy.pyx":1173
+ * :rtype: double
+ * """
+ * return self.c_env.getNodeInsCost(encode_your_map(node_label)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1173, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_node_label) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_node_label);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1173, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1173, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_5 = __pyx_v_self->c_env->getNodeInsCost(__pyx_t_4);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1173, __pyx_L1_error)
+ }
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1173, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1164
+ *
+ *
+ * def get_node_ins_cost(self, node_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns node insertion cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_node_ins_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1176
+ *
+ *
+ * def get_median_node_label(self, node_labels): # <<<<<<<<<<<<<<
+ * """
+ * Computes median node label.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_111get_median_node_label(PyObject *__pyx_v_self, PyObject *__pyx_v_node_labels); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_110get_median_node_label[] = "\n\t\t\tComputes median node label.\n\t\t\t\n\t\t\t:param node_labels: The node labels whose median should be computed.\n\t\t\t:type node_labels: list[dict{string : string}]\n\t\t\t:return: Median of the given node labels.\n\t\t\t:rtype: dict{string : string}\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_111get_median_node_label(PyObject *__pyx_v_self, PyObject *__pyx_v_node_labels) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_median_node_label (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_110get_median_node_label(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_node_labels));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_110get_median_node_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_node_labels) {
+ PyObject *__pyx_v_node_labels_b = NULL;
+ PyObject *__pyx_8genexpr9__pyx_v_node_label = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ Py_ssize_t __pyx_t_3;
+ PyObject *(*__pyx_t_4)(PyObject *);
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ std::vector<std::map<std::string,std::string> > __pyx_t_8;
+ std::map<std::string,std::string> __pyx_t_9;
+ __Pyx_RefNannySetupContext("get_median_node_label", 0);
+
+ /* "gedlibpy.pyx":1185
+ * :rtype: dict{string : string}
+ * """
+ * node_labels_b = [encode_your_map(node_label) for node_label in node_labels] # <<<<<<<<<<<<<<
+ * return decode_your_map(self.c_env.getMedianNodeLabel(node_labels_b))
+ *
+ */
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ if (likely(PyList_CheckExact(__pyx_v_node_labels)) || PyTuple_CheckExact(__pyx_v_node_labels)) {
+ __pyx_t_2 = __pyx_v_node_labels; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0;
+ __pyx_t_4 = NULL;
+ } else {
+ __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_node_labels); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_4)) {
+ if (likely(PyList_CheckExact(__pyx_t_2))) {
+ if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ } else {
+ if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ }
+ } else {
+ __pyx_t_5 = __pyx_t_4(__pyx_t_2);
+ if (unlikely(!__pyx_t_5)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1185, __pyx_L5_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_5);
+ }
+ __Pyx_XDECREF_SET(__pyx_8genexpr9__pyx_v_node_label, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_6, function);
+ }
+ }
+ __pyx_t_5 = (__pyx_t_7) ? __Pyx_PyObject_Call2Args(__pyx_t_6, __pyx_t_7, __pyx_8genexpr9__pyx_v_node_label) : __Pyx_PyObject_CallOneArg(__pyx_t_6, __pyx_8genexpr9__pyx_v_node_label);
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(0, 1185, __pyx_L5_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_node_label); __pyx_8genexpr9__pyx_v_node_label = 0;
+ goto __pyx_L8_exit_scope;
+ __pyx_L5_error:;
+ __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_node_label); __pyx_8genexpr9__pyx_v_node_label = 0;
+ goto __pyx_L1_error;
+ __pyx_L8_exit_scope:;
+ } /* exit inner scope */
+ __pyx_v_node_labels_b = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1186
+ * """
+ * node_labels_b = [encode_your_map(node_label) for node_label in node_labels]
+ * return decode_your_map(self.c_env.getMedianNodeLabel(node_labels_b)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_decode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1186, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_8 = __pyx_convert_vector_from_py_std_3a__3a_map_3c_std_3a__3a_string_2c_std_3a__3a_string_3e___(__pyx_v_node_labels_b); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1186, __pyx_L1_error)
+ try {
+ __pyx_t_9 = __pyx_v_self->c_env->getMedianNodeLabel(__pyx_t_8);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1186, __pyx_L1_error)
+ }
+ __pyx_t_5 = __pyx_convert_map_to_py_std_3a__3a_string____std_3a__3a_string(__pyx_t_9); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1186, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1186, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1176
+ *
+ *
+ * def get_median_node_label(self, node_labels): # <<<<<<<<<<<<<<
+ * """
+ * Computes median node label.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_median_node_label", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_node_labels_b);
+ __Pyx_XDECREF(__pyx_8genexpr9__pyx_v_node_label);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
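+
+/* Usage sketch, assuming the same illustrative `env` instance:
+ *     median = env.get_median_node_label([label_1, label_2, label_3])
+ * Each dict in node_labels is encoded with encode_your_map(), the list is
+ * converted to std::vector<std::map<std::string,std::string> >, passed to
+ * c_env->getMedianNodeLabel(), and the returned map is decoded back into a
+ * Python dict via decode_your_map(). */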
+
+/* "gedlibpy.pyx":1189
+ *
+ *
+ * def get_edge_rel_cost(self, edge_label_1, edge_label_2): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge relabeling cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_113get_edge_rel_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_112get_edge_rel_cost[] = "\n\t\t\tReturns edge relabeling cost.\n\t\t\t\n\t\t\t:param edge_label_1: First edge label.\n\t\t\t:param edge_label_2: Second edge label.\n\t\t\t:type edge_label_1: dict{string : string}\n\t\t\t:type edge_label_2: dict{string : string}\n\t\t\t:return: Edge relabeling cost for the given edge labels.\n\t\t\t:rtype: double\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_113get_edge_rel_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_edge_label_1 = 0;
+ PyObject *__pyx_v_edge_label_2 = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_edge_rel_cost (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_edge_label_1,&__pyx_n_s_edge_label_2,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edge_label_1)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edge_label_2)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("get_edge_rel_cost", 1, 2, 2, 1); __PYX_ERR(0, 1189, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_edge_rel_cost") < 0)) __PYX_ERR(0, 1189, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_edge_label_1 = values[0];
+ __pyx_v_edge_label_2 = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_edge_rel_cost", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1189, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_edge_rel_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_112get_edge_rel_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_edge_label_1, __pyx_v_edge_label_2);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_112get_edge_rel_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_label_1, PyObject *__pyx_v_edge_label_2) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::map<std::string,std::string> __pyx_t_4;
+ std::map<std::string,std::string> __pyx_t_5;
+ double __pyx_t_6;
+ __Pyx_RefNannySetupContext("get_edge_rel_cost", 0);
+
+ /* "gedlibpy.pyx":1200
+ * :rtype: double
+ * """
+ * return self.c_env.getEdgeRelCost(encode_your_map(edge_label_1), encode_your_map(edge_label_2)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_edge_label_1) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_edge_label_1);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_edge_label_2) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_edge_label_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_5 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_6 = __pyx_v_self->c_env->getEdgeRelCost(__pyx_t_4, __pyx_t_5);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1200, __pyx_L1_error)
+ }
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1200, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1189
+ *
+ *
+ * def get_edge_rel_cost(self, edge_label_1, edge_label_2): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge relabeling cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_edge_rel_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1203
+ *
+ *
+ * def get_edge_del_cost(self, edge_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge deletion cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_115get_edge_del_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_edge_label); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_114get_edge_del_cost[] = "\n\t\t\tReturns edge deletion cost.\n\t\t\t\n\t\t\t:param edge_label: Edge label.\n\t\t\t:type edge_label: dict{string : string}\n\t\t\t:return: Cost of deleting edge with given label.\n\t\t\t:rtype: double\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_115get_edge_del_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_edge_label) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_edge_del_cost (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_114get_edge_del_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_edge_label));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_114get_edge_del_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_label) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ std::map<std::string,std::string> __pyx_t_4;
+ double __pyx_t_5;
+ __Pyx_RefNannySetupContext("get_edge_del_cost", 0);
+
+ /* "gedlibpy.pyx":1212
+ * :rtype: double
+ * """
+ * return self.c_env.getEdgeDelCost(encode_your_map(edge_label)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1212, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_edge_label) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_edge_label);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1212, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1212, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_5 = __pyx_v_self->c_env->getEdgeDelCost(__pyx_t_4);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1212, __pyx_L1_error)
+ }
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1212, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1203
+ *
+ *
+ * def get_edge_del_cost(self, edge_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge deletion cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_edge_del_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1215
+ *
+ *
+ * def get_edge_ins_cost(self, edge_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge insertion cost.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_117get_edge_ins_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_edge_label); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_116get_edge_ins_cost[] = "\n\t\t\tReturns edge insertion cost.\n\t\t\t\n\t\t\t:param edge_label: Edge label.\n\t\t\t:type edge_label: dict{string : string}\n\t\t\t:return: Cost of inserting edge with given label.\n\t\t\t:rtype: double\n\t \t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_117get_edge_ins_cost(PyObject *__pyx_v_self, PyObject *__pyx_v_edge_label) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_edge_ins_cost (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_116get_edge_ins_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_edge_label));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_116get_edge_ins_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_label) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+  std::map<std::string,std::string> __pyx_t_4;
+ double __pyx_t_5;
+ __Pyx_RefNannySetupContext("get_edge_ins_cost", 0);
+
+ /* "gedlibpy.pyx":1224
+ * :rtype: double
+ * """
+ * return self.c_env.getEdgeInsCost(encode_your_map(edge_label)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1224, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_edge_label) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_edge_label);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1224, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_4 = __pyx_convert_map_from_py_std_3a__3a_string__and_std_3a__3a_string(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1224, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ try {
+ __pyx_t_5 = __pyx_v_self->c_env->getEdgeInsCost(__pyx_t_4);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1224, __pyx_L1_error)
+ }
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1224, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1215
+ *
+ *
+ * def get_edge_ins_cost(self, edge_label): # <<<<<<<<<<<<<<
+ * """
+ * Returns edge insertion cost.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_edge_ins_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
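+/* Usage note: the edge-cost getters above (get_edge_rel_cost, get_edge_del_cost,
+ * get_edge_ins_cost) run each Python dict label through encode_your_map() and
+ * convert it to a std::map<std::string,std::string> before delegating to the
+ * C++ GEDEnv, returning the cost as a Python float. A minimal, hypothetical
+ * Python sketch, assuming a gedlibpy.GEDEnv instance `env` whose edit costs
+ * have already been configured (the 'bond_type' key is illustrative only):
+ *
+ *     cost_rel = env.get_edge_rel_cost({'bond_type': '1'}, {'bond_type': '2'})
+ *     cost_del = env.get_edge_del_cost({'bond_type': '1'})
+ *     cost_ins = env.get_edge_ins_cost({'bond_type': '1'})
+ */
+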
+/* "gedlibpy.pyx":1227
+ *
+ *
+ * def get_median_edge_label(self, edge_labels): # <<<<<<<<<<<<<<
+ * """
+ * Computes median edge label.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_119get_median_edge_label(PyObject *__pyx_v_self, PyObject *__pyx_v_edge_labels); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_118get_median_edge_label[] = "\n\t\t\tComputes median edge label.\n\t\t\t\n\t\t\t:param edge_labels: The edge labels whose median should be computed.\n\t\t\t:type edge_labels: list[dict{string : string}]\n\t\t\t:return: Median of the given edge labels.\n\t\t\t:rtype: dict{string : string}\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_119get_median_edge_label(PyObject *__pyx_v_self, PyObject *__pyx_v_edge_labels) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_median_edge_label (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_118get_median_edge_label(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v_edge_labels));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_118get_median_edge_label(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_edge_labels) {
+  PyObject *__pyx_v_edge_labels_b = NULL;
+ PyObject *__pyx_9genexpr10__pyx_v_edge_label = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ Py_ssize_t __pyx_t_3;
+ PyObject *(*__pyx_t_4)(PyObject *);
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+  std::vector<std::map<std::string,std::string> > __pyx_t_8;
+  std::map<std::string,std::string> __pyx_t_9;
+ __Pyx_RefNannySetupContext("get_median_edge_label", 0);
+
+ /* "gedlibpy.pyx":1236
+ * :rtype: dict{string : string}
+ * """
+ * edge_labels_b = [encode_your_map(edge_label) for edge_label in edge_labels] # <<<<<<<<<<<<<<
+ * return decode_your_map(self.c_env.getMedianEdgeLabel(edge_labels_b))
+ *
+ */
+ { /* enter inner scope */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ if (likely(PyList_CheckExact(__pyx_v_edge_labels)) || PyTuple_CheckExact(__pyx_v_edge_labels)) {
+ __pyx_t_2 = __pyx_v_edge_labels; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0;
+ __pyx_t_4 = NULL;
+ } else {
+ __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_edge_labels); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_4)) {
+ if (likely(PyList_CheckExact(__pyx_t_2))) {
+ if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ } else {
+ if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ }
+ } else {
+ __pyx_t_5 = __pyx_t_4(__pyx_t_2);
+ if (unlikely(!__pyx_t_5)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1236, __pyx_L5_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_5);
+ }
+ __Pyx_XDECREF_SET(__pyx_9genexpr10__pyx_v_edge_label, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_encode_your_map); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_6, function);
+ }
+ }
+ __pyx_t_5 = (__pyx_t_7) ? __Pyx_PyObject_Call2Args(__pyx_t_6, __pyx_t_7, __pyx_9genexpr10__pyx_v_edge_label) : __Pyx_PyObject_CallOneArg(__pyx_t_6, __pyx_9genexpr10__pyx_v_edge_label);
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(0, 1236, __pyx_L5_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_edge_label); __pyx_9genexpr10__pyx_v_edge_label = 0;
+ goto __pyx_L8_exit_scope;
+ __pyx_L5_error:;
+ __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_edge_label); __pyx_9genexpr10__pyx_v_edge_label = 0;
+ goto __pyx_L1_error;
+ __pyx_L8_exit_scope:;
+ } /* exit inner scope */
+ __pyx_v_edge_labels_b = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1237
+ * """
+ * edge_labels_b = [encode_your_map(edge_label) for edge_label in edge_labels]
+ * return decode_your_map(self.c_env.getMedianEdgeLabel(edge_labels_b)) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_decode_your_map); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1237, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+  __pyx_t_5 = __pyx_v_edge_labels_b;
+  __Pyx_INCREF(__pyx_t_5);
+ __pyx_t_8 = __pyx_convert_vector_from_py_std_3a__3a_map_3c_std_3a__3a_string_2c_std_3a__3a_string_3e___(__pyx_t_5); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1237, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ try {
+ __pyx_t_9 = __pyx_v_self->c_env->getMedianEdgeLabel(__pyx_t_8);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1237, __pyx_L1_error)
+ }
+ __pyx_t_5 = __pyx_convert_map_to_py_std_3a__3a_string____std_3a__3a_string(__pyx_t_9); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1237, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1237, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1227
+ *
+ *
+ * def get_median_edge_label(self, edge_labels): # <<<<<<<<<<<<<<
+ * """
+ * Computes median edge label.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_median_edge_label", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_edge_labels_b);
+ __Pyx_XDECREF(__pyx_9genexpr10__pyx_v_edge_label);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
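+/* Usage note: get_median_edge_label encodes every label dict with
+ * encode_your_map(), hands the resulting vector of string maps to the C++
+ * getMedianEdgeLabel(), and decodes the returned map with decode_your_map().
+ * A minimal, hypothetical Python sketch, assuming a gedlibpy.GEDEnv instance
+ * `env` (the 'bond_type' key is illustrative only):
+ *
+ *     median = env.get_median_edge_label([{'bond_type': '1'}, {'bond_type': '2'}])
+ *     # median is a dict{string : string}
+ */
+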
+/* "gedlibpy.pyx":1240
+ *
+ *
+ * def get_nx_graph(self, graph_id, adj_matrix=True, adj_lists=False, edge_list=False): # @todo # <<<<<<<<<<<<<<
+ * """
+ * Get graph with id `graph_id` in the form of the NetworkX Graph.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_121get_nx_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_120get_nx_graph[] = "\n\t\tGet graph with id `graph_id` in the form of the NetworkX Graph.\n\n\t\tParameters\n\t\t----------\n\t\tgraph_id : int\n\t\t\tID of the selected graph.\n\t\t\t\n\t\tadj_matrix : bool\n\t\t\tSet to `True` to construct an adjacency matrix `adj_matrix` and a hash-map `edge_labels`, which has a key for each pair `(i,j)` such that `adj_matrix[i][j]` equals 1. No effect for now.\n\t\t\t\n\t\tadj_lists : bool\n\t\t\tNo effect for now.\n\t\t\t\n\t\tedge_list : bool\n\t\t\tNo effect for now.\n\n\t\tReturns\n\t\t-------\n\t\tNetworkX Graph object\n\t\t\tThe obtained graph.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_121get_nx_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_graph_id = 0;
+ CYTHON_UNUSED PyObject *__pyx_v_adj_matrix = 0;
+ CYTHON_UNUSED PyObject *__pyx_v_adj_lists = 0;
+ CYTHON_UNUSED PyObject *__pyx_v_edge_list = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_nx_graph (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_graph_id,&__pyx_n_s_adj_matrix,&__pyx_n_s_adj_lists,&__pyx_n_s_edge_list,0};
+ PyObject* values[4] = {0,0,0,0};
+ values[1] = ((PyObject *)Py_True);
+ values[2] = ((PyObject *)Py_False);
+ values[3] = ((PyObject *)Py_False);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_id)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_adj_matrix);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_adj_lists);
+ if (value) { values[2] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_edge_list);
+ if (value) { values[3] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get_nx_graph") < 0)) __PYX_ERR(0, 1240, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_graph_id = values[0];
+ __pyx_v_adj_matrix = values[1];
+ __pyx_v_adj_lists = values[2];
+ __pyx_v_edge_list = values[3];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("get_nx_graph", 0, 1, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1240, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_nx_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_120get_nx_graph(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_graph_id, __pyx_v_adj_matrix, __pyx_v_adj_lists, __pyx_v_edge_list);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_120get_nx_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_graph_id, CYTHON_UNUSED PyObject *__pyx_v_adj_matrix, CYTHON_UNUSED PyObject *__pyx_v_adj_lists, CYTHON_UNUSED PyObject *__pyx_v_edge_list) {
+ PyObject *__pyx_v_graph = NULL;
+ PyObject *__pyx_v_nb_nodes = NULL;
+ PyObject *__pyx_v_original_node_ids = NULL;
+ PyObject *__pyx_v_node_labels = NULL;
+ PyObject *__pyx_v_node_id = NULL;
+ PyObject *__pyx_v_edges = NULL;
+ PyObject *__pyx_v_head = NULL;
+ PyObject *__pyx_v_tail = NULL;
+ PyObject *__pyx_v_labels = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ Py_ssize_t __pyx_t_4;
+ PyObject *(*__pyx_t_5)(PyObject *);
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ Py_ssize_t __pyx_t_8;
+ int __pyx_t_9;
+ int __pyx_t_10;
+ PyObject *__pyx_t_11 = NULL;
+ PyObject *(*__pyx_t_12)(PyObject *);
+ __Pyx_RefNannySetupContext("get_nx_graph", 0);
+
+ /* "gedlibpy.pyx":1263
+ * The obtained graph.
+ * """
+ * graph = nx.Graph() # <<<<<<<<<<<<<<
+ * graph.graph['id'] = graph_id
+ *
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_nx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_Graph); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_2)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_2) : __Pyx_PyObject_CallNoArg(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_graph = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1264
+ * """
+ * graph = nx.Graph()
+ * graph.graph['id'] = graph_id # <<<<<<<<<<<<<<
+ *
+ * nb_nodes = self.get_graph_num_nodes(graph_id)
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_graph, __pyx_n_s_graph); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1264, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_n_u_id, __pyx_v_graph_id) < 0)) __PYX_ERR(0, 1264, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1266
+ * graph.graph['id'] = graph_id
+ *
+ * nb_nodes = self.get_graph_num_nodes(graph_id) # <<<<<<<<<<<<<<
+ * original_node_ids = self.get_original_node_ids(graph_id)
+ * node_labels = self.get_graph_node_labels(graph_id)
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_graph_num_nodes); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1266, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_2 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_2)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_v_graph_id) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_graph_id);
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1266, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_nb_nodes = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1267
+ *
+ * nb_nodes = self.get_graph_num_nodes(graph_id)
+ * original_node_ids = self.get_original_node_ids(graph_id) # <<<<<<<<<<<<<<
+ * node_labels = self.get_graph_node_labels(graph_id)
+ * # print(original_node_ids)
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_original_node_ids); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1267, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_2 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_2)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_v_graph_id) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_graph_id);
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1267, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_original_node_ids = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1268
+ * nb_nodes = self.get_graph_num_nodes(graph_id)
+ * original_node_ids = self.get_original_node_ids(graph_id)
+ * node_labels = self.get_graph_node_labels(graph_id) # <<<<<<<<<<<<<<
+ * # print(original_node_ids)
+ * # print(node_labels)
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_graph_node_labels); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1268, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_2 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_2)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_v_graph_id) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_graph_id);
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1268, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_node_labels = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1271
+ * # print(original_node_ids)
+ * # print(node_labels)
+ * graph.graph['original_node_ids'] = original_node_ids # <<<<<<<<<<<<<<
+ *
+ * for node_id in range(0, nb_nodes):
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_graph, __pyx_n_s_graph); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1271, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_n_u_original_node_ids, __pyx_v_original_node_ids) < 0)) __PYX_ERR(0, 1271, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1273
+ * graph.graph['original_node_ids'] = original_node_ids
+ *
+ * for node_id in range(0, nb_nodes): # <<<<<<<<<<<<<<
+ * graph.add_node(node_id, **node_labels[node_id])
+ * # graph.nodes[node_id]['original_node_id'] = original_node_ids[node_id]
+ */
+ __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_INCREF(__pyx_int_0);
+ __Pyx_GIVEREF(__pyx_int_0);
+ PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_int_0);
+ __Pyx_INCREF(__pyx_v_nb_nodes);
+ __Pyx_GIVEREF(__pyx_v_nb_nodes);
+ PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_nb_nodes);
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_range, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (likely(PyList_CheckExact(__pyx_t_3)) || PyTuple_CheckExact(__pyx_t_3)) {
+ __pyx_t_1 = __pyx_t_3; __Pyx_INCREF(__pyx_t_1); __pyx_t_4 = 0;
+ __pyx_t_5 = NULL;
+ } else {
+ __pyx_t_4 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_5)) {
+ if (likely(PyList_CheckExact(__pyx_t_1))) {
+ if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_4); __Pyx_INCREF(__pyx_t_3); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ #else
+ __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ } else {
+ if (__pyx_t_4 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_4); __Pyx_INCREF(__pyx_t_3); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ #else
+ __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1273, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ }
+ } else {
+ __pyx_t_3 = __pyx_t_5(__pyx_t_1);
+ if (unlikely(!__pyx_t_3)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1273, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_3);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_node_id, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1274
+ *
+ * for node_id in range(0, nb_nodes):
+ * graph.add_node(node_id, **node_labels[node_id]) # <<<<<<<<<<<<<<
+ * # graph.nodes[node_id]['original_node_id'] = original_node_ids[node_id]
+ *
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_graph, __pyx_n_s_add_node); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_INCREF(__pyx_v_node_id);
+ __Pyx_GIVEREF(__pyx_v_node_id);
+ PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_node_id);
+ __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_v_node_labels, __pyx_v_node_id); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ if (unlikely(__pyx_t_7 == Py_None)) {
+ PyErr_SetString(PyExc_TypeError, "argument after ** must be a mapping, not NoneType");
+ __PYX_ERR(0, 1274, __pyx_L1_error)
+ }
+ if (likely(PyDict_CheckExact(__pyx_t_7))) {
+ __pyx_t_6 = PyDict_Copy(__pyx_t_7); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ } else {
+ __pyx_t_6 = PyObject_CallFunctionObjArgs((PyObject*)&PyDict_Type, __pyx_t_7, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+
+ /* "gedlibpy.pyx":1273
+ * graph.graph['original_node_ids'] = original_node_ids
+ *
+ * for node_id in range(0, nb_nodes): # <<<<<<<<<<<<<<
+ * graph.add_node(node_id, **node_labels[node_id])
+ * # graph.nodes[node_id]['original_node_id'] = original_node_ids[node_id]
+ */
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1277
+ * # graph.nodes[node_id]['original_node_id'] = original_node_ids[node_id]
+ *
+ * edges = self.get_graph_edges(graph_id) # <<<<<<<<<<<<<<
+ * for (head, tail), labels in edges.items():
+ * graph.add_edge(head, tail, **labels)
+ */
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_get_graph_edges); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1277, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_7);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_7, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_6, __pyx_v_graph_id) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_v_graph_id);
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1277, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_v_edges = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1278
+ *
+ * edges = self.get_graph_edges(graph_id)
+ * for (head, tail), labels in edges.items(): # <<<<<<<<<<<<<<
+ * graph.add_edge(head, tail, **labels)
+ * # print(edges)
+ */
+ __pyx_t_4 = 0;
+ if (unlikely(__pyx_v_edges == Py_None)) {
+ PyErr_Format(PyExc_AttributeError, "'NoneType' object has no attribute '%.30s'", "items");
+ __PYX_ERR(0, 1278, __pyx_L1_error)
+ }
+ __pyx_t_7 = __Pyx_dict_iterator(__pyx_v_edges, 0, __pyx_n_s_items, (&__pyx_t_8), (&__pyx_t_9)); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_1);
+ __pyx_t_1 = __pyx_t_7;
+ __pyx_t_7 = 0;
+ while (1) {
+ __pyx_t_10 = __Pyx_dict_iter_next(__pyx_t_1, __pyx_t_8, &__pyx_t_4, &__pyx_t_7, &__pyx_t_6, NULL, __pyx_t_9);
+ if (unlikely(__pyx_t_10 == 0)) break;
+ if (unlikely(__pyx_t_10 == -1)) __PYX_ERR(0, 1278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_GOTREF(__pyx_t_6);
+ if ((likely(PyTuple_CheckExact(__pyx_t_7))) || (PyList_CheckExact(__pyx_t_7))) {
+ PyObject* sequence = __pyx_t_7;
+ Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
+ if (unlikely(size != 2)) {
+ if (size > 2) __Pyx_RaiseTooManyValuesError(2);
+ else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
+ __PYX_ERR(0, 1278, __pyx_L1_error)
+ }
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ if (likely(PyTuple_CheckExact(sequence))) {
+ __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0);
+ __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
+ } else {
+ __pyx_t_2 = PyList_GET_ITEM(sequence, 0);
+ __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
+ }
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ #else
+ __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ } else {
+ Py_ssize_t index = -1;
+ __pyx_t_11 = PyObject_GetIter(__pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_12 = Py_TYPE(__pyx_t_11)->tp_iternext;
+ index = 0; __pyx_t_2 = __pyx_t_12(__pyx_t_11); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed;
+ __Pyx_GOTREF(__pyx_t_2);
+ index = 1; __pyx_t_3 = __pyx_t_12(__pyx_t_11); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed;
+ __Pyx_GOTREF(__pyx_t_3);
+ if (__Pyx_IternextUnpackEndCheck(__pyx_t_12(__pyx_t_11), 2) < 0) __PYX_ERR(0, 1278, __pyx_L1_error)
+ __pyx_t_12 = NULL;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ goto __pyx_L8_unpacking_done;
+ __pyx_L7_unpacking_failed:;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __pyx_t_12 = NULL;
+ if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
+ __PYX_ERR(0, 1278, __pyx_L1_error)
+ __pyx_L8_unpacking_done:;
+ }
+ __Pyx_XDECREF_SET(__pyx_v_head, __pyx_t_2);
+ __pyx_t_2 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_tail, __pyx_t_3);
+ __pyx_t_3 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_labels, __pyx_t_6);
+ __pyx_t_6 = 0;
+
+ /* "gedlibpy.pyx":1279
+ * edges = self.get_graph_edges(graph_id)
+ * for (head, tail), labels in edges.items():
+ * graph.add_edge(head, tail, **labels) # <<<<<<<<<<<<<<
+ * # print(edges)
+ *
+ */
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_graph, __pyx_n_s_add_edge); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1279, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1279, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_INCREF(__pyx_v_head);
+ __Pyx_GIVEREF(__pyx_v_head);
+ PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_v_head);
+ __Pyx_INCREF(__pyx_v_tail);
+ __Pyx_GIVEREF(__pyx_v_tail);
+ PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_v_tail);
+ if (unlikely(__pyx_v_labels == Py_None)) {
+ PyErr_SetString(PyExc_TypeError, "argument after ** must be a mapping, not NoneType");
+ __PYX_ERR(0, 1279, __pyx_L1_error)
+ }
+ if (likely(PyDict_CheckExact(__pyx_v_labels))) {
+ __pyx_t_3 = PyDict_Copy(__pyx_v_labels); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1279, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ } else {
+ __pyx_t_3 = PyObject_CallFunctionObjArgs((PyObject*)&PyDict_Type, __pyx_v_labels, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1279, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ }
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_7, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1279, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1282
+ * # print(edges)
+ *
+ * return graph # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_graph);
+ __pyx_r = __pyx_v_graph;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1240
+ *
+ *
+ * def get_nx_graph(self, graph_id, adj_matrix=True, adj_lists=False, edge_list=False): # @todo # <<<<<<<<<<<<<<
+ * """
+ * Get graph with id `graph_id` in the form of the NetworkX Graph.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_nx_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_graph);
+ __Pyx_XDECREF(__pyx_v_nb_nodes);
+ __Pyx_XDECREF(__pyx_v_original_node_ids);
+ __Pyx_XDECREF(__pyx_v_node_labels);
+ __Pyx_XDECREF(__pyx_v_node_id);
+ __Pyx_XDECREF(__pyx_v_edges);
+ __Pyx_XDECREF(__pyx_v_head);
+ __Pyx_XDECREF(__pyx_v_tail);
+ __Pyx_XDECREF(__pyx_v_labels);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
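+/* Usage note: get_nx_graph rebuilds a networkx.Graph from the environment by
+ * calling get_graph_num_nodes, get_original_node_ids, get_graph_node_labels and
+ * get_graph_edges, adding every node and labelled edge, and storing the graph
+ * id and original node ids in graph.graph. A minimal, hypothetical Python
+ * sketch, assuming graphs were already loaded into a gedlibpy.GEDEnv instance
+ * `env` (for example with the load_nx_graph wrapper defined further below):
+ *
+ *     g = env.get_nx_graph(0)  # graph with id 0
+ *     print(g.graph['id'], g.number_of_nodes(), g.number_of_edges())
+ *
+ * The adj_matrix, adj_lists and edge_list flags are accepted but, as the
+ * docstring states, currently have no effect.
+ */
+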
+/* "gedlibpy.pyx":1285
+ *
+ *
+ * def get_init_type(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns the initialization type of the last initialization in string.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_123get_init_type(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_122get_init_type[] = "\n\t\tReturns the initialization type of the last initialization in string.\n\n\t\tReturns\n\t\t-------\n\t\tstring\n\t\t\tInitialization type in string.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_123get_init_type(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("get_init_type (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_122get_init_type(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_122get_init_type(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ std::string __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("get_init_type", 0);
+
+ /* "gedlibpy.pyx":1294
+ * Initialization type in string.
+ * """
+ * return self.c_env.getInitType().decode('utf-8') # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ try {
+ __pyx_t_1 = __pyx_v_self->c_env->getInitType();
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1294, __pyx_L1_error)
+ }
+ __pyx_t_2 = __Pyx_decode_cpp_string(__pyx_t_1, 0, PY_SSIZE_T_MAX, NULL, NULL, PyUnicode_DecodeUTF8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1294, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1285
+ *
+ *
+ * def get_init_type(self): # <<<<<<<<<<<<<<
+ * """
+ * Returns the initialization type of the last initialization in string.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.get_init_type", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1316
+ *
+ *
+ * def load_nx_graph(self, nx_graph, graph_id, graph_name='', graph_class=''): # <<<<<<<<<<<<<<
+ * """
+ * Loads NetworkX Graph into the GED environment.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_125load_nx_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_124load_nx_graph[] = "\n\t\tLoads NetworkX Graph into the GED environment.\n\n\t\tParameters\n\t\t----------\n\t\tnx_graph : NetworkX Graph object\n\t\t\tThe graph that should be loaded.\n\t\t\t\n\t\tgraph_id : int or None\n\t\t\tThe ID of a graph contained in the environment (the existing graph is overwritten), or `None` to add a new graph.\n\t\t\t\n\t\tgraph_name : string, optional\n\t\t\tThe name of the newly added graph. The default is ''. Has no effect unless `graph_id` equals `None`.\n\t\t\t\n\t\tgraph_class : string, optional\n\t\t\tThe class of the newly added graph. The default is ''. Has no effect unless `graph_id` equals `None`.\n\n\t\tReturns\n\t\t-------\n\t\tint\n\t\t\tThe ID of the newly loaded graph.\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_125load_nx_graph(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_nx_graph = 0;
+ PyObject *__pyx_v_graph_id = 0;
+ PyObject *__pyx_v_graph_name = 0;
+ PyObject *__pyx_v_graph_class = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("load_nx_graph (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_nx_graph,&__pyx_n_s_graph_id,&__pyx_n_s_graph_name,&__pyx_n_s_graph_class,0};
+ PyObject* values[4] = {0,0,0,0};
+ values[2] = ((PyObject *)__pyx_kp_u_);
+ values[3] = ((PyObject *)__pyx_kp_u_);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_nx_graph)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_id)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("load_nx_graph", 0, 2, 4, 1); __PYX_ERR(0, 1316, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_name);
+ if (value) { values[2] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_graph_class);
+ if (value) { values[3] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "load_nx_graph") < 0)) __PYX_ERR(0, 1316, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_nx_graph = values[0];
+ __pyx_v_graph_id = values[1];
+ __pyx_v_graph_name = values[2];
+ __pyx_v_graph_class = values[3];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("load_nx_graph", 0, 2, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1316, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.load_nx_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_124load_nx_graph(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_nx_graph, __pyx_v_graph_id, __pyx_v_graph_name, __pyx_v_graph_class);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_124load_nx_graph(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_nx_graph, PyObject *__pyx_v_graph_id, PyObject *__pyx_v_graph_name, PyObject *__pyx_v_graph_class) {
+ PyObject *__pyx_v_node = NULL;
+ PyObject *__pyx_v_edge = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ int __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ int __pyx_t_6;
+ PyObject *__pyx_t_7 = NULL;
+ Py_ssize_t __pyx_t_8;
+ PyObject *(*__pyx_t_9)(PyObject *);
+ PyObject *__pyx_t_10 = NULL;
+ PyObject *__pyx_t_11 = NULL;
+ PyObject *__pyx_t_12 = NULL;
+ PyObject *__pyx_t_13 = NULL;
+ PyObject *__pyx_t_14 = NULL;
+ PyObject *__pyx_t_15 = NULL;
+ PyObject *__pyx_t_16 = NULL;
+ __Pyx_RefNannySetupContext("load_nx_graph", 0);
+ __Pyx_INCREF(__pyx_v_graph_id);
+
+ /* "gedlibpy.pyx":1339
+ * The ID of the newly loaded graph.
+ * """
+ * if graph_id is None: # <<<<<<<<<<<<<<
+ * graph_id = self.add_graph(graph_name, graph_class)
+ * else:
+ */
+ __pyx_t_1 = (__pyx_v_graph_id == Py_None);
+ __pyx_t_2 = (__pyx_t_1 != 0);
+ if (__pyx_t_2) {
+
+ /* "gedlibpy.pyx":1340
+ * """
+ * if graph_id is None:
+ * graph_id = self.add_graph(graph_name, graph_class) # <<<<<<<<<<<<<<
+ * else:
+ * self.clear_graph(graph_id)
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_graph); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1340, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = NULL;
+ __pyx_t_6 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ __pyx_t_6 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_v_graph_name, __pyx_v_graph_class};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_6, 2+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1340, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_v_graph_name, __pyx_v_graph_class};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_6, 2+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1340, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(2+__pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1340, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ if (__pyx_t_5) {
+ __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_graph_name);
+ __Pyx_GIVEREF(__pyx_v_graph_name);
+ PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_6, __pyx_v_graph_name);
+ __Pyx_INCREF(__pyx_v_graph_class);
+ __Pyx_GIVEREF(__pyx_v_graph_class);
+ PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_6, __pyx_v_graph_class);
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1340, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF_SET(__pyx_v_graph_id, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1339
+ * The ID of the newly loaded graph.
+ * """
+ * if graph_id is None: # <<<<<<<<<<<<<<
+ * graph_id = self.add_graph(graph_name, graph_class)
+ * else:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "gedlibpy.pyx":1342
+ * graph_id = self.add_graph(graph_name, graph_class)
+ * else:
+ * self.clear_graph(graph_id) # <<<<<<<<<<<<<<
+ * for node in nx_graph.nodes:
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node])
+ */
+ /*else*/ {
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_clear_graph); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1342, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ __pyx_t_3 = (__pyx_t_7) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_7, __pyx_v_graph_id) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_v_graph_id);
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1342, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ __pyx_L3:;
+
+ /* "gedlibpy.pyx":1343
+ * else:
+ * self.clear_graph(graph_id)
+ * for node in nx_graph.nodes: # <<<<<<<<<<<<<<
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node])
+ * for edge in nx_graph.edges:
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_nx_graph, __pyx_n_s_nodes); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ if (likely(PyList_CheckExact(__pyx_t_3)) || PyTuple_CheckExact(__pyx_t_3)) {
+ __pyx_t_4 = __pyx_t_3; __Pyx_INCREF(__pyx_t_4); __pyx_t_8 = 0;
+ __pyx_t_9 = NULL;
+ } else {
+ __pyx_t_8 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_9 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_9)) {
+ if (likely(PyList_CheckExact(__pyx_t_4))) {
+ if (__pyx_t_8 >= PyList_GET_SIZE(__pyx_t_4)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_8); __Pyx_INCREF(__pyx_t_3); __pyx_t_8++; if (unlikely(0 < 0)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ #else
+ __pyx_t_3 = PySequence_ITEM(__pyx_t_4, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ } else {
+ if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_4)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_8); __Pyx_INCREF(__pyx_t_3); __pyx_t_8++; if (unlikely(0 < 0)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ #else
+ __pyx_t_3 = PySequence_ITEM(__pyx_t_4, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1343, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ }
+ } else {
+ __pyx_t_3 = __pyx_t_9(__pyx_t_4);
+ if (unlikely(!__pyx_t_3)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1343, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_3);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_node, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1344
+ * self.clear_graph(graph_id)
+ * for node in nx_graph.nodes:
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node]) # <<<<<<<<<<<<<<
+ * for edge in nx_graph.edges:
+ * self.add_edge(graph_id, str(edge[0]), str(edge[1]), nx_graph.get_edge_data(edge[0], edge[1]))
+ */
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_node); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_v_node); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_v_nx_graph, __pyx_n_s_nodes); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_11 = __Pyx_PyObject_GetItem(__pyx_t_10, __pyx_v_node); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __pyx_t_10 = NULL;
+ __pyx_t_6 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
+ __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_7);
+ if (likely(__pyx_t_10)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
+ __Pyx_INCREF(__pyx_t_10);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_7, function);
+ __pyx_t_6 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_10, __pyx_v_graph_id, __pyx_t_5, __pyx_t_11};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_10, __pyx_v_graph_id, __pyx_t_5, __pyx_t_11};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_12 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ if (__pyx_t_10) {
+ __Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_10); __pyx_t_10 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_graph_id);
+ __Pyx_GIVEREF(__pyx_v_graph_id);
+ PyTuple_SET_ITEM(__pyx_t_12, 0+__pyx_t_6, __pyx_v_graph_id);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_12, 1+__pyx_t_6, __pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_11);
+ PyTuple_SET_ITEM(__pyx_t_12, 2+__pyx_t_6, __pyx_t_11);
+ __pyx_t_5 = 0;
+ __pyx_t_11 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1344, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1343
+ * else:
+ * self.clear_graph(graph_id)
+ * for node in nx_graph.nodes: # <<<<<<<<<<<<<<
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node])
+ * for edge in nx_graph.edges:
+ */
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+
+ /* "gedlibpy.pyx":1345
+ * for node in nx_graph.nodes:
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node])
+ * for edge in nx_graph.edges: # <<<<<<<<<<<<<<
+ * self.add_edge(graph_id, str(edge[0]), str(edge[1]), nx_graph.get_edge_data(edge[0], edge[1]))
+ * return graph_id
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_nx_graph, __pyx_n_s_edges); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ if (likely(PyList_CheckExact(__pyx_t_4)) || PyTuple_CheckExact(__pyx_t_4)) {
+ __pyx_t_3 = __pyx_t_4; __Pyx_INCREF(__pyx_t_3); __pyx_t_8 = 0;
+ __pyx_t_9 = NULL;
+ } else {
+ __pyx_t_8 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_9 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ for (;;) {
+ if (likely(!__pyx_t_9)) {
+ if (likely(PyList_CheckExact(__pyx_t_3))) {
+ if (__pyx_t_8 >= PyList_GET_SIZE(__pyx_t_3)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_4 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_8); __Pyx_INCREF(__pyx_t_4); __pyx_t_8++; if (unlikely(0 < 0)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ #else
+ __pyx_t_4 = PySequence_ITEM(__pyx_t_3, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ #endif
+ } else {
+ if (__pyx_t_8 >= PyTuple_GET_SIZE(__pyx_t_3)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_8); __Pyx_INCREF(__pyx_t_4); __pyx_t_8++; if (unlikely(0 < 0)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ #else
+ __pyx_t_4 = PySequence_ITEM(__pyx_t_3, __pyx_t_8); __pyx_t_8++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1345, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ #endif
+ }
+ } else {
+ __pyx_t_4 = __pyx_t_9(__pyx_t_3);
+ if (unlikely(!__pyx_t_4)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 1345, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_4);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_edge, __pyx_t_4);
+ __pyx_t_4 = 0;
+
+ /* "gedlibpy.pyx":1346
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node])
+ * for edge in nx_graph.edges:
+ * self.add_edge(graph_id, str(edge[0]), str(edge[1]), nx_graph.get_edge_data(edge[0], edge[1])) # <<<<<<<<<<<<<<
+ * return graph_id
+ *
+ */
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_add_edge); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_12 = __Pyx_GetItemInt(__pyx_v_edge, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __pyx_t_11 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_t_12); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __pyx_t_12 = __Pyx_GetItemInt(__pyx_v_edge, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyUnicode_Type)), __pyx_t_12); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_v_nx_graph, __pyx_n_s_get_edge_data); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_13 = __Pyx_GetItemInt(__pyx_v_edge, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_13);
+ __pyx_t_14 = __Pyx_GetItemInt(__pyx_v_edge, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_14);
+ __pyx_t_15 = NULL;
+ __pyx_t_6 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_10))) {
+ __pyx_t_15 = PyMethod_GET_SELF(__pyx_t_10);
+ if (likely(__pyx_t_15)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_10);
+ __Pyx_INCREF(__pyx_t_15);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_10, function);
+ __pyx_t_6 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_10)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_15, __pyx_t_13, __pyx_t_14};
+ __pyx_t_12 = __Pyx_PyFunction_FastCall(__pyx_t_10, __pyx_temp+1-__pyx_t_6, 2+__pyx_t_6); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0;
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0;
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_10)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_15, __pyx_t_13, __pyx_t_14};
+ __pyx_t_12 = __Pyx_PyCFunction_FastCall(__pyx_t_10, __pyx_temp+1-__pyx_t_6, 2+__pyx_t_6); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_15); __pyx_t_15 = 0;
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0;
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_16 = PyTuple_New(2+__pyx_t_6); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_16);
+ if (__pyx_t_15) {
+ __Pyx_GIVEREF(__pyx_t_15); PyTuple_SET_ITEM(__pyx_t_16, 0, __pyx_t_15); __pyx_t_15 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_13);
+ PyTuple_SET_ITEM(__pyx_t_16, 0+__pyx_t_6, __pyx_t_13);
+ __Pyx_GIVEREF(__pyx_t_14);
+ PyTuple_SET_ITEM(__pyx_t_16, 1+__pyx_t_6, __pyx_t_14);
+ __pyx_t_13 = 0;
+ __pyx_t_14 = 0;
+ __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_16, NULL); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_DECREF(__pyx_t_16); __pyx_t_16 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __pyx_t_10 = NULL;
+ __pyx_t_6 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
+ __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_7);
+ if (likely(__pyx_t_10)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
+ __Pyx_INCREF(__pyx_t_10);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_7, function);
+ __pyx_t_6 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[5] = {__pyx_t_10, __pyx_v_graph_id, __pyx_t_11, __pyx_t_5, __pyx_t_12};
+ __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-__pyx_t_6, 4+__pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[5] = {__pyx_t_10, __pyx_v_graph_id, __pyx_t_11, __pyx_t_5, __pyx_t_12};
+ __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-__pyx_t_6, 4+__pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_16 = PyTuple_New(4+__pyx_t_6); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_16);
+ if (__pyx_t_10) {
+ __Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_16, 0, __pyx_t_10); __pyx_t_10 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_graph_id);
+ __Pyx_GIVEREF(__pyx_v_graph_id);
+ PyTuple_SET_ITEM(__pyx_t_16, 0+__pyx_t_6, __pyx_v_graph_id);
+ __Pyx_GIVEREF(__pyx_t_11);
+ PyTuple_SET_ITEM(__pyx_t_16, 1+__pyx_t_6, __pyx_t_11);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_16, 2+__pyx_t_6, __pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_12);
+ PyTuple_SET_ITEM(__pyx_t_16, 3+__pyx_t_6, __pyx_t_12);
+ __pyx_t_11 = 0;
+ __pyx_t_5 = 0;
+ __pyx_t_12 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_16, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1346, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_16); __pyx_t_16 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+
+ /* "gedlibpy.pyx":1345
+ * for node in nx_graph.nodes:
+ * self.add_node(graph_id, str(node), nx_graph.nodes[node])
+ * for edge in nx_graph.edges: # <<<<<<<<<<<<<<
+ * self.add_edge(graph_id, str(edge[0]), str(edge[1]), nx_graph.get_edge_data(edge[0], edge[1]))
+ * return graph_id
+ */
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1347
+ * for edge in nx_graph.edges:
+ * self.add_edge(graph_id, str(edge[0]), str(edge[1]), nx_graph.get_edge_data(edge[0], edge[1]))
+ * return graph_id # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_graph_id);
+ __pyx_r = __pyx_v_graph_id;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1316
+ *
+ *
+ * def load_nx_graph(self, nx_graph, graph_id, graph_name='', graph_class=''): # <<<<<<<<<<<<<<
+ * """
+ * Loads NetworkX Graph into the GED environment.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_XDECREF(__pyx_t_12);
+ __Pyx_XDECREF(__pyx_t_13);
+ __Pyx_XDECREF(__pyx_t_14);
+ __Pyx_XDECREF(__pyx_t_15);
+ __Pyx_XDECREF(__pyx_t_16);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.load_nx_graph", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_node);
+ __Pyx_XDECREF(__pyx_v_edge);
+ __Pyx_XDECREF(__pyx_v_graph_id);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
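+
+/* A minimal usage sketch of load_nx_graph() from Python (hand-written comment,
+ * not Cython output; `ged_env` and the way a valid `graph_id` is obtained are
+ * assumptions, only the signature and return value documented above are taken
+ * from the source):
+ *
+ *     import networkx as nx
+ *
+ *     g = nx.Graph()
+ *     g.add_node(0, chem='C')
+ *     g.add_node(1, chem='O')
+ *     g.add_edge(0, 1, valence='1')
+ *
+ *     # each node/edge is forwarded as str(...) plus its attribute dict,
+ *     # and the same graph_id is handed back
+ *     gid = ged_env.load_nx_graph(g, graph_id)
+ */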
+
+/* "gedlibpy.pyx":1350
+ *
+ *
+ * def compute_induced_cost(self, g_id, h_id, node_map): # <<<<<<<<<<<<<<
+ * """
+ * Computes the edit cost between two graphs induced by a node map.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_127compute_induced_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_6GEDEnv_126compute_induced_cost[] = "\n\t\tComputes the edit cost between two graphs induced by a node map.\n\n\t\tParameters\n\t\t----------\n\t\tg_id : int\n\t\t\tID of input graph.\n\t\th_id : int\n\t\t\tID of input graph.\n\t\tnode_map: gklearn.ged.env.NodeMap.\n\t\t\tThe NodeMap instance whose induced cost will be computed and re-assigned.\n\n\t\tReturns\n\t\t-------\n\t\tNone.\t\t\n\t\t";
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_127compute_induced_cost(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_g_id = 0;
+ PyObject *__pyx_v_h_id = 0;
+ PyObject *__pyx_v_node_map = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("compute_induced_cost (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_g_id,&__pyx_n_s_h_id,&__pyx_n_s_node_map,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_g_id)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h_id)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_induced_cost", 1, 3, 3, 1); __PYX_ERR(0, 1350, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_node_map)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("compute_induced_cost", 1, 3, 3, 2); __PYX_ERR(0, 1350, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "compute_induced_cost") < 0)) __PYX_ERR(0, 1350, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_g_id = values[0];
+ __pyx_v_h_id = values[1];
+ __pyx_v_node_map = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("compute_induced_cost", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1350, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_induced_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_126compute_induced_cost(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), __pyx_v_g_id, __pyx_v_h_id, __pyx_v_node_map);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_126compute_induced_cost(struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, PyObject *__pyx_v_g_id, PyObject *__pyx_v_h_id, PyObject *__pyx_v_node_map) {
+ PyObject *__pyx_v_relation = NULL;
+ PyObject *__pyx_v_dummy_node = NULL;
+ PyObject *__pyx_v_i = NULL;
+ PyObject *__pyx_v_val = NULL;
+ PyObject *__pyx_v_val1 = NULL;
+ PyObject *__pyx_v_val2 = NULL;
+ double __pyx_v_induced_cost;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ Py_ssize_t __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ int __pyx_t_8;
+ size_t __pyx_t_9;
+ size_t __pyx_t_10;
+  std::vector<std::pair<size_t,size_t> > __pyx_t_11;
+ double __pyx_t_12;
+ __Pyx_RefNannySetupContext("compute_induced_cost", 0);
+
+ /* "gedlibpy.pyx":1367
+ * None.
+ * """
+ * relation = [] # <<<<<<<<<<<<<<
+ * node_map.as_relation(relation)
+ * # print(relation)
+ */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1367, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_relation = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1368
+ * """
+ * relation = []
+ * node_map.as_relation(relation) # <<<<<<<<<<<<<<
+ * # print(relation)
+ * dummy_node = get_dummy_node()
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_node_map, __pyx_n_s_as_relation); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1368, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_relation) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_relation);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1368, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1370
+ * node_map.as_relation(relation)
+ * # print(relation)
+ * dummy_node = get_dummy_node() # <<<<<<<<<<<<<<
+ * # print(dummy_node)
+ * for i, val in enumerate(relation):
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_get_dummy_node); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1370, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1370, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_dummy_node = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1372
+ * dummy_node = get_dummy_node()
+ * # print(dummy_node)
+ * for i, val in enumerate(relation): # <<<<<<<<<<<<<<
+ * val1 = dummy_node if val[0] == np.inf else val[0]
+ * val2 = dummy_node if val[1] == np.inf else val[1]
+ */
+ __Pyx_INCREF(__pyx_int_0);
+ __pyx_t_1 = __pyx_int_0;
+ __pyx_t_2 = __pyx_v_relation; __Pyx_INCREF(__pyx_t_2); __pyx_t_4 = 0;
+ for (;;) {
+ if (__pyx_t_4 >= PyList_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_3); __pyx_t_4++; if (unlikely(0 < 0)) __PYX_ERR(0, 1372, __pyx_L1_error)
+ #else
+ __pyx_t_3 = PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1372, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ __Pyx_XDECREF_SET(__pyx_v_val, __pyx_t_3);
+ __pyx_t_3 = 0;
+ __Pyx_INCREF(__pyx_t_1);
+ __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_1);
+ __pyx_t_3 = __Pyx_PyInt_AddObjC(__pyx_t_1, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1372, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_1);
+ __pyx_t_1 = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1373
+ * # print(dummy_node)
+ * for i, val in enumerate(relation):
+ * val1 = dummy_node if val[0] == np.inf else val[0] # <<<<<<<<<<<<<<
+ * val2 = dummy_node if val[1] == np.inf else val[1]
+ * relation[i] = tuple((val1, val2))
+ */
+ __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_val, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1373, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1373, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_inf); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1373, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = PyObject_RichCompare(__pyx_t_5, __pyx_t_7, Py_EQ); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1373, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 1373, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (__pyx_t_8) {
+ __Pyx_INCREF(__pyx_v_dummy_node);
+ __pyx_t_3 = __pyx_v_dummy_node;
+ } else {
+ __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_val, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1373, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_3 = __pyx_t_6;
+ __pyx_t_6 = 0;
+ }
+ __Pyx_XDECREF_SET(__pyx_v_val1, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1374
+ * for i, val in enumerate(relation):
+ * val1 = dummy_node if val[0] == np.inf else val[0]
+ * val2 = dummy_node if val[1] == np.inf else val[1] # <<<<<<<<<<<<<<
+ * relation[i] = tuple((val1, val2))
+ * # print(relation)
+ */
+ __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_val, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1374, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1374, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_inf); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1374, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = PyObject_RichCompare(__pyx_t_6, __pyx_t_5, Py_EQ); __Pyx_XGOTREF(__pyx_t_7); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1374, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_7); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 1374, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (__pyx_t_8) {
+ __Pyx_INCREF(__pyx_v_dummy_node);
+ __pyx_t_3 = __pyx_v_dummy_node;
+ } else {
+ __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_val, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1374, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_3 = __pyx_t_7;
+ __pyx_t_7 = 0;
+ }
+ __Pyx_XDECREF_SET(__pyx_v_val2, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1375
+ * val1 = dummy_node if val[0] == np.inf else val[0]
+ * val2 = dummy_node if val[1] == np.inf else val[1]
+ * relation[i] = tuple((val1, val2)) # <<<<<<<<<<<<<<
+ * # print(relation)
+ * induced_cost = self.c_env.computeInducedCost(g_id, h_id, relation)
+ */
+ __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1375, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_INCREF(__pyx_v_val1);
+ __Pyx_GIVEREF(__pyx_v_val1);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_val1);
+ __Pyx_INCREF(__pyx_v_val2);
+ __Pyx_GIVEREF(__pyx_v_val2);
+ PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_val2);
+ if (unlikely(PyObject_SetItem(__pyx_v_relation, __pyx_v_i, __pyx_t_3) < 0)) __PYX_ERR(0, 1375, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "gedlibpy.pyx":1372
+ * dummy_node = get_dummy_node()
+ * # print(dummy_node)
+ * for i, val in enumerate(relation): # <<<<<<<<<<<<<<
+ * val1 = dummy_node if val[0] == np.inf else val[0]
+ * val2 = dummy_node if val[1] == np.inf else val[1]
+ */
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1377
+ * relation[i] = tuple((val1, val2))
+ * # print(relation)
+ * induced_cost = self.c_env.computeInducedCost(g_id, h_id, relation) # <<<<<<<<<<<<<<
+ * node_map.set_induced_cost(induced_cost)
+ *
+ */
+ __pyx_t_9 = __Pyx_PyInt_As_size_t(__pyx_v_g_id); if (unlikely((__pyx_t_9 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 1377, __pyx_L1_error)
+ __pyx_t_10 = __Pyx_PyInt_As_size_t(__pyx_v_h_id); if (unlikely((__pyx_t_10 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 1377, __pyx_L1_error)
+ __pyx_t_11 = __pyx_convert_vector_from_py_std_3a__3a_pair_3c_size_t_2c_size_t_3e___(__pyx_v_relation); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1377, __pyx_L1_error)
+ try {
+ __pyx_t_12 = __pyx_v_self->c_env->computeInducedCost(__pyx_t_9, __pyx_t_10, __pyx_t_11);
+ } catch(...) {
+ __Pyx_CppExn2PyErr();
+ __PYX_ERR(0, 1377, __pyx_L1_error)
+ }
+ __pyx_v_induced_cost = __pyx_t_12;
+
+ /* "gedlibpy.pyx":1378
+ * # print(relation)
+ * induced_cost = self.c_env.computeInducedCost(g_id, h_id, relation)
+ * node_map.set_induced_cost(induced_cost) # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_node_map, __pyx_n_s_set_induced_cost); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1378, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = PyFloat_FromDouble(__pyx_v_induced_cost); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1378, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ __pyx_t_1 = (__pyx_t_7) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_7, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1378, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1350
+ *
+ *
+ * def compute_induced_cost(self, g_id, h_id, node_map): # <<<<<<<<<<<<<<
+ * """
+ * Computes the edit cost between two graphs induced by a node map.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.compute_induced_cost", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_relation);
+ __Pyx_XDECREF(__pyx_v_dummy_node);
+ __Pyx_XDECREF(__pyx_v_i);
+ __Pyx_XDECREF(__pyx_v_val);
+ __Pyx_XDECREF(__pyx_v_val1);
+ __Pyx_XDECREF(__pyx_v_val2);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
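+
+/* A minimal usage sketch of compute_induced_cost() (hand-written comment, not
+ * Cython output; `ged_env`, `g_id`, `h_id` and the NodeMap accessor named
+ * below are assumptions, only the behaviour documented above is relied on):
+ *
+ *     # node_map is a gklearn.ged.env.NodeMap, e.g. from an earlier matching
+ *     ged_env.compute_induced_cost(g_id, h_id, node_map)
+ *     # the cost is written back into the map via set_induced_cost()
+ *     print(node_map.induced_cost())   # assumed NodeMap accessor
+ */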
+
+/* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_129__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_129__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_128__reduce_cython__(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_128__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__reduce_cython__", 0);
+
+ /* "(tree fragment)":2
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 2, __pyx_L1_error)
+
+ /* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_131__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
+static PyObject *__pyx_pw_8gedlibpy_6GEDEnv_131__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_6GEDEnv_130__setstate_cython__(((struct __pyx_obj_8gedlibpy_GEDEnv *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_6GEDEnv_130__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_8gedlibpy_GEDEnv *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__setstate_cython__", 0);
+
+ /* "(tree fragment)":4
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 4, __pyx_L1_error)
+
+ /* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("gedlibpy.GEDEnv.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1408
+ * :type message: string
+ * """
+ * def __init__(self, message): # <<<<<<<<<<<<<<
+ * """
+ * Inits the error with its message.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_13EditCostError_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_13EditCostError___init__[] = "\n\t\t\tInits the error with its message. \n\n\t\t\t:param message: The message to print when the error is detected\n\t\t\t:type message: string\n\t\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_13EditCostError_1__init__ = {"__init__", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_8gedlibpy_13EditCostError_1__init__, METH_VARARGS|METH_KEYWORDS, __pyx_doc_8gedlibpy_13EditCostError___init__};
+static PyObject *__pyx_pw_8gedlibpy_13EditCostError_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_self = 0;
+ PyObject *__pyx_v_message = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_message,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_message)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, 1); __PYX_ERR(0, 1408, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 1408, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_self = values[0];
+ __pyx_v_message = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1408, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.EditCostError.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_13EditCostError___init__(__pyx_self, __pyx_v_self, __pyx_v_message);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_13EditCostError___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_message) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__", 0);
+
+ /* "gedlibpy.pyx":1415
+ * :type message: string
+ * """
+ * self.message = message # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_message, __pyx_v_message) < 0) __PYX_ERR(0, 1415, __pyx_L1_error)
+
+ /* "gedlibpy.pyx":1408
+ * :type message: string
+ * """
+ * def __init__(self, message): # <<<<<<<<<<<<<<
+ * """
+ * Inits the error with its message.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.EditCostError.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1425
+ * :type message: string
+ * """
+ * def __init__(self, message): # <<<<<<<<<<<<<<
+ * """
+ * Inits the error with its message.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_11MethodError_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_11MethodError___init__[] = "\n\t\t\tInits the error with its message. \n\n\t\t\t:param message: The message to print when the error is detected\n\t\t\t:type message: string\n\t\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_11MethodError_1__init__ = {"__init__", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_8gedlibpy_11MethodError_1__init__, METH_VARARGS|METH_KEYWORDS, __pyx_doc_8gedlibpy_11MethodError___init__};
+static PyObject *__pyx_pw_8gedlibpy_11MethodError_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_self = 0;
+ PyObject *__pyx_v_message = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_message,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_message)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, 1); __PYX_ERR(0, 1425, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 1425, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_self = values[0];
+ __pyx_v_message = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1425, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.MethodError.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_11MethodError___init__(__pyx_self, __pyx_v_self, __pyx_v_message);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_11MethodError___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_message) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__", 0);
+
+ /* "gedlibpy.pyx":1432
+ * :type message: string
+ * """
+ * self.message = message # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_message, __pyx_v_message) < 0) __PYX_ERR(0, 1432, __pyx_L1_error)
+
+ /* "gedlibpy.pyx":1425
+ * :type message: string
+ * """
+ * def __init__(self, message): # <<<<<<<<<<<<<<
+ * """
+ * Inits the error with its message.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.MethodError.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1442
+ * :type message: string
+ * """
+ * def __init__(self, message): # <<<<<<<<<<<<<<
+ * """
+ * Inits the error with its message.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_9InitError_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8gedlibpy_9InitError___init__[] = "\n\t\t\tInits the error with its message. \n\n\t\t\t:param message: The message to print when the error is detected\n\t\t\t:type message: string\n\t\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_9InitError_1__init__ = {"__init__", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_8gedlibpy_9InitError_1__init__, METH_VARARGS|METH_KEYWORDS, __pyx_doc_8gedlibpy_9InitError___init__};
+static PyObject *__pyx_pw_8gedlibpy_9InitError_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_self = 0;
+ PyObject *__pyx_v_message = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_message,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_message)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, 1); __PYX_ERR(0, 1442, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 1442, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ }
+ __pyx_v_self = values[0];
+ __pyx_v_message = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("__init__", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 1442, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("gedlibpy.InitError.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_8gedlibpy_9InitError___init__(__pyx_self, __pyx_v_self, __pyx_v_message);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_9InitError___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_message) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__", 0);
+
+ /* "gedlibpy.pyx":1449
+ * :type message: string
+ * """
+ * self.message = message # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_message, __pyx_v_message) < 0) __PYX_ERR(0, 1449, __pyx_L1_error)
+
+ /* "gedlibpy.pyx":1442
+ * :type message: string
+ * """
+ * def __init__(self, message): # <<<<<<<<<<<<<<
+ * """
+ * Inits the error with its message.
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_AddTraceback("gedlibpy.InitError.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1456
+ * #########################################
+ *
+ * def encode_your_map(map_u): # <<<<<<<<<<<<<<
+ * """
+ * Encodes Python unicode strings in dictionary `map` to utf-8 byte strings for C++ functions.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_9encode_your_map(PyObject *__pyx_self, PyObject *__pyx_v_map_u); /*proto*/
+static char __pyx_doc_8gedlibpy_8encode_your_map[] = "\n\t\tEncodes Python unicode strings in dictionary `map` to utf-8 byte strings for C++ functions.\n\n\t\t:param map_b: The map to encode\n\t\t:type map_b: dict{string : string}\n\t\t:return: The encoded map\n\t\t:rtype: dict{'b'string : 'b'string}\n\n\t\t.. note:: This function is used for type connection. \n\t\t\n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_9encode_your_map = {"encode_your_map", (PyCFunction)__pyx_pw_8gedlibpy_9encode_your_map, METH_O, __pyx_doc_8gedlibpy_8encode_your_map};
+static PyObject *__pyx_pw_8gedlibpy_9encode_your_map(PyObject *__pyx_self, PyObject *__pyx_v_map_u) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("encode_your_map (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_8encode_your_map(__pyx_self, ((PyObject *)__pyx_v_map_u));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_8encode_your_map(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_map_u) {
+ PyObject *__pyx_v_res = NULL;
+ PyObject *__pyx_v_key = NULL;
+ PyObject *__pyx_v_value = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ Py_ssize_t __pyx_t_2;
+ Py_ssize_t __pyx_t_3;
+ int __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ int __pyx_t_7;
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ __Pyx_RefNannySetupContext("encode_your_map", 0);
+
+ /* "gedlibpy.pyx":1468
+ *
+ * """
+ * res = {} # <<<<<<<<<<<<<<
+ * for key, value in map_u.items():
+ * res[key.encode('utf-8')] = value.encode('utf-8')
+ */
+ __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1468, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_res = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1469
+ * """
+ * res = {}
+ * for key, value in map_u.items(): # <<<<<<<<<<<<<<
+ * res[key.encode('utf-8')] = value.encode('utf-8')
+ * return res
+ */
+ __pyx_t_2 = 0;
+ if (unlikely(__pyx_v_map_u == Py_None)) {
+ PyErr_Format(PyExc_AttributeError, "'NoneType' object has no attribute '%.30s'", "items");
+ __PYX_ERR(0, 1469, __pyx_L1_error)
+ }
+ __pyx_t_5 = __Pyx_dict_iterator(__pyx_v_map_u, 0, __pyx_n_s_items, (&__pyx_t_3), (&__pyx_t_4)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1469, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_1);
+ __pyx_t_1 = __pyx_t_5;
+ __pyx_t_5 = 0;
+ while (1) {
+ __pyx_t_7 = __Pyx_dict_iter_next(__pyx_t_1, __pyx_t_3, &__pyx_t_2, &__pyx_t_5, &__pyx_t_6, NULL, __pyx_t_4);
+ if (unlikely(__pyx_t_7 == 0)) break;
+ if (unlikely(__pyx_t_7 == -1)) __PYX_ERR(0, 1469, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_XDECREF_SET(__pyx_v_key, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_value, __pyx_t_6);
+ __pyx_t_6 = 0;
+
+ /* "gedlibpy.pyx":1470
+ * res = {}
+ * for key, value in map_u.items():
+ * res[key.encode('utf-8')] = value.encode('utf-8') # <<<<<<<<<<<<<<
+ * return res
+ *
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_value, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1470, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_8)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_8);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ }
+ }
+ __pyx_t_6 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_8, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1470, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_key, __pyx_n_s_encode); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1470, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_9 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ }
+ }
+ __pyx_t_5 = (__pyx_t_9) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_9, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1470, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (unlikely(PyDict_SetItem(__pyx_v_res, __pyx_t_5, __pyx_t_6) < 0)) __PYX_ERR(0, 1470, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1471
+ * for key, value in map_u.items():
+ * res[key.encode('utf-8')] = value.encode('utf-8')
+ * return res # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_res);
+ __pyx_r = __pyx_v_res;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1456
+ * #########################################
+ *
+ * def encode_your_map(map_u): # <<<<<<<<<<<<<<
+ * """
+ * Encodes Python unicode strings in dictionary `map` to utf-8 byte strings for C++ functions.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_AddTraceback("gedlibpy.encode_your_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_res);
+ __Pyx_XDECREF(__pyx_v_key);
+ __Pyx_XDECREF(__pyx_v_value);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "gedlibpy.pyx":1474
+ *
+ *
+ * def decode_your_map(map_b): # <<<<<<<<<<<<<<
+ * """
+ * Decodes utf-8 byte strings in `map` from C++ functions to Python unicode strings.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_11decode_your_map(PyObject *__pyx_self, PyObject *__pyx_v_map_b); /*proto*/
+static char __pyx_doc_8gedlibpy_10decode_your_map[] = "\n\t\tDecodes utf-8 byte strings in `map` from C++ functions to Python unicode strings. \n\n\t\t:param map_b: The map to decode\n\t\t:type map_b: dict{'b'string : 'b'string}\n\t\t:return: The decoded map\n\t\t:rtype: dict{string : string}\n\n\t\t.. note:: This function is used for type connection. \n\t\t\n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_11decode_your_map = {"decode_your_map", (PyCFunction)__pyx_pw_8gedlibpy_11decode_your_map, METH_O, __pyx_doc_8gedlibpy_10decode_your_map};
+static PyObject *__pyx_pw_8gedlibpy_11decode_your_map(PyObject *__pyx_self, PyObject *__pyx_v_map_b) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("decode_your_map (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_10decode_your_map(__pyx_self, ((PyObject *)__pyx_v_map_b));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_10decode_your_map(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_map_b) {
+ PyObject *__pyx_v_res = NULL;
+ PyObject *__pyx_v_key = NULL;
+ PyObject *__pyx_v_value = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ Py_ssize_t __pyx_t_2;
+ Py_ssize_t __pyx_t_3;
+ int __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ int __pyx_t_7;
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ __Pyx_RefNannySetupContext("decode_your_map", 0);
+
+ /* "gedlibpy.pyx":1486
+ *
+ * """
+ * res = {} # <<<<<<<<<<<<<<
+ * for key, value in map_b.items():
+ * res[key.decode('utf-8')] = value.decode('utf-8')
+ */
+ __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1486, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_res = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1487
+ * """
+ * res = {}
+ * for key, value in map_b.items(): # <<<<<<<<<<<<<<
+ * res[key.decode('utf-8')] = value.decode('utf-8')
+ * return res
+ */
+ __pyx_t_2 = 0;
+ if (unlikely(__pyx_v_map_b == Py_None)) {
+ PyErr_Format(PyExc_AttributeError, "'NoneType' object has no attribute '%.30s'", "items");
+ __PYX_ERR(0, 1487, __pyx_L1_error)
+ }
+ __pyx_t_5 = __Pyx_dict_iterator(__pyx_v_map_b, 0, __pyx_n_s_items, (&__pyx_t_3), (&__pyx_t_4)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1487, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_1);
+ __pyx_t_1 = __pyx_t_5;
+ __pyx_t_5 = 0;
+ while (1) {
+ __pyx_t_7 = __Pyx_dict_iter_next(__pyx_t_1, __pyx_t_3, &__pyx_t_2, &__pyx_t_5, &__pyx_t_6, NULL, __pyx_t_4);
+ if (unlikely(__pyx_t_7 == 0)) break;
+ if (unlikely(__pyx_t_7 == -1)) __PYX_ERR(0, 1487, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_XDECREF_SET(__pyx_v_key, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_value, __pyx_t_6);
+ __pyx_t_6 = 0;
+
+ /* "gedlibpy.pyx":1488
+ * res = {}
+ * for key, value in map_b.items():
+ * res[key.decode('utf-8')] = value.decode('utf-8') # <<<<<<<<<<<<<<
+ * return res
+ *
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_value, __pyx_n_s_decode); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1488, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_8)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_8);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ }
+ }
+ __pyx_t_6 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_8, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1488, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_key, __pyx_n_s_decode); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1488, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_9 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ }
+ }
+ __pyx_t_5 = (__pyx_t_9) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_9, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_kp_u_utf_8);
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1488, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (unlikely(PyDict_SetItem(__pyx_v_res, __pyx_t_5, __pyx_t_6) < 0)) __PYX_ERR(0, 1488, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1489
+ * for key, value in map_b.items():
+ * res[key.decode('utf-8')] = value.decode('utf-8')
+ * return res # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_res);
+ __pyx_r = __pyx_v_res;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1474
+ *
+ *
+ * def decode_your_map(map_b): # <<<<<<<<<<<<<<
+ * """
+ * Decodes utf-8 byte strings in `map` from C++ functions to Python unicode strings.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_AddTraceback("gedlibpy.decode_your_map", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_res);
+ __Pyx_XDECREF(__pyx_v_key);
+ __Pyx_XDECREF(__pyx_v_value);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
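+
+/* A minimal round-trip sketch for encode_your_map()/decode_your_map()
+ * (hand-written comment, not Cython output; the literal values are made up,
+ * only the documented dict{string : string} <-> dict{'b'string : 'b'string}
+ * conversion is used):
+ *
+ *     attrs = {'chem': 'C', 'charge': '0'}
+ *     as_bytes = encode_your_map(attrs)    # {b'chem': b'C', b'charge': b'0'}
+ *     decode_your_map(as_bytes) == attrs   # True
+ */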
+
+/* "gedlibpy.pyx":1492
+ *
+ *
+ * def decode_graph_edges(map_edge_b): # <<<<<<<<<<<<<<
+ * """
+ * Decode utf-8 byte strings in graph edges `map` from C++ functions to Python unicode strings.
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8gedlibpy_13decode_graph_edges(PyObject *__pyx_self, PyObject *__pyx_v_map_edge_b); /*proto*/
+static char __pyx_doc_8gedlibpy_12decode_graph_edges[] = "\n\tDecode utf-8 byte strings in graph edges `map` from C++ functions to Python unicode strings. \n\n\tParameters\n\t----------\n\tmap_edge_b : dict{tuple(size_t, size_t) : dict{'b'string : 'b'string}}\n\t\tThe map to decode.\n\n\tReturns\n\t-------\n\tdict{tuple(size_t, size_t) : dict{string : string}}\n\t\tThe decoded map.\n\t\n\tNotes\n\t-----\n\tThis is a helper function for function `GEDEnv.get_graph_edges()`.\n\t";
+static PyMethodDef __pyx_mdef_8gedlibpy_13decode_graph_edges = {"decode_graph_edges", (PyCFunction)__pyx_pw_8gedlibpy_13decode_graph_edges, METH_O, __pyx_doc_8gedlibpy_12decode_graph_edges};
+static PyObject *__pyx_pw_8gedlibpy_13decode_graph_edges(PyObject *__pyx_self, PyObject *__pyx_v_map_edge_b) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("decode_graph_edges (wrapper)", 0);
+ __pyx_r = __pyx_pf_8gedlibpy_12decode_graph_edges(__pyx_self, ((PyObject *)__pyx_v_map_edge_b));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8gedlibpy_12decode_graph_edges(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_map_edge_b) {
+ PyObject *__pyx_v_map_edges = NULL;
+ PyObject *__pyx_v_key = NULL;
+ PyObject *__pyx_v_value = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ Py_ssize_t __pyx_t_2;
+ Py_ssize_t __pyx_t_3;
+ int __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ int __pyx_t_7;
+ PyObject *__pyx_t_8 = NULL;
+ __Pyx_RefNannySetupContext("decode_graph_edges", 0);
+
+ /* "gedlibpy.pyx":1510
+ * This is a helper function for function `GEDEnv.get_graph_edges()`.
+ * """
+ * map_edges = {} # <<<<<<<<<<<<<<
+ * for key, value in map_edge_b.items():
+ * map_edges[key] = decode_your_map(value)
+ */
+ __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1510, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_map_edges = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1511
+ * """
+ * map_edges = {}
+ * for key, value in map_edge_b.items(): # <<<<<<<<<<<<<<
+ * map_edges[key] = decode_your_map(value)
+ * return map_edges
+ */
+ __pyx_t_2 = 0;
+ if (unlikely(__pyx_v_map_edge_b == Py_None)) {
+ PyErr_Format(PyExc_AttributeError, "'NoneType' object has no attribute '%.30s'", "items");
+ __PYX_ERR(0, 1511, __pyx_L1_error)
+ }
+ __pyx_t_5 = __Pyx_dict_iterator(__pyx_v_map_edge_b, 0, __pyx_n_s_items, (&__pyx_t_3), (&__pyx_t_4)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1511, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_1);
+ __pyx_t_1 = __pyx_t_5;
+ __pyx_t_5 = 0;
+ while (1) {
+ __pyx_t_7 = __Pyx_dict_iter_next(__pyx_t_1, __pyx_t_3, &__pyx_t_2, &__pyx_t_5, &__pyx_t_6, NULL, __pyx_t_4);
+ if (unlikely(__pyx_t_7 == 0)) break;
+ if (unlikely(__pyx_t_7 == -1)) __PYX_ERR(0, 1511, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_XDECREF_SET(__pyx_v_key, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_value, __pyx_t_6);
+ __pyx_t_6 = 0;
+
+ /* "gedlibpy.pyx":1512
+ * map_edges = {}
+ * for key, value in map_edge_b.items():
+ * map_edges[key] = decode_your_map(value) # <<<<<<<<<<<<<<
+ * return map_edges
+ *
+ */
+ __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_decode_your_map); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1512, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_8)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_8);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ }
+ }
+ __pyx_t_6 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_8, __pyx_v_value) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_v_value);
+ __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1512, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (unlikely(PyDict_SetItem(__pyx_v_map_edges, __pyx_v_key, __pyx_t_6) < 0)) __PYX_ERR(0, 1512, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "gedlibpy.pyx":1513
+ * for key, value in map_edge_b.items():
+ * map_edges[key] = decode_your_map(value)
+ * return map_edges # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_map_edges);
+ __pyx_r = __pyx_v_map_edges;
+ goto __pyx_L0;
+
+ /* "gedlibpy.pyx":1492
+ *
+ *
+ * def decode_graph_edges(map_edge_b): # <<<<<<<<<<<<<<
+ * """
+ * Decode utf-8 byte strings in graph edges `map` from C++ functions to Python unicode strings.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_AddTraceback("gedlibpy.decode_graph_edges", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_map_edges);
+ __Pyx_XDECREF(__pyx_v_key);
+ __Pyx_XDECREF(__pyx_v_value);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
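+
+/* A minimal usage sketch of decode_graph_edges() (hand-written comment, not
+ * Cython output; the literal values are made up, only the documented
+ * dict{tuple : dict} layout is used):
+ *
+ *     edges_b = {(0, 1): {b'valence': b'1'}, (1, 2): {b'valence': b'2'}}
+ *     decode_graph_edges(edges_b)
+ *     # {(0, 1): {'valence': '1'}, (1, 2): {'valence': '2'}}
+ */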
+
+/* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":258
+ * # experimental exception made for __getbuffer__ and __releasebuffer__
+ * # -- the details of this may change.
+ * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<<
+ * # This implementation of getbuffer is geared towards Cython
+ * # requirements, and does not yet fulfill the PEP.
+ */
+
+/* Python wrapper */
+static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
+static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
+ int __pyx_v_i;
+ int __pyx_v_ndim;
+ int __pyx_v_endian_detector;
+ int __pyx_v_little_endian;
+ int __pyx_v_t;
+ char *__pyx_v_f;
+ PyArray_Descr *__pyx_v_descr = 0;
+ int __pyx_v_offset;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ int __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ int __pyx_t_5;
+ int __pyx_t_6;
+ PyArray_Descr *__pyx_t_7;
+ PyObject *__pyx_t_8 = NULL;
+ char *__pyx_t_9;
+ if (__pyx_v_info == NULL) {
+ PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
+ return -1;
+ }
+ __Pyx_RefNannySetupContext("__getbuffer__", 0);
+ __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
+ __Pyx_GIVEREF(__pyx_v_info->obj);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":265
+ *
+ * cdef int i, ndim
+ * cdef int endian_detector = 1 # <<<<<<<<<<<<<<
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ *
+ */
+ __pyx_v_endian_detector = 1;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":266
+ * cdef int i, ndim
+ * cdef int endian_detector = 1
+ * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<<
+ *
+ * ndim = PyArray_NDIM(self)
+ */
+ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":268
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ *
+ * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<<
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ */
+ __pyx_v_ndim = PyArray_NDIM(__pyx_v_self);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":270
+ * ndim = PyArray_NDIM(self)
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L4_bool_binop_done;
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":271
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ */
+ __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_ARRAY_C_CONTIGUOUS) != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L4_bool_binop_done:;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":270
+ * ndim = PyArray_NDIM(self)
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ if (unlikely(__pyx_t_1)) {
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":272
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<<
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 272, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 272, __pyx_L1_error)
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":270
+ * ndim = PyArray_NDIM(self)
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L7_bool_binop_done;
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":275
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ *
+ */
+ __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_ARRAY_F_CONTIGUOUS) != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L7_bool_binop_done:;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ if (unlikely(__pyx_t_1)) {
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":276
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<<
+ *
+ * info.buf = PyArray_DATA(self)
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 276, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 276, __pyx_L1_error)
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":278
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ *
+ * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<<
+ * info.ndim = ndim
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":279
+ *
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim # <<<<<<<<<<<<<<
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * # Allocate new buffer for strides and shape info.
+ */
+ __pyx_v_info->ndim = __pyx_v_ndim;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":280
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ */
+ __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":283
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim) # <<<<<<<<<<<<<<
+ * info.shape = info.strides + ndim
+ * for i in range(ndim):
+ */
+ __pyx_v_info->strides = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * 2) * ((size_t)__pyx_v_ndim))));
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":284
+ * # This is allocated as one block, strides first.
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
+ * info.shape = info.strides + ndim # <<<<<<<<<<<<<<
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ */
+ __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":285
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
+ * info.shape = info.strides + ndim
+ * for i in range(ndim): # <<<<<<<<<<<<<<
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ */
+ __pyx_t_4 = __pyx_v_ndim;
+ __pyx_t_5 = __pyx_t_4;
+ for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {
+ __pyx_v_i = __pyx_t_6;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":286
+ * info.shape = info.strides + ndim
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<<
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ * else:
+ */
+ (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":287
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<<
+ * else:
+ * info.strides = PyArray_STRIDES(self)
+ */
+ (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":280
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ */
+ goto __pyx_L9;
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":289
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ * else:
+ * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<<
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL
+ */
+ /*else*/ {
+ __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self));
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":290
+ * else:
+ * info.strides = PyArray_STRIDES(self)
+ * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<<
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ */
+ __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));
+ }
+ __pyx_L9:;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":291
+ * info.strides = PyArray_STRIDES(self)
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL # <<<<<<<<<<<<<<
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ * info.readonly = not PyArray_ISWRITEABLE(self)
+ */
+ __pyx_v_info->suboffsets = NULL;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":292
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<<
+ * info.readonly = not PyArray_ISWRITEABLE(self)
+ *
+ */
+ __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":293
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<<
+ *
+ * cdef int t
+ */
+ __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":296
+ *
+ * cdef int t
+ * cdef char* f = NULL # <<<<<<<<<<<<<<
+ * cdef dtype descr = PyArray_DESCR(self)
+ * cdef int offset
+ */
+ __pyx_v_f = NULL;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":297
+ * cdef int t
+ * cdef char* f = NULL
+ * cdef dtype descr = PyArray_DESCR(self) # <<<<<<<<<<<<<<
+ * cdef int offset
+ *
+ */
+ __pyx_t_7 = PyArray_DESCR(__pyx_v_self);
+ __pyx_t_3 = ((PyObject *)__pyx_t_7);
+ __Pyx_INCREF(__pyx_t_3);
+ __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":300
+ * cdef int offset
+ *
+ * info.obj = self # <<<<<<<<<<<<<<
+ *
+ * if not PyDataType_HASFIELDS(descr):
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_self));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj);
+ __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":302
+ * info.obj = self
+ *
+ * if not PyDataType_HASFIELDS(descr): # <<<<<<<<<<<<<<
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ */
+ __pyx_t_1 = ((!(PyDataType_HASFIELDS(__pyx_v_descr) != 0)) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":303
+ *
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num # <<<<<<<<<<<<<<
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)):
+ */
+ __pyx_t_4 = __pyx_v_descr->type_num;
+ __pyx_v_t = __pyx_t_4;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":304
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);
+ if (!__pyx_t_2) {
+ goto __pyx_L15_next_or;
+ } else {
+ }
+ __pyx_t_2 = (__pyx_v_little_endian != 0);
+ if (!__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L14_bool_binop_done;
+ }
+ __pyx_L15_next_or:;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":305
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b"
+ */
+ __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L14_bool_binop_done;
+ }
+ __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L14_bool_binop_done:;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":304
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ if (unlikely(__pyx_t_1)) {
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":306
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<<
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B"
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 306, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 306, __pyx_L1_error)
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":304
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":307
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<<
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h"
+ */
+ switch (__pyx_v_t) {
+ case NPY_BYTE:
+ __pyx_v_f = ((char *)"b");
+ break;
+ case NPY_UBYTE:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":308
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<<
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H"
+ */
+ __pyx_v_f = ((char *)"B");
+ break;
+ case NPY_SHORT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":309
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<<
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i"
+ */
+ __pyx_v_f = ((char *)"h");
+ break;
+ case NPY_USHORT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":310
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<<
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I"
+ */
+ __pyx_v_f = ((char *)"H");
+ break;
+ case NPY_INT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":311
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<<
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l"
+ */
+ __pyx_v_f = ((char *)"i");
+ break;
+ case NPY_UINT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":312
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L"
+ */
+ __pyx_v_f = ((char *)"I");
+ break;
+ case NPY_LONG:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":313
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q"
+ */
+ __pyx_v_f = ((char *)"l");
+ break;
+ case NPY_ULONG:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":314
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q"
+ */
+ __pyx_v_f = ((char *)"L");
+ break;
+ case NPY_LONGLONG:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":315
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f"
+ */
+ __pyx_v_f = ((char *)"q");
+ break;
+ case NPY_ULONGLONG:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":316
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<<
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d"
+ */
+ __pyx_v_f = ((char *)"Q");
+ break;
+ case NPY_FLOAT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":317
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<<
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ */
+ __pyx_v_f = ((char *)"f");
+ break;
+ case NPY_DOUBLE:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":318
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf"
+ */
+ __pyx_v_f = ((char *)"d");
+ break;
+ case NPY_LONGDOUBLE:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":319
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<<
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ */
+ __pyx_v_f = ((char *)"g");
+ break;
+ case NPY_CFLOAT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":320
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<<
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ */
+ __pyx_v_f = ((char *)"Zf");
+ break;
+ case NPY_CDOUBLE:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":321
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<<
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ * elif t == NPY_OBJECT: f = "O"
+ */
+ __pyx_v_f = ((char *)"Zd");
+ break;
+ case NPY_CLONGDOUBLE:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":322
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<<
+ * elif t == NPY_OBJECT: f = "O"
+ * else:
+ */
+ __pyx_v_f = ((char *)"Zg");
+ break;
+ case NPY_OBJECT:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":323
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<<
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ */
+ __pyx_v_f = ((char *)"O");
+ break;
+ default:
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":325
+ * elif t == NPY_OBJECT: f = "O"
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<<
+ * info.format = f
+ * return
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 325, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_8 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_8)) __PYX_ERR(2, 325, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_8); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 325, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 325, __pyx_L1_error)
+ break;
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":326
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * info.format = f # <<<<<<<<<<<<<<
+ * return
+ * else:
+ */
+ __pyx_v_info->format = __pyx_v_f;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":327
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * info.format = f
+ * return # <<<<<<<<<<<<<<
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ */
+ __pyx_r = 0;
+ goto __pyx_L0;
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":302
+ * info.obj = self
+ *
+ * if not PyDataType_HASFIELDS(descr): # <<<<<<<<<<<<<<
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ */
+ }
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":329
+ * return
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len) # <<<<<<<<<<<<<<
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0
+ */
+ /*else*/ {
+ __pyx_v_info->format = ((char *)PyObject_Malloc(0xFF));
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":330
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ * info.format[0] = c'^' # Native data types, manual alignment # <<<<<<<<<<<<<<
+ * offset = 0
+ * f = _util_dtypestring(descr, info.format + 1,
+ */
+ (__pyx_v_info->format[0]) = '^';
+
+ /* "../../virtualenv/v0.2_latest_requirements_versions/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":331
+ * info.format =