Merge pull request #7 from worldcoin/yi-update-typo
fix typos in all documents
ycbiometrics authored Dec 20, 2023
2 parents 5a21037 + 43fde87 commit 1e09296
Showing 6 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion .github/pull_request_template.md
@@ -25,7 +25,7 @@ Please, give a brief description of what was changed or fixed and how you did it
## Checklist
<!-- Please make sure you did all pre review requesting steps and check all with an 'x' ([x]). -->

- - [ ] I've made sure that my code works as expected by writting unit tests.
+ - [ ] I've made sure that my code works as expected by writing unit tests.
- [ ] I've checked if my code doesn't generate warnings or errors.
- [ ] I've performed a self-review of my code.
- [ ] I've made sure that my code follows the style guidelines of the project.
10 changes: 5 additions & 5 deletions SEMSEG_MODEL_CARD.md
@@ -16,7 +16,7 @@ The decoder is enhanced through the incorporation of Spatial and Channel "Squeez

Importantly, no Worldcoin user data was used to train or fine-tune the IRIS pipeline. Rather, a research dataset from the University of Notre Dame du Lac (ND-IRIS-0405) [4] was used, with the University’s permission. This dataset was enhanced with manual labels, which themselves may be made available for research purposes.

- The experimental dataset contained a total of 9 957 manually annotated IR images comming from 676 different people. All images were captured using LG 4000 device. Table below presents dataset split used during training semantic segmentation model.
+ The experimental dataset contained a total of 9 957 manually annotated IR images coming from 676 different people. All images were captured using LG 4000 device. Table below presents dataset split used during training semantic segmentation model.


| **Dataset type**| **Number of images** | **Number of subject** |
@@ -76,7 +76,7 @@ The model yields a tensor characterized by dimensions of Nx4x640x480, denoting b

Within the ambit of each class, the model formulates probability estimates pertaining to the likelihood of a given pixel being attributed to a particular class.
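To make the output format above concrete, here is an editor's hedged sketch (not part of the original model card) of how the per-class probabilities can be collapsed into a discrete segmentation map with an argmax over the class dimension; the tensor layout follows the Nx4x640x480 description above.

```python
import numpy as np

# Placeholder standing in for a real model forward pass, laid out as
# (batch, class, spatial, spatial) = (N, 4, 640, 480) as described above.
probs = np.random.rand(1, 4, 640, 480)

# Per-pixel label = index of the most probable class (0..3).
segmentation_map = np.argmax(probs, axis=1)  # shape: (1, 640, 480)
```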

- ### Examplary inference results
+ ### Example inference results

**Note**: The provided input image has been subjected to the processing methodology described earlier, prior to its introduction into the model. Moreover, for the intent of visualization, the IR image presented has been anonymized to safeguard the identity of the user. It is also worth to note that the inference process was conducted on the original, non-anonymized version of the image.

@@ -93,9 +93,9 @@ Within the ambit of each class, the model formulates probability estimates perta
## Limitations

Thorough examination of the results enabled us to pinpoint situations where the segmentation model experiences declines in performance. These instances are as follows:
- - Segmenting images that were not captured by LG4400 sensor may not always produce smooth segmentation maps. The segmention performance depends on how similar images to be segmented are to the images captured by LG4400 sensor.
- - Segmenting images with high specular reflection comming usually from glasses may lead to bad segmentation map predictions.
- - Data based on which the model was trained were captured in the constrained environment with cooperative users. Therefore, in practise model is expected to produce poor segmentation maps for cases like: offgazes, misaligned eyes, blurry images etc.
+ - Segmenting images that were not captured by LG4400 sensor may not always produce smooth segmentation maps. The segmentation performance depends on how similar images to be segmented are to the images captured by LG4400 sensor.
+ - Segmenting images with high specular reflection coming usually from glasses may lead to bad segmentation map predictions.
+ - Data based on which the model was trained were captured in the constrained environment with cooperative users. Therefore, in practice model is expected to produce poor segmentation maps for cases like: offgaze, misaligned eyes, blurry images etc.

## Further reading

14 changes: 7 additions & 7 deletions docs/source/examples/custom_pipeline.rst
@@ -47,7 +47,7 @@ The ``pipeline`` subsection contains a list of ``IRISPipeline`` nodes. The node

* ``name`` - that's node metadata information about node name. It's used later to define connections with other defined nodes. Also, it's worth to notice that the ``name`` key is later used by ``PipelineCallTraceStorage`` to store and return different intermediate results.
* ``algorithm`` - that's a key that contains a definition of a Python object that implements an algorithm we want to use in our pipeline.
- * ``algorithms.class_name`` - a Python object class name that implements ``iris.Algorithm`` interface (more information about ``Algorithm`` class will be provided in section 3 of this tutorial). Please note, that defined here Python object must be importable by Python interpreter. That means that ``Algorithm`` implementation doesn't have to be implemented within ``iris`` package. User may implement or import it from any external library. The only contrain is that ``Algorithm`` interface must be satisfied to make everything compatible.
+ * ``algorithms.class_name`` - a Python object class name that implements ``iris.Algorithm`` interface (more information about ``Algorithm`` class will be provided in section 3 of this tutorial). Please note, that defined here Python object must be importable by Python interpreter. That means that ``Algorithm`` implementation doesn't have to be implemented within ``iris`` package. User may implement or import it from any external library. The only constraint is that ``Algorithm`` interface must be satisfied to make everything compatible.
* ``algorithms.params`` - that key defined a dictionary that contains all ``__init__`` parameters of a given node - ``Algorithm`` object. List of parameters of nodes available in the ``iris`` package with their descriptions can be found in project documentation.
* ``inputs`` - that key defined a list of inputs to node's ``run`` method - connections between node within pipeline graph. A single input record has to contain following keys: ``["name", "source_node"]``. Optionally, an ``inputs`` record can contain an ``index`` key. It's used whenever input node returns a tuple/list of objects and user wants to extract a certain output to be provided to ``run`` method of currently defined node. An example of a node definition that utilized ``index`` can look like follow:

@@ -65,7 +65,7 @@ The ``pipeline`` subsection contains a list of ``IRISPipeline`` nodes. The node
* ``inputs.name`` - the ``Algorithm`` ``run`` method argument name that is meant to be filled with the output from the ``source_name``.
* ``inputs.source_name`` - a name of node that outputs input to currently defined node.
- * ``callbacks`` - a key that defines a list of possible ``iris.Callback`` object of a node. That key requires from an ``Algorithm`` object to allow callback plug in. User can allow that behaviour when specifing ``callbacks`` argument of the ``__init__`` method of particular ``Algorithm``.
+ * ``callbacks`` - a key that defines a list of possible ``iris.Callback`` object of a node. That key requires from an ``Algorithm`` object to allow callback plug in. User can allow that behaviour when specifying ``callbacks`` argument of the ``__init__`` method of particular ``Algorithm``.

*NOTE*: Nodes has to be defined consecutively with the order they appear within pipeline. That means that specifying ``source_name`` to the node which definition appears later within YAML file will cause exception being raised when instantiating pipeline.
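To make the keys above concrete, here is an editor's illustrative sketch of a single node entry written as a Python dict (the node name, class path and parameter below are hypothetical and not taken from the default configuration):

.. code-block:: python

    node_definition = {
        "name": "vector_smoothing",  # hypothetical node name, referenced by downstream nodes
        "algorithm": {
            # Any importable class implementing the iris.Algorithm interface.
            "class_name": "my_package.VectorSmoothing",
            "params": {"window_size": 5},  # forwarded to the class __init__
        },
        "inputs": [
            # Take the second element of the tuple returned by the
            # (hypothetical) upstream node "vectorization".
            {"name": "polygon", "source_node": "vectorization", "index": 1},
        ],
        "callbacks": [],
    }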

@@ -543,11 +543,11 @@ Perfect! We've just learned how to modify ``IRISPipeline`` algorithms parameters
2. Configure ``IRISPipeline`` graph.
------------------------------------------------------

- As descibed in previous section to define connection between nodes, we utilize ``inputs`` key within our YAML file or dictionary. Similar to previous tutorial, let's start with instantiating a default ``IRISPipeline`` and then modify "artificially" for demonstration purposes connections between ``distance_filter`` (``iris.ContourPointNoiseEyeballDistanceFilter``), ``smoothing`` (``iris.Smoothing``) and ``geometry_estimation`` (``iris.FusionExtrapolation``) nodes.
+ As described in previous section to define connection between nodes, we utilize ``inputs`` key within our YAML file or dictionary. Similar to previous tutorial, let's start with instantiating a default ``IRISPipeline`` and then modify "artificially" for demonstration purposes connections between ``distance_filter`` (``iris.ContourPointNoiseEyeballDistanceFilter``), ``smoothing`` (``iris.Smoothing``) and ``geometry_estimation`` (``iris.FusionExtrapolation``) nodes.

By default, ``smoothing`` node, responsible for refinement of vectorized iris and pupil points is taking as an input the output of ``distance_filter`` nodes, which btw is also doing refinement of vectorized iris and pupil points but of course a different one. The output of ``smoothing`` node is later passed to final ``geometry_estimation`` node as an input. Within commented section below user can follow that connection. Now, in this example let's imagine we want to bypass ``smoothing`` node and perform ``geometry_estimation`` based on the output of ``distance_filter`` node while still keeping ``smoothing`` node.

- First let's intantiate ``IRISPipeline`` with default configuration and see nodes connected to ``geometry_estimation`` node.
+ First let's instantiate ``IRISPipeline`` with default configuration and see nodes connected to ``geometry_estimation`` node.

.. code-block:: python
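    # Editor's illustrative sketch only -- the code originally shown here is
    # collapsed in this diff view and may differ. The `params.pipeline` attribute
    # used below is an assumption about how the parsed graph is exposed.
    import iris

    iris_pipeline = iris.IRISPipeline()  # default configuration

    # Print the inputs (i.e. connected source nodes) of `geometry_estimation`.
    for node in iris_pipeline.params.pipeline:
        if node.name == "geometry_estimation":
            print(node.inputs)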
@@ -1061,16 +1061,16 @@ The ``Algorithm`` class is an abstract class that is a base class for every node
NotImplementedError: Raised if subclass doesn't implement `run` method.
Returns:
- Any: Return value by concrate implementation of the `run` method.
+ Any: Return value by concrete implementation of the `run` method.
"""
raise NotImplementedError(f"{self.__class__.__name__}.run method not implemented!")
There are 3 important things to note that have direct implications on how user have to implement custom ``Algorithm``:

* The ``run`` method - If we implement our own custom ``Algorithm`` we have to make sure that ``run`` method is implemented. Other then that, already mentioned callbacks.
- * The ``__parameters_type__`` variable - In our code base, we use ``pydantic`` package to perform validation of ``Algorithm`` ``__init__`` parameters. To simplify, and hide behind the sceen, those mechanisms we introduced that variable.
+ * The ``__parameters_type__`` variable - In our code base, we use ``pydantic`` package to perform validation of ``Algorithm`` ``__init__`` parameters. To simplify and hide behind the screen those mechanisms, we introduced this variable.
* The ``callbacks`` special key that can be provided in the ``__init__`` method. As already mentioned before, if we want to turn on in our ``Algorithm`` callbacks mechanisms, we have to specify special - ``callbacks`` - parameter in that ``Algorithm`` ``__init__`` method.

- In this section, we won't provide examples since there is a planty of them within the ``iris`` package. Plus, we also want to encourage you to explore the ``iris`` package by yourself. Therefore, for examples of concreate ``Algorithm`` implementations, please check ``iris.nodes`` submodule of the ``iris`` package.
+ In this section, we won't provide examples since there are plenty of them within the ``iris`` package. Plus, we also want to encourage you to explore the ``iris`` package by yourself. Therefore, for examples of concrete ``Algorithm`` implementations, please check ``iris.nodes`` submodule of the ``iris`` package.

**Thank you for making it to the end of this tutorial!**
10 changes: 5 additions & 5 deletions docs/source/examples/getting_started.rst
@@ -54,13 +54,13 @@ If ``output["error"]`` value is ``None``, ``IRISPipeline`` finished inference ca
The ``iris_template`` value contains generated by the ``IRISPipeline`` iris code for an iris texture visible in the input image. The ``output["iris_template"]`` value is a ``dict`` containing two keys: ``["iris_codes", "mask_codes"]``.

- Each code available in ``output["iris_template"]`` dictionary is a ``numpy.ndarray`` of shape ``(16, 256, 2, 2)``. The output shape of iris code is determined by ``IRISPipeline`` filter bank parameters. The iris/mask code shape's dimmensions correspond to the following ``(iris_code_height, iris_code_width, num_filters, 2)``. Values ``iris_code_height`` and ``iris_code_width`` are determined by ``ProbeSchema``s defined for ``ConvFilterBank`` object and ``num_filters`` is determined by number of filters specified for ``ConvFilterBank`` object. The last ``2`` value of the iris/mask code dimmension corresponds to real and complex parts of each complex filter response.
+ Each code available in ``output["iris_template"]`` dictionary is a ``numpy.ndarray`` of shape ``(16, 256, 2, 2)``. The output shape of iris code is determined by ``IRISPipeline`` filter bank parameters. The iris/mask code shape's dimensions correspond to the following ``(iris_code_height, iris_code_width, num_filters, 2)``. Values ``iris_code_height`` and ``iris_code_width`` are determined by ``ProbeSchema``s defined for ``ConvFilterBank`` object and ``num_filters`` is determined by number of filters specified for ``ConvFilterBank`` object. The last ``2`` value of the iris/mask code dimension corresponds to real and complex parts of each complex filter response.
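For a quick sanity check, an editor's hedged sketch (assuming ``output`` comes from a successful ``IRISPipeline`` call as described above):

.. code-block:: python

    iris_codes = output["iris_template"]["iris_codes"]
    mask_codes = output["iris_template"]["mask_codes"]

    # Both follow (iris_code_height, iris_code_width, num_filters, 2),
    # i.e. (16, 256, 2, 2) with the default filter bank.
    print(iris_codes.shape)
    print(mask_codes.shape)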

*NOTE*: More about how to specify those parameters and configuring custom ``IRISPipeline`` can be found in the *Configuring custom pipeline* tutorial.

- The ``metadata`` value contains additional information that may be useful for further processing or quality analisys. Metadata information contain in this dictionary presents as follow.
+ The ``metadata`` value contains additional information that may be useful for further processing or quality analysis. Metadata information contain in this dictionary presents as follow.

- Configuring pipelines error handling and which intermediate results are returned can be achived through ``Environment`` parameter set when the ``IRISPipeline`` is instantiate. To understand more about that subject please follow to the notebook's next section - *2. Configuring ``IRISPipeline`` environment*.
+ Configuring pipelines error handling and which intermediate results are returned can be achieved through ``Environment`` parameter set when the ``IRISPipeline`` is instantiate. To understand more about that subject please follow to the notebook's next section - *2. Configuring ``IRISPipeline`` environment*.

1. Configuring ``IRISPipeline`` environment
--------------------------------------------
@@ -83,7 +83,7 @@ There are two parameters we can specify:

#. ``config: Union[Dict[str, Any], Optional[str]]`` - refers to ``IRISPipeline`` configuration that specified what nodes pipeline has and how all of them are orchestrated/connected into pipeline graph. How to configure pipeline graph is a subject of the tutorial *Configuring custom pipeline* tutorial.

- #. ``env: Environment`` - refers to ``IRISPipeline`` enviroment that manages error handling and return behaviour of the ``IRISPipeline``.
+ #. ``env: Environment`` - refers to ``IRISPipeline`` environment that manages error handling and return behaviour of the ``IRISPipeline``.

From that we can see that in order to modify error handling or return behaviour we have to introduce our own ``Environment`` object when creating the ``IRISPipeline`` object. The ``Environment`` object is defined as follow.

@@ -135,7 +135,7 @@ User can also create and introduce to ``IRISPipeline`` their own ``Environment``
3. Visualizing intermediate results
------------------------------------------

- The ``iris`` package provides also a useful module for plotting intermediate results - ``iris.visualisation``. The main class of the module - ``IRISVisualizer`` - provides a bunch of plot functions that given appropriate intermediate result creates a ready to dispay ``Canvas``. Definition of the ``Canvas`` type looks like follow.
+ The ``iris`` package provides also a useful module for plotting intermediate results - ``iris.visualisation``. The main class of the module - ``IRISVisualizer`` - provides a bunch of plot functions that given appropriate intermediate result creates a ready to display ``Canvas``. Definition of the ``Canvas`` type looks like follow.

.. code-block:: python
4 changes: 2 additions & 2 deletions docs/source/quickstart/installation.rst
@@ -3,7 +3,7 @@ Installation

Installation is as simple as running ``pip install`` with specifying ``IRIS_ENV`` installation global flag (``IRIS_ENV`` flag may be skipped if ``iris`` is installed from PyPl server but this option is only available when ``iris`` is installed on local machine). The ``IRIS_ENV`` flag is used to indicate an "environment" in which package is meant to work. Possible options are:

- #. ``SERVER`` - For installing ``iris`` package with dependecies required for running an inference on a local machines.
+ #. ``SERVER`` - For installing ``iris`` package with dependencies required for running an inference on a local machines.

.. code:: bash
@@ -12,7 +12,7 @@ Installation is as simple as running ``pip install`` with specifying ``IRIS_ENV`
# or directly from GitHub
IRIS_ENV=SERVER pip install git+https://github.com/worldcoin/open-iris.git
- #. ``ORB`` - For installing ``iris`` package with dependecies required for running an inference on the Orb.
+ #. ``ORB`` - For installing ``iris`` package with dependencies required for running an inference on the Orb.

.. code:: bash
2 changes: 1 addition & 1 deletion docs/source/quickstart/running_inference.rst
@@ -1,7 +1,7 @@
Running inference
================================

- A simple inference run can be achived by running source code below.
+ A simple inference run can be achieved by running source code below.

.. code-block:: python
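    # Editor's illustrative sketch -- the code originally shown here is collapsed
    # in this diff view; it follows the IRISPipeline usage documented elsewhere
    # in this repository (the image path below is a placeholder).
    import cv2
    import iris

    # 1. Create the IRISPipeline object with its default configuration.
    iris_pipeline = iris.IRISPipeline()

    # 2. Load an infrared image of an eye as a grayscale array.
    img_pixels = cv2.imread("/path/to/ir/image.png", cv2.IMREAD_GRAYSCALE)

    # 3. Run inference; `eye_side` is either "left" or "right".
    output = iris_pipeline(img_data=img_pixels, eye_side="left")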
