From 5e8a1c574ec59a73c9980ebb216639f50a2b6506 Mon Sep 17 00:00:00 2001 From: armanrahman22 Date: Tue, 24 Jul 2018 14:41:30 -0400 Subject: [PATCH 01/50] Update README.md --- README.md | 64 ++++++++----------------------------------------------- 1 file changed, 9 insertions(+), 55 deletions(-) diff --git a/README.md b/README.md index 9220d20e4..3ca1dae9d 100644 --- a/README.md +++ b/README.md @@ -1,55 +1,9 @@ -# Face Recognition using Tensorflow [![Build Status][travis-image]][travis] - -[travis-image]: http://travis-ci.org/davidsandberg/facenet.svg?branch=master -[travis]: http://travis-ci.org/davidsandberg/facenet - -This is a TensorFlow implementation of the face recognizer described in the paper -["FaceNet: A Unified Embedding for Face Recognition and Clustering"](http://arxiv.org/abs/1503.03832). The project also uses ideas from the paper ["Deep Face Recognition"](http://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/parkhi15.pdf) from the [Visual Geometry Group](http://www.robots.ox.ac.uk/~vgg/) at Oxford. - -## Compatibility -The code is tested using Tensorflow r1.7 under Ubuntu 14.04 with Python 2.7 and Python 3.5. The test cases can be found [here](https://github.com/davidsandberg/facenet/tree/master/test) and the results can be found [here](http://travis-ci.org/davidsandberg/facenet). - -## News -| Date | Update | -|----------|--------| -| 2018-04-10 | Added new models trained on Casia-WebFace and VGGFace2 (see below). Note that the models uses fixed image standardization (see [wiki](https://github.com/davidsandberg/facenet/wiki/Training-using-the-VGGFace2-dataset)). | -| 2018-03-31 | Added a new, more flexible input pipeline as well as a bunch of minor updates. | -| 2017-05-13 | Removed a bunch of older non-slim models. Moved the last bottleneck layer into the respective models. Corrected normalization of Center Loss. | -| 2017-05-06 | Added code to [train a classifier on your own images](https://github.com/davidsandberg/facenet/wiki/Train-a-classifier-on-own-images). Renamed facenet_train.py to train_tripletloss.py and facenet_train_classifier.py to train_softmax.py. | -| 2017-03-02 | Added pretrained models that generate 128-dimensional embeddings.| -| 2017-02-22 | Updated to Tensorflow r1.0. Added Continuous Integration using Travis-CI.| -| 2017-02-03 | Added models where only trainable variables has been stored in the checkpoint. These are therefore significantly smaller. | -| 2017-01-27 | Added a model trained on a subset of the MS-Celeb-1M dataset. The LFW accuracy of this model is around 0.994. | -| 2017‑01‑02 | Updated to run with Tensorflow r0.12. Not sure if it runs with older versions of Tensorflow though. | - -## Pre-trained models -| Model name | LFW accuracy | Training dataset | Architecture | -|-----------------|--------------|------------------|-------------| -| [20180408-102900](https://drive.google.com/open?id=1R77HmFADxe87GmoLwzfgMu_HY0IhcyBz) | 0.9905 | CASIA-WebFace | [Inception ResNet v1](https://github.com/davidsandberg/facenet/blob/master/src/models/inception_resnet_v1.py) | -| [20180402-114759](https://drive.google.com/open?id=1EXPBSXwTaqrSC0OhUdXNmKSh9qJUQ55-) | 0.9965 | VGGFace2 | [Inception ResNet v1](https://github.com/davidsandberg/facenet/blob/master/src/models/inception_resnet_v1.py) | - -NOTE: If you use any of the models, please do not forget to give proper credit to those providing the training dataset as well. - -## Inspiration -The code is heavily inspired by the [OpenFace](https://github.com/cmusatyalab/openface) implementation. 
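Both checkpoints expose the same graph interface, so scoring an aligned face crop against either model looks the same. A minimal sketch (the tensor names `input:0`, `embeddings:0`, and `phase_train:0` are the ones this repo's own Encoder class binds to; the checkpoint and image paths are placeholders):

```python
import tensorflow as tf
from scipy import misc
from facenet_sandberg import facenet

with tf.Graph().as_default(), tf.Session() as sess:
    # placeholder path: directory holding a downloaded checkpoint, e.g. 20180402-114759
    facenet.load_model('models/20180402-114759')
    images = tf.get_default_graph().get_tensor_by_name('input:0')
    embeddings = tf.get_default_graph().get_tensor_by_name('embeddings:0')
    phase_train = tf.get_default_graph().get_tensor_by_name('phase_train:0')

    # an MTCNN-aligned 160x160 crop; prewhitened per image, as the repo's Encoder does
    face = facenet.prewhiten(misc.imread('aligned_face.png'))
    emb = sess.run(embeddings, feed_dict={images: [face], phase_train: False})[0]
    print(emb.shape)  # embedding vector, e.g. 512-dimensional for the 2018 models
```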
-
-## Training data
-The [CASIA-WebFace](http://www.cbsr.ia.ac.cn/english/CASIA-WebFace-Database.html) dataset has been used for training. This training set consists of total of 453 453 images over 10 575 identities after face detection. Some performance improvement has been seen if the dataset has been filtered before training. Some more information about how this was done will come later.
-The best performing model has been trained on the [VGGFace2](https://www.robots.ox.ac.uk/~vgg/data/vgg_face2/) dataset consisting of ~3.3M faces and ~9000 classes.
-
-## Pre-processing
-
-### Face alignment using MTCNN
-One problem with the above approach seems to be that the Dlib face detector misses some of the hard examples (partial occlusion, silhouettes, etc). This makes the training set too "easy" which causes the model to perform worse on other benchmarks.
-To solve this, other face landmark detectors has been tested. One face landmark detector that has proven to work very well in this setting is the
-[Multi-task CNN](https://kpzhang93.github.io/MTCNN_face_detection_alignment/index.html). A Matlab/Caffe implementation can be found [here](https://github.com/kpzhang93/MTCNN_face_detection_alignment) and this has been used for face alignment with very good results. A Python/Tensorflow implementation of MTCNN can be found [here](https://github.com/davidsandberg/facenet/tree/master/src/align). This implementation does not give identical results to the Matlab/Caffe implementation but the performance is very similar.
-
-## Running training
-Currently, the best results are achieved by training the model using softmax loss. Details on how to train a model using softmax loss on the CASIA-WebFace dataset can be found on the page [Classifier training of Inception-ResNet-v1](https://github.com/davidsandberg/facenet/wiki/Classifier-training-of-inception-resnet-v1).
-
-## Pre-trained models
-### Inception-ResNet-v1 model
-A couple of pretrained models are provided. They are trained using softmax loss with the Inception-Resnet-v1 model. The datasets has been aligned using [MTCNN](https://github.com/davidsandberg/facenet/tree/master/src/align).
-
-## Performance
-The accuracy on LFW for the model [20180402-114759](https://drive.google.com/open?id=1EXPBSXwTaqrSC0OhUdXNmKSh9qJUQ55-) is 0.99650+-0.00252. A description of how to run the test can be found on the page [Validate on LFW](https://github.com/davidsandberg/facenet/wiki/Validate-on-lfw). Note that the input images to the model need to be standardized using fixed image standardization (use the option `--use_fixed_image_standardization` when running e.g. `validate_on_lfw.py`).
+# David Sandberg FaceNet Mirror
+This repo is a pip-installable mirror of https://github.com/davidsandberg/facenet.
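The only difference from upstream is the package name, so existing facenet code needs its imports rewritten. A sketch of the before and after (these are the same modules the import-fix commit below touches):

```python
# before, running from a checkout of davidsandberg/facenet:
#   import facenet
#   import align.detect_face

# after installing this mirror from pip:
from facenet_sandberg import facenet, lfw
from facenet_sandberg.align import detect_face
```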
+## Installation +This implementation can be pip installed as follows: +``` +pip install facenet_sandberg +``` +## Copyright +MIT License from original repo https://github.com/davidsandberg/facenet/blob/master/LICENSE.md From 6bfae8cd790f29ae6b0a3a2ca5d82e5d960471b3 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Tue, 24 Jul 2018 14:42:14 -0400 Subject: [PATCH 02/50] made pypi package --- .vscode/settings.json | 3 +++ {src => facenet_sandberg}/__init__.py | 0 .../align/__init__.py | 0 .../align/align_dataset_mtcnn.py | 0 {src => facenet_sandberg}/align/det1.npy | Bin {src => facenet_sandberg}/align/det2.npy | Bin {src => facenet_sandberg}/align/det3.npy | Bin .../align/detect_face.py | 0 .../calculate_filtering_metrics.py | 0 {src => facenet_sandberg}/classifier.py | 0 {src => facenet_sandberg}/compare.py | 0 .../decode_msceleb_dataset.py | 0 .../download_and_extract.py | 0 {src => facenet_sandberg}/facenet.py | 0 {src => facenet_sandberg}/freeze_graph.py | 0 .../generative}/__init__.py | 0 .../generative/calculate_attribute_vectors.py | 0 .../generative/models}/__init__.py | 0 .../generative/models/dfc_vae.py | 0 .../generative/models/dfc_vae_large.py | 0 .../generative/models/dfc_vae_resnet.py | 0 .../generative/models/vae_base.py | 0 .../generative/modify_attribute.py | 0 .../generative/train_vae.py | 0 {src => facenet_sandberg}/lfw.py | 0 {src => facenet_sandberg}/models/__init__.py | 0 {src => facenet_sandberg}/models/dummy.py | 0 .../models/inception_resnet_v1.py | 0 .../models/inception_resnet_v2.py | 0 .../models/squeezenet.py | 0 {src => facenet_sandberg}/train_softmax.py | 0 .../train_tripletloss.py | 0 {src => facenet_sandberg}/validate_on_lfw.py | 0 setup.py | 18 ++++++++++++++++++ src/generative/models/__init__.py | 0 35 files changed, 21 insertions(+) create mode 100644 .vscode/settings.json rename {src => facenet_sandberg}/__init__.py (100%) rename __init__.py => facenet_sandberg/align/__init__.py (100%) rename {src => facenet_sandberg}/align/align_dataset_mtcnn.py (100%) rename {src => facenet_sandberg}/align/det1.npy (100%) rename {src => facenet_sandberg}/align/det2.npy (100%) rename {src => facenet_sandberg}/align/det3.npy (100%) rename {src => facenet_sandberg}/align/detect_face.py (100%) rename {src => facenet_sandberg}/calculate_filtering_metrics.py (100%) rename {src => facenet_sandberg}/classifier.py (100%) rename {src => facenet_sandberg}/compare.py (100%) rename {src => facenet_sandberg}/decode_msceleb_dataset.py (100%) rename {src => facenet_sandberg}/download_and_extract.py (100%) rename {src => facenet_sandberg}/facenet.py (100%) rename {src => facenet_sandberg}/freeze_graph.py (100%) rename {src/align => facenet_sandberg/generative}/__init__.py (100%) rename {src => facenet_sandberg}/generative/calculate_attribute_vectors.py (100%) rename {src/generative => facenet_sandberg/generative/models}/__init__.py (100%) rename {src => facenet_sandberg}/generative/models/dfc_vae.py (100%) rename {src => facenet_sandberg}/generative/models/dfc_vae_large.py (100%) rename {src => facenet_sandberg}/generative/models/dfc_vae_resnet.py (100%) rename {src => facenet_sandberg}/generative/models/vae_base.py (100%) rename {src => facenet_sandberg}/generative/modify_attribute.py (100%) rename {src => facenet_sandberg}/generative/train_vae.py (100%) rename {src => facenet_sandberg}/lfw.py (100%) rename {src => facenet_sandberg}/models/__init__.py (100%) rename {src => facenet_sandberg}/models/dummy.py (100%) rename {src => facenet_sandberg}/models/inception_resnet_v1.py (100%) 
rename {src => facenet_sandberg}/models/inception_resnet_v2.py (100%) rename {src => facenet_sandberg}/models/squeezenet.py (100%) rename {src => facenet_sandberg}/train_softmax.py (100%) rename {src => facenet_sandberg}/train_tripletloss.py (100%) rename {src => facenet_sandberg}/validate_on_lfw.py (100%) create mode 100644 setup.py delete mode 100644 src/generative/models/__init__.py diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100644 index 000000000..70f059b00 --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,3 @@ +{ + "python.pythonPath": "/Users/armanrahman/anaconda3/bin/python" +} \ No newline at end of file diff --git a/src/__init__.py b/facenet_sandberg/__init__.py similarity index 100% rename from src/__init__.py rename to facenet_sandberg/__init__.py diff --git a/__init__.py b/facenet_sandberg/align/__init__.py similarity index 100% rename from __init__.py rename to facenet_sandberg/align/__init__.py diff --git a/src/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py similarity index 100% rename from src/align/align_dataset_mtcnn.py rename to facenet_sandberg/align/align_dataset_mtcnn.py diff --git a/src/align/det1.npy b/facenet_sandberg/align/det1.npy similarity index 100% rename from src/align/det1.npy rename to facenet_sandberg/align/det1.npy diff --git a/src/align/det2.npy b/facenet_sandberg/align/det2.npy similarity index 100% rename from src/align/det2.npy rename to facenet_sandberg/align/det2.npy diff --git a/src/align/det3.npy b/facenet_sandberg/align/det3.npy similarity index 100% rename from src/align/det3.npy rename to facenet_sandberg/align/det3.npy diff --git a/src/align/detect_face.py b/facenet_sandberg/align/detect_face.py similarity index 100% rename from src/align/detect_face.py rename to facenet_sandberg/align/detect_face.py diff --git a/src/calculate_filtering_metrics.py b/facenet_sandberg/calculate_filtering_metrics.py similarity index 100% rename from src/calculate_filtering_metrics.py rename to facenet_sandberg/calculate_filtering_metrics.py diff --git a/src/classifier.py b/facenet_sandberg/classifier.py similarity index 100% rename from src/classifier.py rename to facenet_sandberg/classifier.py diff --git a/src/compare.py b/facenet_sandberg/compare.py similarity index 100% rename from src/compare.py rename to facenet_sandberg/compare.py diff --git a/src/decode_msceleb_dataset.py b/facenet_sandberg/decode_msceleb_dataset.py similarity index 100% rename from src/decode_msceleb_dataset.py rename to facenet_sandberg/decode_msceleb_dataset.py diff --git a/src/download_and_extract.py b/facenet_sandberg/download_and_extract.py similarity index 100% rename from src/download_and_extract.py rename to facenet_sandberg/download_and_extract.py diff --git a/src/facenet.py b/facenet_sandberg/facenet.py similarity index 100% rename from src/facenet.py rename to facenet_sandberg/facenet.py diff --git a/src/freeze_graph.py b/facenet_sandberg/freeze_graph.py similarity index 100% rename from src/freeze_graph.py rename to facenet_sandberg/freeze_graph.py diff --git a/src/align/__init__.py b/facenet_sandberg/generative/__init__.py similarity index 100% rename from src/align/__init__.py rename to facenet_sandberg/generative/__init__.py diff --git a/src/generative/calculate_attribute_vectors.py b/facenet_sandberg/generative/calculate_attribute_vectors.py similarity index 100% rename from src/generative/calculate_attribute_vectors.py rename to facenet_sandberg/generative/calculate_attribute_vectors.py diff --git 
a/src/generative/__init__.py b/facenet_sandberg/generative/models/__init__.py similarity index 100% rename from src/generative/__init__.py rename to facenet_sandberg/generative/models/__init__.py diff --git a/src/generative/models/dfc_vae.py b/facenet_sandberg/generative/models/dfc_vae.py similarity index 100% rename from src/generative/models/dfc_vae.py rename to facenet_sandberg/generative/models/dfc_vae.py diff --git a/src/generative/models/dfc_vae_large.py b/facenet_sandberg/generative/models/dfc_vae_large.py similarity index 100% rename from src/generative/models/dfc_vae_large.py rename to facenet_sandberg/generative/models/dfc_vae_large.py diff --git a/src/generative/models/dfc_vae_resnet.py b/facenet_sandberg/generative/models/dfc_vae_resnet.py similarity index 100% rename from src/generative/models/dfc_vae_resnet.py rename to facenet_sandberg/generative/models/dfc_vae_resnet.py diff --git a/src/generative/models/vae_base.py b/facenet_sandberg/generative/models/vae_base.py similarity index 100% rename from src/generative/models/vae_base.py rename to facenet_sandberg/generative/models/vae_base.py diff --git a/src/generative/modify_attribute.py b/facenet_sandberg/generative/modify_attribute.py similarity index 100% rename from src/generative/modify_attribute.py rename to facenet_sandberg/generative/modify_attribute.py diff --git a/src/generative/train_vae.py b/facenet_sandberg/generative/train_vae.py similarity index 100% rename from src/generative/train_vae.py rename to facenet_sandberg/generative/train_vae.py diff --git a/src/lfw.py b/facenet_sandberg/lfw.py similarity index 100% rename from src/lfw.py rename to facenet_sandberg/lfw.py diff --git a/src/models/__init__.py b/facenet_sandberg/models/__init__.py similarity index 100% rename from src/models/__init__.py rename to facenet_sandberg/models/__init__.py diff --git a/src/models/dummy.py b/facenet_sandberg/models/dummy.py similarity index 100% rename from src/models/dummy.py rename to facenet_sandberg/models/dummy.py diff --git a/src/models/inception_resnet_v1.py b/facenet_sandberg/models/inception_resnet_v1.py similarity index 100% rename from src/models/inception_resnet_v1.py rename to facenet_sandberg/models/inception_resnet_v1.py diff --git a/src/models/inception_resnet_v2.py b/facenet_sandberg/models/inception_resnet_v2.py similarity index 100% rename from src/models/inception_resnet_v2.py rename to facenet_sandberg/models/inception_resnet_v2.py diff --git a/src/models/squeezenet.py b/facenet_sandberg/models/squeezenet.py similarity index 100% rename from src/models/squeezenet.py rename to facenet_sandberg/models/squeezenet.py diff --git a/src/train_softmax.py b/facenet_sandberg/train_softmax.py similarity index 100% rename from src/train_softmax.py rename to facenet_sandberg/train_softmax.py diff --git a/src/train_tripletloss.py b/facenet_sandberg/train_tripletloss.py similarity index 100% rename from src/train_tripletloss.py rename to facenet_sandberg/train_tripletloss.py diff --git a/src/validate_on_lfw.py b/facenet_sandberg/validate_on_lfw.py similarity index 100% rename from src/validate_on_lfw.py rename to facenet_sandberg/validate_on_lfw.py diff --git a/setup.py b/setup.py new file mode 100644 index 000000000..533861ee8 --- /dev/null +++ b/setup.py @@ -0,0 +1,18 @@ +from setuptools import setup, find_packages + +setup( + name='facenet_sandberg', + version='1.0.1', + description="Face recognition using TensorFlow", + long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. 
Mirror of https://github.com/davidsandberg/facenet.", + url='https://github.com/armanrahman22/facenet', + packages= find_packages(), + maintainer='Arman Rahman', + maintainer_email='armanrahman22@gmail.com', + include_package_data=True, + license='MIT', + install_requires=[ + 'tensorflow', 'scipy', 'scikit-learn', 'opencv-python', + 'h5py', 'matplotlib', 'Pillow', 'requests', 'psutil' + ] +) \ No newline at end of file diff --git a/src/generative/models/__init__.py b/src/generative/models/__init__.py deleted file mode 100644 index e69de29bb..000000000 From bc74a923b6c34d89e3115cd60a6dd8b985c9c858 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Wed, 25 Jul 2018 23:46:42 -0400 Subject: [PATCH 03/50] added npy files --- MANIFEST.in | 1 + setup.py | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) create mode 100644 MANIFEST.in diff --git a/MANIFEST.in b/MANIFEST.in new file mode 100644 index 000000000..c4336d672 --- /dev/null +++ b/MANIFEST.in @@ -0,0 +1 @@ +include facenet_sandberg/align/*.npy \ No newline at end of file diff --git a/setup.py b/setup.py index 533861ee8..505362464 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.1', + version='1.0.2', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From 06436827f61d53bd0b53161ef63647dc9d32a402 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Fri, 3 Aug 2018 12:14:30 -0400 Subject: [PATCH 04/50] import fixes --- facenet_sandberg/align/align_dataset_mtcnn.py | 14 +++++++++----- facenet_sandberg/calculate_filtering_metrics.py | 2 +- facenet_sandberg/classifier.py | 2 +- facenet_sandberg/compare.py | 4 ++-- facenet_sandberg/decode_msceleb_dataset.py | 2 +- facenet_sandberg/freeze_graph.py | 2 +- facenet_sandberg/generative/modify_attribute.py | 2 +- facenet_sandberg/generative/train_vae.py | 2 +- facenet_sandberg/lfw.py | 2 +- facenet_sandberg/train_softmax.py | 4 ++-- facenet_sandberg/train_tripletloss.py | 4 ++-- facenet_sandberg/validate_on_lfw.py | 4 ++-- setup.py | 2 +- test/center_loss_test.py | 2 +- test/train_test.py | 2 +- test/triplet_loss_test.py | 2 +- 16 files changed, 28 insertions(+), 24 deletions(-) diff --git a/facenet_sandberg/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py index 7d5e735e6..ab4fdd0e9 100644 --- a/facenet_sandberg/align/align_dataset_mtcnn.py +++ b/facenet_sandberg/align/align_dataset_mtcnn.py @@ -25,14 +25,15 @@ from __future__ import division from __future__ import print_function +from glob import iglob from scipy import misc import sys import os import argparse import tensorflow as tf import numpy as np -import facenet -import align.detect_face +from facenet_sandberg import facenet +from facenet_sandberg.align import detect_face import random from time import sleep @@ -52,7 +53,7 @@ def main(args): gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_memory_fraction) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False)) with sess.as_default(): - pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None) + pnet, rnet, onet = detect_face.create_mtcnn(sess, None) minsize = 20 # minimum size of face threshold = [ 0.6, 0.7, 0.7 ] # three steps's threshold @@ -65,6 +66,8 @@ def main(args): with open(bounding_boxes_filename, "w") as text_file: nrof_images_total = 0 
nrof_successfully_aligned = 0 + num_images = sum(1 for x in iglob(args.input_dir + '/**/*.*', recursive=True)) + # import pdb; pdb.set_trace() if args.random_order: random.shuffle(dataset) for cls in dataset: @@ -74,10 +77,11 @@ def main(args): if args.random_order: random.shuffle(cls.image_paths) for image_path in cls.image_paths: + if nrof_images_total%(num_images//20) == 0: + print('{} percent complete'.format(str(int(100 * round(nrof_images_total/num_images, 2))))) nrof_images_total += 1 filename = os.path.splitext(os.path.split(image_path)[1])[0] output_filename = os.path.join(output_class_dir, filename+'.png') - print(image_path) if not os.path.exists(output_filename): try: img = misc.imread(image_path) @@ -93,7 +97,7 @@ def main(args): img = facenet.to_rgb(img) img = img[:,:,0:3] - bounding_boxes, _ = align.detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor) + bounding_boxes, _ = detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor) nrof_faces = bounding_boxes.shape[0] if nrof_faces>0: det = bounding_boxes[:,0:4] diff --git a/facenet_sandberg/calculate_filtering_metrics.py b/facenet_sandberg/calculate_filtering_metrics.py index f60b9ae4d..6f70a3afb 100644 --- a/facenet_sandberg/calculate_filtering_metrics.py +++ b/facenet_sandberg/calculate_filtering_metrics.py @@ -29,7 +29,7 @@ import tensorflow as tf import numpy as np import argparse -import facenet +from facenet_sandberg import facenet import os import sys import time diff --git a/facenet_sandberg/classifier.py b/facenet_sandberg/classifier.py index 749db4d6b..82eeb6921 100644 --- a/facenet_sandberg/classifier.py +++ b/facenet_sandberg/classifier.py @@ -29,7 +29,7 @@ import tensorflow as tf import numpy as np import argparse -import facenet +from facenet_sandberg import facenet import os import sys import math diff --git a/facenet_sandberg/compare.py b/facenet_sandberg/compare.py index bc53cc421..c7d375327 100644 --- a/facenet_sandberg/compare.py +++ b/facenet_sandberg/compare.py @@ -33,8 +33,8 @@ import os import copy import argparse -import facenet -import align.detect_face +from facenet_sandberg import facenet +from facenet_sandberg.align import detect_face def main(args): diff --git a/facenet_sandberg/decode_msceleb_dataset.py b/facenet_sandberg/decode_msceleb_dataset.py index 4556bfa6c..477dd3392 100644 --- a/facenet_sandberg/decode_msceleb_dataset.py +++ b/facenet_sandberg/decode_msceleb_dataset.py @@ -34,7 +34,7 @@ import os import cv2 import argparse -import facenet +from facenet_sandberg import facenet # File format: text files, each line is an image record containing 6 columns, delimited by TAB. 
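The decoder walks those TSV records and writes each image payload back out to disk. A minimal sketch of handling one record (assuming, as in the MS-Celeb-1M distribution, that the base64-encoded image bytes sit in the last column; the exact column layout is an assumption, and `decode_record` is a hypothetical helper):

```python
import base64

import cv2
import numpy as np

def decode_record(line):
    # one TAB-delimited record -> (identity, decoded image)
    fields = line.rstrip('\n').split('\t')
    identity = fields[0]                       # Freebase MID naming the person
    img_bytes = base64.b64decode(fields[-1])   # assumed: image payload in last column
    img = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_COLOR)
    return identity, img
```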
diff --git a/facenet_sandberg/freeze_graph.py b/facenet_sandberg/freeze_graph.py index 3584c186e..494fab0c2 100644 --- a/facenet_sandberg/freeze_graph.py +++ b/facenet_sandberg/freeze_graph.py @@ -32,7 +32,7 @@ import argparse import os import sys -import facenet +from facenet_sandberg import facenet from six.moves import xrange # @UnresolvedImport def main(args): diff --git a/facenet_sandberg/generative/modify_attribute.py b/facenet_sandberg/generative/modify_attribute.py index 8187cff47..c96093059 100644 --- a/facenet_sandberg/generative/modify_attribute.py +++ b/facenet_sandberg/generative/modify_attribute.py @@ -32,7 +32,7 @@ import sys import argparse import importlib -import facenet +from facenet_sandberg import facenet import os import numpy as np import h5py diff --git a/facenet_sandberg/generative/train_vae.py b/facenet_sandberg/generative/train_vae.py index c3c882fab..8cc3d9135 100644 --- a/facenet_sandberg/generative/train_vae.py +++ b/facenet_sandberg/generative/train_vae.py @@ -32,7 +32,7 @@ import time import importlib import argparse -import facenet +from facenet_sandberg import facenet import numpy as np import h5py import os diff --git a/facenet_sandberg/lfw.py b/facenet_sandberg/lfw.py index 91944332d..48831a714 100644 --- a/facenet_sandberg/lfw.py +++ b/facenet_sandberg/lfw.py @@ -29,7 +29,7 @@ import os import numpy as np -import facenet +from facenet_sandberg import facenet def evaluate(embeddings, actual_issame, nrof_folds=10, distance_metric=0, subtract_mean=False): # Calculate evaluation metrics diff --git a/facenet_sandberg/train_softmax.py b/facenet_sandberg/train_softmax.py index 6b0b28b58..79fa60933 100644 --- a/facenet_sandberg/train_softmax.py +++ b/facenet_sandberg/train_softmax.py @@ -35,8 +35,8 @@ import numpy as np import importlib import argparse -import facenet -import lfw +from facenet_sandberg import facenet +from facenet_sandberg import lfw import h5py import math import tensorflow.contrib.slim as slim diff --git a/facenet_sandberg/train_tripletloss.py b/facenet_sandberg/train_tripletloss.py index d6df19a4d..d10c8d3f8 100644 --- a/facenet_sandberg/train_tripletloss.py +++ b/facenet_sandberg/train_tripletloss.py @@ -36,8 +36,8 @@ import importlib import itertools import argparse -import facenet -import lfw +from facenet_sandberg import facenet +from facenet_sandberg import lfw from tensorflow.python.ops import data_flow_ops diff --git a/facenet_sandberg/validate_on_lfw.py b/facenet_sandberg/validate_on_lfw.py index ac456c5f6..a60a469c1 100644 --- a/facenet_sandberg/validate_on_lfw.py +++ b/facenet_sandberg/validate_on_lfw.py @@ -32,8 +32,8 @@ import tensorflow as tf import numpy as np import argparse -import facenet -import lfw +from facenet_sandberg import facenet +from facenet_sandberg import lfw import os import sys from tensorflow.python.ops import data_flow_ops diff --git a/setup.py b/setup.py index 505362464..0a811b6a4 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.2', + version='1.0.4', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. 
Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', diff --git a/test/center_loss_test.py b/test/center_loss_test.py index 196cd1143..50681d549 100644 --- a/test/center_loss_test.py +++ b/test/center_loss_test.py @@ -23,7 +23,7 @@ import unittest import tensorflow as tf import numpy as np -import facenet +from facenet_sandberg import facenet class CenterLossTest(unittest.TestCase): diff --git a/test/train_test.py b/test/train_test.py index 12cd6638a..760e73eab 100644 --- a/test/train_test.py +++ b/test/train_test.py @@ -26,7 +26,7 @@ import cv2 import os import shutil -import download_and_extract # @UnresolvedImport +from facenet_sandberg import download_and_extract # @UnresolvedImport import subprocess def memory_usage_psutil(): diff --git a/test/triplet_loss_test.py b/test/triplet_loss_test.py index 2648b3061..6046fa5fd 100644 --- a/test/triplet_loss_test.py +++ b/test/triplet_loss_test.py @@ -23,7 +23,7 @@ import unittest import tensorflow as tf import numpy as np -import facenet +from facenet_sandberg import facenet class DemuxEmbeddingsTest(unittest.TestCase): From a0668eb2fd648bfe7228777820f803d2d1c2f120 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Fri, 3 Aug 2018 12:30:47 -0400 Subject: [PATCH 05/50] fix version --- setup.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/setup.py b/setup.py index 0a811b6a4..3ec73b212 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.4', + version='1.0.3', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From b3294e787e7677e847044d38ca56ee3f633f9e87 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 6 Aug 2018 10:08:21 -0400 Subject: [PATCH 06/50] removed printing statements --- facenet_sandberg/facenet.py | 5 ----- setup.py | 2 +- 2 files changed, 1 insertion(+), 6 deletions(-) diff --git a/facenet_sandberg/facenet.py b/facenet_sandberg/facenet.py index 0e056765a..a8a569ac9 100644 --- a/facenet_sandberg/facenet.py +++ b/facenet_sandberg/facenet.py @@ -366,17 +366,12 @@ def load_model(model, input_map=None): # or if it is a protobuf file with a frozen graph model_exp = os.path.expanduser(model) if (os.path.isfile(model_exp)): - print('Model filename: %s' % model_exp) with gfile.FastGFile(model_exp,'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) tf.import_graph_def(graph_def, input_map=input_map, name='') else: - print('Model directory: %s' % model_exp) meta_file, ckpt_file = get_model_filenames(model_exp) - - print('Metagraph file: %s' % meta_file) - print('Checkpoint file: %s' % ckpt_file) saver = tf.train.import_meta_graph(os.path.join(model_exp, meta_file), input_map=input_map) saver.restore(tf.get_default_session(), os.path.join(model_exp, ckpt_file)) diff --git a/setup.py b/setup.py index 3ec73b212..05f617142 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.3', + version='1.0.5', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. 
Mirror of https://github.com/davidsandberg/facenet.",
     url='https://github.com/armanrahman22/facenet',

From 69c1aa834ad7c11bdb05cdab5c12e8471ba39529 Mon Sep 17 00:00:00 2001
From: Michael Perel
Date: Tue, 7 Aug 2018 15:33:04 -0400
Subject: [PATCH 07/50] script to generate pairs.txt

---
 facenet_sandberg/generate_pairs.py | 85 ++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 facenet_sandberg/generate_pairs.py

diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py
new file mode 100644
index 000000000..5700a05aa
--- /dev/null
+++ b/facenet_sandberg/generate_pairs.py
@@ -0,0 +1,85 @@
+# Implementation of pairs.txt from lfw dataset
+# Section f: http://vis-www.cs.umass.edu/lfw/lfw.pdf
+# More succinct, less explicit: http://vis-www.cs.umass.edu/lfw/README.txt
+
+import os
+import random
+import numpy as np
+from typing import List, Tuple
+
+def split_people_into_sets(image_dir: str, k_num_sets: int) -> List[List[str]]:
+    names = os.listdir(image_dir)
+    random.shuffle(names)
+    return [list(arr) for arr in np.array_split(names, k_num_sets)]
+
+def make_matches(image_dir: str, people: List[str], total_matches: int) -> List[Tuple[str, int, int]]:
+    matches: List[Tuple[str, int, int]] = []
+    curr_matches = 0
+    while curr_matches < total_matches:
+        person = random.choice(people)
+        images = os.listdir(os.path.join(image_dir, person))
+        if len(images) > 1:
+            img1, img2 = sorted(
+                [
+                    int(''.join([i for i in random.choice(images) if i.isnumeric()])),
+                    int(''.join([i for i in random.choice(images) if i.isnumeric()]))
+                ]
+            )
+            match = (person, img1, img2)
+            if (img1 != img2) and (match not in matches):
+                matches.append(match)
+                curr_matches += 1
+    return sorted(matches, key=lambda x: x[0].lower())
+
+def make_mismatches(image_dir: str, people: List[str], total_matches: int) -> List[Tuple[str, int, str, int]]:
+    mismatches: List[Tuple[str, int, str, int]] = []
+    curr_matches = 0
+    while curr_matches < total_matches:
+        person1 = random.choice(people)
+        person2 = random.choice(people)
+        if person1 != person2:
+            person1_images = os.listdir(os.path.join(image_dir, person1))
+            person2_images = os.listdir(os.path.join(image_dir, person2))
+
+            if person1_images and person2_images:
+                img1 = int(''.join([i for i in random.choice(person1_images) if i.isnumeric()]))
+                img2 = int(''.join([i for i in random.choice(person2_images) if i.isnumeric()]))
+
+                if person1.lower() > person2.lower():
+                    person1, img1, person2, img2 = person2, img2, person1, img1
+
+                mismatch = (person1, img1, person2, img2)
+                if mismatch not in mismatches:
+                    mismatches.append(mismatch)
+                    curr_matches += 1
+    return sorted(mismatches, key=lambda x: x[0].lower())
+
+def write_pairs(fname: str, match_sets: List[List[Tuple[str, int, int]]], mismatch_sets: List[List[Tuple[str, int, str, int]]], k_num_sets: int, total_matches_mismatches: int) -> None:
+    file_contents = f'{k_num_sets}\t{total_matches_mismatches}\n'
+    for match_set, mismatch_set in zip(match_sets, mismatch_sets):
+        for match in match_set:
+            file_contents += f'{match[0]}\t{match[1]}\t{match[2]}\n'
+        for mismatch in mismatch_set:
+            file_contents += f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t{mismatch[3]}\n'
+
+    with open(fname, 'w') as fpairs:
+        fpairs.write(file_contents)
+
+if __name__ == '__main__':
+    k_num_sets = 10
+    total_matches_mismatches = 100
+    image_dir = os.path.join(
+        os.path.dirname(
+            os.path.abspath(__file__)
+        ),
+        'images')
+
+    people_lists = split_people_into_sets(image_dir, k_num_sets)
+    matches = []
+    mismatches = []
+    for people in people_lists:
+        matches.append(make_matches(image_dir, people, total_matches_mismatches))
+        mismatches.append(make_mismatches(image_dir, people, total_matches_mismatches))
+
+    fname = 'new_pairs.txt'
+    write_pairs(fname, matches, mismatches, k_num_sets, total_matches_mismatches)

From 97a33b2b74d4ca80eed4ee2092d9c481ac37223a Mon Sep 17 00:00:00 2001
From: Arman Rahman
Date: Tue, 7 Aug 2018 15:35:40 -0400
Subject: [PATCH 08/50] refactoring lfw code

---
 facenet_sandberg/lfw.py             | 79 +++++++++++++++++++++++++----
 facenet_sandberg/validate_on_lfw.py | 16 +++---
 2 files changed, 80 insertions(+), 15 deletions(-)

diff --git a/facenet_sandberg/lfw.py b/facenet_sandberg/lfw.py
index 48831a714..28d013b64 100644
--- a/facenet_sandberg/lfw.py
+++ b/facenet_sandberg/lfw.py
@@ -29,44 +29,93 @@
 
 import os
 import numpy as np
+import glob
 from facenet_sandberg import facenet
+from pathlib import Path
 
-def evaluate(embeddings, actual_issame, nrof_folds=10, distance_metric=0, subtract_mean=False):
+def evaluate(embeddings, labels, nrof_folds=10, distance_metric=0, subtract_mean=False):
     # Calculate evaluation metrics
     thresholds = np.arange(0, 4, 0.01)
     embeddings1 = embeddings[0::2]
     embeddings2 = embeddings[1::2]
     tpr, fpr, accuracy = facenet.calculate_roc(thresholds, embeddings1, embeddings2,
-        np.asarray(actual_issame), nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean)
+        np.asarray(labels), nrof_folds=nrof_folds,
+        distance_metric=distance_metric, subtract_mean=subtract_mean)
     thresholds = np.arange(0, 4, 0.001)
     val, val_std, far = facenet.calculate_val(thresholds, embeddings1, embeddings2,
-        np.asarray(actual_issame), 1e-3, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean)
+        np.asarray(labels), 1e-3, nrof_folds=nrof_folds,
+        distance_metric=distance_metric, subtract_mean=subtract_mean)
     return tpr, fpr, accuracy, val, val_std, far
 
+
 def get_paths(lfw_dir, pairs):
+    """Gets full paths for image pairs and labels (same person or not)
+
+    Arguments:
+        lfw_dir {str} -- Base directory of testing data
+        pairs {[[str]]} -- List of pairs of form:
+            - For same person: [name, image 1 index, image 2 index]
+            - For different: [name 1, image index 1, name 2, image index 2]
+
+    Returns:
+        [(str, str)], [bool] -- list of image pair paths and labels
+    """
+
     nrof_skipped_pairs = 0
     path_list = []
-    issame_list = []
+    labels = []
     for pair in pairs:
         if len(pair) == 3:
            path0 = add_extension(os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1])))
            path1 = add_extension(os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[2])))
-           issame = True
+           is_same_person = True
         elif len(pair) == 4:
            path0 = add_extension(os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1])))
            path1 = add_extension(os.path.join(lfw_dir, pair[2], pair[2] + '_' + '%04d' % int(pair[3])))
-           issame = False
+           is_same_person = False
         if os.path.exists(path0) and os.path.exists(path1):    # Only add the pair if both paths exist
            path_list += (path0,path1)
-           issame_list.append(issame)
+           labels.append(is_same_person)
         else:
            nrof_skipped_pairs += 1
     if nrof_skipped_pairs>0:
         print('Skipped %d image pairs' % nrof_skipped_pairs)
-    return path_list, issame_list
-
+    return path_list, labels
+
+
+def transform_directory_to_lfw_format(image_directory):
+    """Transforms an image dataset to lfw format image names.
+    Base directory should have a folder per person with the person's name.
+
+    Arguments:
+        image_directory {str} -- base directory of people folders
+    """
+
+    all_folders = os.path.join(image_directory, "*", "")
+    people_folders = glob.iglob(all_folders)
+    for person_folder in people_folders:
+        all_image_paths = glob.glob(os.path.join(person_folder, "*"))
+        person_name = os.path.basename(os.path.normpath(person_folder))
+        for index, image_path in enumerate(all_image_paths):
+            file_ext = Path(image_path).suffix
+            new_name = '_'.join(person_name.split()) + '_' + '%04d' % (index + 1) + file_ext
+            os.rename(image_path, os.path.join(person_folder, new_name))
+
+
 def add_extension(path):
+    """Adds an image file extension to the path if the file exists
+
+    Arguments:
+        path {str} -- base path to image file
+
+    Raises:
+        RuntimeError -- if neither a .jpg nor a .png file exists at the path
+
+    Returns:
+        str -- base path plus image file extension
+    """
+
     if os.path.exists(path+'.jpg'):
         return path+'.jpg'
     elif os.path.exists(path+'.png'):
         return path+'.png'
     else:
         raise RuntimeError('No file "%s" with extension png or jpg.' % path)
 
+
 def read_pairs(pairs_filename):
+    """Reads a pairs.txt file to array. Each file line is of format:
+        - If same person: "{person} {image 1 index} {image 2 index}"
+        - If different: "{person 1} {image 1 index} {person 2} {image 2 index}"
+
+    Arguments:
+        pairs_filename {str} -- path to pairs.txt file
+
+    Returns:
+        np.ndarray -- numpy array of pairs
+    """
+
     pairs = []
     with open(pairs_filename, 'r') as f:
         for line in f.readlines()[1:]:
diff --git a/facenet_sandberg/validate_on_lfw.py b/facenet_sandberg/validate_on_lfw.py
index a60a469c1..2f7713a1d 100644
--- a/facenet_sandberg/validate_on_lfw.py
+++ b/facenet_sandberg/validate_on_lfw.py
@@ -78,13 +78,17 @@ def main(args):
             coord = tf.train.Coordinator()
             tf.train.start_queue_runners(coord=coord, sess=sess)
 
-            evaluate(sess, eval_enqueue_op, image_paths_placeholder, labels_placeholder, phase_train_placeholder, batch_size_placeholder, control_placeholder,
-                embeddings, label_batch, paths, actual_issame, args.lfw_batch_size, args.lfw_nrof_folds, args.distance_metric, args.subtract_mean,
-                args.use_flipped_images, args.use_fixed_image_standardization)
+            evaluate(sess, eval_enqueue_op, image_paths_placeholder, labels_placeholder,
+                     phase_train_placeholder, batch_size_placeholder, control_placeholder,
+                     embeddings, label_batch, paths, actual_issame, args.lfw_batch_size,
+                     args.lfw_nrof_folds, args.distance_metric, args.subtract_mean,
+                     args.use_flipped_images, args.use_fixed_image_standardization)
 
-
-def evaluate(sess, enqueue_op, image_paths_placeholder, labels_placeholder, phase_train_placeholder, batch_size_placeholder, control_placeholder,
-        embeddings, labels, image_paths, actual_issame, batch_size, nrof_folds, distance_metric, subtract_mean, use_flipped_images, use_fixed_image_standardization):
+
+def evaluate(sess, enqueue_op, image_paths_placeholder, labels_placeholder,
+             phase_train_placeholder, batch_size_placeholder, control_placeholder,
+             embeddings, labels, image_paths, actual_issame, batch_size, nrof_folds,
+             distance_metric, subtract_mean, use_flipped_images, use_fixed_image_standardization):
 
     # Run forward pass to calculate embeddings
     print('Running forward pass on LFW images')

From 3dc86d3a7deba56075bd345cc2e26e31bc887736 Mon Sep 17 00:00:00 2001
From: Arman Rahman
Date: Wed, 8 Aug 2018 10:20:29 -0400
Subject: [PATCH 09/50] reformatting code and adding convenience methods

---
 facenet_sandberg/align/align_dataset_mtcnn.py | 230 ++++++------
 facenet_sandberg/face.py                      | 335 ++++++++++++++++++
 facenet_sandberg/lfw.py                       |  69 +++-
 facenet_sandberg/validate_on_lfw.py           | 294 ++++++++++-----
setup.py | 8 +- 5 files changed, 721 insertions(+), 215 deletions(-) create mode 100644 facenet_sandberg/face.py diff --git a/facenet_sandberg/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py index ab4fdd0e9..64c07114c 100644 --- a/facenet_sandberg/align/align_dataset_mtcnn.py +++ b/facenet_sandberg/align/align_dataset_mtcnn.py @@ -1,18 +1,18 @@ """Performs face alignment and stores face thumbnails in the output directory.""" # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -21,67 +21,89 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -from glob import iglob -from scipy import misc -import sys -import os import argparse -import tensorflow as tf -import numpy as np -from facenet_sandberg import facenet -from facenet_sandberg.align import detect_face +import os import random +import sys +from glob import iglob from time import sleep -def main(args): +import cv2 +import numpy as np +import progressbar as pb +import tensorflow as tf +from facenet_sandberg import face, facenet +from facenet_sandberg.align import detect_face +from mtcnn.mtcnn import MTCNN +from scipy import misc + + +def main( + input_dir, + output_dir, + random_order, + image_size=182, + margin=44, + detect_multiple_faces=False): + """Aligns an image dataset + + Arguments: + input_dir {str} -- Directory with unaligned images. + output_dir {str} -- Directory with aligned face thumbnails. + random_order {bool} -- Shuffles the order of images to enable alignment using multiple processes. + + Keyword Arguments: + image_size {int} -- Image size (height, width) in pixels. (default: {182}) + margin {int} -- Margin for the crop around the bounding box + (height, width) in pixels. (default: {44}) + detect_multiple_faces {bool} -- Detect and align multiple faces per image. 
+ (default: {False}) + """ + + widgets = ['Aligning Dataset', pb.Percentage(), ' ', + pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()] sleep(random.random()) - output_dir = os.path.expanduser(args.output_dir) + output_dir = os.path.expanduser(output_dir) if not os.path.exists(output_dir): os.makedirs(output_dir) # Store some git revision info in a text file in the log directory - src_path,_ = os.path.split(os.path.realpath(__file__)) + src_path, _ = os.path.split(os.path.realpath(__file__)) facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv)) - dataset = facenet.get_dataset(args.input_dir) - + dataset = facenet.get_dataset(input_dir) + print('Creating networks and loading parameters') - - with tf.Graph().as_default(): - gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_memory_fraction) - sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False)) - with sess.as_default(): - pnet, rnet, onet = detect_face.create_mtcnn(sess, None) - - minsize = 20 # minimum size of face - threshold = [ 0.6, 0.7, 0.7 ] # three steps's threshold - factor = 0.709 # scale factor - - # Add a random key to the filename to allow alignment using multiple processes + + detector = face.Detector(face_crop_size=image_size, face_crop_margin=margin, + detect_multiple_faces=detect_multiple_faces) + + # Add a random key to the filename to allow alignment using multiple + # processes random_key = np.random.randint(0, high=99999) - bounding_boxes_filename = os.path.join(output_dir, 'bounding_boxes_%05d.txt' % random_key) - + bounding_boxes_filename = os.path.join( + output_dir, 'bounding_boxes_%05d.txt' % random_key) + with open(bounding_boxes_filename, "w") as text_file: nrof_images_total = 0 nrof_successfully_aligned = 0 - num_images = sum(1 for x in iglob(args.input_dir + '/**/*.*', recursive=True)) - # import pdb; pdb.set_trace() - if args.random_order: + num_images = sum(1 for x in iglob( + input_dir + '/**/*.*', recursive=True)) + timer = pb.ProgressBar(widgets=widgets, maxval=num_images).start() + if random_order: random.shuffle(dataset) - for cls in dataset: - output_class_dir = os.path.join(output_dir, cls.name) + for datum in dataset: + output_class_dir = os.path.join(output_dir, datum.name) if not os.path.exists(output_class_dir): os.makedirs(output_class_dir) - if args.random_order: - random.shuffle(cls.image_paths) - for image_path in cls.image_paths: - if nrof_images_total%(num_images//20) == 0: - print('{} percent complete'.format(str(int(100 * round(nrof_images_total/num_images, 2))))) + if random_order: + random.shuffle(datum.image_paths) + for image_path in datum.image_paths: + timer.update(nrof_images_total) nrof_images_total += 1 filename = os.path.splitext(os.path.split(image_path)[1])[0] - output_filename = os.path.join(output_class_dir, filename+'.png') + output_filename = os.path.join( + output_class_dir, filename + '.png') if not os.path.exists(output_filename): try: img = misc.imread(image_path) @@ -89,75 +111,75 @@ def main(args): errorMessage = '{}: {}'.format(image_path, e) print(errorMessage) else: - if img.ndim<2: + if img.ndim < 2: print('Unable to align "%s"' % image_path) text_file.write('%s\n' % (output_filename)) continue if img.ndim == 2: img = facenet.to_rgb(img) - img = img[:,:,0:3] - - bounding_boxes, _ = detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor) - nrof_faces = bounding_boxes.shape[0] - if nrof_faces>0: - det = bounding_boxes[:,0:4] - det_arr = [] - img_size = 
np.asarray(img.shape)[0:2] - if nrof_faces>1: - if args.detect_multiple_faces: - for i in range(nrof_faces): - det_arr.append(np.squeeze(det[i])) - else: - bounding_box_size = (det[:,2]-det[:,0])*(det[:,3]-det[:,1]) - img_center = img_size / 2 - offsets = np.vstack([ (det[:,0]+det[:,2])/2-img_center[1], (det[:,1]+det[:,3])/2-img_center[0] ]) - offset_dist_squared = np.sum(np.power(offsets,2.0),0) - index = np.argmax(bounding_box_size-offset_dist_squared*2.0) # some extra weight on the centering - det_arr.append(det[index,:]) + img = img[:, :, 0:3] + faces = detector.find_faces(img) + nrof_successfully_aligned += 1 + for index, person in enumerate(faces): + filename_base, file_extension = os.path.splitext( + output_filename) + if detect_multiple_faces: + output_filename_n = "{}_{}{}".format( + filename_base, index, file_extension) else: - det_arr.append(np.squeeze(det)) - - for i, det in enumerate(det_arr): - det = np.squeeze(det) - bb = np.zeros(4, dtype=np.int32) - bb[0] = np.maximum(det[0]-args.margin/2, 0) - bb[1] = np.maximum(det[1]-args.margin/2, 0) - bb[2] = np.minimum(det[2]+args.margin/2, img_size[1]) - bb[3] = np.minimum(det[3]+args.margin/2, img_size[0]) - cropped = img[bb[1]:bb[3],bb[0]:bb[2],:] - scaled = misc.imresize(cropped, (args.image_size, args.image_size), interp='bilinear') - nrof_successfully_aligned += 1 - filename_base, file_extension = os.path.splitext(output_filename) - if args.detect_multiple_faces: - output_filename_n = "{}_{}{}".format(filename_base, i, file_extension) - else: - output_filename_n = "{}{}".format(filename_base, file_extension) - misc.imsave(output_filename_n, scaled) - text_file.write('%s %d %d %d %d\n' % (output_filename_n, bb[0], bb[1], bb[2], bb[3])) - else: - print('Unable to align "%s"' % image_path) - text_file.write('%s\n' % (output_filename)) - + output_filename_n = "{}{}".format( + filename_base, file_extension) + misc.imsave(output_filename_n, person.image) + text_file.write( + '%s %d %d %d %d\n' % + (output_filename_n, person.bounding_box[0], + person.bounding_box[1], person.bounding_box[2], + person.bounding_box[3])) + else: + print('Unable to align "%s"' % image_path) + text_file.write('%s\n' % (output_filename)) + print('Total number of images: %d' % nrof_images_total) - print('Number of successfully aligned images: %d' % nrof_successfully_aligned) - + print('Number of successfully aligned images: %d' % + nrof_successfully_aligned) + def parse_arguments(argv): parser = argparse.ArgumentParser() - - parser.add_argument('input_dir', type=str, help='Directory with unaligned images.') - parser.add_argument('output_dir', type=str, help='Directory with aligned face thumbnails.') - parser.add_argument('--image_size', type=int, - help='Image size (height, width) in pixels.', default=182) - parser.add_argument('--margin', type=int, - help='Margin for the crop around the bounding box (height, width) in pixels.', default=44) - parser.add_argument('--random_order', - help='Shuffles the order of images to enable alignment using multiple processes.', action='store_true') - parser.add_argument('--gpu_memory_fraction', type=float, - help='Upper bound on the amount of GPU memory that will be used by the process.', default=1.0) - parser.add_argument('--detect_multiple_faces', type=bool, - help='Detect and align multiple faces per image.', default=False) + + parser.add_argument('input_dir', type=str, + help='Directory with unaligned images.') + parser.add_argument('output_dir', type=str, + help='Directory with aligned face thumbnails.') + 
parser.add_argument( + '--image_size', + type=int, + help='Image size (height, width) in pixels.', + default=182) + parser.add_argument( + '--margin', + type=int, + help='Margin for the crop around the bounding box (height, width) in pixels.', + default=44) + parser.add_argument( + '--random_order', + help='Shuffles the order of images to enable alignment using multiple processes.', + action='store_true') + parser.add_argument( + '--detect_multiple_faces', + type=bool, + help='Detect and align multiple faces per image.', + default=False) return parser.parse_args(argv) + if __name__ == '__main__': - main(parse_arguments(sys.argv[1:])) + args = parse_arguments(sys.argv[1:]) + if args: + main( + args.input_dir, + args.output_dir, + args.random_order, + args.image_size, + args.margin, + args.detect_multiple_faces) diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py new file mode 100644 index 000000000..c9315f53f --- /dev/null +++ b/facenet_sandberg/face.py @@ -0,0 +1,335 @@ +"""Face Detection and Recognition""" + +import itertools +import os +import pickle +from glob import glob +from urllib.request import urlopen + +import cv2 +import numpy as np +import tensorflow as tf +from facenet_sandberg import facenet, validate_on_lfw +from facenet_sandberg.align import align_dataset_mtcnn, detect_face +from mtcnn.mtcnn import MTCNN +from scipy import misc + +debug = False + + +class Face: + """Class representing a single face + + Attributes: + name {str} -- Name of person + bounding_box {Float[]} -- box around their face in container_image + image {cv2 image (np array)} -- Image cropped around face + container_image {cv2 image (np array)} -- Original image + embedding {Float} -- Face embedding + matches {Matches[]} -- List of matches to the face + url {str} -- Url where image came from + """ + + def __init__(self): + self.name = None + self.bounding_box = None + self.image = None + self.container_image = None + self.embedding = None + self.matches = [] + self.url = None + + +class Match: + """Class representing a match between two faces + + Attributes: + face_1 {Face} -- Face object for person 1 + face_2 {Face} -- Face object for person 2 + score {Float} -- Distance between two face embeddings + is_match {bool} -- whether is match between faces + """ + + def __init__(self): + self.face_1 = Face() + self.face_2 = Face() + self.score = float("inf") + self.is_match = False + + +class Identifier: + """Class to detect, encode, and match faces + + Arguments: + threshold {Float} -- Distance threshold to determine matches + """ + + def __init__(self, facenet_model_checkpoint, threshold=1.10): + self.detector = Detector() + self.encoder = Encoder(facenet_model_checkpoint) + self.threshold = threshold + + def download_image(self, url): + """Downloads an image from the url as a cv2 image + + Arguments: + url {str} -- url of image + + Returns: + cv2 image -- image array + """ + + req = urlopen(url) + arr = np.asarray(bytearray(req.read()), dtype=np.uint8) + image = cv2.imdecode(arr, -1) + return image + + def detect_encode(self, image, face_limit=5): + """Detects faces in an image and encodes them + + Arguments: + image {cv2 image (np array)} -- image to find faces and encode + face_limit {int} -- Maximum # of faces allowed in image. 
+                If over the limit, an empty list is returned.
+
+        Returns:
+            Face[] -- list of Face objects with embeddings attached
+        """
+
+        faces = self.detector.find_faces(image, face_limit)
+        for face in faces:
+            face.embedding = self.encoder.generate_embedding(face.image)
+        return faces
+
+    def detect_encode_all(self, images, urls=None, save_memory=False):
+        """For a list of images finds and encodes all faces
+
+        Arguments:
+            images {List or iterable of cv2 images} -- images to encode
+
+        Keyword Arguments:
+            urls {str[]} -- Optional list of urls to attach to Face objects.
+                Should be same length as images if used. (default: {None})
+            save_memory {bool} -- Saves memory by deleting image from Face objects.
+                Should only be used if you have some other kind
+                of reference to the original image like a url. (default: {False})
+
+        Returns:
+            Face[] -- List of Face objects with embeddings attached
+        """
+
+        all_faces = self.detector.bulk_find_face(images, urls)
+        all_embeddings = self.encoder.get_all_embeddings(
+            all_faces, save_memory)
+        return all_embeddings
+
+    def compare_embedding(self, embedding_1, embedding_2, distance_metric=0):
+        """Compares the distance between two embeddings
+
+        Arguments:
+            embedding_1 {numpy.ndarray} -- face embedding
+            embedding_2 {numpy.ndarray} -- face embedding
+
+        Keyword Arguments:
+            distance_metric {int} -- 0 for Euclidean distance and 1 for cosine similarity (default: {0})
+
+        Returns:
+            bool, float -- returns True if match and distance
+        """
+
+        distance = facenet.distance(embedding_1.reshape(
+            1, -1), embedding_2.reshape(1, -1), distance_metric=distance_metric)[0]
+        is_match = False
+        if distance < self.threshold:
+            is_match = True
+        return is_match, distance
+
+    def compare_images(self, image_1, image_2):
+        """Compares two images for matching faces
+
+        Arguments:
+            image_1 {cv2 image (np array)} -- openCV image
+            image_2 {cv2 image (np array)} -- openCV image
+
+        Returns:
+            Match -- Match object which has the two images, is_match, and score
+        """
+
+        match = Match()
+        image_1_faces = self.detect_encode(image_1)
+        image_2_faces = self.detect_encode(image_2)
+        if image_1_faces and image_2_faces:
+            for face_1 in image_1_faces:
+                for face_2 in image_2_faces:
+                    distance = facenet.distance(face_1.embedding.reshape(
+                        1, -1), face_2.embedding.reshape(1, -1), distance_metric=0)[0]
+                    if distance < match.score:
+                        match.score = distance
+                        match.face_1 = face_1
+                        match.face_2 = face_2
+                    if distance < self.threshold:
+                        match.is_match = True
+        return match
+
+    def find_all_matches(self, image_directory):
+        """Finds all matches in a directory of images
+
+        Arguments:
+            image_directory {str} -- directory of images
+
+        Returns:
+            Face[], Match[] -- List of face objects and list of Match objects
+        """
+
+        all_images = glob(image_directory + '/*')
+        all_matches = []
+        all_faces = self.detect_encode_all(all_images)
+        # Really inefficient way to check all combinations
+        for face_1, face_2 in itertools.combinations(all_faces, 2):
+            is_match, score = self.compare_embedding(
+                face_1.embedding, face_2.embedding)
+            if is_match:
+                match = Match()
+                match.face_1 = face_1
+                match.face_2 = face_2
+                match.is_match = True
+                match.score = score
+                all_matches.append(match)
+                face_1.matches.append(match)
+                face_2.matches.append(match)
+        return all_faces, all_matches
+
+    def tear_down(self):
+        self.encoder.tear_down()
+
+
+class Encoder:
+    def __init__(self, facenet_model_checkpoint):
+        self.sess = tf.Session()
+        with self.sess.as_default():
+            facenet.load_model(facenet_model_checkpoint)
+        # Get
input and output tensors + self.images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0") + self.embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0") + self.phase_train_placeholder = tf.get_default_graph( + ).get_tensor_by_name("phase_train:0") + + def generate_embedding(self, image): + """Generates embeddings for a Face object with image + + Arguments: + image {cv2 image (np array)} -- Image of face. Should be aligned. + + Returns: + numpy.ndarray -- a single vector representing a face embedding + """ + + prewhiten_face = facenet.prewhiten(image) + + # Run forward pass to calculate embeddings + feed_dict = {self.images_placeholder: [ + prewhiten_face], self.phase_train_placeholder: False} + return self.sess.run(self.embeddings, feed_dict=feed_dict)[0] + + def get_all_embeddings(self, all_faces, save_memory=False): + """Generates embeddings for list of images + + Arguments: + all_faces {cv2 image[]} -- array of face images + + Keyword Arguments: + save_memory {bool} -- save memory by deleting image from Face object (default: {False}) + + Returns: + [type] -- [description] + """ + + all_images = [facenet.prewhiten(face.image) for face in all_faces] + + # Run forward pass to calculate embeddings + feed_dict = {self.images_placeholder: all_images, + self.phase_train_placeholder: False} + embed_array = self.sess.run(self.embeddings, feed_dict=feed_dict) + + for index, face in enumerate(all_faces): + if save_memory: + face.image = None + face.embedding = embed_array[index] + return all_faces + + def tear_down(self): + if tf.get_default_session(): + tf.get_default_session().close() + + +class Detector: + # face detection parameters + def __init__(self, face_crop_size=160, face_crop_margin=32, detect_multiple_faces=True, + min_face_size=20, scale_factor=0.709, steps_threshold=[0.6, 0.7, 0.7]): + self.detector = MTCNN(weights_file=None, min_face_size=min_face_size, + steps_threshold=steps_threshold, scale_factor=scale_factor) + self.face_crop_size = face_crop_size + self.face_crop_margin = face_crop_margin + self.detect_multiple_faces = detect_multiple_faces + + def bulk_find_face(self, images, urls=None, face_limit=5): + all_faces = [] + for index, image in enumerate(images): + faces = self.find_faces(image, face_limit) + if urls and index < len(urls): + for face in faces: + face.url = urls[index] + all_faces.append(face) + else: + all_faces += faces + return all_faces + + def find_faces(self, image, face_limit=5): + if isinstance(image, str): + image = cv2.imread(image) + faces = [] + results = self.detector.detect_faces(image) + img_size = np.asarray(image.shape)[0:2] + if len(results) < face_limit: + for result in results: + face = Face() + # bb[x, y, dx, dy] + bb = result['box'] + bb[2] = bb[0] + bb[2] + bb[3] = bb[1] + bb[3] + bb = self._fit_bounding_box(img_size[0], img_size[1], bb[0], bb[1], bb[2], bb[3]) + cropped = image[bb[1]:bb[3], bb[0]:bb[2], :] + + bb[0] = np.maximum(bb[0] - self.face_crop_margin / 2, 0) + bb[1] = np.maximum(bb[1] - self.face_crop_margin / 2, 0) + bb[2] = np.minimum(bb[2] + self.face_crop_margin / 2, img_size[1]) + bb[3] = np.minimum(bb[3] + self.face_crop_margin / 2, img_size[0]) + + face.bounding_box = bb + face.image = misc.imresize(cropped, (self.face_crop_size, self.face_crop_size), interp='bilinear') + + faces.append(face) + return faces + + def _fit_bounding_box(self, max_x, max_y, x1, y1, x2, y2): + x1 = max(min(x1, max_x), 0) + x2 = max(min(x2, max_x), 0) + y1 = max(min(y1, max_y), 0) + y2 = max(min(y2, max_y), 0) + return 
[x1, y1, x2, y2] + + + +def align_dataset(input_dir, output_dir, image_size=182, + margin=44, random_order=False, detect_multiple_faces=False): + align_dataset_mtcnn.main( + input_dir, output_dir, image_size, margin, random_order, detect_multiple_faces) + + +def test_dataset(lfw_dir, model, lfw_pairs, use_flipped_images, subtract_mean, + use_fixed_image_standardization, image_size=160, lfw_nrof_folds=10, + distance_metric=0, lfw_batch_size=128): + validate_on_lfw.main(lfw_dir, model, lfw_pairs, use_flipped_images, subtract_mean, + use_fixed_image_standardization, image_size, lfw_nrof_folds, + distance_metric, lfw_batch_size) diff --git a/facenet_sandberg/lfw.py b/facenet_sandberg/lfw.py index 28d013b64..69297c2ee 100644 --- a/facenet_sandberg/lfw.py +++ b/facenet_sandberg/lfw.py @@ -23,15 +23,19 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function +import argparse +import glob import os +import sys +from multiprocessing import Lock, Manager, Pool, Queue, Value +from multiprocessing.dummy import Pool as ThreadPool +from pathlib import Path + import numpy as np -import glob from facenet_sandberg import facenet -from pathlib import Path + def evaluate(embeddings, labels, nrof_folds=10, distance_metric=0, subtract_mean=False): # Calculate evaluation metrics @@ -84,34 +88,48 @@ def get_paths(lfw_dir, pairs): return path_list, labels -def transform_directory_to_lfw_format(image_directory): +def transform_to_lfw_format(image_directory, num_processes=os.cpu_count()): """Transforms an image dataset to lfw format image names. Base directory should have a folder per person with the person's name. 
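For reference, a minimal usage sketch of the transformation introduced here; the `people` directory and file names below are hypothetical:

    # Hypothetical usage sketch of transform_to_lfw_format.
    # Given a tree like:
    #     people/Aaron Peirsol/img_a.jpg
    #     people/Aaron Peirsol/img_b.jpg
    # the folder and its files are renamed to the LFW convention:
    #     people/Aaron_Peirsol/Aaron_Peirsol_0001.jpg
    #     people/Aaron_Peirsol/Aaron_Peirsol_0002.jpg
    from facenet_sandberg import lfw

    lfw.transform_to_lfw_format('people', num_processes=4)
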
Arguments:
 image_directory {str} -- base directory of people folders
 """
- all_folders = os.path.join(image_directory, "*", "")
+ all_folders = os.path.join(image_directory, "*", "")
 people_folders = glob.iglob(all_folders)
- for person_folder in people_folders:
- all_image_paths = glob.glob(person_folder)
- person_name = os.path.basename(os.path.normpath(person_folder))
- for index, image_path in enumerate(all_image_paths):
- new_name = '_'.join(person_name.split())
- file_ext = Path(image_path).suffix
- os.rename(image_path, new_name + file_ext)
+ process_pool = Pool(num_processes)
+ process_pool.imap(rename, people_folders)
+ process_pool.close()
+ process_pool.join()
+
+
+def rename(person_folder):
+ """Renames all the images in a folder in lfw format
+
+ Arguments:
+ person_folder {str} -- path to folder named after person
+ """
+
+ all_image_paths = glob.glob(os.path.join(person_folder, "*"))
+ person_name = os.path.basename(os.path.normpath(person_folder))
+ concat_name = '_'.join(person_name.split())
+ for index, image_path in enumerate(all_image_paths):
+ image_name = concat_name + '_' + '%04d' % (index + 1)
+ file_ext = Path(image_path).suffix
+ new_image_path = os.path.join(person_folder, image_name + file_ext)
+ os.rename(image_path, new_image_path)
+ os.rename(person_folder, person_folder.replace(person_name, concat_name))

 def add_extension(path):
 """Adds an image file extension to the path if it exists
-
+
 Arguments:
 path {str} -- base path to image file
-
+
 Raises:
 RuntimeError -- if no image file with a png or jpg extension exists for the given base path
-
+
 Returns:
 str -- base path plus image file extension
 """
@@ -144,4 +162,21 @@ def read_pairs(pairs_filename):
 return np.array(pairs)

+def parse_arguments(argv):
+ """Argument parser
+ """
+
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ 'image_directory',
+ type=str,
+ help='Path to the data directory containing images to fix names')
+
+ return parser.parse_args(argv)
+
+if __name__ == '__main__':
+ args = parse_arguments(sys.argv[1:])
+ if args:
+ transform_to_lfw_format(args.image_directory)
diff --git a/facenet_sandberg/validate_on_lfw.py b/facenet_sandberg/validate_on_lfw.py
index 2f7713a1d..9ca38b237 100644
--- a/facenet_sandberg/validate_on_lfw.py
+++ b/facenet_sandberg/validate_on_lfw.py
@@ -4,19 +4,19 @@
 in the same directory, and the metagraph should have the extension '.meta'.
 """
 # MIT License
-#
+#
 # Copyright (c) 2016 David Sandberg
-#
+#
 # Permission is hereby granted, free of charge, to any person obtaining a copy
 # of this software and associated documentation files (the "Software"), to deal
 # in the Software without restriction, including without limitation the rights
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-#
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-#
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -25,144 +25,258 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 # SOFTWARE. 
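Before the validation changes below, a note on the pairs file that `lfw.read_pairs` and `get_paths` consume: it is the standard LFW `pairs.txt` layout, and `generate_pairs.py` (later in this series) writes the same format. A minimal sketch; the identities and image numbers here are hypothetical:

    # pairs.txt, whitespace separated:
    #     10 300                            <- header: number of folds, pairs of each kind per fold
    #     Abel_Pacheco  1  4                <- match: one person, two image numbers
    #     Abel_Pacheco  1  Jane_Doe  2      <- mismatch: two people, one image number each
    from facenet_sandberg import lfw

    pairs = lfw.read_pairs('data/pairs.txt')  # skips the header, returns an array of token lists
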
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
+from __future__ import absolute_import, division, print_function

-import tensorflow as tf
-import numpy as np
 import argparse
-from facenet_sandberg import facenet
-from facenet_sandberg import lfw
 import os
 import sys
-from tensorflow.python.ops import data_flow_ops
-from sklearn import metrics
-from scipy.optimize import brentq
+
+import numpy as np
+import progressbar as pb
+import tensorflow as tf
+from facenet_sandberg import facenet, lfw
 from scipy import interpolate
+from scipy.optimize import brentq
+from sklearn import metrics
+from tensorflow.python.ops import data_flow_ops
+
+
+def main(lfw_dir, model, lfw_pairs, use_flipped_images, subtract_mean,
+ use_fixed_image_standardization, image_size=160, lfw_nrof_folds=10,
+ distance_metric=0, lfw_batch_size=128):
+ """Runs testing on dataset
+
+ Arguments:
+ lfw_dir {str} -- Path to the data directory containing aligned LFW face patches.
+ model {str} -- Could be either a directory containing the meta_file and ckpt_file or a model protobuf (.pb) file.
+ lfw_pairs {str} -- The file containing the pairs to use for validation.
+ use_flipped_images {bool} -- Concatenates embeddings for the image and its horizontally flipped counterpart.
+ subtract_mean {bool} -- Subtract feature mean before calculating distance.
+ use_fixed_image_standardization {bool} -- Performs fixed standardization of images.
+
+ Keyword Arguments:
+ image_size {int} -- Image size (height, width) in pixels. (default: {160})
+ lfw_nrof_folds {int} -- Number of folds to use for cross validation. Mainly used for testing. (default: {10})
+ distance_metric {int} -- Distance metric 0:euclidean, 1:cosine similarity. (default: {0})
+ lfw_batch_size {int} -- Number of images to process in a batch in the LFW test set. 
(default: {128}) + """ -def main(args): - with tf.Graph().as_default(): - with tf.Session() as sess: - # Read the file containing the pairs used for testing - pairs = lfw.read_pairs(os.path.expanduser(args.lfw_pairs)) + pairs = lfw.read_pairs(os.path.expanduser(lfw_pairs)) # Get the paths for the corresponding images - paths, actual_issame = lfw.get_paths(os.path.expanduser(args.lfw_dir), pairs) - - image_paths_placeholder = tf.placeholder(tf.string, shape=(None,1), name='image_paths') - labels_placeholder = tf.placeholder(tf.int32, shape=(None,1), name='labels') - batch_size_placeholder = tf.placeholder(tf.int32, name='batch_size') - control_placeholder = tf.placeholder(tf.int32, shape=(None,1), name='control') - phase_train_placeholder = tf.placeholder(tf.bool, name='phase_train') - + paths, labels = lfw.get_paths(os.path.expanduser(lfw_dir), pairs) + + image_paths_placeholder = tf.placeholder( + tf.string, shape=(None, 1), name='image_paths') + labels_placeholder = tf.placeholder( + tf.int32, shape=(None, 1), name='labels') + batch_size_placeholder = tf.placeholder( + tf.int32, name='batch_size') + control_placeholder = tf.placeholder( + tf.int32, shape=(None, 1), name='control') + phase_train_placeholder = tf.placeholder( + tf.bool, name='phase_train') + nrof_preprocess_threads = 4 - image_size = (args.image_size, args.image_size) - eval_input_queue = data_flow_ops.FIFOQueue(capacity=2000000, - dtypes=[tf.string, tf.int32, tf.int32], - shapes=[(1,), (1,), (1,)], - shared_name=None, name=None) - eval_enqueue_op = eval_input_queue.enqueue_many([image_paths_placeholder, labels_placeholder, control_placeholder], name='eval_enqueue_op') - image_batch, label_batch = facenet.create_input_pipeline(eval_input_queue, image_size, nrof_preprocess_threads, batch_size_placeholder) - + image_size = (image_size, image_size) + eval_input_queue = data_flow_ops.FIFOQueue( + capacity=2000000, dtypes=[ + tf.string, tf.int32, tf.int32], shapes=[ + (1,), (1,), (1,)], shared_name=None, name=None) + eval_enqueue_op = eval_input_queue.enqueue_many([image_paths_placeholder, labels_placeholder, + control_placeholder], name='eval_enqueue_op') + image_batch, label_batch = facenet.create_input_pipeline( + eval_input_queue, image_size, nrof_preprocess_threads, batch_size_placeholder) + # Load the model - input_map = {'image_batch': image_batch, 'label_batch': label_batch, 'phase_train': phase_train_placeholder} - facenet.load_model(args.model, input_map=input_map) + input_map = { + 'image_batch': image_batch, + 'label_batch': label_batch, + 'phase_train': phase_train_placeholder} + facenet.load_model(model, input_map=input_map) # Get output tensor embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0") -# + coord = tf.train.Coordinator() tf.train.start_queue_runners(coord=coord, sess=sess) - evaluate(sess, eval_enqueue_op, image_paths_placeholder, labels_placeholder, - phase_train_placeholder, batch_size_placeholder, control_placeholder, - embeddings, label_batch, paths, actual_issame, args.lfw_batch_size, - args.lfw_nrof_folds, args.distance_metric, args.subtract_mean, - args.use_flipped_images, args.use_fixed_image_standardization) + evaluate( + sess, + eval_enqueue_op, + image_paths_placeholder, + labels_placeholder, + phase_train_placeholder, + batch_size_placeholder, + control_placeholder, + embeddings, + label_batch, + paths, + labels, + lfw_batch_size, + lfw_nrof_folds, + distance_metric, + subtract_mean, + use_flipped_images, + use_fixed_image_standardization) -def evaluate(sess, enqueue_op, 
image_paths_placeholder, labels_placeholder,
- phase_train_placeholder, batch_size_placeholder, control_placeholder,
- embeddings, labels, image_paths, actual_issame, batch_size, nrof_folds,
- distance_metric, subtract_mean, use_flipped_images, use_fixed_image_standardization):
+def evaluate(
+ sess,
+ enqueue_op,
+ image_paths_placeholder,
+ labels_placeholder,
+ phase_train_placeholder,
+ batch_size_placeholder,
+ control_placeholder,
+ embeddings,
+ labels,
+ image_paths,
+ actual_issame,
+ batch_size,
+ nrof_folds,
+ distance_metric,
+ subtract_mean,
+ use_flipped_images,
+ use_fixed_image_standardization):
 # Run forward pass to calculate embeddings
- print('Runnning forward pass on LFW images')
-
+ widgets = ['Running forward pass on LFW images', pb.Percentage(), ' ',
+ pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()]
+
 # Enqueue one epoch of image paths and labels
- nrof_embeddings = len(actual_issame)*2 # nrof_pairs * nrof_images_per_pair
+ # nrof_pairs * nrof_images_per_pair
+ nrof_embeddings = len(actual_issame) * 2
 nrof_flips = 2 if use_flipped_images else 1
 nrof_images = nrof_embeddings * nrof_flips
- labels_array = np.expand_dims(np.arange(0,nrof_images),1)
- image_paths_array = np.expand_dims(np.repeat(np.array(image_paths),nrof_flips),1)
+
+ labels_array = np.expand_dims(np.arange(0, nrof_images), 1)
+ image_paths_array = np.expand_dims(
+ np.repeat(np.array(image_paths), nrof_flips), 1)
 control_array = np.zeros_like(labels_array, np.int32)
+
 if use_fixed_image_standardization:
- control_array += np.ones_like(labels_array)*facenet.FIXED_STANDARDIZATION
+ control_array += np.ones_like(labels_array) * \
+ facenet.FIXED_STANDARDIZATION
 if use_flipped_images:
 # Flip every second image
- control_array += (labels_array % 2)*facenet.FLIP
- sess.run(enqueue_op, {image_paths_placeholder: image_paths_array, labels_placeholder: labels_array, control_placeholder: control_array})
-
+ control_array += (labels_array % 2) * facenet.FLIP
+
+ sess.run(enqueue_op,
+ {image_paths_placeholder: image_paths_array,
+ labels_placeholder: labels_array,
+ control_placeholder: control_array})
+
 embedding_size = int(embeddings.get_shape()[1])
 assert nrof_images % batch_size == 0, 'The number of LFW images must be an integer multiple of the LFW batch size'
 nrof_batches = nrof_images // batch_size
 emb_array = np.zeros((nrof_images, embedding_size))
 lab_array = np.zeros((nrof_images,))
+
+ timer = pb.ProgressBar(widgets=widgets, maxval=nrof_batches).start()
 for i in range(nrof_batches):
- feed_dict = {phase_train_placeholder:False, batch_size_placeholder:batch_size}
+ feed_dict = {phase_train_placeholder: False,
+ batch_size_placeholder: batch_size}
 emb, lab = sess.run([embeddings, labels], feed_dict=feed_dict)
 lab_array[lab] = lab
 emb_array[lab, :] = emb
- if i % 10 == 9:
- print('.', end='')
- sys.stdout.flush()
- print('')
- embeddings = np.zeros((nrof_embeddings, embedding_size*nrof_flips))
+ timer.update(i)
 embeddings = np.zeros((nrof_embeddings, embedding_size * nrof_flips))
 if use_flipped_images:
- # Concatenate embeddings for flipped and non flipped version of the images
- embeddings[:,:embedding_size] = emb_array[0::2,:]
- embeddings[:,embedding_size:] = emb_array[1::2,:]
+ # Concatenate embeddings for flipped and non-flipped version of the
+ # images
+ embeddings[:, :embedding_size] = emb_array[0::2, :]
+ embeddings[:, embedding_size:] = emb_array[1::2, :]
 else:
 embeddings = emb_array
- assert np.array_equal(lab_array, np.arange(nrof_images))==True, 'Wrong labels used for evaluation, possibly 
caused by training examples left in the input pipeline' - tpr, fpr, accuracy, val, val_std, far = lfw.evaluate(embeddings, actual_issame, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) - + assert np.array_equal(lab_array, np.arange( + nrof_images)), 'Wrong labels used for evaluation, possibly caused by training examples left in the input pipeline' + tpr, fpr, accuracy, val, val_std, far = lfw.evaluate( + embeddings, actual_issame, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) + print('Accuracy: %2.5f+-%2.5f' % (np.mean(accuracy), np.std(accuracy))) print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far)) - + auc = metrics.auc(fpr, tpr) print('Area Under Curve (AUC): %1.3f' % auc) eer = brentq(lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.) print('Equal Error Rate (EER): %1.3f' % eer) - + + def parse_arguments(argv): + """Argument parser + + Arguments: + argv {} -- arguments + + Returns: + {} -- parsed arguments + """ + parser = argparse.ArgumentParser() - - parser.add_argument('lfw_dir', type=str, + + parser.add_argument( + 'lfw_dir', + type=str, help='Path to the data directory containing aligned LFW face patches.') - parser.add_argument('--lfw_batch_size', type=int, - help='Number of images to process in a batch in the LFW test set.', default=100) - parser.add_argument('model', type=str, + parser.add_argument( + '--lfw_batch_size', + type=int, + help='Number of images to process in a batch in the LFW test set.', + default=100) + parser.add_argument( + 'model', + type=str, help='Could be either a directory containing the meta_file and ckpt_file or a model protobuf (.pb) file') - parser.add_argument('--image_size', type=int, - help='Image size (height, width) in pixels.', default=160) - parser.add_argument('--lfw_pairs', type=str, - help='The file containing the pairs to use for validation.', default='data/pairs.txt') - parser.add_argument('--lfw_nrof_folds', type=int, - help='Number of folds to use for cross validation. Mainly used for testing.', default=10) - parser.add_argument('--distance_metric', type=int, - help='Distance metric 0:euclidian, 1:cosine similarity.', default=0) - parser.add_argument('--use_flipped_images', - help='Concatenates embeddings for the image and its horizontally flipped counterpart.', action='store_true') - parser.add_argument('--subtract_mean', - help='Subtract feature mean before calculating distance.', action='store_true') - parser.add_argument('--use_fixed_image_standardization', - help='Performs fixed standardization of images.', action='store_true') + parser.add_argument( + '--image_size', + type=int, + help='Image size (height, width) in pixels.', + default=160) + parser.add_argument( + '--lfw_pairs', + type=str, + help='The file containing the pairs to use for validation.', + default='data/pairs.txt') + parser.add_argument( + '--lfw_nrof_folds', + type=int, + help='Number of folds to use for cross validation. 
Mainly used for testing.', + default=10) + parser.add_argument( + '--distance_metric', + type=int, + help='Distance metric 0:euclidian, 1:cosine similarity.', + default=0) + parser.add_argument( + '--use_flipped_images', + help='Concatenates embeddings for the image and its horizontally flipped counterpart.', + action='store_true') + parser.add_argument( + '--subtract_mean', + help='Subtract feature mean before calculating distance.', + action='store_true') + parser.add_argument( + '--use_fixed_image_standardization', + help='Performs fixed standardization of images.', + action='store_true') return parser.parse_args(argv) + if __name__ == '__main__': - main(parse_arguments(sys.argv[1:])) + args = parse_arguments(sys.argv[1:]) + if args: + main( + args.lfw_dir, + args.model, + args.lfw_pairs, + args.use_flipped_images, + args.subtract_mean, + args.use_fixed_image_standardization, + args.image_size, + args.lfw_nrof_folds, + args.distance_metric, + args.lfw_batch_size) diff --git a/setup.py b/setup.py index 05f617142..055910372 100644 --- a/setup.py +++ b/setup.py @@ -2,17 +2,17 @@ setup( name='facenet_sandberg', - version='1.0.5', + version='1.0.6', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', - packages= find_packages(), + packages=find_packages(), maintainer='Arman Rahman', maintainer_email='armanrahman22@gmail.com', include_package_data=True, license='MIT', install_requires=[ 'tensorflow', 'scipy', 'scikit-learn', 'opencv-python', - 'h5py', 'matplotlib', 'Pillow', 'requests', 'psutil' + 'h5py', 'matplotlib', 'Pillow', 'requests', 'psutil', 'progressbar', 'mtcnn' ] -) \ No newline at end of file +) From 4f25ed4aca44f0c3f1e6435e7d25e625d6e7f614 Mon Sep 17 00:00:00 2001 From: Michael Perel Date: Wed, 8 Aug 2018 14:18:14 -0400 Subject: [PATCH 10/50] fixed filename bug when 0 comes after a nonzero number such as 10 --- facenet_sandberg/generate_pairs.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py index 5700a05aa..67d874fab 100644 --- a/facenet_sandberg/generate_pairs.py +++ b/facenet_sandberg/generate_pairs.py @@ -21,8 +21,8 @@ def make_matches(image_dir:str , people: List[str], total_matches: int) -> List[ if len(images) > 1: img1, img2 = sorted( [ - int(''.join([i for i in random.choice(images) if i.isnumeric() and i != '0'])), - int(''.join([i for i in random.choice(images) if i.isnumeric() and i != '0'])) + int(''.join([i for i in random.choice(images) if i.isnumeric()]).lstrip('0')), + int(''.join([i for i in random.choice(images) if i.isnumeric()]).lstrip('0')) ] ) match = (person, img1, img2) @@ -42,8 +42,8 @@ def make_mismatches(image_dir: str, people: List[str], total_matches: int) -> Li person2_images = os.listdir(os.path.join(image_dir, person2)) if person1_images and person2_images: - img1 = int(''.join([i for i in random.choice(person1_images) if i.isnumeric() and i != '0'])) - img2 = int(''.join([i for i in random.choice(person2_images) if i.isnumeric() and i != '0'])) + img1 = int(''.join([i for i in random.choice(person1_images) if i.isnumeric()]).lstrip('0')) + img2 = int(''.join([i for i in random.choice(person2_images) if i.isnumeric()]).lstrip('0')) if person1.lower() > person2.lower(): person1, img1, person2, img2 = person2, img2, person1, img1 From 
19ff4c9b17185e6b14aa5f43e755dfacd247ecfe Mon Sep 17 00:00:00 2001 From: Michael Perel Date: Wed, 8 Aug 2018 17:46:29 -0400 Subject: [PATCH 11/50] fixed bug where non folders were being used as names --- facenet_sandberg/generate_pairs.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py index 67d874fab..bda5fefe8 100644 --- a/facenet_sandberg/generate_pairs.py +++ b/facenet_sandberg/generate_pairs.py @@ -8,7 +8,7 @@ from typing import List, Tuple def split_people_into_sets(image_dir: str, k_num_sets: int) -> List[List[str]]: - names = os.listdir(image_dir) + names = [d for d in os.listdir(image_dir) if os.path.isdir(os.path.join(image_dir, d))] random.shuffle(names) return [list(arr) for arr in np.array_split(names, k_num_sets)] From 9048d25a0d7532e31adb21fa250b68312139d765 Mon Sep 17 00:00:00 2001 From: Michael Perel Date: Fri, 10 Aug 2018 11:40:16 -0400 Subject: [PATCH 12/50] fixed mismatch generation when no person1 or person2 images --- facenet_sandberg/generate_pairs.py | 35 +++++++++++++++--------------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py index bda5fefe8..6333b691f 100644 --- a/facenet_sandberg/generate_pairs.py +++ b/facenet_sandberg/generate_pairs.py @@ -45,13 +45,13 @@ def make_mismatches(image_dir: str, people: List[str], total_matches: int) -> Li img1 = int(''.join([i for i in random.choice(person1_images) if i.isnumeric()]).lstrip('0')) img2 = int(''.join([i for i in random.choice(person2_images) if i.isnumeric()]).lstrip('0')) - if person1.lower() > person2.lower(): - person1, img1, person2, img2 = person2, img2, person1, img1 - - mismatch = (person1, img1, person2, img2) - if mismatch not in mismatches: - mismatches.append(mismatch) - curr_matches += 1 + if person1.lower() > person2.lower(): + person1, img1, person2, img2 = person2, img2, person1, img1 + + mismatch = (person1, img1, person2, img2) + if mismatch not in mismatches: + mismatches.append(mismatch) + curr_matches += 1 return sorted(mismatches, key=lambda x: x[0].lower()) def write_pairs(fname: str, match_sets: List[List[Tuple[str, int, int]]], mismatch_sets: List[List[Tuple[str, int, str, int]]], k_num_sets: int, total_matches_mismatches: int) -> None: @@ -60,19 +60,20 @@ def write_pairs(fname: str, match_sets: List[List[Tuple[str, int, int]]], mismat for match in match_set: file_contents += f'{match[0]}\t{match[1]}\t{match[2]}\n' for mismatch in mismatch_set: - file_contents += f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t{mismatch[3]}\n' + file_contents += f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t{mismatch[3]}\n' with open(fname, 'w') as fpairs: fpairs.write(file_contents) if __name__ == '__main__': k_num_sets = 10 - total_matches_mismatches = 100 - image_dir = os.path.join( - os.path.dirname( - os.path.abspath(__file__) - ), - 'images') + total_matches_mismatches = 15 + #image_dir = os.path.join( + # os.path.dirname( + # os.path.abspath(__file__) + # ), + # 'images') + image_dir = '/home/miperel/redcross/facenet/datasets/lfw/raw_mtcnn' people_lists = split_people_into_sets(image_dir, k_num_sets) matches = [] @@ -80,6 +81,6 @@ def write_pairs(fname: str, match_sets: List[List[Tuple[str, int, int]]], mismat for people in people_lists: matches.append(make_matches(image_dir, people, total_matches_mismatches)) mismatches.append(make_mismatches(image_dir, people, total_matches_mismatches)) - - fname = 'new_pairs.txt' - 
write_pairs(fname, matches, mismatches, k_num_sets, total_matches_mismatches) + + fname = '/home/miperel/redcross/facenet/data/pairs.txt' + write_pairs(fname, matches, mismatches, k_num_sets, total_matches_mismatches) \ No newline at end of file From da73ee979d3e1f365c62d6f5ff85bde2f267dd08 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Thu, 9 Aug 2018 21:19:48 -0500 Subject: [PATCH 13/50] reformat and efficient alignment --- facenet_sandberg/align/align_dataset_mtcnn.py | 245 +++++++++++------ facenet_sandberg/classifier.py | 4 +- facenet_sandberg/convert_to_keras.py | 115 ++++++++ facenet_sandberg/face.py | 256 ++++++++++++------ facenet_sandberg/facenet.py | 22 +- .../models/keras_inception_resnet_v1.py | 227 ++++++++++++++++ setup.py | 4 +- 7 files changed, 694 insertions(+), 179 deletions(-) create mode 100644 facenet_sandberg/convert_to_keras.py create mode 100644 facenet_sandberg/models/keras_inception_resnet_v1.py diff --git a/facenet_sandberg/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py index 64c07114c..15108d817 100644 --- a/facenet_sandberg/align/align_dataset_mtcnn.py +++ b/facenet_sandberg/align/align_dataset_mtcnn.py @@ -27,121 +27,188 @@ import os import random import sys +from ctypes import c_int from glob import iglob -from time import sleep +from multiprocessing import Lock, Value +from typing import List -import cv2 import numpy as np import progressbar as pb -import tensorflow as tf from facenet_sandberg import face, facenet -from facenet_sandberg.align import detect_face -from mtcnn.mtcnn import MTCNN +from pathos.multiprocessing import ProcessPool from scipy import misc def main( - input_dir, - output_dir, - random_order, - image_size=182, - margin=44, - detect_multiple_faces=False): + input_dir: str, + output_dir: str, + random_order: bool=False, + image_size: int=182, + margin: int=44, + detect_multiple_faces: bool=False, + num_processes: int=1): """Aligns an image dataset Arguments: input_dir {str} -- Directory with unaligned images. output_dir {str} -- Directory with aligned face thumbnails. - random_order {bool} -- Shuffles the order of images to enable alignment using multiple processes. Keyword Arguments: + random_order {bool} -- Shuffles the order of images to enable alignment + using multiple processes. (default: {False}) image_size {int} -- Image size (height, width) in pixels. (default: {182}) - margin {int} -- Margin for the crop around the bounding box + margin {int} -- Margin for the crop around the bounding box (height, width) in pixels. (default: {44}) - detect_multiple_faces {bool} -- Detect and align multiple faces per image. + detect_multiple_faces {bool} -- Detect and align multiple faces per image. 
(default: {False}) + num_processes {int} -- Number of processes to use (default: {1}) """ - widgets = ['Aligning Dataset', pb.Percentage(), ' ', - pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()] - sleep(random.random()) output_dir = os.path.expanduser(output_dir) - if not os.path.exists(output_dir): - os.makedirs(output_dir) + os.makedirs(output_dir, exist_ok=True) + # Store some git revision info in a text file in the log directory src_path, _ = os.path.split(os.path.realpath(__file__)) facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv)) + dataset = facenet.get_dataset(input_dir) + if random_order: + random.shuffle(dataset) + + input_dir_all = os.path.join(input_dir, '**', '*.*') + num_images = sum(1 for x in iglob( + input_dir_all, recursive=True)) + + num_processes = min(num_processes, os.cpu_count()) + + aligner = Aligner( + image_size=image_size, + margin=margin, + detect_multiple_faces=detect_multiple_faces, + output_dir=output_dir, + random_order=random_order, + num_processes=num_processes, + num_images=num_images) + + aligner.align_multiprocess(dataset=dataset) print('Creating networks and loading parameters') - detector = face.Detector(face_crop_size=image_size, face_crop_margin=margin, - detect_multiple_faces=detect_multiple_faces) - - # Add a random key to the filename to allow alignment using multiple - # processes - random_key = np.random.randint(0, high=99999) - bounding_boxes_filename = os.path.join( - output_dir, 'bounding_boxes_%05d.txt' % random_key) - - with open(bounding_boxes_filename, "w") as text_file: - nrof_images_total = 0 - nrof_successfully_aligned = 0 - num_images = sum(1 for x in iglob( - input_dir + '/**/*.*', recursive=True)) - timer = pb.ProgressBar(widgets=widgets, maxval=num_images).start() - if random_order: - random.shuffle(dataset) - for datum in dataset: - output_class_dir = os.path.join(output_dir, datum.name) - if not os.path.exists(output_class_dir): - os.makedirs(output_class_dir) - if random_order: - random.shuffle(datum.image_paths) - for image_path in datum.image_paths: - timer.update(nrof_images_total) - nrof_images_total += 1 - filename = os.path.splitext(os.path.split(image_path)[1])[0] - output_filename = os.path.join( - output_class_dir, filename + '.png') - if not os.path.exists(output_filename): - try: - img = misc.imread(image_path) - except (IOError, ValueError, IndexError) as e: - errorMessage = '{}: {}'.format(image_path, e) - print(errorMessage) + +class Aligner: + + def __init__(self, image_size: int, margin: int, detect_multiple_faces: bool, + output_dir: str, random_order: bool, num_processes: int, num_images: int): + widgets = ['Aligning Dataset', pb.Percentage(), ' ', + pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()] + self.image_size = image_size + self.margin = margin + self.detect_multiple_faces = detect_multiple_faces + self.output_dir = output_dir + self.random_order = random_order + self.num_processes = num_processes + self.timer = pb.ProgressBar(widgets=widgets, maxval=num_images).start() + self.num_sucessful = Value(c_int) # defaults to 0 + self.num_sucessful_lock = Lock() + self.num_images_total = Value(c_int) + self.num_images_total_lock = Lock() + + def align_multiprocess(self, dataset: List[facenet.PersonClass]): + if self.num_processes > 1: + process_pool = ProcessPool(self.num_processes) + process_pool.imap(self.align, dataset) + process_pool.close() + process_pool.join() + else: + for person in dataset: + self.align(person) + print('Total number of images: %d' % 
int(self.num_images_total.value)) + print('Number of successfully aligned images: %d' % + int(self.num_sucessful.value)) + + def align(self, person: facenet.PersonClass): + # import pdb;pdb.set_trace() + detector = face.Detector( + face_crop_size=self.image_size, + face_crop_margin=self.margin, + detect_multiple_faces=self.detect_multiple_faces) + # Add a random key to the filename to allow alignment using multiple + # processes + random_key = np.random.randint(0, high=99999) + bounding_boxes_filename = os.path.join( + self.output_dir, 'bounding_boxes_%05d.txt' % random_key) + output_class_dir = os.path.join(self.output_dir, person.name) + + if not os.path.exists(output_class_dir): + os.makedirs(output_class_dir) + if self.random_order: + random.shuffle(person.image_paths) + + with open(bounding_boxes_filename, "w") as text_file: + for image_path in person.image_paths: + self.increment_total() + self.process_image(detector, image_path, + text_file, output_class_dir) + self.timer.update(int(self.num_sucessful.value)) + + def process_image(self, detector, image_path: str, text_file: str, output_class_dir: str): + output_filename = self.get_file_name(image_path, output_class_dir) + if not os.path.exists(output_filename): + try: + image = misc.imread(image_path) + except (IOError, ValueError, IndexError) as error: + error_message = '{}: {}'.format(image_path, error) + print(error_message) + else: + image = self.fix_image( + image, image_path, output_filename, text_file) + faces = detector.find_faces(image) + for index, person in enumerate(faces): + self.increment_sucessful() + filename_base, file_extension = os.path.splitext( + output_filename) + if self.detect_multiple_faces: + output_filename_n = "{}_{}{}".format( + filename_base, index, file_extension) else: - if img.ndim < 2: - print('Unable to align "%s"' % image_path) - text_file.write('%s\n' % (output_filename)) - continue - if img.ndim == 2: - img = facenet.to_rgb(img) - img = img[:, :, 0:3] - faces = detector.find_faces(img) - nrof_successfully_aligned += 1 - for index, person in enumerate(faces): - filename_base, file_extension = os.path.splitext( - output_filename) - if detect_multiple_faces: - output_filename_n = "{}_{}{}".format( - filename_base, index, file_extension) - else: - output_filename_n = "{}{}".format( - filename_base, file_extension) - misc.imsave(output_filename_n, person.image) - text_file.write( - '%s %d %d %d %d\n' % - (output_filename_n, person.bounding_box[0], - person.bounding_box[1], person.bounding_box[2], - person.bounding_box[3])) - else: - print('Unable to align "%s"' % image_path) - text_file.write('%s\n' % (output_filename)) - - print('Total number of images: %d' % nrof_images_total) - print('Number of successfully aligned images: %d' % - nrof_successfully_aligned) + output_filename_n = "{}{}".format( + filename_base, file_extension) + misc.imsave(output_filename_n, person.image) + text_file.write( + '%s %d %d %d %d\n' % + (output_filename_n, + person.bounding_box[0], + person.bounding_box[1], + person.bounding_box[2], + person.bounding_box[3])) + else: + print('Unable to align "%s"' % image_path) + text_file.write('%s\n' % (output_filename)) + + def increment_sucessful(self, add_amount: int=1): + with self.num_sucessful_lock: + self.num_sucessful.value += add_amount + + def increment_total(self, add_amount: int=1): + with self.num_images_total_lock: + self.num_images_total.value += add_amount + + @staticmethod + def fix_image(image: np.ndarray, image_path: str, output_filename: str, text_file: str): + if 
image.ndim < 2: + print('Unable to align "%s"' % image_path) + text_file.write('%s\n' % (output_filename)) + if image.ndim == 2: + image = facenet.to_rgb(image) + image = image[:, :, 0:3] + return image + + @staticmethod + def get_file_name(image_path: str, output_class_dir: str) -> str: + filename = os.path.splitext(os.path.split(image_path)[1])[0] + output_filename = os.path.join( + output_class_dir, filename + '.png') + return output_filename def parse_arguments(argv): @@ -170,6 +237,11 @@ def parse_arguments(argv): type=bool, help='Detect and align multiple faces per image.', default=False) + parser.add_argument( + '--num_processes', + type=int, + help='Number of processes to use', + default=1) return parser.parse_args(argv) @@ -182,4 +254,5 @@ def parse_arguments(argv): args.random_order, args.image_size, args.margin, - args.detect_multiple_faces) + args.detect_multiple_faces, + args.num_processes) diff --git a/facenet_sandberg/classifier.py b/facenet_sandberg/classifier.py index 82eeb6921..1eb79455c 100644 --- a/facenet_sandberg/classifier.py +++ b/facenet_sandberg/classifier.py @@ -130,8 +130,8 @@ def split_dataset(dataset, min_nrof_images_per_class, nrof_train_images_per_clas # Remove classes with less than min_nrof_images_per_class if len(paths)>=min_nrof_images_per_class: np.random.shuffle(paths) - train_set.append(facenet.ImageClass(cls.name, paths[:nrof_train_images_per_class])) - test_set.append(facenet.ImageClass(cls.name, paths[nrof_train_images_per_class:])) + train_set.append(facenet.PersonClass(cls.name, paths[:nrof_train_images_per_class])) + test_set.append(facenet.PersonClass(cls.name, paths[nrof_train_images_per_class:])) return train_set, test_set diff --git a/facenet_sandberg/convert_to_keras.py b/facenet_sandberg/convert_to_keras.py new file mode 100644 index 000000000..b12b9f1b4 --- /dev/null +++ b/facenet_sandberg/convert_to_keras.py @@ -0,0 +1,115 @@ +import argparse +import os +import re +import sys + +import numpy as np +import tensorflow as tf +from facenet_sandberg.models.keras_inception_resnet_v1 import * + +re_repeat = re.compile(r'Repeat_[0-9_]*b') +re_block8 = re.compile(r'Block8_[A-Za-z]') + + +def main(tf_ckpt_path, output_base_path, output_model_name): + weights_filename = output_model_name + '_weights.h5' + model_filename = output_model_name + '.h5' + + npy_weights_dir, weights_dir, model_dir = create_output_directories(output_base_path) + + extract_tensors_from_checkpoint_file(tf_ckpt_path, npy_weights_dir) + model = InceptionResNetV1() + + print('Loading numpy weights from', npy_weights_dir) + for layer in model.layers: + if layer.weights: + weights = [] + for w in layer.weights: + weight_name = os.path.basename(w.name).replace(':0', '') + weight_file = layer.name + '_' + weight_name + '.npy' + weight_arr = np.load(os.path.join(npy_weights_dir, weight_file)) + weights.append(weight_arr) + layer.set_weights(weights) + + print('Saving weights...') + model.save_weights(os.path.join(weights_dir, weights_filename)) + print('Saving model...') + model.save(os.path.join(model_dir, model_filename)) + + +def create_output_directories(output_base_path): + npy_weights_dir = os.path.join(output_base_path, 'npy_weights') + weights_dir = os.path.join(output_base_path, 'weights') + model_dir = os.path.join(output_base_path, 'model') + os.makedirs(npy_weights_dir, exist_ok=True) + os.makedirs(weights_dir, exist_ok=True) + os.makedirs(model_dir, exist_ok=True) + return npy_weights_dir, weights_dir, model_dir + + +def get_filename(key): + filename = str(key) + 
filename = filename.replace('/', '_') + filename = filename.replace('InceptionResnetV1_', '') + + # remove "Repeat" scope from filename + filename = re_repeat.sub('B', filename) + + if re_block8.match(filename): + # the last block8 has different name with the previous 5 occurrences + filename = filename.replace('Block8', 'Block8_6') + + # from TF to Keras naming + filename = filename.replace('_weights', '_kernel') + filename = filename.replace('_biases', '_bias') + + return filename + '.npy' + + +def extract_tensors_from_checkpoint_file(filename, output_folder): + reader = tf.train.NewCheckpointReader(filename) + + for key in reader.get_variable_to_shape_map(): + # not saving the following tensors + if key == 'global_step': + continue + if 'AuxLogit' in key: + continue + + # convert tensor name into the corresponding Keras layer weight name and save + path = os.path.join(output_folder, get_filename(key)) + arr = reader.get_tensor(key) + np.save(path, arr) + + +def parse_arguments(argv): + """Argument parser + """ + + parser = argparse.ArgumentParser() + + parser.add_argument( + 'tf_ckpt_path', + type=str, + help='Path to the directory containing pretrained tensorflow checkpoints.') + + parser.add_argument( + 'output_base_path', + type=str, + help='Base path for the desired output directory.') + + parser.add_argument( + 'output_model_name', + type=str, + help='Name for the new model (do not include .h5)') + + return parser.parse_args(argv) + + +if __name__ == '__main__': + args = parse_arguments(sys.argv[1:]) + if args: + main( + args.tf_ckpt_path, + args.output_base_path, + args.output_model_name) diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py index c9315f53f..9e8b63146 100644 --- a/facenet_sandberg/face.py +++ b/facenet_sandberg/face.py @@ -3,7 +3,8 @@ import itertools import os import pickle -from glob import glob +from glob import glob, iglob +from typing import Dict, Generator, List from urllib.request import urlopen import cv2 @@ -31,13 +32,13 @@ class Face: """ def __init__(self): - self.name = None - self.bounding_box = None - self.image = None - self.container_image = None - self.embedding = None - self.matches = [] - self.url = None + self.name: str = None + self.bounding_box: List[float] = None + self.image: np.ndarray = None + self.container_image: np.ndarray = None + self.embedding: np.ndarray = None + self.matches: List[Match] = [] + self.url: str = None class Match: @@ -51,10 +52,10 @@ class Match: """ def __init__(self): - self.face_1 = Face() - self.face_2 = Face() - self.score = float("inf") - self.is_match = False + self.face_1: Face = Face() + self.face_2: Face = Face() + self.score: float = float("inf") + self.is_match: bool = False class Identifier: @@ -64,12 +65,13 @@ class Identifier: threshold {Float} -- Distance threshold to determine matches """ - def __init__(self, facenet_model_checkpoint, threshold=1.10): + def __init__(self, facenet_model_checkpoint: str, threshold: float = 1.10): self.detector = Detector() self.encoder = Encoder(facenet_model_checkpoint) - self.threshold = threshold + self.threshold: float = threshold - def download_image(self, url): + @staticmethod + def download_image(url: str) -> np.ndarray: """Downloads an image from the url as a cv2 image Arguments: @@ -84,7 +86,42 @@ def download_image(self, url): image = cv2.imdecode(arr, -1) return image - def detect_encode(self, image, face_limit=5): + @staticmethod + def get_image_from_path(image_path: str) -> np.ndarray: + return cv2.imread(image_path) + + @staticmethod + def 
get_images_from_dir( + directory: str, recursive: bool) -> Generator[np.ndarray, None, None]: + if recursive: + image_paths = iglob(os.path.join( + directory, '**', '*.*'), recursive=recursive) + else: + image_paths = iglob(os.path.join(directory, '*.*')) + for image_path in image_paths: + yield cv2.imread(image_path) + + def vectorize(self, image: np.ndarray, + face_limit: int = 5) -> List[np.ndarray]: + faces: List[Face] = self.detect_encode(image, face_limit) + vectors = [face.embedding for face in faces] + return vectors + + def vectorize_all(self, + images: Generator[np.ndarray, + None, + None], + face_limit: int = 5) -> Generator[List[np.ndarray], + None, + None]: + all_faces: Generator[List[Face], None, None] = self.detect_encode_all( + images=images, save_memory=True, face_limit=face_limit) + vectors: Generator[List[np.ndarray], None, None] = ( + face.embedding for faces in all_faces for face in faces) + return vectors + + def detect_encode(self, image: np.ndarray, + face_limit: int=5) -> List[Face]: """Detects faces in an image and encodes them Arguments: @@ -96,12 +133,20 @@ def detect_encode(self, image, face_limit=5): Face[] -- list of Face objects with embeddings attached """ - faces = self.detector.find_faces(image, face_limit) + faces: List[Face] = self.detector.find_faces(image, face_limit) for face in faces: face.embedding = self.encoder.generate_embedding(face.image) return faces - def detect_encode_all(self, images, urls=None, save_memory=False): + def detect_encode_all(self, + images: Generator[np.ndarray, + None, + None], + urls: [str]=None, + save_memory: bool=False, + face_limit: int=5) -> Generator[List[Face], + None, + None]: """For a list of images finds and encodes all faces Arguments: @@ -118,12 +163,15 @@ def detect_encode_all(self, images, urls=None, save_memory=False): Face[] -- List of Face objects with """ - all_faces = self.detector.bulk_find_face(images, urls) - all_embeddings = self.encoder.get_all_embeddings( - all_faces, save_memory) - return all_embeddings + all_faces: Generator[List[Face], None, None] = self.detector.bulk_find_face( + images, urls, face_limit) + return self.encoder.get_all_embeddings(all_faces, save_memory) - def compare_embedding(self, embedding_1, embedding_2, distance_metric=0): + def compare_embedding(self, + embedding_1: np.ndarray, + embedding_2: np.ndarray, + distance_metric: int=0) -> (bool, + float): """Compares the distance between two embeddings Arguments: @@ -144,7 +192,8 @@ def compare_embedding(self, embedding_1, embedding_2, distance_metric=0): is_match = True return is_match, distance - def compare_images(self, image_1, image_2): + def compare_images(self, image_1: np.ndarray, + image_2: np.ndarray) -> Match: """Compares two images for matching faces Arguments: @@ -171,19 +220,23 @@ def compare_images(self, image_1, image_2): match.is_match = True return match - def find_all_matches(self, image_directory): + def find_all_matches(self, image_directory: str, + recursive: bool) -> List[Match]: """Finds all matches in a directory of images Arguments: image_directory {str} -- directory of images Returns: - Face[], Match[] -- List of face objects and list of Match objects + Match[] -- List of Match objects """ - all_images = glob(image_directory + '/*') + all_images = self.get_images_from_dir(image_directory, recursive) all_matches = [] - all_faces = self.detect_encode_all(all_images) + all_faces_lists: Generator[List[Face], None, + None] = self.detect_encode_all(all_images) + all_faces: Generator[Face, None, None] = ( + 
face for faces in all_faces_lists for face in faces) # Really inefficient way to check all combinations for face_1, face_2 in itertools.combinations(all_faces, 2): is_match, score = self.compare_embedding( @@ -197,14 +250,14 @@ def find_all_matches(self, image_directory): all_matches.append(match) face_1.matches.append(match) face_2.matches.append(match) - return all_faces, all_matches + return all_matches def tear_down(self): self.encoder.tear_down() class Encoder: - def __init__(self, facenet_model_checkpoint): + def __init__(self, facenet_model_checkpoint: str): import tensorflow as tf self.sess = tf.Session() with self.sess.as_default(): @@ -215,7 +268,7 @@ def __init__(self, facenet_model_checkpoint): self.phase_train_placeholder = tf.get_default_graph( ).get_tensor_by_name("phase_train:0") - def generate_embedding(self, image): + def generate_embedding(self, image: np.ndarray) -> np.ndarray: """Generates embeddings for a Face object with image Arguments: @@ -232,31 +285,37 @@ def generate_embedding(self, image): prewhiten_face], self.phase_train_placeholder: False} return self.sess.run(self.embeddings, feed_dict=feed_dict)[0] - def get_all_embeddings(self, all_faces, save_memory=False): + def get_all_embeddings(self, + all_faces: Generator[List[Face], + None, + None], + save_memory: bool=False) -> Generator[List[Face], + None, + None]: """Generates embeddings for list of images Arguments: - all_faces {cv2 image[]} -- array of face images + all_faces -- array of face images Keyword Arguments: - save_memory {bool} -- save memory by deleting image from Face object (default: {False}) + save_memory -- save memory by deleting image from Face object (default: {False}) Returns: - [type] -- [description] + Faces with embeddings """ - all_images = [facenet.prewhiten(face.image) for face in all_faces] - - # Run forward pass to calculate embeddings - feed_dict = {self.images_placeholder: all_images, - self.phase_train_placeholder: False} - embed_array = self.sess.run(self.embeddings, feed_dict=feed_dict) - - for index, face in enumerate(all_faces): - if save_memory: - face.image = None - face.embedding = embed_array[index] - return all_faces + for faces in all_faces: + prewhitened_images = [facenet.prewhiten( + face.image) for face in faces] + feed_dict = {self.images_placeholder: prewhitened_images, + self.phase_train_placeholder: False} + embed_array = self.sess.run(self.embeddings, feed_dict=feed_dict) + for index, face in enumerate(faces): + if save_memory: + face.image = None + face.container_image = None + face.embedding = embed_array[index] + yield faces def tear_down(self): if tf.get_default_session(): @@ -265,29 +324,42 @@ def tear_down(self): class Detector: # face detection parameters - def __init__(self, face_crop_size=160, face_crop_margin=32, detect_multiple_faces=True, - min_face_size=20, scale_factor=0.709, steps_threshold=[0.6, 0.7, 0.7]): - self.detector = MTCNN(weights_file=None, min_face_size=min_face_size, - steps_threshold=steps_threshold, scale_factor=scale_factor) + def __init__( + self, + face_crop_size: int=160, + face_crop_margin: int=32, + detect_multiple_faces: bool=True, + min_face_size: int=20, + scale_factor: float=0.709, + steps_threshold: List[float]=[ + 0.6, + 0.7, + 0.7]): + self.detector = MTCNN( + weights_file=None, + min_face_size=min_face_size, + steps_threshold=steps_threshold, + scale_factor=scale_factor) self.face_crop_size = face_crop_size self.face_crop_margin = face_crop_margin self.detect_multiple_faces = detect_multiple_faces - def 
bulk_find_face(self, images, urls=None, face_limit=5): - all_faces = [] + def bulk_find_face(self, + images: Generator[np.ndarray, + None, None], + urls: List[str] = None, + face_limit: int=5) -> Generator[List[Face], + None, None]: for index, image in enumerate(images): faces = self.find_faces(image, face_limit) if urls and index < len(urls): for face in faces: face.url = urls[index] - all_faces.append(face) + yield faces else: - all_faces += faces - return all_faces + yield faces - def find_faces(self, image, face_limit=5): - if isinstance(image, str): - image = cv2.imread(image) + def find_faces(self, image: np.ndarray, face_limit: int=5) -> List[Face]: faces = [] results = self.detector.detect_faces(image) img_size = np.asarray(image.shape)[0:2] @@ -296,23 +368,31 @@ def find_faces(self, image, face_limit=5): face = Face() # bb[x, y, dx, dy] bb = result['box'] - bb[2] = bb[0] + bb[2] - bb[3] = bb[1] + bb[3] - bb = self._fit_bounding_box(img_size[0], img_size[1], bb[0], bb[1], bb[2], bb[3]) + bb = self.fit_bounding_box( + img_size[0], img_size[1], bb[0], bb[1], bb[2], bb[3]) cropped = image[bb[1]:bb[3], bb[0]:bb[2], :] - + bb[0] = np.maximum(bb[0] - self.face_crop_margin / 2, 0) bb[1] = np.maximum(bb[1] - self.face_crop_margin / 2, 0) - bb[2] = np.minimum(bb[2] + self.face_crop_margin / 2, img_size[1]) - bb[3] = np.minimum(bb[3] + self.face_crop_margin / 2, img_size[0]) + bb[2] = np.minimum( + bb[2] + self.face_crop_margin / 2, img_size[1]) + bb[3] = np.minimum( + bb[3] + self.face_crop_margin / 2, img_size[0]) face.bounding_box = bb - face.image = misc.imresize(cropped, (self.face_crop_size, self.face_crop_size), interp='bilinear') + face.image = misc.imresize( + cropped, + (self.face_crop_size, self.face_crop_size), + interp='bilinear') faces.append(face) return faces - - def _fit_bounding_box(self, max_x, max_y, x1, y1, x2, y2): + + @staticmethod + def fit_bounding_box(max_x: int, max_y: int, x1: int, + y1: int, dx: int, dy: int) -> List[int]: + x2 = x1 + dx + y2 = y1 + dy x1 = max(min(x1, max_x), 0) x2 = max(min(x2, max_x), 0) y1 = max(min(y1, max_y), 0) @@ -320,16 +400,36 @@ def _fit_bounding_box(self, max_x, max_y, x1, y1, x2, y2): return [x1, y1, x2, y2] - def align_dataset(input_dir, output_dir, image_size=182, margin=44, random_order=False, detect_multiple_faces=False): align_dataset_mtcnn.main( - input_dir, output_dir, image_size, margin, random_order, detect_multiple_faces) - - -def test_dataset(lfw_dir, model, lfw_pairs, use_flipped_images, subtract_mean, - use_fixed_image_standardization, image_size=160, lfw_nrof_folds=10, - distance_metric=0, lfw_batch_size=128): - validate_on_lfw.main(lfw_dir, model, lfw_pairs, use_flipped_images, subtract_mean, - use_fixed_image_standardization, image_size, lfw_nrof_folds, - distance_metric, lfw_batch_size) + input_dir, + output_dir, + image_size, + margin, + random_order, + detect_multiple_faces) + + +def test_dataset( + lfw_dir, + model, + lfw_pairs, + use_flipped_images, + subtract_mean, + use_fixed_image_standardization, + image_size=160, + lfw_nrof_folds=10, + distance_metric=0, + lfw_batch_size=128): + validate_on_lfw.main( + lfw_dir, + model, + lfw_pairs, + use_flipped_images, + subtract_mean, + use_fixed_image_standardization, + image_size, + lfw_nrof_folds, + distance_metric, + lfw_batch_size) diff --git a/facenet_sandberg/facenet.py b/facenet_sandberg/facenet.py index a8a569ac9..2b72579a5 100644 --- a/facenet_sandberg/facenet.py +++ b/facenet_sandberg/facenet.py @@ -302,8 +302,8 @@ def get_learning_rate_from_file(filename, 
epoch): else: return learning_rate -class ImageClass(): - "Stores the paths to images for a given class" +class PersonClass(): + "Stores the paths to images for a given person" def __init__(self, name, image_paths): self.name = name self.image_paths = image_paths @@ -317,15 +317,15 @@ def __len__(self): def get_dataset(path, has_class_directories=True): dataset = [] path_exp = os.path.expanduser(path) - classes = [path for path in os.listdir(path_exp) \ + people = [path for path in os.listdir(path_exp) \ if os.path.isdir(os.path.join(path_exp, path))] - classes.sort() - nrof_classes = len(classes) - for i in range(nrof_classes): - class_name = classes[i] - facedir = os.path.join(path_exp, class_name) + people.sort() + num_people = len(people) + for i in range(num_people): + person_name = people[i] + facedir = os.path.join(path_exp, person_name) image_paths = get_image_paths(facedir) - dataset.append(ImageClass(class_name, image_paths)) + dataset.append(PersonClass(person_name, image_paths)) return dataset @@ -355,8 +355,8 @@ def split_dataset(dataset, split_ratio, min_nrof_images_per_class, mode): if split==nrof_images_in_class: split = nrof_images_in_class-1 if split>=min_nrof_images_per_class and nrof_images_in_class-split>=1: - train_set.append(ImageClass(cls.name, paths[:split])) - test_set.append(ImageClass(cls.name, paths[split:])) + train_set.append(PersonClass(cls.name, paths[:split])) + test_set.append(PersonClass(cls.name, paths[split:])) else: raise ValueError('Invalid train/test split mode "%s"' % mode) return train_set, test_set diff --git a/facenet_sandberg/models/keras_inception_resnet_v1.py b/facenet_sandberg/models/keras_inception_resnet_v1.py new file mode 100644 index 000000000..9235ef0a8 --- /dev/null +++ b/facenet_sandberg/models/keras_inception_resnet_v1.py @@ -0,0 +1,227 @@ +"""Inception-ResNet V1 model for Keras. 
+# Reference +http://arxiv.org/abs/1602.07261 +https://github.com/davidsandberg/facenet/blob/master/src/models/inception_resnet_v1.py +https://github.com/myutwo150/keras-inception-resnet-v2/blob/master/inception_resnet_v2.py +""" +from functools import partial + +from keras.models import Model +from keras.layers import Activation +from keras.layers import BatchNormalization +from keras.layers import Concatenate +from keras.layers import Conv2D +from keras.layers import Dense +from keras.layers import Dropout +from keras.layers import GlobalAveragePooling2D +from keras.layers import Input +from keras.layers import Lambda +from keras.layers import MaxPooling2D +from keras.layers import add +from keras import backend as K + + +def scaling(x, scale): + return x * scale + + +def conv2d_bn(x, + filters, + kernel_size, + strides=1, + padding='same', + activation='relu', + use_bias=False, + name=None): + x = Conv2D(filters, + kernel_size, + strides=strides, + padding=padding, + use_bias=use_bias, + name=name)(x) + if not use_bias: + bn_axis = 1 if K.image_data_format() == 'channels_first' else 3 + bn_name = _generate_layer_name('BatchNorm', prefix=name) + x = BatchNormalization(axis=bn_axis, momentum=0.995, epsilon=0.001, + scale=False, name=bn_name)(x) + if activation is not None: + ac_name = _generate_layer_name('Activation', prefix=name) + x = Activation(activation, name=ac_name)(x) + return x + + +def _generate_layer_name(name, branch_idx=None, prefix=None): + if prefix is None: + return None + if branch_idx is None: + return '_'.join((prefix, name)) + return '_'.join((prefix, 'Branch', str(branch_idx), name)) + + +def _inception_resnet_block(x, scale, block_type, block_idx, activation='relu'): + channel_axis = 1 if K.image_data_format() == 'channels_first' else 3 + if block_idx is None: + prefix = None + else: + prefix = '_'.join((block_type, str(block_idx))) + name_fmt = partial(_generate_layer_name, prefix=prefix) + + if block_type == 'Block35': + branch_0 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_1x1', 0)) + branch_1 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 1)) + branch_1 = conv2d_bn( + branch_1, 32, 3, name=name_fmt('Conv2d_0b_3x3', 1)) + branch_2 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 2)) + branch_2 = conv2d_bn( + branch_2, 32, 3, name=name_fmt('Conv2d_0b_3x3', 2)) + branch_2 = conv2d_bn( + branch_2, 32, 3, name=name_fmt('Conv2d_0c_3x3', 2)) + branches = [branch_0, branch_1, branch_2] + elif block_type == 'Block17': + branch_0 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_1x1', 0)) + branch_1 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_0a_1x1', 1)) + branch_1 = conv2d_bn( + branch_1, 128, [1, 7], name=name_fmt('Conv2d_0b_1x7', 1)) + branch_1 = conv2d_bn( + branch_1, 128, [7, 1], name=name_fmt('Conv2d_0c_7x1', 1)) + branches = [branch_0, branch_1] + elif block_type == 'Block8': + branch_0 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_1x1', 0)) + branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1)) + branch_1 = conv2d_bn( + branch_1, 192, [1, 3], name=name_fmt('Conv2d_0b_1x3', 1)) + branch_1 = conv2d_bn( + branch_1, 192, [3, 1], name=name_fmt('Conv2d_0c_3x1', 1)) + branches = [branch_0, branch_1] + else: + raise ValueError('Unknown Inception-ResNet block type. 
' + 'Expects "Block35", "Block17" or "Block8", ' + 'but got: ' + str(block_type)) + + mixed = Concatenate(axis=channel_axis, + name=name_fmt('Concatenate'))(branches) + up = conv2d_bn(mixed, + K.int_shape(x)[channel_axis], + 1, + activation=None, + use_bias=True, + name=name_fmt('Conv2d_1x1')) + up = Lambda(scaling, + output_shape=K.int_shape(up)[1:], + arguments={'scale': scale})(up) + x = add([x, up]) + if activation is not None: + x = Activation(activation, name=name_fmt('Activation'))(x) + return x + + +def InceptionResNetV1(input_shape=(160, 160, 3), + classes=128, + dropout_keep_prob=0.8, + weights_path=None): + inputs = Input(shape=input_shape) + x = conv2d_bn(inputs, 32, 3, strides=2, + padding='valid', name='Conv2d_1a_3x3') + x = conv2d_bn(x, 32, 3, padding='valid', name='Conv2d_2a_3x3') + x = conv2d_bn(x, 64, 3, name='Conv2d_2b_3x3') + x = MaxPooling2D(3, strides=2, name='MaxPool_3a_3x3')(x) + x = conv2d_bn(x, 80, 1, padding='valid', name='Conv2d_3b_1x1') + x = conv2d_bn(x, 192, 3, padding='valid', name='Conv2d_4a_3x3') + x = conv2d_bn(x, 256, 3, strides=2, padding='valid', name='Conv2d_4b_3x3') + + # 5x Block35 (Inception-ResNet-A block): + for block_idx in range(1, 6): + x = _inception_resnet_block(x, + scale=0.17, + block_type='Block35', + block_idx=block_idx) + + # Mixed 6a (Reduction-A block): + channel_axis = 1 if K.image_data_format() == 'channels_first' else 3 + name_fmt = partial(_generate_layer_name, prefix='Mixed_6a') + branch_0 = conv2d_bn(x, + 384, + 3, + strides=2, + padding='valid', + name=name_fmt('Conv2d_1a_3x3', 0)) + branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1)) + branch_1 = conv2d_bn(branch_1, 192, 3, name=name_fmt('Conv2d_0b_3x3', 1)) + branch_1 = conv2d_bn(branch_1, + 256, + 3, + strides=2, + padding='valid', + name=name_fmt('Conv2d_1a_3x3', 1)) + branch_pool = MaxPooling2D(3, + strides=2, + padding='valid', + name=name_fmt('MaxPool_1a_3x3', 2))(x) + branches = [branch_0, branch_1, branch_pool] + x = Concatenate(axis=channel_axis, name='Mixed_6a')(branches) + + # 10x Block17 (Inception-ResNet-B block): + for block_idx in range(1, 11): + x = _inception_resnet_block(x, + scale=0.1, + block_type='Block17', + block_idx=block_idx) + + # Mixed 7a (Reduction-B block): 8 x 8 x 2080 + name_fmt = partial(_generate_layer_name, prefix='Mixed_7a') + branch_0 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 0)) + branch_0 = conv2d_bn(branch_0, + 384, + 3, + strides=2, + padding='valid', + name=name_fmt('Conv2d_1a_3x3', 0)) + branch_1 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 1)) + branch_1 = conv2d_bn(branch_1, + 256, + 3, + strides=2, + padding='valid', + name=name_fmt('Conv2d_1a_3x3', 1)) + branch_2 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 2)) + branch_2 = conv2d_bn(branch_2, 256, 3, name=name_fmt('Conv2d_0b_3x3', 2)) + branch_2 = conv2d_bn(branch_2, + 256, + 3, + strides=2, + padding='valid', + name=name_fmt('Conv2d_1a_3x3', 2)) + branch_pool = MaxPooling2D(3, + strides=2, + padding='valid', + name=name_fmt('MaxPool_1a_3x3', 3))(x) + branches = [branch_0, branch_1, branch_2, branch_pool] + x = Concatenate(axis=channel_axis, name='Mixed_7a')(branches) + + # 5x Block8 (Inception-ResNet-C block): + for block_idx in range(1, 6): + x = _inception_resnet_block(x, + scale=0.2, + block_type='Block8', + block_idx=block_idx) + x = _inception_resnet_block(x, + scale=1., + activation=None, + block_type='Block8', + block_idx=6) + + # Classification block + x = GlobalAveragePooling2D(name='AvgPool')(x) + x = Dropout(1.0 - 
dropout_keep_prob, name='Dropout')(x) + # Bottleneck + x = Dense(classes, use_bias=False, name='Bottleneck')(x) + bn_name = _generate_layer_name('BatchNorm', prefix='Bottleneck') + x = BatchNormalization(momentum=0.995, epsilon=0.001, scale=False, + name=bn_name)(x) + + # Create model + model = Model(inputs, x, name='inception_resnet_v1') + if weights_path is not None: + model.load_weights(weights_path) + + return model diff --git a/setup.py b/setup.py index 055910372..93260ef12 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.6', + version='1.0.7', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', @@ -13,6 +13,6 @@ license='MIT', install_requires=[ 'tensorflow', 'scipy', 'scikit-learn', 'opencv-python', - 'h5py', 'matplotlib', 'Pillow', 'requests', 'psutil', 'progressbar', 'mtcnn' + 'h5py', 'matplotlib', 'Pillow', 'requests', 'psutil', 'progressbar', 'mtcnn', 'pathos' ] ) From 8ea46c35e8da8b8346e61b53beb420bc1ea43f03 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Fri, 10 Aug 2018 14:22:53 -0500 Subject: [PATCH 14/50] got rid of logging --- facenet_sandberg/align/align_dataset_mtcnn.py | 4 + facenet_sandberg/face.py | 103 ++++++++++++++---- setup.py | 2 +- 3 files changed, 86 insertions(+), 23 deletions(-) diff --git a/facenet_sandberg/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py index 15108d817..07b7cbb69 100644 --- a/facenet_sandberg/align/align_dataset_mtcnn.py +++ b/facenet_sandberg/align/align_dataset_mtcnn.py @@ -34,10 +34,14 @@ import numpy as np import progressbar as pb +import tensorflow as tf from facenet_sandberg import face, facenet from pathos.multiprocessing import ProcessPool from scipy import misc +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +tf.logging.set_verbosity(tf.logging.ERROR) + def main( input_dir: str, diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py index 9e8b63146..ff1d094cd 100644 --- a/facenet_sandberg/face.py +++ b/facenet_sandberg/face.py @@ -15,8 +15,8 @@ from mtcnn.mtcnn import MTCNN from scipy import misc -debug = False - +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +tf.logging.set_verbosity(tf.logging.ERROR) class Face: """Class representing a single face @@ -72,37 +72,79 @@ def __init__(self, facenet_model_checkpoint: str, threshold: float = 1.10): @staticmethod def download_image(url: str) -> np.ndarray: - """Downloads an image from the url as a cv2 image + """Downloads an image from the url as a numpy array (opencv format) Arguments: url {str} -- url of image Returns: - cv2 image -- image array + np.ndarray -- array representing image """ req = urlopen(url) arr = np.asarray(bytearray(req.read()), dtype=np.uint8) image = cv2.imdecode(arr, -1) - return image + return Identifier.fix_image(image) @staticmethod def get_image_from_path(image_path: str) -> np.ndarray: - return cv2.imread(image_path) + """Reads an image path to a numpy array (opencv format) + + Arguments: + image_path {str} -- path to image + + Returns: + np.ndarray -- array representing image + """ + + return Identifier.fix_image(cv2.imread(image_path)) @staticmethod def get_images_from_dir( directory: str, recursive: bool) -> Generator[np.ndarray, None, None]: + """Gets images in a directory + + Arguments: + directory {str} -- path to directory + recursive {bool} -- if True searches all subfolders for 
images.
+            else searches for images in folder only.
+
+        Returns:
+            Generator[np.ndarray, None, None] -- generator of images
+        """
+
         if recursive:
             image_paths = iglob(os.path.join(
                 directory, '**', '*.*'), recursive=recursive)
         else:
             image_paths = iglob(os.path.join(directory, '*.*'))
         for image_path in image_paths:
-            yield cv2.imread(image_path)
+            yield Identifier.fix_image(cv2.imread(image_path))
+
+    @staticmethod
+    def fix_image(image: np.ndarray) -> np.ndarray:
+        if image.ndim < 2:
+            image = image[:, :, np.newaxis]
+        if image.ndim == 2:
+            image = facenet.to_rgb(image)
+        image = image[:, :, 0:3]
+        return image

     def vectorize(self, image: np.ndarray,
                   face_limit: int = 5) -> List[np.ndarray]:
+        """Gets face embeddings in a single image
+
+        Arguments:
+            image {np.ndarray} -- Image to find embeddings
+
+        Keyword Arguments:
+            face_limit {int} -- max number of faces allowed
+                before image is discarded. (default: {5})
+
+        Returns:
+            List[np.ndarray] -- list of embeddings
+        """
+
         faces: List[Face] = self.detect_encode(image, face_limit)
         vectors = [face.embedding for face in faces]
         return vectors
@@ -114,6 +156,20 @@ def vectorize_all(self,
                       face_limit: int = 5) -> Generator[List[np.ndarray],
                                                         None, None]:
+        """Gets face embeddings from a generator of images
+
+        Arguments:
+            images {Generator[np.ndarray, None, None]} -- images to search
+
+        Keyword Arguments:
+            face_limit {int} -- max number of faces allowed
+                before image is discarded. (default: {5})
+
+        Returns:
+            Generator[List[np.ndarray], None, None] -- generator of lists of
+            embeddings found in each photo
+        """
+
         all_faces: Generator[List[Face], None, None] = self.detect_encode_all(
             images=images, save_memory=True, face_limit=face_limit)
         vectors: Generator[List[np.ndarray], None, None] = (
@@ -125,12 +181,12 @@ def detect_encode(self, image: np.ndarray,
         """Detects faces in an image and encodes them

         Arguments:
-            image {cv2 image (np array)} -- image to find faces and encode
+            image {np.ndarray} -- image to find faces and encode
             face_limit {int} -- Maximum # of faces allowed in image. If over limit returns empty list

         Returns:
-            Face[] -- list of Face objects with embeddings attached
+            List[Face] -- list of Face objects with embeddings attached
         """

         faces: List[Face] = self.detector.find_faces(image, face_limit)
@@ -160,7 +216,7 @@ def detect_encode_all(self,
             of refference to the original image like a url.
(default: {False})

         Returns:
-            Face[] -- List of Face objects with
+            Generator[List[Face]] -- Generator of lists of Face objects in each image
         """

         all_faces: Generator[List[Face], None, None] = self.detector.bulk_find_face(
@@ -182,7 +238,7 @@ def compare_embedding(self,
             distance_metric {int} -- 0 for Euclidian distance and 1 for Cosine similarity (default: {0})

         Returns:
-            bool, float -- returns True if match and distance
+            (bool, float) -- returns True if match and distance
         """

         distance = facenet.distance(embedding_1.reshape(
@@ -303,19 +359,22 @@ def get_all_embeddings(self,
         Returns:
             Faces with embeddings
         """
-
-        for faces in all_faces:
-            prewhitened_images = [facenet.prewhiten(
-                face.image) for face in faces]
+        face_list: List[List[Face]] = list(all_faces)
+        prewhitened_images = [facenet.prewhiten(face.image)
+                              for faces in face_list for face in faces]
+        if face_list:
             feed_dict = {self.images_placeholder: prewhitened_images,
-                        self.phase_train_placeholder: False}
+                         self.phase_train_placeholder: False}
             embed_array = self.sess.run(self.embeddings, feed_dict=feed_dict)
-            for index, face in enumerate(faces):
-                if save_memory:
-                    face.image = None
-                    face.container_image = None
-                face.embedding = embed_array[index]
-            yield faces
+            index = 0
+            for faces in face_list:
+                for face in faces:
+                    if save_memory:
+                        face.image = None
+                        face.container_image = None
+                    face.embedding = embed_array[index]
+                    index += 1
+                yield faces

     def tear_down(self):
         if tf.get_default_session():
diff --git a/setup.py b/setup.py
index 93260ef12..71c8dc053 100644
--- a/setup.py
+++ b/setup.py
@@ -2,7 +2,7 @@
 setup(
     name='facenet_sandberg',
-    version='1.0.7',
+    version='1.0.8',
     description="Face recognition using TensorFlow",
     long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow.
Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From d3121b11f0f79a5e6e3c112fd312b8824e103e49 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Sat, 11 Aug 2018 15:17:34 -0500 Subject: [PATCH 15/50] formating --- .travis.yml | 12 +- facenet_sandberg/__init__.py | 1 - facenet_sandberg/align/align_dataset_mtcnn.py | 8 +- facenet_sandberg/align/detect_face.py | 781 --------------- .../calculate_filtering_metrics.py | 100 +- facenet_sandberg/classifier.py | 193 ++-- facenet_sandberg/compare.py | 130 ++- facenet_sandberg/convert_to_keras.py | 17 +- facenet_sandberg/decode_msceleb_dataset.py | 72 +- facenet_sandberg/download_and_extract.py | 41 +- facenet_sandberg/face.py | 166 ++-- facenet_sandberg/facenet.py | 478 +++++---- facenet_sandberg/freeze_graph.py | 72 +- facenet_sandberg/generate_pairs.py | 110 ++- facenet_sandberg/lfw.py | 68 +- facenet_sandberg/train_softmax.py | 911 ++++++++++++------ facenet_sandberg/train_tripletloss.py | 637 ++++++++---- facenet_sandberg/validate_on_lfw.py | 4 +- setup.py | 2 +- test/batch_norm_test.py | 49 +- test/center_loss_test.py | 94 +- test/restore_test.py | 96 +- test/train_test.py | 95 +- test/triplet_loss_test.py | 62 +- 24 files changed, 2199 insertions(+), 2000 deletions(-) delete mode 100644 facenet_sandberg/align/detect_face.py diff --git a/.travis.yml b/.travis.yml index 0f853c496..0d5a83cd4 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,17 +1,13 @@ language: python sudo: required python: - - "2.7" - - "3.5" -# command to install dependencies + - '3.6' install: -# numpy not using wheel to avoid problem described in -# https://github.com/tensorflow/tensorflow/issues/6968 - pip install --no-binary numpy --upgrade numpy - pip install -r requirements.txt -# command to run tests script: - - export PYTHONPATH=./src:./src/models:./src/align + - >- + export + PYTHONPATH=./facenet_sandberg:./facenet_sandberg/models:./facenet_sandberg/align - python -m unittest discover -s test --pattern=*.py 1>&2 dist: trusty - diff --git a/facenet_sandberg/__init__.py b/facenet_sandberg/__init__.py index efa625274..9c0fa90a1 100644 --- a/facenet_sandberg/__init__.py +++ b/facenet_sandberg/__init__.py @@ -1,2 +1 @@ # flake8: noqa - diff --git a/facenet_sandberg/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py index 07b7cbb69..19630fe32 100644 --- a/facenet_sandberg/align/align_dataset_mtcnn.py +++ b/facenet_sandberg/align/align_dataset_mtcnn.py @@ -128,7 +128,7 @@ def align_multiprocess(self, dataset: List[facenet.PersonClass]): self.align(person) print('Total number of images: %d' % int(self.num_images_total.value)) print('Number of successfully aligned images: %d' % - int(self.num_sucessful.value)) + int(self.num_sucessful.value)) def align(self, person: facenet.PersonClass): # import pdb;pdb.set_trace() @@ -155,7 +155,8 @@ def align(self, person: facenet.PersonClass): text_file, output_class_dir) self.timer.update(int(self.num_sucessful.value)) - def process_image(self, detector, image_path: str, text_file: str, output_class_dir: str): + def process_image(self, detector, image_path: str, + text_file: str, output_class_dir: str): output_filename = self.get_file_name(image_path, output_class_dir) if not os.path.exists(output_filename): try: @@ -198,7 +199,8 @@ def increment_total(self, add_amount: int=1): self.num_images_total.value += add_amount @staticmethod - def fix_image(image: np.ndarray, image_path: str, output_filename: str, text_file: str): + def fix_image(image: np.ndarray, 
image_path: str, + output_filename: str, text_file: str): if image.ndim < 2: print('Unable to align "%s"' % image_path) text_file.write('%s\n' % (output_filename)) diff --git a/facenet_sandberg/align/detect_face.py b/facenet_sandberg/align/detect_face.py deleted file mode 100644 index 7f98ca7fb..000000000 --- a/facenet_sandberg/align/detect_face.py +++ /dev/null @@ -1,781 +0,0 @@ -""" Tensorflow implementation of the face detection / alignment algorithm found at -https://github.com/kpzhang93/MTCNN_face_detection_alignment -""" -# MIT License -# -# Copyright (c) 2016 David Sandberg -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function -from six import string_types, iteritems - -import numpy as np -import tensorflow as tf -#from math import floor -import cv2 -import os - -def layer(op): - """Decorator for composable network layers.""" - - def layer_decorated(self, *args, **kwargs): - # Automatically set a name if not provided. - name = kwargs.setdefault('name', self.get_unique_name(op.__name__)) - # Figure out the layer inputs. - if len(self.terminals) == 0: - raise RuntimeError('No input variables found for layer %s.' % name) - elif len(self.terminals) == 1: - layer_input = self.terminals[0] - else: - layer_input = list(self.terminals) - # Perform the operation and get the output. - layer_output = op(self, layer_input, *args, **kwargs) - # Add to layer LUT. - self.layers[name] = layer_output - # This output is now the input for the next layer. - self.feed(layer_output) - # Return self for chained calls. - return self - - return layer_decorated - -class Network(object): - - def __init__(self, inputs, trainable=True): - # The input nodes for this network - self.inputs = inputs - # The current list of terminal nodes - self.terminals = [] - # Mapping from layer names to layers - self.layers = dict(inputs) - # If true, the resulting variables are set as trainable - self.trainable = trainable - - self.setup() - - def setup(self): - """Construct the network. """ - raise NotImplementedError('Must be implemented by the subclass.') - - def load(self, data_path, session, ignore_missing=False): - """Load network weights. - data_path: The path to the numpy-serialized network weights - session: The current TensorFlow session - ignore_missing: If true, serialized weights for missing layers are ignored. 
- """ - data_dict = np.load(data_path, encoding='latin1').item() #pylint: disable=no-member - - for op_name in data_dict: - with tf.variable_scope(op_name, reuse=True): - for param_name, data in iteritems(data_dict[op_name]): - try: - var = tf.get_variable(param_name) - session.run(var.assign(data)) - except ValueError: - if not ignore_missing: - raise - - def feed(self, *args): - """Set the input(s) for the next operation by replacing the terminal nodes. - The arguments can be either layer names or the actual layers. - """ - assert len(args) != 0 - self.terminals = [] - for fed_layer in args: - if isinstance(fed_layer, string_types): - try: - fed_layer = self.layers[fed_layer] - except KeyError: - raise KeyError('Unknown layer name fed: %s' % fed_layer) - self.terminals.append(fed_layer) - return self - - def get_output(self): - """Returns the current network output.""" - return self.terminals[-1] - - def get_unique_name(self, prefix): - """Returns an index-suffixed unique name for the given prefix. - This is used for auto-generating layer names based on the type-prefix. - """ - ident = sum(t.startswith(prefix) for t, _ in self.layers.items()) + 1 - return '%s_%d' % (prefix, ident) - - def make_var(self, name, shape): - """Creates a new TensorFlow variable.""" - return tf.get_variable(name, shape, trainable=self.trainable) - - def validate_padding(self, padding): - """Verifies that the padding is one of the supported ones.""" - assert padding in ('SAME', 'VALID') - - @layer - def conv(self, - inp, - k_h, - k_w, - c_o, - s_h, - s_w, - name, - relu=True, - padding='SAME', - group=1, - biased=True): - # Verify that the padding is acceptable - self.validate_padding(padding) - # Get the number of channels in the input - c_i = int(inp.get_shape()[-1]) - # Verify that the grouping parameter is valid - assert c_i % group == 0 - assert c_o % group == 0 - # Convolution for a given input and kernel - convolve = lambda i, k: tf.nn.conv2d(i, k, [1, s_h, s_w, 1], padding=padding) - with tf.variable_scope(name) as scope: - kernel = self.make_var('weights', shape=[k_h, k_w, c_i // group, c_o]) - # This is the common-case. Convolve the input without any further complications. - output = convolve(inp, kernel) - # Add the biases - if biased: - biases = self.make_var('biases', [c_o]) - output = tf.nn.bias_add(output, biases) - if relu: - # ReLU non-linearity - output = tf.nn.relu(output, name=scope.name) - return output - - @layer - def prelu(self, inp, name): - with tf.variable_scope(name): - i = int(inp.get_shape()[-1]) - alpha = self.make_var('alpha', shape=(i,)) - output = tf.nn.relu(inp) + tf.multiply(alpha, -tf.nn.relu(-inp)) - return output - - @layer - def max_pool(self, inp, k_h, k_w, s_h, s_w, name, padding='SAME'): - self.validate_padding(padding) - return tf.nn.max_pool(inp, - ksize=[1, k_h, k_w, 1], - strides=[1, s_h, s_w, 1], - padding=padding, - name=name) - - @layer - def fc(self, inp, num_out, name, relu=True): - with tf.variable_scope(name): - input_shape = inp.get_shape() - if input_shape.ndims == 4: - # The input is spatial. Vectorize it first. 
- dim = 1 - for d in input_shape[1:].as_list(): - dim *= int(d) - feed_in = tf.reshape(inp, [-1, dim]) - else: - feed_in, dim = (inp, input_shape[-1].value) - weights = self.make_var('weights', shape=[dim, num_out]) - biases = self.make_var('biases', [num_out]) - op = tf.nn.relu_layer if relu else tf.nn.xw_plus_b - fc = op(feed_in, weights, biases, name=name) - return fc - - - """ - Multi dimensional softmax, - refer to https://github.com/tensorflow/tensorflow/issues/210 - compute softmax along the dimension of target - the native softmax only supports batch_size x dimension - """ - @layer - def softmax(self, target, axis, name=None): - max_axis = tf.reduce_max(target, axis, keepdims=True) - target_exp = tf.exp(target-max_axis) - normalize = tf.reduce_sum(target_exp, axis, keepdims=True) - softmax = tf.div(target_exp, normalize, name) - return softmax - -class PNet(Network): - def setup(self): - (self.feed('data') #pylint: disable=no-value-for-parameter, no-member - .conv(3, 3, 10, 1, 1, padding='VALID', relu=False, name='conv1') - .prelu(name='PReLU1') - .max_pool(2, 2, 2, 2, name='pool1') - .conv(3, 3, 16, 1, 1, padding='VALID', relu=False, name='conv2') - .prelu(name='PReLU2') - .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3') - .prelu(name='PReLU3') - .conv(1, 1, 2, 1, 1, relu=False, name='conv4-1') - .softmax(3,name='prob1')) - - (self.feed('PReLU3') #pylint: disable=no-value-for-parameter - .conv(1, 1, 4, 1, 1, relu=False, name='conv4-2')) - -class RNet(Network): - def setup(self): - (self.feed('data') #pylint: disable=no-value-for-parameter, no-member - .conv(3, 3, 28, 1, 1, padding='VALID', relu=False, name='conv1') - .prelu(name='prelu1') - .max_pool(3, 3, 2, 2, name='pool1') - .conv(3, 3, 48, 1, 1, padding='VALID', relu=False, name='conv2') - .prelu(name='prelu2') - .max_pool(3, 3, 2, 2, padding='VALID', name='pool2') - .conv(2, 2, 64, 1, 1, padding='VALID', relu=False, name='conv3') - .prelu(name='prelu3') - .fc(128, relu=False, name='conv4') - .prelu(name='prelu4') - .fc(2, relu=False, name='conv5-1') - .softmax(1,name='prob1')) - - (self.feed('prelu4') #pylint: disable=no-value-for-parameter - .fc(4, relu=False, name='conv5-2')) - -class ONet(Network): - def setup(self): - (self.feed('data') #pylint: disable=no-value-for-parameter, no-member - .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv1') - .prelu(name='prelu1') - .max_pool(3, 3, 2, 2, name='pool1') - .conv(3, 3, 64, 1, 1, padding='VALID', relu=False, name='conv2') - .prelu(name='prelu2') - .max_pool(3, 3, 2, 2, padding='VALID', name='pool2') - .conv(3, 3, 64, 1, 1, padding='VALID', relu=False, name='conv3') - .prelu(name='prelu3') - .max_pool(2, 2, 2, 2, name='pool3') - .conv(2, 2, 128, 1, 1, padding='VALID', relu=False, name='conv4') - .prelu(name='prelu4') - .fc(256, relu=False, name='conv5') - .prelu(name='prelu5') - .fc(2, relu=False, name='conv6-1') - .softmax(1, name='prob1')) - - (self.feed('prelu5') #pylint: disable=no-value-for-parameter - .fc(4, relu=False, name='conv6-2')) - - (self.feed('prelu5') #pylint: disable=no-value-for-parameter - .fc(10, relu=False, name='conv6-3')) - -def create_mtcnn(sess, model_path): - if not model_path: - model_path,_ = os.path.split(os.path.realpath(__file__)) - - with tf.variable_scope('pnet'): - data = tf.placeholder(tf.float32, (None,None,None,3), 'input') - pnet = PNet({'data':data}) - pnet.load(os.path.join(model_path, 'det1.npy'), sess) - with tf.variable_scope('rnet'): - data = tf.placeholder(tf.float32, (None,24,24,3), 'input') - rnet = 
RNet({'data':data}) - rnet.load(os.path.join(model_path, 'det2.npy'), sess) - with tf.variable_scope('onet'): - data = tf.placeholder(tf.float32, (None,48,48,3), 'input') - onet = ONet({'data':data}) - onet.load(os.path.join(model_path, 'det3.npy'), sess) - - pnet_fun = lambda img : sess.run(('pnet/conv4-2/BiasAdd:0', 'pnet/prob1:0'), feed_dict={'pnet/input:0':img}) - rnet_fun = lambda img : sess.run(('rnet/conv5-2/conv5-2:0', 'rnet/prob1:0'), feed_dict={'rnet/input:0':img}) - onet_fun = lambda img : sess.run(('onet/conv6-2/conv6-2:0', 'onet/conv6-3/conv6-3:0', 'onet/prob1:0'), feed_dict={'onet/input:0':img}) - return pnet_fun, rnet_fun, onet_fun - -def detect_face(img, minsize, pnet, rnet, onet, threshold, factor): - """Detects faces in an image, and returns bounding boxes and points for them. - img: input image - minsize: minimum faces' size - pnet, rnet, onet: caffemodel - threshold: threshold=[th1, th2, th3], th1-3 are three steps's threshold - factor: the factor used to create a scaling pyramid of face sizes to detect in the image. - """ - factor_count=0 - total_boxes=np.empty((0,9)) - points=np.empty(0) - h=img.shape[0] - w=img.shape[1] - minl=np.amin([h, w]) - m=12.0/minsize - minl=minl*m - # create scale pyramid - scales=[] - while minl>=12: - scales += [m*np.power(factor, factor_count)] - minl = minl*factor - factor_count += 1 - - # first stage - for scale in scales: - hs=int(np.ceil(h*scale)) - ws=int(np.ceil(w*scale)) - im_data = imresample(img, (hs, ws)) - im_data = (im_data-127.5)*0.0078125 - img_x = np.expand_dims(im_data, 0) - img_y = np.transpose(img_x, (0,2,1,3)) - out = pnet(img_y) - out0 = np.transpose(out[0], (0,2,1,3)) - out1 = np.transpose(out[1], (0,2,1,3)) - - boxes, _ = generateBoundingBox(out1[0,:,:,1].copy(), out0[0,:,:,:].copy(), scale, threshold[0]) - - # inter-scale nms - pick = nms(boxes.copy(), 0.5, 'Union') - if boxes.size>0 and pick.size>0: - boxes = boxes[pick,:] - total_boxes = np.append(total_boxes, boxes, axis=0) - - numbox = total_boxes.shape[0] - if numbox>0: - pick = nms(total_boxes.copy(), 0.7, 'Union') - total_boxes = total_boxes[pick,:] - regw = total_boxes[:,2]-total_boxes[:,0] - regh = total_boxes[:,3]-total_boxes[:,1] - qq1 = total_boxes[:,0]+total_boxes[:,5]*regw - qq2 = total_boxes[:,1]+total_boxes[:,6]*regh - qq3 = total_boxes[:,2]+total_boxes[:,7]*regw - qq4 = total_boxes[:,3]+total_boxes[:,8]*regh - total_boxes = np.transpose(np.vstack([qq1, qq2, qq3, qq4, total_boxes[:,4]])) - total_boxes = rerec(total_boxes.copy()) - total_boxes[:,0:4] = np.fix(total_boxes[:,0:4]).astype(np.int32) - dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(total_boxes.copy(), w, h) - - numbox = total_boxes.shape[0] - if numbox>0: - # second stage - tempimg = np.zeros((24,24,3,numbox)) - for k in range(0,numbox): - tmp = np.zeros((int(tmph[k]),int(tmpw[k]),3)) - tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:] - if tmp.shape[0]>0 and tmp.shape[1]>0 or tmp.shape[0]==0 and tmp.shape[1]==0: - tempimg[:,:,:,k] = imresample(tmp, (24, 24)) - else: - return np.empty() - tempimg = (tempimg-127.5)*0.0078125 - tempimg1 = np.transpose(tempimg, (3,1,0,2)) - out = rnet(tempimg1) - out0 = np.transpose(out[0]) - out1 = np.transpose(out[1]) - score = out1[1,:] - ipass = np.where(score>threshold[1]) - total_boxes = np.hstack([total_boxes[ipass[0],0:4].copy(), np.expand_dims(score[ipass].copy(),1)]) - mv = out0[:,ipass[0]] - if total_boxes.shape[0]>0: - pick = nms(total_boxes, 0.7, 'Union') - total_boxes = total_boxes[pick,:] - total_boxes = 
bbreg(total_boxes.copy(), np.transpose(mv[:,pick])) - total_boxes = rerec(total_boxes.copy()) - - numbox = total_boxes.shape[0] - if numbox>0: - # third stage - total_boxes = np.fix(total_boxes).astype(np.int32) - dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(total_boxes.copy(), w, h) - tempimg = np.zeros((48,48,3,numbox)) - for k in range(0,numbox): - tmp = np.zeros((int(tmph[k]),int(tmpw[k]),3)) - tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:] - if tmp.shape[0]>0 and tmp.shape[1]>0 or tmp.shape[0]==0 and tmp.shape[1]==0: - tempimg[:,:,:,k] = imresample(tmp, (48, 48)) - else: - return np.empty() - tempimg = (tempimg-127.5)*0.0078125 - tempimg1 = np.transpose(tempimg, (3,1,0,2)) - out = onet(tempimg1) - out0 = np.transpose(out[0]) - out1 = np.transpose(out[1]) - out2 = np.transpose(out[2]) - score = out2[1,:] - points = out1 - ipass = np.where(score>threshold[2]) - points = points[:,ipass[0]] - total_boxes = np.hstack([total_boxes[ipass[0],0:4].copy(), np.expand_dims(score[ipass].copy(),1)]) - mv = out0[:,ipass[0]] - - w = total_boxes[:,2]-total_boxes[:,0]+1 - h = total_boxes[:,3]-total_boxes[:,1]+1 - points[0:5,:] = np.tile(w,(5, 1))*points[0:5,:] + np.tile(total_boxes[:,0],(5, 1))-1 - points[5:10,:] = np.tile(h,(5, 1))*points[5:10,:] + np.tile(total_boxes[:,1],(5, 1))-1 - if total_boxes.shape[0]>0: - total_boxes = bbreg(total_boxes.copy(), np.transpose(mv)) - pick = nms(total_boxes.copy(), 0.7, 'Min') - total_boxes = total_boxes[pick,:] - points = points[:,pick] - - return total_boxes, points - - -def bulk_detect_face(images, detection_window_size_ratio, pnet, rnet, onet, threshold, factor): - """Detects faces in a list of images - images: list containing input images - detection_window_size_ratio: ratio of minimum face size to smallest image dimension - pnet, rnet, onet: caffemodel - threshold: threshold=[th1 th2 th3], th1-3 are three steps's threshold [0-1] - factor: the factor used to create a scaling pyramid of face sizes to detect in the image. 
- """ - all_scales = [None] * len(images) - images_with_boxes = [None] * len(images) - - for i in range(len(images)): - images_with_boxes[i] = {'total_boxes': np.empty((0, 9))} - - # create scale pyramid - for index, img in enumerate(images): - all_scales[index] = [] - h = img.shape[0] - w = img.shape[1] - minsize = int(detection_window_size_ratio * np.minimum(w, h)) - factor_count = 0 - minl = np.amin([h, w]) - if minsize <= 12: - minsize = 12 - - m = 12.0 / minsize - minl = minl * m - while minl >= 12: - all_scales[index].append(m * np.power(factor, factor_count)) - minl = minl * factor - factor_count += 1 - - # # # # # # # # # # # # # - # first stage - fast proposal network (pnet) to obtain face candidates - # # # # # # # # # # # # # - - images_obj_per_resolution = {} - - # TODO: use some type of rounding to number module 8 to increase probability that pyramid images will have the same resolution across input images - - for index, scales in enumerate(all_scales): - h = images[index].shape[0] - w = images[index].shape[1] - - for scale in scales: - hs = int(np.ceil(h * scale)) - ws = int(np.ceil(w * scale)) - - if (ws, hs) not in images_obj_per_resolution: - images_obj_per_resolution[(ws, hs)] = [] - - im_data = imresample(images[index], (hs, ws)) - im_data = (im_data - 127.5) * 0.0078125 - img_y = np.transpose(im_data, (1, 0, 2)) # caffe uses different dimensions ordering - images_obj_per_resolution[(ws, hs)].append({'scale': scale, 'image': img_y, 'index': index}) - - for resolution in images_obj_per_resolution: - images_per_resolution = [i['image'] for i in images_obj_per_resolution[resolution]] - outs = pnet(images_per_resolution) - - for index in range(len(outs[0])): - scale = images_obj_per_resolution[resolution][index]['scale'] - image_index = images_obj_per_resolution[resolution][index]['index'] - out0 = np.transpose(outs[0][index], (1, 0, 2)) - out1 = np.transpose(outs[1][index], (1, 0, 2)) - - boxes, _ = generateBoundingBox(out1[:, :, 1].copy(), out0[:, :, :].copy(), scale, threshold[0]) - - # inter-scale nms - pick = nms(boxes.copy(), 0.5, 'Union') - if boxes.size > 0 and pick.size > 0: - boxes = boxes[pick, :] - images_with_boxes[image_index]['total_boxes'] = np.append(images_with_boxes[image_index]['total_boxes'], - boxes, - axis=0) - - for index, image_obj in enumerate(images_with_boxes): - numbox = image_obj['total_boxes'].shape[0] - if numbox > 0: - h = images[index].shape[0] - w = images[index].shape[1] - pick = nms(image_obj['total_boxes'].copy(), 0.7, 'Union') - image_obj['total_boxes'] = image_obj['total_boxes'][pick, :] - regw = image_obj['total_boxes'][:, 2] - image_obj['total_boxes'][:, 0] - regh = image_obj['total_boxes'][:, 3] - image_obj['total_boxes'][:, 1] - qq1 = image_obj['total_boxes'][:, 0] + image_obj['total_boxes'][:, 5] * regw - qq2 = image_obj['total_boxes'][:, 1] + image_obj['total_boxes'][:, 6] * regh - qq3 = image_obj['total_boxes'][:, 2] + image_obj['total_boxes'][:, 7] * regw - qq4 = image_obj['total_boxes'][:, 3] + image_obj['total_boxes'][:, 8] * regh - image_obj['total_boxes'] = np.transpose(np.vstack([qq1, qq2, qq3, qq4, image_obj['total_boxes'][:, 4]])) - image_obj['total_boxes'] = rerec(image_obj['total_boxes'].copy()) - image_obj['total_boxes'][:, 0:4] = np.fix(image_obj['total_boxes'][:, 0:4]).astype(np.int32) - dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(image_obj['total_boxes'].copy(), w, h) - - numbox = image_obj['total_boxes'].shape[0] - tempimg = np.zeros((24, 24, 3, numbox)) - - if numbox > 0: - for k in range(0, numbox): - tmp = 
np.zeros((int(tmph[k]), int(tmpw[k]), 3)) - tmp[dy[k] - 1:edy[k], dx[k] - 1:edx[k], :] = images[index][y[k] - 1:ey[k], x[k] - 1:ex[k], :] - if tmp.shape[0] > 0 and tmp.shape[1] > 0 or tmp.shape[0] == 0 and tmp.shape[1] == 0: - tempimg[:, :, :, k] = imresample(tmp, (24, 24)) - else: - return np.empty() - - tempimg = (tempimg - 127.5) * 0.0078125 - image_obj['rnet_input'] = np.transpose(tempimg, (3, 1, 0, 2)) - - # # # # # # # # # # # # # - # second stage - refinement of face candidates with rnet - # # # # # # # # # # # # # - - bulk_rnet_input = np.empty((0, 24, 24, 3)) - for index, image_obj in enumerate(images_with_boxes): - if 'rnet_input' in image_obj: - bulk_rnet_input = np.append(bulk_rnet_input, image_obj['rnet_input'], axis=0) - - out = rnet(bulk_rnet_input) - out0 = np.transpose(out[0]) - out1 = np.transpose(out[1]) - score = out1[1, :] - - i = 0 - for index, image_obj in enumerate(images_with_boxes): - if 'rnet_input' not in image_obj: - continue - - rnet_input_count = image_obj['rnet_input'].shape[0] - score_per_image = score[i:i + rnet_input_count] - out0_per_image = out0[:, i:i + rnet_input_count] - - ipass = np.where(score_per_image > threshold[1]) - image_obj['total_boxes'] = np.hstack([image_obj['total_boxes'][ipass[0], 0:4].copy(), - np.expand_dims(score_per_image[ipass].copy(), 1)]) - - mv = out0_per_image[:, ipass[0]] - - if image_obj['total_boxes'].shape[0] > 0: - h = images[index].shape[0] - w = images[index].shape[1] - pick = nms(image_obj['total_boxes'], 0.7, 'Union') - image_obj['total_boxes'] = image_obj['total_boxes'][pick, :] - image_obj['total_boxes'] = bbreg(image_obj['total_boxes'].copy(), np.transpose(mv[:, pick])) - image_obj['total_boxes'] = rerec(image_obj['total_boxes'].copy()) - - numbox = image_obj['total_boxes'].shape[0] - - if numbox > 0: - tempimg = np.zeros((48, 48, 3, numbox)) - image_obj['total_boxes'] = np.fix(image_obj['total_boxes']).astype(np.int32) - dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(image_obj['total_boxes'].copy(), w, h) - - for k in range(0, numbox): - tmp = np.zeros((int(tmph[k]), int(tmpw[k]), 3)) - tmp[dy[k] - 1:edy[k], dx[k] - 1:edx[k], :] = images[index][y[k] - 1:ey[k], x[k] - 1:ex[k], :] - if tmp.shape[0] > 0 and tmp.shape[1] > 0 or tmp.shape[0] == 0 and tmp.shape[1] == 0: - tempimg[:, :, :, k] = imresample(tmp, (48, 48)) - else: - return np.empty() - tempimg = (tempimg - 127.5) * 0.0078125 - image_obj['onet_input'] = np.transpose(tempimg, (3, 1, 0, 2)) - - i += rnet_input_count - - # # # # # # # # # # # # # - # third stage - further refinement and facial landmarks positions with onet - # # # # # # # # # # # # # - - bulk_onet_input = np.empty((0, 48, 48, 3)) - for index, image_obj in enumerate(images_with_boxes): - if 'onet_input' in image_obj: - bulk_onet_input = np.append(bulk_onet_input, image_obj['onet_input'], axis=0) - - out = onet(bulk_onet_input) - - out0 = np.transpose(out[0]) - out1 = np.transpose(out[1]) - out2 = np.transpose(out[2]) - score = out2[1, :] - points = out1 - - i = 0 - ret = [] - for index, image_obj in enumerate(images_with_boxes): - if 'onet_input' not in image_obj: - ret.append(None) - continue - - onet_input_count = image_obj['onet_input'].shape[0] - - out0_per_image = out0[:, i:i + onet_input_count] - score_per_image = score[i:i + onet_input_count] - points_per_image = points[:, i:i + onet_input_count] - - ipass = np.where(score_per_image > threshold[2]) - points_per_image = points_per_image[:, ipass[0]] - - image_obj['total_boxes'] = np.hstack([image_obj['total_boxes'][ipass[0], 0:4].copy(), 
- np.expand_dims(score_per_image[ipass].copy(), 1)]) - mv = out0_per_image[:, ipass[0]] - - w = image_obj['total_boxes'][:, 2] - image_obj['total_boxes'][:, 0] + 1 - h = image_obj['total_boxes'][:, 3] - image_obj['total_boxes'][:, 1] + 1 - points_per_image[0:5, :] = np.tile(w, (5, 1)) * points_per_image[0:5, :] + np.tile( - image_obj['total_boxes'][:, 0], (5, 1)) - 1 - points_per_image[5:10, :] = np.tile(h, (5, 1)) * points_per_image[5:10, :] + np.tile( - image_obj['total_boxes'][:, 1], (5, 1)) - 1 - - if image_obj['total_boxes'].shape[0] > 0: - image_obj['total_boxes'] = bbreg(image_obj['total_boxes'].copy(), np.transpose(mv)) - pick = nms(image_obj['total_boxes'].copy(), 0.7, 'Min') - image_obj['total_boxes'] = image_obj['total_boxes'][pick, :] - points_per_image = points_per_image[:, pick] - - ret.append((image_obj['total_boxes'], points_per_image)) - else: - ret.append(None) - - i += onet_input_count - - return ret - - -# function [boundingbox] = bbreg(boundingbox,reg) -def bbreg(boundingbox,reg): - """Calibrate bounding boxes""" - if reg.shape[1]==1: - reg = np.reshape(reg, (reg.shape[2], reg.shape[3])) - - w = boundingbox[:,2]-boundingbox[:,0]+1 - h = boundingbox[:,3]-boundingbox[:,1]+1 - b1 = boundingbox[:,0]+reg[:,0]*w - b2 = boundingbox[:,1]+reg[:,1]*h - b3 = boundingbox[:,2]+reg[:,2]*w - b4 = boundingbox[:,3]+reg[:,3]*h - boundingbox[:,0:4] = np.transpose(np.vstack([b1, b2, b3, b4 ])) - return boundingbox - -def generateBoundingBox(imap, reg, scale, t): - """Use heatmap to generate bounding boxes""" - stride=2 - cellsize=12 - - imap = np.transpose(imap) - dx1 = np.transpose(reg[:,:,0]) - dy1 = np.transpose(reg[:,:,1]) - dx2 = np.transpose(reg[:,:,2]) - dy2 = np.transpose(reg[:,:,3]) - y, x = np.where(imap >= t) - if y.shape[0]==1: - dx1 = np.flipud(dx1) - dy1 = np.flipud(dy1) - dx2 = np.flipud(dx2) - dy2 = np.flipud(dy2) - score = imap[(y,x)] - reg = np.transpose(np.vstack([ dx1[(y,x)], dy1[(y,x)], dx2[(y,x)], dy2[(y,x)] ])) - if reg.size==0: - reg = np.empty((0,3)) - bb = np.transpose(np.vstack([y,x])) - q1 = np.fix((stride*bb+1)/scale) - q2 = np.fix((stride*bb+cellsize-1+1)/scale) - boundingbox = np.hstack([q1, q2, np.expand_dims(score,1), reg]) - return boundingbox, reg - -# function pick = nms(boxes,threshold,type) -def nms(boxes, threshold, method): - if boxes.size==0: - return np.empty((0,3)) - x1 = boxes[:,0] - y1 = boxes[:,1] - x2 = boxes[:,2] - y2 = boxes[:,3] - s = boxes[:,4] - area = (x2-x1+1) * (y2-y1+1) - I = np.argsort(s) - pick = np.zeros_like(s, dtype=np.int16) - counter = 0 - while I.size>0: - i = I[-1] - pick[counter] = i - counter += 1 - idx = I[0:-1] - xx1 = np.maximum(x1[i], x1[idx]) - yy1 = np.maximum(y1[i], y1[idx]) - xx2 = np.minimum(x2[i], x2[idx]) - yy2 = np.minimum(y2[i], y2[idx]) - w = np.maximum(0.0, xx2-xx1+1) - h = np.maximum(0.0, yy2-yy1+1) - inter = w * h - if method is 'Min': - o = inter / np.minimum(area[i], area[idx]) - else: - o = inter / (area[i] + area[idx] - inter) - I = I[np.where(o<=threshold)] - pick = pick[0:counter] - return pick - -# function [dy edy dx edx y ey x ex tmpw tmph] = pad(total_boxes,w,h) -def pad(total_boxes, w, h): - """Compute the padding coordinates (pad the bounding boxes to square)""" - tmpw = (total_boxes[:,2]-total_boxes[:,0]+1).astype(np.int32) - tmph = (total_boxes[:,3]-total_boxes[:,1]+1).astype(np.int32) - numbox = total_boxes.shape[0] - - dx = np.ones((numbox), dtype=np.int32) - dy = np.ones((numbox), dtype=np.int32) - edx = tmpw.copy().astype(np.int32) - edy = tmph.copy().astype(np.int32) - - x = 
total_boxes[:,0].copy().astype(np.int32) - y = total_boxes[:,1].copy().astype(np.int32) - ex = total_boxes[:,2].copy().astype(np.int32) - ey = total_boxes[:,3].copy().astype(np.int32) - - tmp = np.where(ex>w) - edx.flat[tmp] = np.expand_dims(-ex[tmp]+w+tmpw[tmp],1) - ex[tmp] = w - - tmp = np.where(ey>h) - edy.flat[tmp] = np.expand_dims(-ey[tmp]+h+tmph[tmp],1) - ey[tmp] = h - - tmp = np.where(x<1) - dx.flat[tmp] = np.expand_dims(2-x[tmp],1) - x[tmp] = 1 - - tmp = np.where(y<1) - dy.flat[tmp] = np.expand_dims(2-y[tmp],1) - y[tmp] = 1 - - return dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph - -# function [bboxA] = rerec(bboxA) -def rerec(bboxA): - """Convert bboxA to square.""" - h = bboxA[:,3]-bboxA[:,1] - w = bboxA[:,2]-bboxA[:,0] - l = np.maximum(w, h) - bboxA[:,0] = bboxA[:,0]+w*0.5-l*0.5 - bboxA[:,1] = bboxA[:,1]+h*0.5-l*0.5 - bboxA[:,2:4] = bboxA[:,0:2] + np.transpose(np.tile(l,(2,1))) - return bboxA - -def imresample(img, sz): - im_data = cv2.resize(img, (sz[1], sz[0]), interpolation=cv2.INTER_AREA) #@UndefinedVariable - return im_data - - # This method is kept for debugging purpose -# h=img.shape[0] -# w=img.shape[1] -# hs, ws = sz -# dx = float(w) / ws -# dy = float(h) / hs -# im_data = np.zeros((hs,ws,3)) -# for a1 in range(0,hs): -# for a2 in range(0,ws): -# for a3 in range(0,3): -# im_data[a1,a2,a3] = img[int(floor(a1*dy)),int(floor(a2*dx)),a3] -# return im_data - diff --git a/facenet_sandberg/calculate_filtering_metrics.py b/facenet_sandberg/calculate_filtering_metrics.py index 6f70a3afb..2f864b015 100644 --- a/facenet_sandberg/calculate_filtering_metrics.py +++ b/facenet_sandberg/calculate_filtering_metrics.py @@ -1,19 +1,19 @@ """Calculate filtering metrics for a dataset and store in a .hdf file. """ # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -22,58 +22,58 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
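# With the in-repo detect_face.py deleted above, detection is served by the
# pip-installed `mtcnn` package that setup.py lists and that face.py already
# imports. A minimal sketch of the replacement call path, assuming an RGB
# uint8 image; the file name below is illustrative:
import cv2
from mtcnn.mtcnn import MTCNN

detector = MTCNN()
image = cv2.cvtColor(cv2.imread('photo.jpg'), cv2.COLOR_BGR2RGB)
for result in detector.detect_faces(image):
    # 'box' is [x, y, width, height] -- the same layout face.py's
    # find_faces() and fit_bounding_box() consume.
    x, y, dx, dy = result['box']
    print(result['confidence'], x, y, dx, dy)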
-from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -import tensorflow as tf -import numpy as np import argparse -from facenet_sandberg import facenet +import math import os import sys import time + import h5py -import math -from tensorflow.python.platform import gfile +import numpy as np +import tensorflow as tf +from facenet_sandberg import facenet from six import iteritems +from tensorflow.python.platform import gfile + def main(args): dataset = facenet.get_dataset(args.dataset_dir) - + with tf.Graph().as_default(): - + # Get a list of image paths and their labels image_list, label_list = facenet.get_image_paths_and_labels(dataset) nrof_images = len(image_list) image_indices = range(nrof_images) image_batch, label_batch = facenet.read_and_augment_data(image_list, - image_indices, args.image_size, args.batch_size, None, - False, False, False, nrof_preprocess_threads=4, shuffle=False) - + image_indices, args.image_size, args.batch_size, None, + False, False, False, nrof_preprocess_threads=4, shuffle=False) + model_exp = os.path.expanduser(args.model_file) - with gfile.FastGFile(model_exp,'rb') as f: + with gfile.FastGFile(model_exp, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) - input_map={'input':image_batch, 'phase_train':False} + input_map = {'input': image_batch, 'phase_train': False} tf.import_graph_def(graph_def, input_map=input_map, name='net') - + embeddings = tf.get_default_graph().get_tensor_by_name("net/embeddings:0") with tf.Session() as sess: tf.train.start_queue_runners(sess=sess) - + embedding_size = int(embeddings.get_shape()[1]) nrof_batches = int(math.ceil(nrof_images / args.batch_size)) nrof_classes = len(dataset) label_array = np.array(label_list) class_names = [cls.name for cls in dataset] - nrof_examples_per_class = [ len(cls.image_paths) for cls in dataset ] + nrof_examples_per_class = [len(cls.image_paths) for cls in dataset] class_variance = np.zeros((nrof_classes,)) - class_center = np.zeros((nrof_classes,embedding_size)) - distance_to_center = np.ones((len(label_list),))*np.NaN - emb_array = np.zeros((0,embedding_size)) + class_center = np.zeros((nrof_classes, embedding_size)) + distance_to_center = np.ones((len(label_list),)) * np.NaN + emb_array = np.zeros((0, embedding_size)) idx_array = np.zeros((0,), dtype=np.int32) lab_array = np.zeros((0,), dtype=np.int32) index_arr = np.append(0, np.cumsum(nrof_examples_per_class)) @@ -84,45 +84,59 @@ def main(args): idx_array = np.append(idx_array, idx, axis=0) lab_array = np.append(lab_array, label_array[idx], axis=0) for cls in set(lab_array): - cls_idx = np.where(lab_array==cls)[0] - if cls_idx.shape[0]==nrof_examples_per_class[cls]: + cls_idx = np.where(lab_array == cls)[0] + if cls_idx.shape[0] == nrof_examples_per_class[cls]: # We have calculated all the embeddings for this class i2 = np.argsort(idx_array[cls_idx]) - emb_class = emb_array[cls_idx,:] - emb_sort = emb_class[i2,:] + emb_class = emb_array[cls_idx, :] + emb_sort = emb_class[i2, :] center = np.mean(emb_sort, axis=0) diffs = emb_sort - center dists_sqr = np.sum(np.square(diffs), axis=1) class_variance[cls] = np.mean(dists_sqr) - class_center[cls,:] = center - distance_to_center[index_arr[cls]:index_arr[cls+1]] = np.sqrt(dists_sqr) + class_center[cls, :] = center + distance_to_center[index_arr[cls]: index_arr[cls + 1]] = np.sqrt(dists_sqr) emb_array = np.delete(emb_array, cls_idx, axis=0) idx_array = 
np.delete(idx_array, cls_idx, axis=0) lab_array = np.delete(lab_array, cls_idx, axis=0) - - print('Batch %d in %.3f seconds' % (i, time.time()-t)) - + print('Batch %d in %.3f seconds' % (i, time.time() - t)) + print('Writing filtering data to %s' % args.data_file_name) - mdict = {'class_names':class_names, 'image_list':image_list, 'label_list':label_list, 'distance_to_center':distance_to_center } + mdict = { + 'class_names': class_names, + 'image_list': image_list, + 'label_list': label_list, + 'distance_to_center': distance_to_center} with h5py.File(args.data_file_name, 'w') as f: for key, value in iteritems(mdict): f.create_dataset(key, data=value) - + + def parse_arguments(argv): parser = argparse.ArgumentParser() - - parser.add_argument('dataset_dir', type=str, + + parser.add_argument( + 'dataset_dir', + type=str, help='Path to the directory containing aligned dataset.') - parser.add_argument('model_file', type=str, + parser.add_argument( + 'model_file', + type=str, help='File containing the frozen model in protobuf (.pb) format to use for feature extraction.') - parser.add_argument('data_file_name', type=str, + parser.add_argument( + 'data_file_name', + type=str, help='The name of the file to store filtering data in.') parser.add_argument('--image_size', type=int, - help='Image size.', default=160) - parser.add_argument('--batch_size', type=int, - help='Number of images to process in a batch.', default=90) + help='Image size.', default=160) + parser.add_argument( + '--batch_size', + type=int, + help='Number of images to process in a batch.', + default=90) return parser.parse_args(argv) + if __name__ == '__main__': main(parse_arguments(sys.argv[1:])) diff --git a/facenet_sandberg/classifier.py b/facenet_sandberg/classifier.py index 1eb79455c..076856cbb 100644 --- a/facenet_sandberg/classifier.py +++ b/facenet_sandberg/classifier.py @@ -1,19 +1,19 @@ """An example of how to use your own dataset to train a classifier that recognizes people. """ # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -22,149 +22,196 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
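# calculate_filtering_metrics.py above boils down to a per-class reduction:
# average each class's embeddings to get its center, then record every
# image's L2 distance to its own class center. A NumPy-only sketch of that
# reduction, assuming `embeddings` has shape (num_images, embedding_size)
# and `labels` holds one integer class id per image:
import numpy as np

def distances_to_class_centers(embeddings, labels):
    distances = np.zeros(len(labels))
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        center = embeddings[idx].mean(axis=0)  # class center
        diffs = embeddings[idx] - center
        distances[idx] = np.sqrt(np.sum(np.square(diffs), axis=1))
    return distances

# Images with unusually large distances sit far from their identity's
# center and are candidates for filtering before training.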
-from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -import tensorflow as tf -import numpy as np import argparse -from facenet_sandberg import facenet -import os -import sys import math +import os import pickle +import sys + +import numpy as np +import tensorflow as tf +from facenet_sandberg import facenet from sklearn.svm import SVC + def main(args): - + with tf.Graph().as_default(): - + with tf.Session() as sess: - + np.random.seed(seed=args.seed) - + if args.use_split_dataset: dataset_tmp = facenet.get_dataset(args.data_dir) - train_set, test_set = split_dataset(dataset_tmp, args.min_nrof_images_per_class, args.nrof_train_images_per_class) - if (args.mode=='TRAIN'): + train_set, test_set = split_dataset( + dataset_tmp, args.min_nrof_images_per_class, args.nrof_train_images_per_class) + if (args.mode == 'TRAIN'): dataset = train_set - elif (args.mode=='CLASSIFY'): + elif (args.mode == 'CLASSIFY'): dataset = test_set else: dataset = facenet.get_dataset(args.data_dir) # Check that there are at least one training image per class for cls in dataset: - assert(len(cls.image_paths)>0, 'There must be at least one image for each class in the dataset') + assert(len(cls.image_paths) > 0, + 'There must be at least one image for each class in the dataset') - paths, labels = facenet.get_image_paths_and_labels(dataset) - + print('Number of classes: %d' % len(dataset)) print('Number of images: %d' % len(paths)) - + # Load the model print('Loading feature extraction model') facenet.load_model(args.model) - + # Get input and output tensors images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0") embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0") phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0") embedding_size = embeddings.get_shape()[1] - + # Run forward pass to calculate embeddings print('Calculating features for images') nrof_images = len(paths) - nrof_batches_per_epoch = int(math.ceil(1.0*nrof_images / args.batch_size)) + nrof_batches_per_epoch = int( + math.ceil(1.0 * nrof_images / args.batch_size)) emb_array = np.zeros((nrof_images, embedding_size)) for i in range(nrof_batches_per_epoch): - start_index = i*args.batch_size - end_index = min((i+1)*args.batch_size, nrof_images) + start_index = i * args.batch_size + end_index = min((i + 1) * args.batch_size, nrof_images) paths_batch = paths[start_index:end_index] - images = facenet.load_data(paths_batch, False, False, args.image_size) - feed_dict = { images_placeholder:images, phase_train_placeholder:False } - emb_array[start_index:end_index,:] = sess.run(embeddings, feed_dict=feed_dict) - - classifier_filename_exp = os.path.expanduser(args.classifier_filename) + images = facenet.load_data( + paths_batch, False, False, args.image_size) + feed_dict = { + images_placeholder: images, + phase_train_placeholder: False} + emb_array[start_index:end_index, :] = sess.run( + embeddings, feed_dict=feed_dict) - if (args.mode=='TRAIN'): + classifier_filename_exp = os.path.expanduser( + args.classifier_filename) + + if (args.mode == 'TRAIN'): # Train classifier print('Training classifier') model = SVC(kernel='linear', probability=True) model.fit(emb_array, labels) - + # Create a list of class names - class_names = [ cls.name.replace('_', ' ') for cls in dataset] + class_names = [cls.name.replace('_', ' ') for cls in dataset] # Saving classifier model with 
open(classifier_filename_exp, 'wb') as outfile: pickle.dump((model, class_names), outfile) - print('Saved classifier model to file "%s"' % classifier_filename_exp) - - elif (args.mode=='CLASSIFY'): + print( + 'Saved classifier model to file "%s"' % + classifier_filename_exp) + + elif (args.mode == 'CLASSIFY'): # Classify images print('Testing classifier') with open(classifier_filename_exp, 'rb') as infile: (model, class_names) = pickle.load(infile) - print('Loaded classifier model from file "%s"' % classifier_filename_exp) + print( + 'Loaded classifier model from file "%s"' % + classifier_filename_exp) predictions = model.predict_proba(emb_array) best_class_indices = np.argmax(predictions, axis=1) - best_class_probabilities = predictions[np.arange(len(best_class_indices)), best_class_indices] - + best_class_probabilities = predictions[np.arange( + len(best_class_indices)), best_class_indices] + for i in range(len(best_class_indices)): - print('%4d %s: %.3f' % (i, class_names[best_class_indices[i]], best_class_probabilities[i])) - + print('%4d %s: %.3f' % (i, + class_names[best_class_indices[i]], + best_class_probabilities[i])) + accuracy = np.mean(np.equal(best_class_indices, labels)) print('Accuracy: %.3f' % accuracy) - - -def split_dataset(dataset, min_nrof_images_per_class, nrof_train_images_per_class): + + +def split_dataset( + dataset, + min_nrof_images_per_class, + nrof_train_images_per_class): train_set = [] test_set = [] for cls in dataset: paths = cls.image_paths # Remove classes with less than min_nrof_images_per_class - if len(paths)>=min_nrof_images_per_class: + if len(paths) >= min_nrof_images_per_class: np.random.shuffle(paths) - train_set.append(facenet.PersonClass(cls.name, paths[:nrof_train_images_per_class])) - test_set.append(facenet.PersonClass(cls.name, paths[nrof_train_images_per_class:])) + train_set.append(facenet.PersonClass( + cls.name, paths[:nrof_train_images_per_class])) + test_set.append(facenet.PersonClass( + cls.name, paths[nrof_train_images_per_class:])) return train_set, test_set - + def parse_arguments(argv): parser = argparse.ArgumentParser() - - parser.add_argument('mode', type=str, choices=['TRAIN', 'CLASSIFY'], - help='Indicates if a new classifier should be trained or a classification ' + - 'model should be used for classification', default='CLASSIFY') - parser.add_argument('data_dir', type=str, + + parser.add_argument( + 'mode', + type=str, + choices=[ + 'TRAIN', + 'CLASSIFY'], + help='Indicates if a new classifier should be trained or a classification ' + + 'model should be used for classification', + default='CLASSIFY') + parser.add_argument( + 'data_dir', + type=str, help='Path to the data directory containing aligned LFW face patches.') - parser.add_argument('model', type=str, + parser.add_argument( + 'model', + type=str, help='Could be either a directory containing the meta_file and ckpt_file or a model protobuf (.pb) file') - parser.add_argument('classifier_filename', - help='Classifier model file name as a pickle (.pkl) file. ' + + parser.add_argument( + 'classifier_filename', + help='Classifier model file name as a pickle (.pkl) file. ' + 'For training this is the output and for classification this is an input.') - parser.add_argument('--use_split_dataset', - help='Indicates that the dataset specified by data_dir should be split into a training and test set. 
' + - 'Otherwise a separate test set can be specified using the test_data_dir option.', action='store_true') - parser.add_argument('--test_data_dir', type=str, + parser.add_argument( + '--use_split_dataset', + help='Indicates that the dataset specified by data_dir should be split into a training and test set. ' + + 'Otherwise a separate test set can be specified using the test_data_dir option.', + action='store_true') + parser.add_argument( + '--test_data_dir', + type=str, help='Path to the test data directory containing aligned images used for testing.') - parser.add_argument('--batch_size', type=int, - help='Number of images to process in a batch.', default=90) - parser.add_argument('--image_size', type=int, - help='Image size (height, width) in pixels.', default=160) + parser.add_argument( + '--batch_size', + type=int, + help='Number of images to process in a batch.', + default=90) + parser.add_argument( + '--image_size', + type=int, + help='Image size (height, width) in pixels.', + default=160) parser.add_argument('--seed', type=int, - help='Random seed.', default=666) - parser.add_argument('--min_nrof_images_per_class', type=int, - help='Only include classes with at least this number of images in the dataset', default=20) - parser.add_argument('--nrof_train_images_per_class', type=int, - help='Use this number of images from each class for training and the rest for testing', default=10) - + help='Random seed.', default=666) + parser.add_argument( + '--min_nrof_images_per_class', + type=int, + help='Only include classes with at least this number of images in the dataset', + default=20) + parser.add_argument( + '--nrof_train_images_per_class', + type=int, + help='Use this number of images from each class for training and the rest for testing', + default=10) + return parser.parse_args(argv) + if __name__ == '__main__': main(parse_arguments(sys.argv[1:])) diff --git a/facenet_sandberg/compare.py b/facenet_sandberg/compare.py index c7d375327..7b96613c9 100644 --- a/facenet_sandberg/compare.py +++ b/facenet_sandberg/compare.py @@ -1,19 +1,19 @@ """Performs face alignment and calculates L2 distance between the embeddings of images.""" # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -22,46 +22,52 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
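(Aside: the CLASSIFY branch above reads back exactly what the TRAIN branch pickles, a (model, class_names) tuple. A minimal sketch of reusing that classifier outside the script; the pickle path is hypothetical and emb_array is a random stand-in for a real embedding from the model's "embeddings:0" tensor, whose size is 128 or 512 depending on the checkpoint.)

import pickle

import numpy as np

with open('my_classifier.pkl', 'rb') as infile:  # hypothetical path written by TRAIN mode
    model, class_names = pickle.load(infile)

emb_array = np.random.rand(1, 512)  # stand-in for one real embedding
predictions = model.predict_proba(emb_array)
best = int(np.argmax(predictions, axis=1)[0])
print('%s: %.3f' % (class_names[best], predictions[0, best]))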
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
+from __future__ import absolute_import, division, print_function
 
-from scipy import misc
-import tensorflow as tf
-import numpy as np
-import sys
-import os
-import copy
 import argparse
+import copy
+import os
+import sys
+
+import numpy as np
+import tensorflow as tf
 from facenet_sandberg import facenet
 from facenet_sandberg.align import detect_face
+from scipy import misc
+
 
 def main(args):
 
-    images = load_and_align_data(args.image_files, args.image_size, args.margin, args.gpu_memory_fraction)
+    images = load_and_align_data(
+        args.image_files,
+        args.image_size,
+        args.margin,
+        args.gpu_memory_fraction)
     with tf.Graph().as_default():
 
         with tf.Session() as sess:
-      
+
             # Load the model
             facenet.load_model(args.model)
-    
+
             # Get input and output tensors
             images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
             embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
             phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
 
             # Run forward pass to calculate embeddings
-            feed_dict = { images_placeholder: images, phase_train_placeholder:False }
+            feed_dict = {
+                images_placeholder: images,
+                phase_train_placeholder: False}
             emb = sess.run(embeddings, feed_dict=feed_dict)
-            
+
             nrof_images = len(args.image_files)
 
             print('Images:')
             for i in range(nrof_images):
                 print('%1d: %s' % (i, args.image_files[i]))
             print('')
-            
+
             # Print distance matrix
             print('Distance matrix')
             print('    ', end='')
@@ -71,60 +77,84 @@ def main(args):
             for i in range(nrof_images):
                 print('%1d  ' % i, end='')
                 for j in range(nrof_images):
-                    dist = np.sqrt(np.sum(np.square(np.subtract(emb[i,:], emb[j,:]))))
+                    dist = np.sqrt(
+                        np.sum(np.square(np.subtract(emb[i, :], emb[j, :]))))
                     print('  %1.4f  ' % dist, end='')
                 print('')
-            
-            
+
+
 def load_and_align_data(image_paths, image_size, margin, gpu_memory_fraction):
 
-    minsize = 20 # minimum size of face
-    threshold = [ 0.6, 0.7, 0.7 ]  # three steps's threshold
-    factor = 0.709 # scale factor
-    
+    minsize = 20  # minimum size of face
+    threshold = [0.6, 0.7, 0.7]  # three steps' threshold
+    factor = 0.709  # scale factor
+
    print('Creating networks and loading parameters')
     with tf.Graph().as_default():
-        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction)
-        sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
+        gpu_options = tf.GPUOptions(
+            per_process_gpu_memory_fraction=gpu_memory_fraction)
+        sess = tf.Session(
+            config=tf.ConfigProto(
+                gpu_options=gpu_options,
+                log_device_placement=False))
         with sess.as_default():
-            pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None)
-  
-    tmp_image_paths=copy.copy(image_paths)
+            pnet, rnet, onet = detect_face.create_mtcnn(sess, None)
+
+    tmp_image_paths = copy.copy(image_paths)
     img_list = []
     for image in tmp_image_paths:
         img = misc.imread(os.path.expanduser(image), mode='RGB')
         img_size = np.asarray(img.shape)[0:2]
-        bounding_boxes, _ = align.detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor)
+        bounding_boxes, _ = detect_face.detect_face(
+            img, minsize, pnet, rnet, onet, threshold, factor)
         if len(bounding_boxes) < 1:
-          image_paths.remove(image)
-          print("can't detect face, remove ", image)
-          continue
-        det = np.squeeze(bounding_boxes[0,0:4])
+            image_paths.remove(image)
+            print("can't detect face, remove ", image)
+            continue
+            det = np.squeeze(bounding_boxes[0, 0:4])
         bb = np.zeros(4, dtype=np.int32)
-        bb[0] = np.maximum(det[0]-margin/2, 0)
-        bb[1] = 
np.maximum(det[1]-margin/2, 0) - bb[2] = np.minimum(det[2]+margin/2, img_size[1]) - bb[3] = np.minimum(det[3]+margin/2, img_size[0]) - cropped = img[bb[1]:bb[3],bb[0]:bb[2],:] - aligned = misc.imresize(cropped, (image_size, image_size), interp='bilinear') + bb[0] = np.maximum(det[0] - margin / 2, 0) + bb[1] = np.maximum(det[1] - margin / 2, 0) + bb[2] = np.minimum(det[2] + margin / 2, img_size[1]) + bb[3] = np.minimum(det[3] + margin / 2, img_size[0]) + cropped = img[bb[1]:bb[3], bb[0]:bb[2], :] + aligned = misc.imresize( + cropped, (image_size, image_size), interp='bilinear') prewhitened = facenet.prewhiten(aligned) img_list.append(prewhitened) images = np.stack(img_list) return images + def parse_arguments(argv): parser = argparse.ArgumentParser() - - parser.add_argument('model', type=str, + + parser.add_argument( + 'model', + type=str, help='Could be either a directory containing the meta_file and ckpt_file or a model protobuf (.pb) file') - parser.add_argument('image_files', type=str, nargs='+', help='Images to compare') - parser.add_argument('--image_size', type=int, - help='Image size (height, width) in pixels.', default=160) - parser.add_argument('--margin', type=int, - help='Margin for the crop around the bounding box (height, width) in pixels.', default=44) - parser.add_argument('--gpu_memory_fraction', type=float, - help='Upper bound on the amount of GPU memory that will be used by the process.', default=1.0) + parser.add_argument( + 'image_files', + type=str, + nargs='+', + help='Images to compare') + parser.add_argument( + '--image_size', + type=int, + help='Image size (height, width) in pixels.', + default=160) + parser.add_argument( + '--margin', + type=int, + help='Margin for the crop around the bounding box (height, width) in pixels.', + default=44) + parser.add_argument( + '--gpu_memory_fraction', + type=float, + help='Upper bound on the amount of GPU memory that will be used by the process.', + default=1.0) return parser.parse_args(argv) + if __name__ == '__main__': main(parse_arguments(sys.argv[1:])) diff --git a/facenet_sandberg/convert_to_keras.py b/facenet_sandberg/convert_to_keras.py index b12b9f1b4..950cb4b35 100644 --- a/facenet_sandberg/convert_to_keras.py +++ b/facenet_sandberg/convert_to_keras.py @@ -15,11 +15,12 @@ def main(tf_ckpt_path, output_base_path, output_model_name): weights_filename = output_model_name + '_weights.h5' model_filename = output_model_name + '.h5' - npy_weights_dir, weights_dir, model_dir = create_output_directories(output_base_path) + npy_weights_dir, weights_dir, model_dir = create_output_directories( + output_base_path) extract_tensors_from_checkpoint_file(tf_ckpt_path, npy_weights_dir) model = InceptionResNetV1() - + print('Loading numpy weights from', npy_weights_dir) for layer in model.layers: if layer.weights: @@ -27,7 +28,10 @@ def main(tf_ckpt_path, output_base_path, output_model_name): for w in layer.weights: weight_name = os.path.basename(w.name).replace(':0', '') weight_file = layer.name + '_' + weight_name + '.npy' - weight_arr = np.load(os.path.join(npy_weights_dir, weight_file)) + weight_arr = np.load( + os.path.join( + npy_weights_dir, + weight_file)) weights.append(weight_arr) layer.set_weights(weights) @@ -76,7 +80,8 @@ def extract_tensors_from_checkpoint_file(filename, output_folder): if 'AuxLogit' in key: continue - # convert tensor name into the corresponding Keras layer weight name and save + # convert tensor name into the corresponding Keras layer weight name + # and save path = os.path.join(output_folder, 
get_filename(key)) arr = reader.get_tensor(key) np.save(path, arr) @@ -92,12 +97,12 @@ def parse_arguments(argv): 'tf_ckpt_path', type=str, help='Path to the directory containing pretrained tensorflow checkpoints.') - + parser.add_argument( 'output_base_path', type=str, help='Base path for the desired output directory.') - + parser.add_argument( 'output_model_name', type=str, diff --git a/facenet_sandberg/decode_msceleb_dataset.py b/facenet_sandberg/decode_msceleb_dataset.py index 477dd3392..188f18bd5 100644 --- a/facenet_sandberg/decode_msceleb_dataset.py +++ b/facenet_sandberg/decode_msceleb_dataset.py @@ -2,19 +2,19 @@ https://www.microsoft.com/en-us/research/project/ms-celeb-1m-challenge-recognizing-one-million-celebrities-real-world/ """ # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -23,19 +23,17 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -from scipy import misc -import numpy as np +import argparse import base64 -import sys import os +import sys + import cv2 -import argparse +import numpy as np from facenet_sandberg import facenet - +from scipy import misc # File format: text files, each line is an image record containing 6 columns, delimited by TAB. 
# Column1: Freebase MID @@ -45,16 +43,17 @@ # Column5: PageURL # Column6: ImageData_Base64Encoded + def main(args): output_dir = os.path.expanduser(args.output_dir) - + if not os.path.exists(output_dir): os.mkdir(output_dir) - + # Store some git revision info in a text file in the output directory - src_path,_ = os.path.split(os.path.realpath(__file__)) + src_path, _ = os.path.split(os.path.realpath(__file__)) facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv)) - + i = 0 for f in args.tsv_files: for line in f: @@ -64,24 +63,45 @@ def main(args): img_string = fields[5] img_dec_string = base64.b64decode(img_string) img_data = np.fromstring(img_dec_string, dtype=np.uint8) - img = cv2.imdecode(img_data, cv2.IMREAD_COLOR) #pylint: disable=maybe-no-member + img = cv2.imdecode( + img_data, cv2.IMREAD_COLOR) # pylint: disable=maybe-no-member if args.size: - img = misc.imresize(img, (args.size, args.size), interp='bilinear') + img = misc.imresize( + img, (args.size, args.size), interp='bilinear') full_class_dir = os.path.join(output_dir, class_dir) if not os.path.exists(full_class_dir): os.mkdir(full_class_dir) - full_path = os.path.join(full_class_dir, img_name.replace('/','_')) - cv2.imwrite(full_path, img) #pylint: disable=maybe-no-member + full_path = os.path.join( + full_class_dir, img_name.replace( + '/', '_')) + cv2.imwrite(full_path, img) # pylint: disable=maybe-no-member print('%8d: %s' % (i, full_path)) i += 1 - + + if __name__ == '__main__': parser = argparse.ArgumentParser() - parser.add_argument('output_dir', type=str, help='Output base directory for the image dataset') - parser.add_argument('tsv_files', type=argparse.FileType('r'), nargs='+', help='Input TSV file name(s)') - parser.add_argument('--size', type=int, help='Images are resized to the given size') - parser.add_argument('--output_format', type=str, help='Format of the output images', default='png', choices=['png', 'jpg']) + parser.add_argument( + 'output_dir', + type=str, + help='Output base directory for the image dataset') + parser.add_argument( + 'tsv_files', + type=argparse.FileType('r'), + nargs='+', + help='Input TSV file name(s)') + parser.add_argument( + '--size', + type=int, + help='Images are resized to the given size') + parser.add_argument( + '--output_format', + type=str, + help='Format of the output images', + default='png', + choices=[ + 'png', + 'jpg']) main(parser.parse_args()) - diff --git a/facenet_sandberg/download_and_extract.py b/facenet_sandberg/download_and_extract.py index a835ac284..cd2998b22 100644 --- a/facenet_sandberg/download_and_extract.py +++ b/facenet_sandberg/download_and_extract.py @@ -1,14 +1,16 @@ -import requests -import zipfile import os +import zipfile + +import requests model_dict = { - 'lfw-subset': '1B5BQUZuJO-paxdN8UclxeHAR1WnR_Tzi', + 'lfw-subset': '1B5BQUZuJO-paxdN8UclxeHAR1WnR_Tzi', '20170131-234652': '0B5MzpY9kBtDVSGM0RmVET2EwVEk', '20170216-091149': '0B5MzpY9kBtDVTGZjcWkzT3pldDA', '20170512-110547': '0B5MzpY9kBtDVZ2RpVDYwWmxoSUk', '20180402-114759': '1EXPBSXwTaqrSC0OhUdXNmKSh9qJUQ55-' - } +} + def download_and_extract_file(model_name, data_dir): file_id = model_dict[model_name] @@ -20,20 +22,22 @@ def download_and_extract_file(model_name, data_dir): print('Extracting file to %s' % data_dir) zip_ref.extractall(data_dir) + def download_file_from_google_drive(file_id, destination): - - URL = "https://drive.google.com/uc?export=download" - - session = requests.Session() - - response = session.get(URL, params = { 'id' : file_id }, stream = True) - token = 
get_confirm_token(response)
-    
-    if token:
-        params = { 'id' : file_id, 'confirm' : token }
-        response = session.get(URL, params = params, stream = True)
-    
-    save_response_content(response, destination)    
+
+    URL = "https://drive.google.com/uc?export=download"
+
+    session = requests.Session()
+
+    response = session.get(URL, params={'id': file_id}, stream=True)
+    token = get_confirm_token(response)
+
+    if token:
+        params = {'id': file_id, 'confirm': token}
+        response = session.get(URL, params=params, stream=True)
+
+    save_response_content(response, destination)
+
 
 def get_confirm_token(response):
     for key, value in response.cookies.items():
@@ -42,10 +46,11 @@ def get_confirm_token(response):
 
     return None
 
+
 def save_response_content(response, destination):
     CHUNK_SIZE = 32768
 
     with open(destination, "wb") as f:
         for chunk in response.iter_content(CHUNK_SIZE):
-            if chunk: # filter out keep-alive new chunks
+            if chunk:  # filter out keep-alive new chunks
                 f.write(chunk)
diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py
index ff1d094cd..440690b0c 100644
--- a/facenet_sandberg/face.py
+++ b/facenet_sandberg/face.py
@@ -18,6 +18,13 @@
 os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 tf.logging.set_verbosity(tf.logging.ERROR)
 
+Image = np.ndarray
+Embedding = np.ndarray
+EmbeddingsGenerator = Generator[List[Embedding], None, None]
+ImageGenerator = Generator[Image, None, None]
+FacesGenerator = Generator[List['Face'], None, None]  # forward reference; Face is defined below
+
+
 class Face:
     """Class representing a single face
 
@@ -34,9 +41,9 @@ class Face:
     def __init__(self):
         self.name: str = None
         self.bounding_box: List[float] = None
-        self.image: np.ndarray = None
-        self.container_image: np.ndarray = None
-        self.embedding: np.ndarray = None
+        self.image: Image = None
+        self.container_image: Image = None
+        self.embedding: Embedding = None
         self.matches: List[Match] = []
         self.url: str = None
 
@@ -71,14 +78,14 @@ def __init__(self, facenet_model_checkpoint: str, threshold: float = 1.10):
         self.threshold: float = threshold
 
     @staticmethod
-    def download_image(url: str) -> np.ndarray:
+    def download_image(url: str) -> Image:
         """Downloads an image from the url as a numpy array (opencv format)
 
         Arguments:
             url {str} -- url of image
 
         Returns:
-            np.ndarray -- array representing image
+            Image -- array representing image
         """
 
         req = urlopen(url)
@@ -87,30 +94,30 @@ def download_image(url: str) -> np.ndarray:
         return Identifier.fix_image(image)
 
     @staticmethod
-    def get_image_from_path(image_path: str) -> np.ndarray:
+    def get_image_from_path(image_path: str) -> Image:
         """Reads an image path to a numpy array (opencv format)
-        
+
         Arguments:
             image_path {str} -- path to image
-        
+
         Returns:
-            np.ndarray -- array representing image
+            Image -- array representing image
         """
 
         return Identifier.fix_image(cv2.imread(image_path))
 
     @staticmethod
     def get_images_from_dir(
-            directory: str, recursive: bool) -> Generator[np.ndarray, None, None]:
+            directory: str, recursive: bool) -> ImageGenerator:
         """Gets images in a directory
-        
+
         Arguments:
             directory {str} -- path to directory
             recursive {bool} -- if True searches all subfolders for images.
                 else searches for images in folder only. 
- + Returns: - Generator[np.ndarray, None, None] -- generator of images + ImageGenerator -- generator of images """ if recursive: @@ -120,9 +127,9 @@ def get_images_from_dir( image_paths = iglob(os.path.join(directory, '*.*')) for image_path in image_paths: yield Identifier.fix_image(cv2.imread(image_path)) - + @staticmethod - def fix_image(image: np.ndarray): + def fix_image(image: Image): if image.ndim < 2: image = image[:, :, np.newaxis] if image.ndim == 2: @@ -130,58 +137,57 @@ def fix_image(image: np.ndarray): image = image[:, :, 0:3] return image - def vectorize(self, image: np.ndarray, - face_limit: int = 5) -> List[np.ndarray]: + def vectorize(self, image: Image, + prealigned: bool = False, + face_limit: int = 5) -> List[Image]: """Gets face embeddings in a single image - + Arguments: - image {np.ndarray} -- Image to find embeddings - + image {Image} -- Image to find embeddings + Keyword Arguments: - face_limit {int} -- max number of faces allowed + prealigned {bool} -- is the image already aligned + face_limit {int} -- max number of faces allowed before image is discarded. (default: {5}) - + Returns: - List[np.ndarray] -- list of embeddings + List[Image] -- list of embeddings """ - - faces: List[Face] = self.detect_encode(image, face_limit) - vectors = [face.embedding for face in faces] + if not prealigned: + faces: List[Face] = self.detect_encode(image, face_limit) + vectors = [face.embedding for face in faces] + else: + vectors = [self.encoder.generate_embedding(image)] return vectors def vectorize_all(self, - images: Generator[np.ndarray, - None, - None], - face_limit: int = 5) -> Generator[List[np.ndarray], - None, - None]: + images: ImageGenerator, + face_limit: int = 5) -> EmbeddingsGenerator: """Gets face embeddings from a generator of images - + Arguments: - image {np.ndarray} -- Image to find embeddings - + images {ImageGenerator} -- Images to find embeddings for + Keyword Arguments: - face_limit {int} -- max number of faces allowed + face_limit {int} -- max number of faces allowed before image is discarded. (default: {5}) - + Returns: - Generator[List[np.ndarray]]-- generator of lists of images found in + EmbeddingGenerator-- generator of lists of images found in each photo """ - all_faces: Generator[List[Face], None, None] = self.detect_encode_all( + all_faces = self.detect_encode_all( images=images, save_memory=True, face_limit=face_limit) - vectors: Generator[List[np.ndarray], None, None] = ( - face.embedding for faces in all_faces for face in faces) + vectors = (face.embedding for faces in all_faces for face in faces) return vectors - def detect_encode(self, image: np.ndarray, + def detect_encode(self, image: Image, face_limit: int=5) -> List[Face]: """Detects faces in an image and encodes them Arguments: - image {np.ndarray} -- image to find faces and encode + image {Image} -- image to find faces and encode face_limit {int} -- Maximum # of faces allowed in image. 
If over limit returns empty list @@ -189,24 +195,20 @@ def detect_encode(self, image: np.ndarray, List[Face] -- list of Face objects with embeddings attached """ - faces: List[Face] = self.detector.find_faces(image, face_limit) + faces = self.detector.find_faces(image, face_limit) for face in faces: face.embedding = self.encoder.generate_embedding(face.image) return faces def detect_encode_all(self, - images: Generator[np.ndarray, - None, - None], + images: ImageGenerator, urls: [str]=None, save_memory: bool=False, - face_limit: int=5) -> Generator[List[Face], - None, - None]: + face_limit: int=5) -> FacesGenerator: """For a list of images finds and encodes all faces Arguments: - images {List or iterable of cv2 images} -- images to encode + images {ImageGenerator} -- images to encode Keyword Arguments: urls {str[]} -- Optional list of urls to attach to Face objects. @@ -216,16 +218,15 @@ def detect_encode_all(self, of refference to the original image like a url. (default: {False}) Returns: - Generator[List[Face]] -- Generator of lists of Face objects in each image + FaceGenerator -- Generator of lists of Face objects in each image """ - all_faces: Generator[List[Face], None, None] = self.detector.bulk_find_face( - images, urls, face_limit) + all_faces = self.detector.bulk_find_face(images, urls, face_limit) return self.encoder.get_all_embeddings(all_faces, save_memory) def compare_embedding(self, - embedding_1: np.ndarray, - embedding_2: np.ndarray, + embedding_1: Embedding, + embedding_2: Embedding, distance_metric: int=0) -> (bool, float): """Compares the distance between two embeddings @@ -248,8 +249,8 @@ def compare_embedding(self, is_match = True return is_match, distance - def compare_images(self, image_1: np.ndarray, - image_2: np.ndarray) -> Match: + def compare_images(self, image_1: Image, + image_2: Image) -> Match: """Compares two images for matching faces Arguments: @@ -289,8 +290,7 @@ def find_all_matches(self, image_directory: str, all_images = self.get_images_from_dir(image_directory, recursive) all_matches = [] - all_faces_lists: Generator[List[Face], None, - None] = self.detect_encode_all(all_images) + all_faces_lists = self.detect_encode_all(all_images) all_faces: Generator[Face, None, None] = ( face for faces in all_faces_lists for face in faces) # Really inefficient way to check all combinations @@ -324,14 +324,14 @@ def __init__(self, facenet_model_checkpoint: str): self.phase_train_placeholder = tf.get_default_graph( ).get_tensor_by_name("phase_train:0") - def generate_embedding(self, image: np.ndarray) -> np.ndarray: + def generate_embedding(self, image: Image) -> Embedding: """Generates embeddings for a Face object with image Arguments: - image {cv2 image (np array)} -- Image of face. Should be aligned. + image {Image} -- Image of face. Should be aligned. 
Returns: - numpy.ndarray -- a single vector representing a face embedding + Embedding -- a single vector representing a face embedding """ prewhiten_face = facenet.prewhiten(image) @@ -342,12 +342,8 @@ def generate_embedding(self, image: np.ndarray) -> np.ndarray: return self.sess.run(self.embeddings, feed_dict=feed_dict)[0] def get_all_embeddings(self, - all_faces: Generator[List[Face], - None, - None], - save_memory: bool=False) -> Generator[List[Face], - None, - None]: + all_faces: FacesGenerator, + save_memory: bool=False) -> FacesGenerator: """Generates embeddings for list of images Arguments: @@ -359,21 +355,22 @@ def get_all_embeddings(self, Returns: Faces with embeddings """ - # import pdb;pdb.set_trace() face_list: List[List[Face]] = list(all_faces) - prewhitened_images = [facenet.prewhiten(face.image) for faces in face_list for face in faces] + prewhitened_images = [ + facenet.prewhiten( + face.image) for faces in face_list for face in faces] if face_list: feed_dict = {self.images_placeholder: prewhitened_images, - self.phase_train_placeholder: False} + self.phase_train_placeholder: False} embed_array = self.sess.run(self.embeddings, feed_dict=feed_dict) - index = 0 + index = 0 for faces in face_list: for face in faces: if save_memory: face.image = None face.container_image = None face.embedding = embed_array[index] - index+=1 + index += 1 yield faces def tear_down(self): @@ -404,11 +401,9 @@ def __init__( self.detect_multiple_faces = detect_multiple_faces def bulk_find_face(self, - images: Generator[np.ndarray, - None, None], + images: ImageGenerator, urls: List[str] = None, - face_limit: int=5) -> Generator[List[Face], - None, None]: + face_limit: int=5) -> FacesGenerator: for index, image in enumerate(images): faces = self.find_faces(image, face_limit) if urls and index < len(urls): @@ -418,7 +413,7 @@ def bulk_find_face(self, else: yield faces - def find_faces(self, image: np.ndarray, face_limit: int=5) -> List[Face]: + def find_faces(self, image: Image, face_limit: int=5) -> List[Face]: faces = [] results = self.detector.detect_faces(image) img_size = np.asarray(image.shape)[0:2] @@ -440,9 +435,7 @@ def find_faces(self, image: np.ndarray, face_limit: int=5) -> List[Face]: face.bounding_box = bb face.image = misc.imresize( - cropped, - (self.face_crop_size, self.face_crop_size), - interp='bilinear') + cropped, (self.face_crop_size, self.face_crop_size), interp='bilinear') faces.append(face) return faces @@ -459,8 +452,13 @@ def fit_bounding_box(max_x: int, max_y: int, x1: int, return [x1, y1, x2, y2] -def align_dataset(input_dir, output_dir, image_size=182, - margin=44, random_order=False, detect_multiple_faces=False): +def align_dataset( + input_dir, + output_dir, + image_size=182, + margin=44, + random_order=False, + detect_multiple_faces=False): align_dataset_mtcnn.main( input_dir, output_dir, diff --git a/facenet_sandberg/facenet.py b/facenet_sandberg/facenet.py index 2b72579a5..20c26cfae 100644 --- a/facenet_sandberg/facenet.py +++ b/facenet_sandberg/facenet.py @@ -1,19 +1,19 @@ """Functions for building the face recognition network. 
""" # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -23,51 +23,55 @@ # SOFTWARE. # pylint: disable=missing-docstring -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function +import math import os -from subprocess import Popen, PIPE -import tensorflow as tf -import numpy as np -from scipy import misc -from sklearn.model_selection import KFold -from scipy import interpolate -from tensorflow.python.training import training import random import re -from tensorflow.python.platform import gfile -import math +from subprocess import PIPE, Popen + +import numpy as np +import tensorflow as tf +from scipy import interpolate, misc from six import iteritems +from sklearn.model_selection import KFold +from tensorflow.python.platform import gfile +from tensorflow.python.training import training + def triplet_loss(anchor, positive, negative, alpha): """Calculate the triplet loss according to the FaceNet paper - + Args: anchor: the embeddings for the anchor images. positive: the embeddings for the positive images. negative: the embeddings for the negative images. - + Returns: the triplet loss according to the FaceNet paper as a float tensor. 
""" with tf.variable_scope('triplet_loss'): pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1) neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1) - - basic_loss = tf.add(tf.subtract(pos_dist,neg_dist), alpha) + + basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha) loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0) - + return loss - + + def center_loss(features, label, alfa, nrof_classes): """Center loss based on the paper "A Discriminative Feature Learning Approach for Deep Face Recognition" (http://ydwen.github.io/papers/WenECCV16.pdf) """ nrof_features = features.get_shape()[1] - centers = tf.get_variable('centers', [nrof_classes, nrof_features], dtype=tf.float32, - initializer=tf.constant_initializer(0), trainable=False) + centers = tf.get_variable('centers', + [nrof_classes, + nrof_features], + dtype=tf.float32, + initializer=tf.constant_initializer(0), + trainable=False) label = tf.reshape(label, [-1]) centers_batch = tf.gather(centers, label) diff = (1 - alfa) * (centers_batch - features) @@ -76,6 +80,7 @@ def center_loss(features, label, alfa, nrof_classes): loss = tf.reduce_mean(tf.square(features - centers_batch)) return loss, centers + def get_image_paths_and_labels(dataset): image_paths_flat = [] labels_flat = [] @@ -84,23 +89,33 @@ def get_image_paths_and_labels(dataset): labels_flat += [i] * len(dataset[i].image_paths) return image_paths_flat, labels_flat + def shuffle_examples(image_paths, labels): shuffle_list = list(zip(image_paths, labels)) random.shuffle(shuffle_list) image_paths_shuff, labels_shuff = zip(*shuffle_list) return image_paths_shuff, labels_shuff + def random_rotate_image(image): angle = np.random.uniform(low=-10.0, high=10.0) return misc.imrotate(image, angle, 'bicubic') - -# 1: Random rotate 2: Random crop 4: Random flip 8: Fixed image standardization 16: Flip + + +# 1: Random rotate 2: Random crop 4: Random flip 8: Fixed image +# standardization 16: Flip RANDOM_ROTATE = 1 RANDOM_CROP = 2 RANDOM_FLIP = 4 FIXED_STANDARDIZATION = 8 FLIP = 16 -def create_input_pipeline(input_queue, image_size, nrof_preprocess_threads, batch_size_placeholder): + + +def create_input_pipeline( + input_queue, + image_size, + nrof_preprocess_threads, + batch_size_placeholder): images_and_labels_list = [] for _ in range(nrof_preprocess_threads): filenames, label, control = input_queue.dequeue() @@ -108,43 +123,62 @@ def create_input_pipeline(input_queue, image_size, nrof_preprocess_threads, batc for filename in tf.unstack(filenames): file_contents = tf.read_file(filename) image = tf.image.decode_image(file_contents, 3) - image = tf.cond(get_control_flag(control[0], RANDOM_ROTATE), - lambda:tf.py_func(random_rotate_image, [image], tf.uint8), - lambda:tf.identity(image)) - image = tf.cond(get_control_flag(control[0], RANDOM_CROP), - lambda:tf.random_crop(image, image_size + (3,)), - lambda:tf.image.resize_image_with_crop_or_pad(image, image_size[0], image_size[1])) + image = tf.cond( + get_control_flag( + control[0], + RANDOM_ROTATE), + lambda: tf.py_func( + random_rotate_image, + [image], + tf.uint8), + lambda: tf.identity(image)) + image = tf.cond( + get_control_flag( + control[0], RANDOM_CROP), lambda: tf.random_crop( + image, image_size + ( + 3,)), lambda: tf.image.resize_image_with_crop_or_pad( + image, image_size[0], image_size[1])) image = tf.cond(get_control_flag(control[0], RANDOM_FLIP), - lambda:tf.image.random_flip_left_right(image), - lambda:tf.identity(image)) - image = tf.cond(get_control_flag(control[0], 
FIXED_STANDARDIZATION), - lambda:(tf.cast(image, tf.float32) - 127.5)/128.0, - lambda:tf.image.per_image_standardization(image)) + lambda: tf.image.random_flip_left_right(image), + lambda: tf.identity(image)) + image = tf.cond( + get_control_flag( + control[0], + FIXED_STANDARDIZATION), + lambda: ( + tf.cast( + image, + tf.float32) - + 127.5) / + 128.0, + lambda: tf.image.per_image_standardization(image)) image = tf.cond(get_control_flag(control[0], FLIP), - lambda:tf.image.flip_left_right(image), - lambda:tf.identity(image)) + lambda: tf.image.flip_left_right(image), + lambda: tf.identity(image)) #pylint: disable=no-member image.set_shape(image_size + (3,)) images.append(image) images_and_labels_list.append([images, label]) image_batch, label_batch = tf.train.batch_join( - images_and_labels_list, batch_size=batch_size_placeholder, + images_and_labels_list, batch_size=batch_size_placeholder, shapes=[image_size + (3,), ()], enqueue_many=True, capacity=4 * nrof_preprocess_threads * 100, allow_smaller_final_batch=True) - + return image_batch, label_batch + def get_control_flag(control, field): return tf.equal(tf.mod(tf.floor_div(control, field), 2), 1) - + + def _add_loss_summaries(total_loss): """Add summaries for losses. - + Generates moving average for all losses and associated summaries for visualizing the performance of the network. - + Args: total_loss: Total loss from loss(). Returns: @@ -154,93 +188,117 @@ def _add_loss_summaries(total_loss): loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg') losses = tf.get_collection('losses') loss_averages_op = loss_averages.apply(losses + [total_loss]) - + # Attach a scalar summmary to all individual losses and the total loss; do the # same for the averaged version of the losses. for l in losses + [total_loss]: # Name each loss as '(raw)' and name the moving average version of the loss # as the original loss name. - tf.summary.scalar(l.op.name +' (raw)', l) + tf.summary.scalar(l.op.name + ' (raw)', l) tf.summary.scalar(l.op.name, loss_averages.average(l)) - + return loss_averages_op -def train(total_loss, global_step, optimizer, learning_rate, moving_average_decay, update_gradient_vars, log_histograms=True): + +def train( + total_loss, + global_step, + optimizer, + learning_rate, + moving_average_decay, + update_gradient_vars, + log_histograms=True): # Generate moving averages of all losses and associated summaries. loss_averages_op = _add_loss_summaries(total_loss) # Compute gradients. 
with tf.control_dependencies([loss_averages_op]): - if optimizer=='ADAGRAD': + if optimizer == 'ADAGRAD': opt = tf.train.AdagradOptimizer(learning_rate) - elif optimizer=='ADADELTA': - opt = tf.train.AdadeltaOptimizer(learning_rate, rho=0.9, epsilon=1e-6) - elif optimizer=='ADAM': - opt = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999, epsilon=0.1) - elif optimizer=='RMSPROP': - opt = tf.train.RMSPropOptimizer(learning_rate, decay=0.9, momentum=0.9, epsilon=1.0) - elif optimizer=='MOM': - opt = tf.train.MomentumOptimizer(learning_rate, 0.9, use_nesterov=True) + elif optimizer == 'ADADELTA': + opt = tf.train.AdadeltaOptimizer( + learning_rate, rho=0.9, epsilon=1e-6) + elif optimizer == 'ADAM': + opt = tf.train.AdamOptimizer( + learning_rate, beta1=0.9, beta2=0.999, epsilon=0.1) + elif optimizer == 'RMSPROP': + opt = tf.train.RMSPropOptimizer( + learning_rate, decay=0.9, momentum=0.9, epsilon=1.0) + elif optimizer == 'MOM': + opt = tf.train.MomentumOptimizer( + learning_rate, 0.9, use_nesterov=True) else: raise ValueError('Invalid optimization algorithm') - + grads = opt.compute_gradients(total_loss, update_gradient_vars) - + # Apply gradients. apply_gradient_op = opt.apply_gradients(grads, global_step=global_step) - + # Add histograms for trainable variables. if log_histograms: for var in tf.trainable_variables(): tf.summary.histogram(var.op.name, var) - + # Add histograms for gradients. if log_histograms: for grad, var in grads: if grad is not None: tf.summary.histogram(var.op.name + '/gradients', grad) - + # Track the moving averages of all trainable variables. variable_averages = tf.train.ExponentialMovingAverage( moving_average_decay, global_step) variables_averages_op = variable_averages.apply(tf.trainable_variables()) - + with tf.control_dependencies([apply_gradient_op, variables_averages_op]): train_op = tf.no_op(name='train') - + return train_op + def prewhiten(x): mean = np.mean(x) std = np.std(x) - std_adj = np.maximum(std, 1.0/np.sqrt(x.size)) - y = np.multiply(np.subtract(x, mean), 1/std_adj) - return y + std_adj = np.maximum(std, 1.0 / np.sqrt(x.size)) + y = np.multiply(np.subtract(x, mean), 1 / std_adj) + return y + def crop(image, random_crop, image_size): - if image.shape[1]>image_size: - sz1 = int(image.shape[1]//2) - sz2 = int(image_size//2) + if image.shape[1] > image_size: + sz1 = int(image.shape[1] // 2) + sz2 = int(image_size // 2) if random_crop: - diff = sz1-sz2 - (h, v) = (np.random.randint(-diff, diff+1), np.random.randint(-diff, diff+1)) + diff = sz1 - sz2 + (h, v) = (np.random.randint(-diff, diff + 1), + np.random.randint(-diff, diff + 1)) else: - (h, v) = (0,0) - image = image[(sz1-sz2+v):(sz1+sz2+v),(sz1-sz2+h):(sz1+sz2+h),:] + (h, v) = (0, 0) + image = image[(sz1 - sz2 + v):(sz1 + sz2 + v), + (sz1 - sz2 + h):(sz1 + sz2 + h), :] return image - + + def flip(image, random_flip): if random_flip and np.random.choice([True, False]): image = np.fliplr(image) return image + def to_rgb(img): w, h = img.shape ret = np.empty((w, h, 3), dtype=np.uint8) ret[:, :, 0] = ret[:, :, 1] = ret[:, :, 2] = img return ret - -def load_data(image_paths, do_random_crop, do_random_flip, image_size, do_prewhiten=True): + + +def load_data( + image_paths, + do_random_crop, + do_random_flip, + image_size, + do_prewhiten=True): nrof_samples = len(image_paths) images = np.zeros((nrof_samples, image_size, image_size, 3)) for i in range(nrof_samples): @@ -251,41 +309,45 @@ def load_data(image_paths, do_random_crop, do_random_flip, image_size, do_prewhi img = prewhiten(img) img = 
crop(img, do_random_crop, image_size) img = flip(img, do_random_flip) - images[i,:,:,:] = img + images[i, :, :, :] = img return images + def get_label_batch(label_data, batch_size, batch_index): nrof_examples = np.size(label_data, 0) - j = batch_index*batch_size % nrof_examples - if j+batch_size<=nrof_examples: - batch = label_data[j:j+batch_size] + j = batch_index * batch_size % nrof_examples + if j + batch_size <= nrof_examples: + batch = label_data[j:j + batch_size] else: x1 = label_data[j:nrof_examples] - x2 = label_data[0:nrof_examples-j] - batch = np.vstack([x1,x2]) + x2 = label_data[0:nrof_examples - j] + batch = np.vstack([x1, x2]) batch_int = batch.astype(np.int64) return batch_int + def get_batch(image_data, batch_size, batch_index): nrof_examples = np.size(image_data, 0) - j = batch_index*batch_size % nrof_examples - if j+batch_size<=nrof_examples: - batch = image_data[j:j+batch_size,:,:,:] + j = batch_index * batch_size % nrof_examples + if j + batch_size <= nrof_examples: + batch = image_data[j:j + batch_size, :, :, :] else: - x1 = image_data[j:nrof_examples,:,:,:] - x2 = image_data[0:nrof_examples-j,:,:,:] - batch = np.vstack([x1,x2]) + x1 = image_data[j:nrof_examples, :, :, :] + x2 = image_data[0:nrof_examples - j, :, :, :] + batch = np.vstack([x1, x2]) batch_float = batch.astype(np.float32) return batch_float + def get_triplet_batch(triplets, batch_index, batch_size): ax, px, nx = triplets - a = get_batch(ax, int(batch_size/3), batch_index) - p = get_batch(px, int(batch_size/3), batch_index) - n = get_batch(nx, int(batch_size/3), batch_index) + a = get_batch(ax, int(batch_size / 3), batch_index) + p = get_batch(px, int(batch_size / 3), batch_index) + n = get_batch(nx, int(batch_size / 3), batch_index) batch = np.vstack([a, p, n]) return batch + def get_learning_rate_from_file(filename, epoch): with open(filename, 'r') as f: for line in f.readlines(): @@ -293,7 +355,7 @@ def get_learning_rate_from_file(filename, epoch): if line: par = line.strip().split(':') e = int(par[0]) - if par[1]=='-': + if par[1] == '-': lr = -1 else: lr = float(par[1]) @@ -302,87 +364,102 @@ def get_learning_rate_from_file(filename, epoch): else: return learning_rate + class PersonClass(): "Stores the paths to images for a given person" + def __init__(self, name, image_paths): self.name = name self.image_paths = image_paths - + def __str__(self): return self.name + ', ' + str(len(self.image_paths)) + ' images' - + def __len__(self): return len(self.image_paths) - + + def get_dataset(path, has_class_directories=True): dataset = [] path_exp = os.path.expanduser(path) - people = [path for path in os.listdir(path_exp) \ - if os.path.isdir(os.path.join(path_exp, path))] - people.sort() + people = sorted([path for path in os.listdir(path_exp) + if os.path.isdir(os.path.join(path_exp, path))]) num_people = len(people) for i in range(num_people): person_name = people[i] facedir = os.path.join(path_exp, person_name) image_paths = get_image_paths(facedir) dataset.append(PersonClass(person_name, image_paths)) - + return dataset + def get_image_paths(facedir): image_paths = [] if os.path.isdir(facedir): images = os.listdir(facedir) - image_paths = [os.path.join(facedir,img) for img in images] + image_paths = [os.path.join(facedir, img) for img in images] return image_paths - + + def split_dataset(dataset, split_ratio, min_nrof_images_per_class, mode): - if mode=='SPLIT_CLASSES': + if mode == 'SPLIT_CLASSES': nrof_classes = len(dataset) class_indices = np.arange(nrof_classes) np.random.shuffle(class_indices) - 
split = int(round(nrof_classes*(1-split_ratio))) + split = int(round(nrof_classes * (1 - split_ratio))) train_set = [dataset[i] for i in class_indices[0:split]] test_set = [dataset[i] for i in class_indices[split:-1]] - elif mode=='SPLIT_IMAGES': + elif mode == 'SPLIT_IMAGES': train_set = [] test_set = [] for cls in dataset: paths = cls.image_paths np.random.shuffle(paths) nrof_images_in_class = len(paths) - split = int(math.floor(nrof_images_in_class*(1-split_ratio))) - if split==nrof_images_in_class: - split = nrof_images_in_class-1 - if split>=min_nrof_images_per_class and nrof_images_in_class-split>=1: + split = int(math.floor(nrof_images_in_class * (1 - split_ratio))) + if split == nrof_images_in_class: + split = nrof_images_in_class - 1 + if split >= min_nrof_images_per_class and nrof_images_in_class - split >= 1: train_set.append(PersonClass(cls.name, paths[:split])) test_set.append(PersonClass(cls.name, paths[split:])) else: raise ValueError('Invalid train/test split mode "%s"' % mode) return train_set, test_set + def load_model(model, input_map=None): # Check if the model is a model directory (containing a metagraph and a checkpoint file) # or if it is a protobuf file with a frozen graph model_exp = os.path.expanduser(model) if (os.path.isfile(model_exp)): - with gfile.FastGFile(model_exp,'rb') as f: + with gfile.FastGFile(model_exp, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) tf.import_graph_def(graph_def, input_map=input_map, name='') else: meta_file, ckpt_file = get_model_filenames(model_exp) - - saver = tf.train.import_meta_graph(os.path.join(model_exp, meta_file), input_map=input_map) - saver.restore(tf.get_default_session(), os.path.join(model_exp, ckpt_file)) - + + saver = tf.train.import_meta_graph(os.path.join( + model_exp, meta_file), input_map=input_map) + saver.restore( + tf.get_default_session(), + os.path.join( + model_exp, + ckpt_file)) + + def get_model_filenames(model_dir): files = os.listdir(model_dir) meta_files = [s for s in files if s.endswith('.meta')] - if len(meta_files)==0: - raise ValueError('No meta file found in the model directory (%s)' % model_dir) - elif len(meta_files)>1: - raise ValueError('There should not be more than one meta file in the model directory (%s)' % model_dir) + if len(meta_files) == 0: + raise ValueError( + 'No meta file found in the model directory (%s)' % + model_dir) + elif len(meta_files) > 1: + raise ValueError( + 'There should not be more than one meta file in the model directory (%s)' % + model_dir) meta_file = meta_files[0] ckpt = tf.train.get_checkpoint_state(model_dir) if ckpt and ckpt.model_checkpoint_path: @@ -393,107 +470,141 @@ def get_model_filenames(model_dir): max_step = -1 for f in files: step_str = re.match(r'(^model-[\w\- ]+.ckpt-(\d+))', f) - if step_str is not None and len(step_str.groups())>=2: + if step_str is not None and len(step_str.groups()) >= 2: step = int(step_str.groups()[1]) if step > max_step: max_step = step ckpt_file = step_str.groups()[0] return meta_file, ckpt_file - + + def distance(embeddings1, embeddings2, distance_metric=0): - if distance_metric==0: + if distance_metric == 0: # Euclidian distance diff = np.subtract(embeddings1, embeddings2) - dist = np.sum(np.square(diff),1) - elif distance_metric==1: + dist = np.sum(np.square(diff), 1) + elif distance_metric == 1: # Distance based on cosine similarity dot = np.sum(np.multiply(embeddings1, embeddings2), axis=1) - norm = np.linalg.norm(embeddings1, axis=1) * np.linalg.norm(embeddings2, axis=1) + norm = 
np.linalg.norm(embeddings1, axis=1) * \
+        np.linalg.norm(embeddings2, axis=1)
         similarity = dot / norm
         dist = np.arccos(similarity) / math.pi
     else:
-        raise 'Undefined distance metric %d' % distance_metric
-    
+        raise ValueError('Undefined distance metric %d' % distance_metric)
+
     return dist
 
-def calculate_roc(thresholds, embeddings1, embeddings2, actual_issame, nrof_folds=10, distance_metric=0, subtract_mean=False):
+
+def calculate_roc(
+        thresholds,
+        embeddings1,
+        embeddings2,
+        actual_issame,
+        nrof_folds=10,
+        distance_metric=0,
+        subtract_mean=False):
     assert(embeddings1.shape[0] == embeddings2.shape[0])
     assert(embeddings1.shape[1] == embeddings2.shape[1])
     nrof_pairs = min(len(actual_issame), embeddings1.shape[0])
     nrof_thresholds = len(thresholds)
     k_fold = KFold(n_splits=nrof_folds, shuffle=False)
-    
-    tprs = np.zeros((nrof_folds,nrof_thresholds))
-    fprs = np.zeros((nrof_folds,nrof_thresholds))
+
+    tprs = np.zeros((nrof_folds, nrof_thresholds))
+    fprs = np.zeros((nrof_folds, nrof_thresholds))
     accuracy = np.zeros((nrof_folds))
-    
+
     indices = np.arange(nrof_pairs)
-    
+
     for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)):
         if subtract_mean:
-            mean = np.mean(np.concatenate([embeddings1[train_set], embeddings2[train_set]]), axis=0)
+            mean = np.mean(np.concatenate(
+                [embeddings1[train_set], embeddings2[train_set]]), axis=0)
         else:
-            mean = 0.0
-        dist = distance(embeddings1-mean, embeddings2-mean, distance_metric)
-        
+            mean = 0.0
+        dist = distance(
+            embeddings1 - mean,
+            embeddings2 - mean,
+            distance_metric)
+
         # Find the best threshold for the fold
         acc_train = np.zeros((nrof_thresholds))
         for threshold_idx, threshold in enumerate(thresholds):
-            _, _, acc_train[threshold_idx] = calculate_accuracy(threshold, dist[train_set], actual_issame[train_set])
+            _, _, acc_train[threshold_idx] = calculate_accuracy(
+                threshold, dist[train_set], actual_issame[train_set])
         best_threshold_index = np.argmax(acc_train)
         for threshold_idx, threshold in enumerate(thresholds):
-            tprs[fold_idx,threshold_idx], fprs[fold_idx,threshold_idx], _ = calculate_accuracy(threshold, dist[test_set], actual_issame[test_set])
-        _, _, accuracy[fold_idx] = calculate_accuracy(thresholds[best_threshold_index], dist[test_set], actual_issame[test_set])
-          
-    tpr = np.mean(tprs,0)
-    fpr = np.mean(fprs,0)
+            tprs[fold_idx, threshold_idx], fprs[fold_idx, threshold_idx], _ = calculate_accuracy(
+                threshold, dist[test_set], actual_issame[test_set])
+        _, _, accuracy[fold_idx] = calculate_accuracy(
+            thresholds[best_threshold_index], dist[test_set], actual_issame[test_set])
+
+    tpr = np.mean(tprs, 0)
+    fpr = np.mean(fprs, 0)
     return tpr, fpr, accuracy
 
+
 def calculate_accuracy(threshold, dist, actual_issame):
     predict_issame = np.less(dist, threshold)
     tp = np.sum(np.logical_and(predict_issame, actual_issame))
     fp = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame)))
-    tn = np.sum(np.logical_and(np.logical_not(predict_issame), np.logical_not(actual_issame)))
+    tn = np.sum(
+        np.logical_and(
+            np.logical_not(predict_issame),
+            np.logical_not(actual_issame)))
     fn = np.sum(np.logical_and(np.logical_not(predict_issame), actual_issame))
-    
-    tpr = 0 if (tp+fn==0) else float(tp) / float(tp+fn)
-    fpr = 0 if (fp+tn==0) else float(fp) / float(fp+tn)
-    acc = float(tp+tn)/dist.size
+
+    tpr = 0 if (tp + fn == 0) else float(tp) / float(tp + fn)
+    fpr = 0 if (fp + tn == 0) else float(fp) / float(fp + tn)
+    acc = float(tp + tn) / dist.size
     return tpr, fpr, acc
 
-
-def calculate_val(thresholds, embeddings1, embeddings2, actual_issame, far_target, 
nrof_folds=10, distance_metric=0, subtract_mean=False): +def calculate_val( + thresholds, + embeddings1, + embeddings2, + actual_issame, + far_target, + nrof_folds=10, + distance_metric=0, + subtract_mean=False): assert(embeddings1.shape[0] == embeddings2.shape[0]) assert(embeddings1.shape[1] == embeddings2.shape[1]) nrof_pairs = min(len(actual_issame), embeddings1.shape[0]) nrof_thresholds = len(thresholds) k_fold = KFold(n_splits=nrof_folds, shuffle=False) - + val = np.zeros(nrof_folds) far = np.zeros(nrof_folds) - + indices = np.arange(nrof_pairs) - + for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)): if subtract_mean: - mean = np.mean(np.concatenate([embeddings1[train_set], embeddings2[train_set]]), axis=0) + mean = np.mean(np.concatenate( + [embeddings1[train_set], embeddings2[train_set]]), axis=0) else: - mean = 0.0 - dist = distance(embeddings1-mean, embeddings2-mean, distance_metric) - + mean = 0.0 + dist = distance( + embeddings1 - mean, + embeddings2 - mean, + distance_metric) + # Find the threshold that gives FAR = far_target far_train = np.zeros(nrof_thresholds) for threshold_idx, threshold in enumerate(thresholds): - _, far_train[threshold_idx] = calculate_val_far(threshold, dist[train_set], actual_issame[train_set]) - if np.max(far_train)>=far_target: + _, far_train[threshold_idx] = calculate_val_far( + threshold, dist[train_set], actual_issame[train_set]) + if np.max(far_train) >= far_target: f = interpolate.interp1d(far_train, thresholds, kind='slinear') threshold = f(far_target) else: threshold = 0.0 - - val[fold_idx], far[fold_idx] = calculate_val_far(threshold, dist[test_set], actual_issame[test_set]) - + + val[fold_idx], far[fold_idx] = calculate_val_far( + threshold, dist[test_set], actual_issame[test_set]) + val_mean = np.mean(val) far_mean = np.mean(far) val_std = np.std(val) @@ -503,63 +614,74 @@ def calculate_val(thresholds, embeddings1, embeddings2, actual_issame, far_targe def calculate_val_far(threshold, dist, actual_issame): predict_issame = np.less(dist, threshold) true_accept = np.sum(np.logical_and(predict_issame, actual_issame)) - false_accept = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame))) + false_accept = np.sum( + np.logical_and( + predict_issame, + np.logical_not(actual_issame))) n_same = np.sum(actual_issame) n_diff = np.sum(np.logical_not(actual_issame)) val = float(true_accept) / float(n_same) far = float(false_accept) / float(n_diff) return val, far + def store_revision_info(src_path, output_dir, arg_string): try: # Get git hash cmd = ['git', 'rev-parse', 'HEAD'] - gitproc = Popen(cmd, stdout = PIPE, cwd=src_path) + gitproc = Popen(cmd, stdout=PIPE, cwd=src_path) (stdout, _) = gitproc.communicate() git_hash = stdout.strip() except OSError as e: - git_hash = ' '.join(cmd) + ': ' + e.strerror - + git_hash = ' '.join(cmd) + ': ' + e.strerror + try: # Get local changes cmd = ['git', 'diff', 'HEAD'] - gitproc = Popen(cmd, stdout = PIPE, cwd=src_path) + gitproc = Popen(cmd, stdout=PIPE, cwd=src_path) (stdout, _) = gitproc.communicate() git_diff = stdout.strip() except OSError as e: - git_diff = ' '.join(cmd) + ': ' + e.strerror - + git_diff = ' '.join(cmd) + ': ' + e.strerror + # Store a text file in the log directory rev_info_filename = os.path.join(output_dir, 'revision_info.txt') with open(rev_info_filename, "w") as text_file: text_file.write('arguments: %s\n--------------------\n' % arg_string) - text_file.write('tensorflow version: %s\n--------------------\n' % tf.__version__) # @UndefinedVariable + 
text_file.write( + 'tensorflow version: %s\n--------------------\n' % + tf.__version__) # @UndefinedVariable text_file.write('git hash: %s\n--------------------\n' % git_hash) text_file.write('%s' % git_diff) + def list_variables(filename): reader = training.NewCheckpointReader(filename) variable_map = reader.get_variable_to_shape_map() names = sorted(variable_map.keys()) return names -def put_images_on_grid(images, shape=(16,8)): + +def put_images_on_grid(images, shape=(16, 8)): nrof_images = images.shape[0] img_size = images.shape[1] bw = 3 - img = np.zeros((shape[1]*(img_size+bw)+bw, shape[0]*(img_size+bw)+bw, 3), np.float32) + img = np.zeros((shape[1] * (img_size + bw) + bw, + shape[0] * (img_size + bw) + bw, 3), np.float32) for i in range(shape[1]): - x_start = i*(img_size+bw)+bw + x_start = i * (img_size + bw) + bw for j in range(shape[0]): - img_index = i*shape[0]+j - if img_index>=nrof_images: + img_index = i * shape[0] + j + if img_index >= nrof_images: break - y_start = j*(img_size+bw)+bw - img[x_start:x_start+img_size, y_start:y_start+img_size, :] = images[img_index, :, :, :] - if img_index>=nrof_images: + y_start = j * (img_size + bw) + bw + img[x_start:x_start + img_size, y_start:y_start + + img_size, :] = images[img_index, :, :, :] + if img_index >= nrof_images: break return img + def write_arguments_to_file(args, filename): with open(filename, 'w') as f: for key, value in iteritems(vars(args)): diff --git a/facenet_sandberg/freeze_graph.py b/facenet_sandberg/freeze_graph.py index 494fab0c2..eb8ceb4ea 100644 --- a/facenet_sandberg/freeze_graph.py +++ b/facenet_sandberg/freeze_graph.py @@ -2,19 +2,19 @@ and exports the model as a graphdef protobuf """ # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -23,45 +23,55 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
-from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -from tensorflow.python.framework import graph_util -import tensorflow as tf import argparse import os import sys + +import tensorflow as tf from facenet_sandberg import facenet from six.moves import xrange # @UnresolvedImport +from tensorflow.python.framework import graph_util + def main(args): with tf.Graph().as_default(): with tf.Session() as sess: # Load the model metagraph and checkpoint print('Model directory: %s' % args.model_dir) - meta_file, ckpt_file = facenet.get_model_filenames(os.path.expanduser(args.model_dir)) - + meta_file, ckpt_file = facenet.get_model_filenames( + os.path.expanduser(args.model_dir)) + print('Metagraph file: %s' % meta_file) print('Checkpoint file: %s' % ckpt_file) model_dir_exp = os.path.expanduser(args.model_dir) - saver = tf.train.import_meta_graph(os.path.join(model_dir_exp, meta_file), clear_devices=True) + saver = tf.train.import_meta_graph(os.path.join( + model_dir_exp, meta_file), clear_devices=True) tf.get_default_session().run(tf.global_variables_initializer()) tf.get_default_session().run(tf.local_variables_initializer()) - saver.restore(tf.get_default_session(), os.path.join(model_dir_exp, ckpt_file)) - - # Retrieve the protobuf graph definition and fix the batch norm nodes + saver.restore( + tf.get_default_session(), + os.path.join( + model_dir_exp, + ckpt_file)) + + # Retrieve the protobuf graph definition and fix the batch norm + # nodes input_graph_def = sess.graph.as_graph_def() - + # Freeze the graph def - output_graph_def = freeze_graph_def(sess, input_graph_def, 'embeddings,label_batch') + output_graph_def = freeze_graph_def( + sess, input_graph_def, 'embeddings,label_batch') # Serialize and dump the output graph to the filesystem with tf.gfile.GFile(args.output_file, 'wb') as f: f.write(output_graph_def.SerializeToString()) - print("%d ops in the final graph: %s" % (len(output_graph_def.node), args.output_file)) - + print("%d ops in the final graph: %s" % + (len(output_graph_def.node), args.output_file)) + + def freeze_graph_def(sess, input_graph_def, output_node_names): for node in input_graph_def.node: if node.op == 'RefSwitch': @@ -71,15 +81,17 @@ def freeze_graph_def(sess, input_graph_def, output_node_names): node.input[index] = node.input[index] + '/read' elif node.op == 'AssignSub': node.op = 'Sub' - if 'use_locking' in node.attr: del node.attr['use_locking'] + if 'use_locking' in node.attr: + del node.attr['use_locking'] elif node.op == 'AssignAdd': node.op = 'Add' - if 'use_locking' in node.attr: del node.attr['use_locking'] - + if 'use_locking' in node.attr: + del node.attr['use_locking'] + # Get the list of important nodes whitelist_names = [] for node in input_graph_def.node: - if (node.name.startswith('InceptionResnet') or node.name.startswith('embeddings') or + if (node.name.startswith('InceptionResnet') or node.name.startswith('embeddings') or node.name.startswith('image_batch') or node.name.startswith('label_batch') or node.name.startswith('phase_train') or node.name.startswith('Logits')): whitelist_names.append(node.name) @@ -89,15 +101,21 @@ def freeze_graph_def(sess, input_graph_def, output_node_names): sess, input_graph_def, output_node_names.split(","), variable_names_whitelist=whitelist_names) return output_graph_def - + + def parse_arguments(argv): parser = argparse.ArgumentParser() - - parser.add_argument('model_dir', type=str, + + 
parser.add_argument( + 'model_dir', + type=str, help='Directory containing the metagraph (.meta) file and the checkpoint (ckpt) file containing model parameters') - parser.add_argument('output_file', type=str, + parser.add_argument( + 'output_file', + type=str, help='Filename for the exported graphdef protobuf (.pb)') return parser.parse_args(argv) + if __name__ == '__main__': main(parse_arguments(sys.argv[1:])) diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py index 6333b691f..cc5e6d7ae 100644 --- a/facenet_sandberg/generate_pairs.py +++ b/facenet_sandberg/generate_pairs.py @@ -4,15 +4,29 @@ import os import random -import numpy as np from typing import List, Tuple +import numpy as np + +names = [ + d for d in os.listdir(image_dir) if os.path.isdir( + os.path.join(image_dir, d))] + + def split_people_into_sets(image_dir: str, k_num_sets: int) -> List[List[str]]: - names = [d for d in os.listdir(image_dir) if os.path.isdir(os.path.join(image_dir, d))] + names = [ + d for d in os.listdir(image_dir) if os.path.isdir( + os.path.join( + image_dir, + d))] random.shuffle(names) - return [list(arr) for arr in np.array_split(names, k_num_sets)] -def make_matches(image_dir:str , people: List[str], total_matches: int) -> List[Tuple[str, int, int]]: + +def make_matches(image_dir: str, + people: List[str], + total_matches: int) -> List[Tuple[str, + int, + int]]: matches: List[Tuple[str, int, int]] = [] curr_matches = 0 while curr_matches < total_matches: @@ -21,17 +35,24 @@ def make_matches(image_dir:str , people: List[str], total_matches: int) -> List[ if len(images) > 1: img1, img2 = sorted( [ - int(''.join([i for i in random.choice(images) if i.isnumeric()]).lstrip('0')), - int(''.join([i for i in random.choice(images) if i.isnumeric()]).lstrip('0')) + int(''.join([i for i in random.choice( + images) if i.isnumeric()]).lstrip('0')), + int(''.join([i for i in random.choice( + images) if i.isnumeric()]).lstrip('0')) ] ) match = (person, img1, img2) if (img1 != img2) and (match not in matches): matches.append(match) curr_matches += 1 - return sorted(matches, key=lambda x: x[0].lower()) -def make_mismatches(image_dir: str, people: List[str], total_matches: int) -> List[Tuple[str, int, str, int]]: + +def make_mismatches(image_dir: str, + people: List[str], + total_matches: int) -> List[Tuple[str, + int, + str, + int]]: mismatches: List[Tuple[str, int, str, int]] = [] curr_matches = 0 while curr_matches < total_matches: @@ -40,21 +61,34 @@ def make_mismatches(image_dir: str, people: List[str], total_matches: int) -> Li if person1 != person2: person1_images = os.listdir(os.path.join(image_dir, person1)) person2_images = os.listdir(os.path.join(image_dir, person2)) + img1 = int(''.join([i for i in random.choice( + person1_images) if i.isnumeric()]).lstrip('0')) + img2 = int(''.join([i for i in random.choice( + person2_images) if i.isnumeric()]).lstrip('0')) + img1 = int(''.join([i for i in random.choice( + person1_images) if i.isnumeric()]).lstrip('0')) + img2 = int(''.join([i for i in random.choice( + person2_images) if i.isnumeric()]).lstrip('0')) - if person1_images and person2_images: - img1 = int(''.join([i for i in random.choice(person1_images) if i.isnumeric()]).lstrip('0')) - img2 = int(''.join([i for i in random.choice(person2_images) if i.isnumeric()]).lstrip('0')) + if person1.lower() > person2.lower(): + person1, img1, person2, img2 = person2, img2, person1, img1 - if person1.lower() > person2.lower(): - person1, img1, person2, img2 = person2, img2, person1, 
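For reference, write_pairs() serializes the match and mismatch sets into the standard LFW pairs.txt layout: a header line with the number of sets and the number of matches/mismatches per set, then tab-separated triples for matched pairs and quadruples for mismatched pairs. An illustrative fragment (names and indices are made up):

10	15
Abel_Pacheco	1	4
Abdel_Madi_Shabneh	1	Dean_Barker	1
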
@@ -63,12 +97,14 @@ def write_pairs(fname: str, match_sets: List[List[Tuple[str, int, int]]], mismat
             file_contents += f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t{mismatch[3]}\n'
 
     with open(fname, 'w') as fpairs:
         fpairs.write(file_contents)
 
+
 if __name__ == '__main__':
     k_num_sets = 10
     total_matches_mismatches = 15
-    #image_dir = os.path.join(
+    # image_dir = os.path.join(
     #     os.path.dirname(
     #         os.path.abspath(__file__)
     #     ),
@@ -77,10 +113,30 @@ def write_pairs(fname: str, match_sets: List[List[Tuple[str, int, int]]], mismat
     people_lists = split_people_into_sets(image_dir, k_num_sets)
 
     matches = []
     mismatches = []
     for people in people_lists:
-        matches.append(make_matches(image_dir, people, total_matches_mismatches))
-        mismatches.append(make_mismatches(image_dir, people, total_matches_mismatches))
-
-    fname = '/home/miperel/redcross/facenet/data/pairs.txt'
-    write_pairs(fname, matches, mismatches, k_num_sets, total_matches_mismatches)
\ No newline at end of file
+        matches.append(
+            make_matches(image_dir, people, total_matches_mismatches))
+        mismatches.append(
+            make_mismatches(image_dir, people, total_matches_mismatches))
+
+    fname = '/home/miperel/redcross/facenet/data/pairs.txt'
+    write_pairs(
+        fname, matches, mismatches, k_num_sets, total_matches_mismatches)
diff --git a/facenet_sandberg/lfw.py b/facenet_sandberg/lfw.py
index 69297c2ee..653cdff94 100644
--- a/facenet_sandberg/lfw.py
+++ b/facenet_sandberg/lfw.py
@@ -1,20 +1,20 @@
-"""Helper for evaluation on the Labeled Faces in the Wild dataset 
+"""Helper for evaluation on the Labeled Faces in the Wild dataset
 """
 # MIT License
-# 
+#
 # Copyright (c) 2016 David Sandberg
-# 
+#
 # Permission is hereby granted, free of charge, to any person obtaining a copy
 # of this software and associated documentation files (the "Software"), to deal
 # in the Software without restriction, including without limitation the rights
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE @@ -37,30 +37,29 @@ from facenet_sandberg import facenet -def evaluate(embeddings, labels, nrof_folds=10, distance_metric=0, subtract_mean=False): +def evaluate(embeddings, labels, nrof_folds=10, + distance_metric=0, subtract_mean=False): # Calculate evaluation metrics thresholds = np.arange(0, 4, 0.01) embeddings1 = embeddings[0::2] embeddings2 = embeddings[1::2] - tpr, fpr, accuracy = facenet.calculate_roc(thresholds, embeddings1, embeddings2, - np.asarray(labels), nrof_folds=nrof_folds, - distance_metric=distance_metric, subtract_mean=subtract_mean) + tpr, fpr, accuracy = facenet.calculate_roc(thresholds, embeddings1, embeddings2, np.asarray( + labels), nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) thresholds = np.arange(0, 4, 0.001) - val, val_std, far = facenet.calculate_val(thresholds, embeddings1, embeddings2, - np.asarray(labels), 1e-3, nrof_folds=nrof_folds, - distance_metric=distance_metric, subtract_mean=subtract_mean) + val, val_std, far = facenet.calculate_val(thresholds, embeddings1, embeddings2, np.asarray( + labels), 1e-3, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) return tpr, fpr, accuracy, val, val_std, far def get_paths(lfw_dir, pairs): """Gets full paths for image pairs and labels (same person or not) - + Arguments: lfw_dir {str} -- Base directory of testing data pairs {[[str]]} -- List of pairs of form: - For same person: [name, image 1 index, image 2 index] - For different: [name 1, image index 1, name 2, image index 2] - + Returns: [(str, str)], [bool] -- list of image pair paths and labels """ @@ -70,28 +69,33 @@ def get_paths(lfw_dir, pairs): labels = [] for pair in pairs: if len(pair) == 3: - path0 = add_extension(os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1]))) - path1 = add_extension(os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[2]))) + path0 = add_extension(os.path.join( + lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1]))) + path1 = add_extension(os.path.join( + lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[2]))) is_same_person = True elif len(pair) == 4: - path0 = add_extension(os.path.join(lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1]))) - path1 = add_extension(os.path.join(lfw_dir, pair[2], pair[2] + '_' + '%04d' % int(pair[3]))) + path0 = add_extension(os.path.join( + lfw_dir, pair[0], pair[0] + '_' + '%04d' % int(pair[1]))) + path1 = add_extension(os.path.join( + lfw_dir, pair[2], pair[2] + '_' + '%04d' % int(pair[3]))) is_same_person = False - if os.path.exists(path0) and os.path.exists(path1): # Only add the pair if both paths exist - path_list += (path0,path1) + if os.path.exists(path0) and os.path.exists( + path1): # Only add the pair if both paths exist + path_list += (path0, path1) labels.append(is_same_person) else: nrof_skipped_pairs += 1 - if nrof_skipped_pairs>0: + if nrof_skipped_pairs > 0: print('Skipped %d image pairs' % nrof_skipped_pairs) - + return path_list, labels def transform_to_lfw_format(image_directory, num_processes=os.cpu_count()): """Transforms an image dataset to lfw format image names. Base directory should have a folder per person with the person's name. 
- + Arguments: image_directory {str} -- base directory of people folders """ @@ -105,7 +109,7 @@ def transform_to_lfw_format(image_directory, num_processes=os.cpu_count()): def rename(person_folder): """Renames all the images in a folder in lfw format - + Arguments: person_folder {str} -- path to folder named after person """ @@ -125,7 +129,7 @@ def add_extension(path): """Adds a image file extension to the path if it exists Arguments: - path {str} -- base path to image file + path {str} -- base path to image file Raises: RuntimeError -- [description] @@ -134,10 +138,10 @@ def add_extension(path): str -- base path plus image file extension """ - if os.path.exists(path+'.jpg'): - return path+'.jpg' - elif os.path.exists(path+'.png'): - return path+'.png' + if os.path.exists(path + '.jpg'): + return path + '.jpg' + elif os.path.exists(path + '.png'): + return path + '.png' else: raise RuntimeError('No file "%s" with extension png or jpg.' % path) @@ -146,10 +150,10 @@ def read_pairs(pairs_filename): """Reads a pairs.txt file to array. Each file line is of format: - If same person: "{person} {image 1 index} {image 2 index}" - If different: "{person 1} {image 1 index} {person 2} {image 2 index}" - + Arguments: - pairs_filename {str} -- path to pairs.txt file - + pairs_filename {str} -- path to pairs.txt file + Returns: np.ndarray -- numpy array of pairs """ diff --git a/facenet_sandberg/train_softmax.py b/facenet_sandberg/train_softmax.py index 79fa60933..d6cd2476c 100644 --- a/facenet_sandberg/train_softmax.py +++ b/facenet_sandberg/train_softmax.py @@ -1,19 +1,19 @@ """Training a face recognizer with TensorFlow using softmax cross entropy loss """ # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE @@ -22,164 +22,212 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
-from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -from datetime import datetime +import argparse +import importlib +import math import os.path -import time -import sys import random -import tensorflow as tf -import numpy as np -import importlib -import argparse -from facenet_sandberg import facenet -from facenet_sandberg import lfw +import sys +import time +from datetime import datetime + import h5py -import math +import numpy as np +import tensorflow as tf import tensorflow.contrib.slim as slim -from tensorflow.python.ops import data_flow_ops +from facenet_sandberg import facenet, lfw from tensorflow.python.framework import ops -from tensorflow.python.ops import array_ops +from tensorflow.python.ops import array_ops, data_flow_ops + def main(args): - + network = importlib.import_module(args.model_def) image_size = (args.image_size, args.image_size) subdir = datetime.strftime(datetime.now(), '%Y%m%d-%H%M%S') log_dir = os.path.join(os.path.expanduser(args.logs_base_dir), subdir) - if not os.path.isdir(log_dir): # Create the log directory if it doesn't exist + if not os.path.isdir( + log_dir): # Create the log directory if it doesn't exist os.makedirs(log_dir) model_dir = os.path.join(os.path.expanduser(args.models_base_dir), subdir) - if not os.path.isdir(model_dir): # Create the model directory if it doesn't exist + if not os.path.isdir( + model_dir): # Create the model directory if it doesn't exist os.makedirs(model_dir) stat_file_name = os.path.join(log_dir, 'stat.h5') # Write arguments to a text file - facenet.write_arguments_to_file(args, os.path.join(log_dir, 'arguments.txt')) - + facenet.write_arguments_to_file( + args, os.path.join(log_dir, 'arguments.txt')) + # Store some git revision info in a text file in the log directory - src_path,_ = os.path.split(os.path.realpath(__file__)) + src_path, _ = os.path.split(os.path.realpath(__file__)) facenet.store_revision_info(src_path, log_dir, ' '.join(sys.argv)) np.random.seed(seed=args.seed) random.seed(args.seed) dataset = facenet.get_dataset(args.data_dir) if args.filter_filename: - dataset = filter_dataset(dataset, os.path.expanduser(args.filter_filename), - args.filter_percentile, args.filter_min_nrof_images_per_class) - - if args.validation_set_split_ratio>0.0: - train_set, val_set = facenet.split_dataset(dataset, args.validation_set_split_ratio, args.min_nrof_val_images_per_class, 'SPLIT_IMAGES') + dataset = filter_dataset( + dataset, + os.path.expanduser( + args.filter_filename), + args.filter_percentile, + args.filter_min_nrof_images_per_class) + + if args.validation_set_split_ratio > 0.0: + train_set, val_set = facenet.split_dataset( + dataset, args.validation_set_split_ratio, args.min_nrof_val_images_per_class, 'SPLIT_IMAGES') else: train_set, val_set = dataset, [] - + nrof_classes = len(train_set) - + print('Model directory: %s' % model_dir) print('Log directory: %s' % log_dir) pretrained_model = None if args.pretrained_model: pretrained_model = os.path.expanduser(args.pretrained_model) print('Pre-trained model: %s' % pretrained_model) - + if args.lfw_dir: print('LFW directory: %s' % args.lfw_dir) # Read the file containing the pairs used for testing pairs = lfw.read_pairs(os.path.expanduser(args.lfw_pairs)) # Get the paths for the corresponding images - lfw_paths, actual_issame = lfw.get_paths(os.path.expanduser(args.lfw_dir), pairs) - + lfw_paths, actual_issame = lfw.get_paths( + 
os.path.expanduser(args.lfw_dir), pairs) + with tf.Graph().as_default(): tf.set_random_seed(args.seed) global_step = tf.Variable(0, trainable=False) - + # Get a list of image paths and their labels image_list, label_list = facenet.get_image_paths_and_labels(train_set) - assert len(image_list)>0, 'The training set should not be empty' - - val_image_list, val_label_list = facenet.get_image_paths_and_labels(val_set) + assert len(image_list) > 0, 'The training set should not be empty' + + val_image_list, val_label_list = facenet.get_image_paths_and_labels( + val_set) - # Create a queue that produces indices into the image_list and label_list + # Create a queue that produces indices into the image_list and + # label_list labels = ops.convert_to_tensor(label_list, dtype=tf.int32) range_size = array_ops.shape(labels)[0] - index_queue = tf.train.range_input_producer(range_size, num_epochs=None, - shuffle=True, seed=None, capacity=32) - - index_dequeue_op = index_queue.dequeue_many(args.batch_size*args.epoch_size, 'index_dequeue') - - learning_rate_placeholder = tf.placeholder(tf.float32, name='learning_rate') + index_queue = tf.train.range_input_producer( + range_size, num_epochs=None, shuffle=True, seed=None, capacity=32) + + index_dequeue_op = index_queue.dequeue_many( + args.batch_size * args.epoch_size, 'index_dequeue') + + learning_rate_placeholder = tf.placeholder( + tf.float32, name='learning_rate') batch_size_placeholder = tf.placeholder(tf.int32, name='batch_size') phase_train_placeholder = tf.placeholder(tf.bool, name='phase_train') - image_paths_placeholder = tf.placeholder(tf.string, shape=(None,1), name='image_paths') - labels_placeholder = tf.placeholder(tf.int32, shape=(None,1), name='labels') - control_placeholder = tf.placeholder(tf.int32, shape=(None,1), name='control') - + image_paths_placeholder = tf.placeholder( + tf.string, shape=(None, 1), name='image_paths') + labels_placeholder = tf.placeholder( + tf.int32, shape=(None, 1), name='labels') + control_placeholder = tf.placeholder( + tf.int32, shape=(None, 1), name='control') + nrof_preprocess_threads = 4 input_queue = data_flow_ops.FIFOQueue(capacity=2000000, - dtypes=[tf.string, tf.int32, tf.int32], - shapes=[(1,), (1,), (1,)], - shared_name=None, name=None) - enqueue_op = input_queue.enqueue_many([image_paths_placeholder, labels_placeholder, control_placeholder], name='enqueue_op') - image_batch, label_batch = facenet.create_input_pipeline(input_queue, image_size, nrof_preprocess_threads, batch_size_placeholder) + dtypes=[tf.string, + tf.int32, tf.int32], + shapes=[(1,), (1,), (1,)], + shared_name=None, name=None) + enqueue_op = input_queue.enqueue_many( + [image_paths_placeholder, labels_placeholder, control_placeholder], name='enqueue_op') + image_batch, label_batch = facenet.create_input_pipeline( + input_queue, image_size, nrof_preprocess_threads, batch_size_placeholder) image_batch = tf.identity(image_batch, 'image_batch') image_batch = tf.identity(image_batch, 'input') label_batch = tf.identity(label_batch, 'label_batch') - + print('Number of classes in training set: %d' % nrof_classes) print('Number of examples in training set: %d' % len(image_list)) print('Number of classes in validation set: %d' % len(val_set)) print('Number of examples in validation set: %d' % len(val_image_list)) - + print('Building training graph') - + # Build the inference graph - prelogits, _ = network.inference(image_batch, args.keep_probability, - phase_train=phase_train_placeholder, bottleneck_layer_size=args.embedding_size, - 
weight_decay=args.weight_decay) - logits = slim.fully_connected(prelogits, len(train_set), activation_fn=None, - weights_initializer=slim.initializers.xavier_initializer(), - weights_regularizer=slim.l2_regularizer(args.weight_decay), - scope='Logits', reuse=False) + prelogits, _ = network.inference(image_batch, args.keep_probability, + phase_train=phase_train_placeholder, bottleneck_layer_size=args.embedding_size, + weight_decay=args.weight_decay) + logits = slim.fully_connected( + prelogits, + len(train_set), + activation_fn=None, + weights_initializer=slim.initializers.xavier_initializer(), + weights_regularizer=slim.l2_regularizer( + args.weight_decay), + scope='Logits', + reuse=False) embeddings = tf.nn.l2_normalize(prelogits, 1, 1e-10, name='embeddings') # Norm for the prelogits eps = 1e-4 - prelogits_norm = tf.reduce_mean(tf.norm(tf.abs(prelogits)+eps, ord=args.prelogits_norm_p, axis=1)) - tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, prelogits_norm * args.prelogits_norm_loss_factor) + prelogits_norm = tf.reduce_mean( + tf.norm( + tf.abs(prelogits) + eps, + ord=args.prelogits_norm_p, + axis=1)) + tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, + prelogits_norm * args.prelogits_norm_loss_factor) # Add center loss - prelogits_center_loss, _ = facenet.center_loss(prelogits, label_batch, args.center_loss_alfa, nrof_classes) - tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, prelogits_center_loss * args.center_loss_factor) - - learning_rate = tf.train.exponential_decay(learning_rate_placeholder, global_step, - args.learning_rate_decay_epochs*args.epoch_size, args.learning_rate_decay_factor, staircase=True) + prelogits_center_loss, _ = facenet.center_loss( + prelogits, label_batch, args.center_loss_alfa, nrof_classes) + tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, + prelogits_center_loss * args.center_loss_factor) + + learning_rate = tf.train.exponential_decay( + learning_rate_placeholder, + global_step, + args.learning_rate_decay_epochs * + args.epoch_size, + args.learning_rate_decay_factor, + staircase=True) tf.summary.scalar('learning_rate', learning_rate) # Calculate the average cross entropy loss across the batch cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits( labels=label_batch, logits=logits, name='cross_entropy_per_example') - cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy') + cross_entropy_mean = tf.reduce_mean( + cross_entropy, name='cross_entropy') tf.add_to_collection('losses', cross_entropy_mean) - - correct_prediction = tf.cast(tf.equal(tf.argmax(logits, 1), tf.cast(label_batch, tf.int64)), tf.float32) + + correct_prediction = tf.cast( + tf.equal( + tf.argmax( + logits, 1), tf.cast( + label_batch, tf.int64)), tf.float32) accuracy = tf.reduce_mean(correct_prediction) - + # Calculate the total losses - regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) - total_loss = tf.add_n([cross_entropy_mean] + regularization_losses, name='total_loss') + regularization_losses = tf.get_collection( + tf.GraphKeys.REGULARIZATION_LOSSES) + total_loss = tf.add_n([cross_entropy_mean] + + regularization_losses, name='total_loss') + + # Build a Graph that trains the model with one batch of examples and + # updates the model parameters + train_op = facenet.train( + total_loss, + global_step, + args.optimizer, + learning_rate, + args.moving_average_decay, + tf.global_variables(), + args.log_histograms) - # Build a Graph that trains the model with one batch of examples and updates the model 
parameters - train_op = facenet.train(total_loss, global_step, args.optimizer, - learning_rate, args.moving_average_decay, tf.global_variables(), args.log_histograms) - # Create a saver saver = tf.train.Saver(tf.trainable_variables(), max_to_keep=3) @@ -187,8 +235,10 @@ def main(args): summary_op = tf.summary.merge_all() # Start running operations on the Graph. - gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_memory_fraction) - sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False)) + gpu_options = tf.GPUOptions( + per_process_gpu_memory_fraction=args.gpu_memory_fraction) + sess = tf.Session(config=tf.ConfigProto( + gpu_options=gpu_options, log_device_placement=False)) sess.run(tf.global_variables_initializer()) sess.run(tf.local_variables_initializer()) summary_writer = tf.summary.FileWriter(log_dir, sess.graph) @@ -203,8 +253,11 @@ def main(args): # Training and validation loop print('Running training') - nrof_steps = args.max_nrof_epochs*args.epoch_size - nrof_val_samples = int(math.ceil(args.max_nrof_epochs / args.validate_every_n_epochs)) # Validate every validate_every_n_epochs as well as in the last epoch + nrof_steps = args.max_nrof_epochs * args.epoch_size + # Validate every validate_every_n_epochs as well as in the last + # epoch + nrof_val_samples = int( + math.ceil(args.max_nrof_epochs / args.validate_every_n_epochs)) stat = { 'loss': np.zeros((nrof_steps,), np.float32), 'center_loss': np.zeros((nrof_steps,), np.float32), @@ -222,61 +275,135 @@ def main(args): 'time_validate': np.zeros((args.max_nrof_epochs,), np.float32), 'time_evaluate': np.zeros((args.max_nrof_epochs,), np.float32), 'prelogits_hist': np.zeros((args.max_nrof_epochs, 1000), np.float32), - } - for epoch in range(1,args.max_nrof_epochs+1): + } + for epoch in range(1, args.max_nrof_epochs + 1): step = sess.run(global_step, feed_dict=None) # Train for one epoch t = time.time() - cont = train(args, sess, epoch, image_list, label_list, index_dequeue_op, enqueue_op, image_paths_placeholder, labels_placeholder, - learning_rate_placeholder, phase_train_placeholder, batch_size_placeholder, control_placeholder, global_step, - total_loss, train_op, summary_op, summary_writer, regularization_losses, args.learning_rate_schedule_file, - stat, cross_entropy_mean, accuracy, learning_rate, - prelogits, prelogits_center_loss, args.random_rotate, args.random_crop, args.random_flip, prelogits_norm, args.prelogits_hist_max, args.use_fixed_image_standardization) - stat['time_train'][epoch-1] = time.time() - t - + cont = train( + args, + sess, + epoch, + image_list, + label_list, + index_dequeue_op, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + learning_rate_placeholder, + phase_train_placeholder, + batch_size_placeholder, + control_placeholder, + global_step, + total_loss, + train_op, + summary_op, + summary_writer, + regularization_losses, + args.learning_rate_schedule_file, + stat, + cross_entropy_mean, + accuracy, + learning_rate, + prelogits, + prelogits_center_loss, + args.random_rotate, + args.random_crop, + args.random_flip, + prelogits_norm, + args.prelogits_hist_max, + args.use_fixed_image_standardization) + stat['time_train'][epoch - 1] = time.time() - t + if not cont: break - + t = time.time() - if len(val_image_list)>0 and ((epoch-1) % args.validate_every_n_epochs == args.validate_every_n_epochs-1 or epoch==args.max_nrof_epochs): - validate(args, sess, epoch, val_image_list, val_label_list, enqueue_op, image_paths_placeholder, 
labels_placeholder, control_placeholder, - phase_train_placeholder, batch_size_placeholder, - stat, total_loss, regularization_losses, cross_entropy_mean, accuracy, args.validate_every_n_epochs, args.use_fixed_image_standardization) - stat['time_validate'][epoch-1] = time.time() - t + if len(val_image_list) > 0 and ( + (epoch - + 1) % + args.validate_every_n_epochs == args.validate_every_n_epochs - + 1 or epoch == args.max_nrof_epochs): + validate( + args, + sess, + epoch, + val_image_list, + val_label_list, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + control_placeholder, + phase_train_placeholder, + batch_size_placeholder, + stat, + total_loss, + regularization_losses, + cross_entropy_mean, + accuracy, + args.validate_every_n_epochs, + args.use_fixed_image_standardization) + stat['time_validate'][epoch - 1] = time.time() - t # Save variables and the metagraph if it doesn't exist already - save_variables_and_metagraph(sess, saver, summary_writer, model_dir, subdir, epoch) + save_variables_and_metagraph( + sess, saver, summary_writer, model_dir, subdir, epoch) # Evaluate on LFW t = time.time() if args.lfw_dir: - evaluate(sess, enqueue_op, image_paths_placeholder, labels_placeholder, phase_train_placeholder, batch_size_placeholder, control_placeholder, - embeddings, label_batch, lfw_paths, actual_issame, args.lfw_batch_size, args.lfw_nrof_folds, log_dir, step, summary_writer, stat, epoch, - args.lfw_distance_metric, args.lfw_subtract_mean, args.lfw_use_flipped_images, args.use_fixed_image_standardization) - stat['time_evaluate'][epoch-1] = time.time() - t + evaluate( + sess, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + phase_train_placeholder, + batch_size_placeholder, + control_placeholder, + embeddings, + label_batch, + lfw_paths, + actual_issame, + args.lfw_batch_size, + args.lfw_nrof_folds, + log_dir, + step, + summary_writer, + stat, + epoch, + args.lfw_distance_metric, + args.lfw_subtract_mean, + args.lfw_use_flipped_images, + args.use_fixed_image_standardization) + stat['time_evaluate'][epoch - 1] = time.time() - t print('Saving statistics') with h5py.File(stat_file_name, 'w') as f: for key, value in stat.iteritems(): f.create_dataset(key, data=value) - + return model_dir - + + def find_threshold(var, percentile): hist, bin_edges = np.histogram(var, 100) cdf = np.float32(np.cumsum(hist)) / np.sum(hist) - bin_centers = (bin_edges[:-1]+bin_edges[1:])/2 - #plt.plot(bin_centers, cdf) - threshold = np.interp(percentile*0.01, cdf, bin_centers) + bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2 + # plt.plot(bin_centers, cdf) + threshold = np.interp(percentile * 0.01, cdf, bin_centers) return threshold - -def filter_dataset(dataset, data_filename, percentile, min_nrof_images_per_class): - with h5py.File(data_filename,'r') as f: + + +def filter_dataset(dataset, data_filename, percentile, + min_nrof_images_per_class): + with h5py.File(data_filename, 'r') as f: distance_to_center = np.array(f.get('distance_to_center')) label_list = np.array(f.get('label_list')) image_list = np.array(f.get('image_list')) - distance_to_center_threshold = find_threshold(distance_to_center, percentile) - indices = np.where(distance_to_center>=distance_to_center_threshold)[0] + distance_to_center_threshold = find_threshold( + distance_to_center, percentile) + indices = np.where(distance_to_center >= + distance_to_center_threshold)[0] filtered_dataset = dataset removelist = [] for i in indices: @@ -284,7 +411,8 @@ def filter_dataset(dataset, data_filename, percentile, 
min_nrof_images_per_class image = image_list[i] if image in filtered_dataset[label].image_paths: filtered_dataset[label].image_paths.remove(image) - if len(filtered_dataset[label].image_paths)0.0: + + if args.learning_rate > 0.0: lr = args.learning_rate else: - lr = facenet.get_learning_rate_from_file(learning_rate_schedule_file, epoch) - - if lr<=0: - return False + lr = facenet.get_learning_rate_from_file( + learning_rate_schedule_file, epoch) + + if lr <= 0: + return False index_epoch = sess.run(index_dequeue_op) label_epoch = np.array(label_list)[index_epoch] image_epoch = np.array(image_list)[index_epoch] - + # Enqueue one epoch of image paths and labels - labels_array = np.expand_dims(np.array(label_epoch),1) - image_paths_array = np.expand_dims(np.array(image_epoch),1) - control_value = facenet.RANDOM_ROTATE * random_rotate + facenet.RANDOM_CROP * random_crop + facenet.RANDOM_FLIP * random_flip + facenet.FIXED_STANDARDIZATION * use_fixed_image_standardization + labels_array = np.expand_dims(np.array(label_epoch), 1) + image_paths_array = np.expand_dims(np.array(image_epoch), 1) + control_value = facenet.RANDOM_ROTATE * random_rotate + facenet.RANDOM_CROP * random_crop + \ + facenet.RANDOM_FLIP * random_flip + \ + facenet.FIXED_STANDARDIZATION * use_fixed_image_standardization control_array = np.ones_like(labels_array) * control_value - sess.run(enqueue_op, {image_paths_placeholder: image_paths_array, labels_placeholder: labels_array, control_placeholder: control_array}) + sess.run(enqueue_op, + {image_paths_placeholder: image_paths_array, + labels_placeholder: labels_array, + control_placeholder: control_array}) # Training loop train_time = 0 while batch_number < args.epoch_size: start_time = time.time() - feed_dict = {learning_rate_placeholder: lr, phase_train_placeholder:True, batch_size_placeholder:args.batch_size} - tensor_list = [loss, train_op, step, reg_losses, prelogits, cross_entropy_mean, learning_rate, prelogits_norm, accuracy, prelogits_center_loss] + feed_dict = { + learning_rate_placeholder: lr, + phase_train_placeholder: True, + batch_size_placeholder: args.batch_size} + tensor_list = [ + loss, + train_op, + step, + reg_losses, + prelogits, + cross_entropy_mean, + learning_rate, + prelogits_norm, + accuracy, + prelogits_center_loss] if batch_number % 100 == 0: - loss_, _, step_, reg_losses_, prelogits_, cross_entropy_mean_, lr_, prelogits_norm_, accuracy_, center_loss_, summary_str = sess.run(tensor_list + [summary_op], feed_dict=feed_dict) + loss_, _, step_, reg_losses_, prelogits_, cross_entropy_mean_, lr_, prelogits_norm_, accuracy_, center_loss_, summary_str = sess.run( + tensor_list + [summary_op], feed_dict=feed_dict) summary_writer.add_summary(summary_str, global_step=step_) else: - loss_, _, step_, reg_losses_, prelogits_, cross_entropy_mean_, lr_, prelogits_norm_, accuracy_, center_loss_ = sess.run(tensor_list, feed_dict=feed_dict) - + loss_, _, step_, reg_losses_, prelogits_, cross_entropy_mean_, lr_, prelogits_norm_, accuracy_, center_loss_ = sess.run( + tensor_list, feed_dict=feed_dict) + duration = time.time() - start_time - stat['loss'][step_-1] = loss_ - stat['center_loss'][step_-1] = center_loss_ - stat['reg_loss'][step_-1] = np.sum(reg_losses_) - stat['xent_loss'][step_-1] = cross_entropy_mean_ - stat['prelogits_norm'][step_-1] = prelogits_norm_ - stat['learning_rate'][epoch-1] = lr_ - stat['accuracy'][step_-1] = accuracy_ - stat['prelogits_hist'][epoch-1,:] += np.histogram(np.minimum(np.abs(prelogits_), prelogits_hist_max), bins=1000, range=(0.0, 
prelogits_hist_max))[0] - + stat['loss'][step_ - 1] = loss_ + stat['center_loss'][step_ - 1] = center_loss_ + stat['reg_loss'][step_ - 1] = np.sum(reg_losses_) + stat['xent_loss'][step_ - 1] = cross_entropy_mean_ + stat['prelogits_norm'][step_ - 1] = prelogits_norm_ + stat['learning_rate'][epoch - 1] = lr_ + stat['accuracy'][step_ - 1] = accuracy_ + stat['prelogits_hist'][epoch - 1, + :] += np.histogram(np.minimum(np.abs(prelogits_), + prelogits_hist_max), + bins=1000, + range=(0.0, + prelogits_hist_max))[0] + duration = time.time() - start_time - print('Epoch: [%d][%d/%d]\tTime %.3f\tLoss %2.3f\tXent %2.3f\tRegLoss %2.3f\tAccuracy %2.3f\tLr %2.5f\tCl %2.3f' % - (epoch, batch_number+1, args.epoch_size, duration, loss_, cross_entropy_mean_, np.sum(reg_losses_), accuracy_, lr_, center_loss_)) + print( + 'Epoch: [%d][%d/%d]\tTime %.3f\tLoss %2.3f\tXent %2.3f\tRegLoss %2.3f\tAccuracy %2.3f\tLr %2.5f\tCl %2.3f' % + (epoch, + batch_number + + 1, + args.epoch_size, + duration, + loss_, + cross_entropy_mean_, + np.sum(reg_losses_), + accuracy_, + lr_, + center_loss_)) batch_number += 1 train_time += duration # Add validation loss and accuracy to summary summary = tf.Summary() - #pylint: disable=maybe-no-member + # pylint: disable=maybe-no-member summary.value.add(tag='time/total', simple_value=train_time) summary_writer.add_summary(summary, global_step=step_) return True -def validate(args, sess, epoch, image_list, label_list, enqueue_op, image_paths_placeholder, labels_placeholder, control_placeholder, - phase_train_placeholder, batch_size_placeholder, - stat, loss, regularization_losses, cross_entropy_mean, accuracy, validate_every_n_epochs, use_fixed_image_standardization): - + +def validate( + args, + sess, + epoch, + image_list, + label_list, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + control_placeholder, + phase_train_placeholder, + batch_size_placeholder, + stat, + loss, + regularization_losses, + cross_entropy_mean, + accuracy, + validate_every_n_epochs, + use_fixed_image_standardization): + print('Running forward pass on validation set') nrof_batches = len(label_list) // args.lfw_batch_size nrof_images = nrof_batches * args.lfw_batch_size - + # Enqueue one epoch of image paths and labels - labels_array = np.expand_dims(np.array(label_list[:nrof_images]),1) - image_paths_array = np.expand_dims(np.array(image_list[:nrof_images]),1) - control_array = np.ones_like(labels_array, np.int32)*facenet.FIXED_STANDARDIZATION * use_fixed_image_standardization - sess.run(enqueue_op, {image_paths_placeholder: image_paths_array, labels_placeholder: labels_array, control_placeholder: control_array}) + labels_array = np.expand_dims(np.array(label_list[:nrof_images]), 1) + image_paths_array = np.expand_dims(np.array(image_list[:nrof_images]), 1) + control_array = np.ones_like( + labels_array, + np.int32) * facenet.FIXED_STANDARDIZATION * use_fixed_image_standardization + sess.run(enqueue_op, + {image_paths_placeholder: image_paths_array, + labels_placeholder: labels_array, + control_placeholder: control_array}) loss_array = np.zeros((nrof_batches,), np.float32) xent_array = np.zeros((nrof_batches,), np.float32) @@ -375,9 +591,12 @@ def validate(args, sess, epoch, image_list, label_list, enqueue_op, image_paths_ # Training loop start_time = time.time() for i in range(nrof_batches): - feed_dict = {phase_train_placeholder:False, batch_size_placeholder:args.lfw_batch_size} - loss_, cross_entropy_mean_, accuracy_ = sess.run([loss, cross_entropy_mean, accuracy], feed_dict=feed_dict) - 
loss_array[i], xent_array[i], accuracy_array[i] = (loss_, cross_entropy_mean_, accuracy_) + feed_dict = {phase_train_placeholder: False, + batch_size_placeholder: args.lfw_batch_size} + loss_, cross_entropy_mean_, accuracy_ = sess.run( + [loss, cross_entropy_mean, accuracy], feed_dict=feed_dict) + loss_array[i], xent_array[i], accuracy_array[i] = ( + loss_, cross_entropy_mean_, accuracy_) if i % 10 == 9: print('.', end='') sys.stdout.flush() @@ -385,42 +604,70 @@ def validate(args, sess, epoch, image_list, label_list, enqueue_op, image_paths_ duration = time.time() - start_time - val_index = (epoch-1)//validate_every_n_epochs + val_index = (epoch - 1) // validate_every_n_epochs stat['val_loss'][val_index] = np.mean(loss_array) stat['val_xent_loss'][val_index] = np.mean(xent_array) stat['val_accuracy'][val_index] = np.mean(accuracy_array) - print('Validation Epoch: %d\tTime %.3f\tLoss %2.3f\tXent %2.3f\tAccuracy %2.3f' % - (epoch, duration, np.mean(loss_array), np.mean(xent_array), np.mean(accuracy_array))) - - -def evaluate(sess, enqueue_op, image_paths_placeholder, labels_placeholder, phase_train_placeholder, batch_size_placeholder, control_placeholder, - embeddings, labels, image_paths, actual_issame, batch_size, nrof_folds, log_dir, step, summary_writer, stat, epoch, distance_metric, subtract_mean, use_flipped_images, use_fixed_image_standardization): + print('Validation Epoch: %d\tTime %.3f\tLoss %2.3f\tXent %2.3f\tAccuracy %2.3f' % ( + epoch, duration, np.mean(loss_array), np.mean(xent_array), np.mean(accuracy_array))) + + +def evaluate( + sess, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + phase_train_placeholder, + batch_size_placeholder, + control_placeholder, + embeddings, + labels, + image_paths, + actual_issame, + batch_size, + nrof_folds, + log_dir, + step, + summary_writer, + stat, + epoch, + distance_metric, + subtract_mean, + use_flipped_images, + use_fixed_image_standardization): start_time = time.time() # Run forward pass to calculate embeddings print('Runnning forward pass on LFW images') - + # Enqueue one epoch of image paths and labels - nrof_embeddings = len(actual_issame)*2 # nrof_pairs * nrof_images_per_pair + # nrof_pairs * nrof_images_per_pair + nrof_embeddings = len(actual_issame) * 2 nrof_flips = 2 if use_flipped_images else 1 nrof_images = nrof_embeddings * nrof_flips - labels_array = np.expand_dims(np.arange(0,nrof_images),1) - image_paths_array = np.expand_dims(np.repeat(np.array(image_paths),nrof_flips),1) + labels_array = np.expand_dims(np.arange(0, nrof_images), 1) + image_paths_array = np.expand_dims( + np.repeat(np.array(image_paths), nrof_flips), 1) control_array = np.zeros_like(labels_array, np.int32) if use_fixed_image_standardization: - control_array += np.ones_like(labels_array)*facenet.FIXED_STANDARDIZATION + control_array += np.ones_like(labels_array) * \ + facenet.FIXED_STANDARDIZATION if use_flipped_images: # Flip every second image - control_array += (labels_array % 2)*facenet.FLIP - sess.run(enqueue_op, {image_paths_placeholder: image_paths_array, labels_placeholder: labels_array, control_placeholder: control_array}) - + control_array += (labels_array % 2) * facenet.FLIP + sess.run(enqueue_op, + {image_paths_placeholder: image_paths_array, + labels_placeholder: labels_array, + control_placeholder: control_array}) + embedding_size = int(embeddings.get_shape()[1]) assert nrof_images % batch_size == 0, 'The number of LFW images must be an integer multiple of the LFW batch size' nrof_batches = nrof_images // batch_size emb_array 
= np.zeros((nrof_images, embedding_size)) lab_array = np.zeros((nrof_images,)) for i in range(nrof_batches): - feed_dict = {phase_train_placeholder:False, batch_size_placeholder:batch_size} + feed_dict = {phase_train_placeholder: False, + batch_size_placeholder: batch_size} emb, lab = sess.run([embeddings, labels], feed_dict=feed_dict) lab_array[lab] = lab emb_array[lab, :] = emb @@ -428,33 +675,38 @@ def evaluate(sess, enqueue_op, image_paths_placeholder, labels_placeholder, phas print('.', end='') sys.stdout.flush() print('') - embeddings = np.zeros((nrof_embeddings, embedding_size*nrof_flips)) + embeddings = np.zeros((nrof_embeddings, embedding_size * nrof_flips)) if use_flipped_images: - # Concatenate embeddings for flipped and non flipped version of the images - embeddings[:,:embedding_size] = emb_array[0::2,:] - embeddings[:,embedding_size:] = emb_array[1::2,:] + # Concatenate embeddings for flipped and non flipped version of the + # images + embeddings[:, :embedding_size] = emb_array[0::2, :] + embeddings[:, embedding_size:] = emb_array[1::2, :] else: embeddings = emb_array - assert np.array_equal(lab_array, np.arange(nrof_images))==True, 'Wrong labels used for evaluation, possibly caused by training examples left in the input pipeline' - _, _, accuracy, val, val_std, far = lfw.evaluate(embeddings, actual_issame, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) - + assert np.array_equal(lab_array, np.arange( + nrof_images)), 'Wrong labels used for evaluation, possibly caused by training examples left in the input pipeline' + _, _, accuracy, val, val_std, far = lfw.evaluate( + embeddings, actual_issame, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) + print('Accuracy: %2.5f+-%2.5f' % (np.mean(accuracy), np.std(accuracy))) print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far)) lfw_time = time.time() - start_time # Add validation loss and accuracy to summary summary = tf.Summary() - #pylint: disable=maybe-no-member + # pylint: disable=maybe-no-member summary.value.add(tag='lfw/accuracy', simple_value=np.mean(accuracy)) summary.value.add(tag='lfw/val_rate', simple_value=val) summary.value.add(tag='time/lfw', simple_value=lfw_time) summary_writer.add_summary(summary, step) - with open(os.path.join(log_dir,'lfw_result.txt'),'at') as f: + with open(os.path.join(log_dir, 'lfw_result.txt'), 'at') as f: f.write('%d\t%.5f\t%.5f\n' % (step, np.mean(accuracy), val)) - stat['lfw_accuracy'][epoch-1] = np.mean(accuracy) - stat['lfw_valrate'][epoch-1] = val + stat['lfw_accuracy'][epoch - 1] = np.mean(accuracy) + stat['lfw_valrate'][epoch - 1] = val + -def save_variables_and_metagraph(sess, saver, summary_writer, model_dir, model_name, step): +def save_variables_and_metagraph( + sess, saver, summary_writer, model_dir, model_name, step): # Save the model checkpoint print('Saving variables') start_time = time.time() @@ -463,7 +715,7 @@ def save_variables_and_metagraph(sess, saver, summary_writer, model_dir, model_n save_time_variables = time.time() - start_time print('Variables saved in %.2f seconds' % save_time_variables) metagraph_filename = os.path.join(model_dir, 'model-%s.meta' % model_name) - save_time_metagraph = 0 + save_time_metagraph = 0 if not os.path.exists(metagraph_filename): print('Saving metagraph') start_time = time.time() @@ -471,110 +723,215 @@ def save_variables_and_metagraph(sess, saver, summary_writer, model_dir, model_n save_time_metagraph = time.time() - start_time print('Metagraph saved 
in %.2f seconds' % save_time_metagraph) summary = tf.Summary() - #pylint: disable=maybe-no-member - summary.value.add(tag='time/save_variables', simple_value=save_time_variables) - summary.value.add(tag='time/save_metagraph', simple_value=save_time_metagraph) + # pylint: disable=maybe-no-member + summary.value.add(tag='time/save_variables', + simple_value=save_time_variables) + summary.value.add(tag='time/save_metagraph', + simple_value=save_time_metagraph) summary_writer.add_summary(summary, step) - + def parse_arguments(argv): parser = argparse.ArgumentParser() - - parser.add_argument('--logs_base_dir', type=str, - help='Directory where to write event logs.', default='~/logs/facenet') - parser.add_argument('--models_base_dir', type=str, - help='Directory where to write trained models and checkpoints.', default='~/models/facenet') - parser.add_argument('--gpu_memory_fraction', type=float, - help='Upper bound on the amount of GPU memory that will be used by the process.', default=1.0) + + parser.add_argument( + '--logs_base_dir', + type=str, + help='Directory where to write event logs.', + default='~/logs/facenet') + parser.add_argument( + '--models_base_dir', + type=str, + help='Directory where to write trained models and checkpoints.', + default='~/models/facenet') + parser.add_argument( + '--gpu_memory_fraction', + type=float, + help='Upper bound on the amount of GPU memory that will be used by the process.', + default=1.0) parser.add_argument('--pretrained_model', type=str, - help='Load a pretrained model before training starts.') - parser.add_argument('--data_dir', type=str, + help='Load a pretrained model before training starts.') + parser.add_argument( + '--data_dir', + type=str, help='Path to the data directory containing aligned face patches.', default='~/datasets/casia/casia_maxpy_mtcnnalign_182_160') - parser.add_argument('--model_def', type=str, - help='Model definition. Points to a module containing the definition of the inference graph.', default='models.inception_resnet_v1') + parser.add_argument( + '--model_def', + type=str, + help='Model definition. Points to a module containing the definition of the inference graph.', + default='models.inception_resnet_v1') parser.add_argument('--max_nrof_epochs', type=int, - help='Number of epochs to run.', default=500) - parser.add_argument('--batch_size', type=int, - help='Number of images to process in a batch.', default=90) - parser.add_argument('--image_size', type=int, - help='Image size (height, width) in pixels.', default=160) + help='Number of epochs to run.', default=500) + parser.add_argument( + '--batch_size', + type=int, + help='Number of images to process in a batch.', + default=90) + parser.add_argument( + '--image_size', + type=int, + help='Image size (height, width) in pixels.', + default=160) parser.add_argument('--epoch_size', type=int, - help='Number of batches per epoch.', default=1000) + help='Number of batches per epoch.', default=1000) parser.add_argument('--embedding_size', type=int, - help='Dimensionality of the embedding.', default=128) - parser.add_argument('--random_crop', + help='Dimensionality of the embedding.', default=128) + parser.add_argument( + '--random_crop', help='Performs random cropping of training images. If false, the center image_size pixels from the training images are used. 
' + - 'If the size of the images in the data directory is equal to image_size no cropping is performed', action='store_true') - parser.add_argument('--random_flip', - help='Performs random horizontal flipping of training images.', action='store_true') - parser.add_argument('--random_rotate', - help='Performs random rotations of training images.', action='store_true') - parser.add_argument('--use_fixed_image_standardization', - help='Performs fixed standardization of images.', action='store_true') - parser.add_argument('--keep_probability', type=float, - help='Keep probability of dropout for the fully connected layer(s).', default=1.0) + 'If the size of the images in the data directory is equal to image_size no cropping is performed', + action='store_true') + parser.add_argument( + '--random_flip', + help='Performs random horizontal flipping of training images.', + action='store_true') + parser.add_argument( + '--random_rotate', + help='Performs random rotations of training images.', + action='store_true') + parser.add_argument( + '--use_fixed_image_standardization', + help='Performs fixed standardization of images.', + action='store_true') + parser.add_argument( + '--keep_probability', + type=float, + help='Keep probability of dropout for the fully connected layer(s).', + default=1.0) parser.add_argument('--weight_decay', type=float, - help='L2 weight regularization.', default=0.0) + help='L2 weight regularization.', default=0.0) parser.add_argument('--center_loss_factor', type=float, - help='Center loss factor.', default=0.0) - parser.add_argument('--center_loss_alfa', type=float, - help='Center update rate for center loss.', default=0.95) - parser.add_argument('--prelogits_norm_loss_factor', type=float, - help='Loss based on the norm of the activations in the prelogits layer.', default=0.0) - parser.add_argument('--prelogits_norm_p', type=float, - help='Norm to use for prelogits norm loss.', default=1.0) - parser.add_argument('--prelogits_hist_max', type=float, - help='The max value for the prelogits histogram.', default=10.0) - parser.add_argument('--optimizer', type=str, choices=['ADAGRAD', 'ADADELTA', 'ADAM', 'RMSPROP', 'MOM'], - help='The optimization algorithm to use', default='ADAGRAD') - parser.add_argument('--learning_rate', type=float, + help='Center loss factor.', default=0.0) + parser.add_argument( + '--center_loss_alfa', + type=float, + help='Center update rate for center loss.', + default=0.95) + parser.add_argument( + '--prelogits_norm_loss_factor', + type=float, + help='Loss based on the norm of the activations in the prelogits layer.', + default=0.0) + parser.add_argument( + '--prelogits_norm_p', + type=float, + help='Norm to use for prelogits norm loss.', + default=1.0) + parser.add_argument( + '--prelogits_hist_max', + type=float, + help='The max value for the prelogits histogram.', + default=10.0) + parser.add_argument( + '--optimizer', + type=str, + choices=[ + 'ADAGRAD', + 'ADADELTA', + 'ADAM', + 'RMSPROP', + 'MOM'], + help='The optimization algorithm to use', + default='ADAGRAD') + parser.add_argument( + '--learning_rate', + type=float, help='Initial learning rate. 
If set to a negative value a learning rate ' +
-        'schedule can be specified in the file "learning_rate_schedule.txt"', default=0.1)
-    parser.add_argument('--learning_rate_decay_epochs', type=int,
-        help='Number of epochs between learning rate decay.', default=100)
+        'schedule can be specified in the file "learning_rate_schedule.txt"',
+        default=0.1)
+    parser.add_argument(
+        '--learning_rate_decay_epochs',
+        type=int,
+        help='Number of epochs between learning rate decay.',
+        default=100)
     parser.add_argument('--learning_rate_decay_factor', type=float,
-        help='Learning rate decay factor.', default=1.0)
+                        help='Learning rate decay factor.', default=1.0)
-    parser.add_argument('--moving_average_decay', type=float,
-        help='Exponential decay for tracking of training parameters.', default=0.9999)
+    parser.add_argument(
+        '--moving_average_decay',
+        type=float,
+        help='Exponential decay for tracking of training parameters.',
+        default=0.9999)
     parser.add_argument('--seed', type=int,
-        help='Random seed.', default=666)
-    parser.add_argument('--nrof_preprocess_threads', type=int,
-        help='Number of preprocessing (data loading and augmentation) threads.', default=4)
-    parser.add_argument('--log_histograms',
-        help='Enables logging of weight/bias histograms in tensorboard.', action='store_true')
-    parser.add_argument('--learning_rate_schedule_file', type=str,
-        help='File containing the learning rate schedule that is used when learning_rate is set to to -1.', default='data/learning_rate_schedule.txt')
-    parser.add_argument('--filter_filename', type=str,
-        help='File containing image data used for dataset filtering', default='')
-    parser.add_argument('--filter_percentile', type=float,
-        help='Keep only the percentile images closed to its class center', default=100.0)
-    parser.add_argument('--filter_min_nrof_images_per_class', type=int,
-        help='Keep only the classes with this number of examples or more', default=0)
+                        help='Random seed.', default=666)
+    parser.add_argument(
+        '--nrof_preprocess_threads',
+        type=int,
+        help='Number of preprocessing (data loading and augmentation) threads.',
+        default=4)
+    parser.add_argument(
+        '--log_histograms',
+        help='Enables logging of weight/bias histograms in tensorboard.',
+        action='store_true')
+    parser.add_argument(
+        '--learning_rate_schedule_file',
+        type=str,
+        help='File containing the learning rate schedule that is used when learning_rate is set to -1.',
+        default='data/learning_rate_schedule.txt')
+    parser.add_argument(
+        '--filter_filename',
+        type=str,
+        help='File containing image data used for dataset filtering',
+        default='')
+    parser.add_argument(
+        '--filter_percentile',
+        type=float,
+        help='Keep only the percentile of images closest to its class center',
+        default=100.0)
+    parser.add_argument(
+        '--filter_min_nrof_images_per_class',
+        type=int,
+        help='Keep only the classes with this number of examples or more',
+        default=0)
     parser.add_argument('--validate_every_n_epochs', type=int,
-        help='Number of epoch between validation', default=5)
-    parser.add_argument('--validation_set_split_ratio', type=float,
-        help='The ratio of the total dataset to use for validation', default=0.0)
-    parser.add_argument('--min_nrof_val_images_per_class', type=float,
-        help='Classes with fewer images will be removed from the validation set', default=0)
-
+                        help='Number of epochs between validation', default=5)
+    parser.add_argument(
+        '--validation_set_split_ratio',
+        type=float,
+        help='The ratio of the total dataset to use for validation',
+        default=0.0)
+
parser.add_argument( + '--min_nrof_val_images_per_class', + type=float, + help='Classes with fewer images will be removed from the validation set', + default=0) + # Parameters for validation on LFW - parser.add_argument('--lfw_pairs', type=str, - help='The file containing the pairs to use for validation.', default='data/pairs.txt') - parser.add_argument('--lfw_dir', type=str, - help='Path to the data directory containing aligned face patches.', default='') - parser.add_argument('--lfw_batch_size', type=int, - help='Number of images to process in a batch in the LFW test set.', default=100) - parser.add_argument('--lfw_nrof_folds', type=int, - help='Number of folds to use for cross validation. Mainly used for testing.', default=10) - parser.add_argument('--lfw_distance_metric', type=int, - help='Type of distance metric to use. 0: Euclidian, 1:Cosine similarity distance.', default=0) - parser.add_argument('--lfw_use_flipped_images', - help='Concatenates embeddings for the image and its horizontally flipped counterpart.', action='store_true') - parser.add_argument('--lfw_subtract_mean', - help='Subtract feature mean before calculating distance.', action='store_true') + parser.add_argument( + '--lfw_pairs', + type=str, + help='The file containing the pairs to use for validation.', + default='data/pairs.txt') + parser.add_argument( + '--lfw_dir', + type=str, + help='Path to the data directory containing aligned face patches.', + default='') + parser.add_argument( + '--lfw_batch_size', + type=int, + help='Number of images to process in a batch in the LFW test set.', + default=100) + parser.add_argument( + '--lfw_nrof_folds', + type=int, + help='Number of folds to use for cross validation. Mainly used for testing.', + default=10) + parser.add_argument( + '--lfw_distance_metric', + type=int, + help='Type of distance metric to use. 0: Euclidian, 1:Cosine similarity distance.', + default=0) + parser.add_argument( + '--lfw_use_flipped_images', + help='Concatenates embeddings for the image and its horizontally flipped counterpart.', + action='store_true') + parser.add_argument( + '--lfw_subtract_mean', + help='Subtract feature mean before calculating distance.', + action='store_true') return parser.parse_args(argv) - + if __name__ == '__main__': main(parse_arguments(sys.argv[1:])) diff --git a/facenet_sandberg/train_tripletloss.py b/facenet_sandberg/train_tripletloss.py index d10c8d3f8..2ba136332 100644 --- a/facenet_sandberg/train_tripletloss.py +++ b/facenet_sandberg/train_tripletloss.py @@ -2,19 +2,19 @@ FaceNet: A Unified Embedding for Face Recognition and Clustering: http://arxiv.org/abs/1503.03832 """ # MIT License -# +# # Copyright (c) 2016 David Sandberg -# +# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: -# +# # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. -# +# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE @@ -23,81 +23,89 @@ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function +from __future__ import absolute_import, division, print_function -from datetime import datetime -import os.path -import time -import sys -import tensorflow as tf -import numpy as np +import argparse import importlib import itertools -import argparse -from facenet_sandberg import facenet -from facenet_sandberg import lfw +import os.path +import sys +import time +from datetime import datetime +import numpy as np +import tensorflow as tf +from facenet_sandberg import facenet, lfw +from six.moves import xrange # @UnresolvedImport from tensorflow.python.ops import data_flow_ops -from six.moves import xrange # @UnresolvedImport def main(args): - + network = importlib.import_module(args.model_def) subdir = datetime.strftime(datetime.now(), '%Y%m%d-%H%M%S') log_dir = os.path.join(os.path.expanduser(args.logs_base_dir), subdir) - if not os.path.isdir(log_dir): # Create the log directory if it doesn't exist + if not os.path.isdir( + log_dir): # Create the log directory if it doesn't exist os.makedirs(log_dir) model_dir = os.path.join(os.path.expanduser(args.models_base_dir), subdir) - if not os.path.isdir(model_dir): # Create the model directory if it doesn't exist + if not os.path.isdir( + model_dir): # Create the model directory if it doesn't exist os.makedirs(model_dir) # Write arguments to a text file - facenet.write_arguments_to_file(args, os.path.join(log_dir, 'arguments.txt')) - + facenet.write_arguments_to_file( + args, os.path.join( + log_dir, 'arguments.txt')) + # Store some git revision info in a text file in the log directory - src_path,_ = os.path.split(os.path.realpath(__file__)) + src_path, _ = os.path.split(os.path.realpath(__file__)) facenet.store_revision_info(src_path, log_dir, ' '.join(sys.argv)) np.random.seed(seed=args.seed) train_set = facenet.get_dataset(args.data_dir) - + print('Model directory: %s' % model_dir) print('Log directory: %s' % log_dir) if args.pretrained_model: - print('Pre-trained model: %s' % os.path.expanduser(args.pretrained_model)) - + print( + 'Pre-trained model: %s' % + os.path.expanduser( + args.pretrained_model)) + if args.lfw_dir: print('LFW directory: %s' % args.lfw_dir) # Read the file containing the pairs used for testing pairs = lfw.read_pairs(os.path.expanduser(args.lfw_pairs)) # Get the paths for the corresponding images - lfw_paths, actual_issame = lfw.get_paths(os.path.expanduser(args.lfw_dir), pairs) - - + lfw_paths, actual_issame = lfw.get_paths( + os.path.expanduser(args.lfw_dir), pairs) + with tf.Graph().as_default(): tf.set_random_seed(args.seed) global_step = tf.Variable(0, trainable=False) # Placeholder for the learning rate - learning_rate_placeholder = tf.placeholder(tf.float32, name='learning_rate') - + learning_rate_placeholder = tf.placeholder( + tf.float32, name='learning_rate') + batch_size_placeholder = tf.placeholder(tf.int32, name='batch_size') - + phase_train_placeholder = tf.placeholder(tf.bool, name='phase_train') - - image_paths_placeholder = tf.placeholder(tf.string, shape=(None,3), name='image_paths') - labels_placeholder = tf.placeholder(tf.int64, shape=(None,3), name='labels') - + + image_paths_placeholder = tf.placeholder( + tf.string, shape=(None, 3), name='image_paths') + labels_placeholder = tf.placeholder( + tf.int64, shape=(None, 3), name='labels') + input_queue = 
data_flow_ops.FIFOQueue(capacity=100000, - dtypes=[tf.string, tf.int64], - shapes=[(3,), (3,)], - shared_name=None, name=None) - enqueue_op = input_queue.enqueue_many([image_paths_placeholder, labels_placeholder]) - + dtypes=[tf.string, tf.int64], + shapes=[(3,), (3,)], + shared_name=None, name=None) + enqueue_op = input_queue.enqueue_many( + [image_paths_placeholder, labels_placeholder]) + nrof_preprocess_threads = 4 images_and_labels = [] for _ in range(nrof_preprocess_threads): @@ -106,21 +114,23 @@ def main(args): for filename in tf.unstack(filenames): file_contents = tf.read_file(filename) image = tf.image.decode_image(file_contents, channels=3) - + if args.random_crop: - image = tf.random_crop(image, [args.image_size, args.image_size, 3]) + image = tf.random_crop( + image, [args.image_size, args.image_size, 3]) else: - image = tf.image.resize_image_with_crop_or_pad(image, args.image_size, args.image_size) + image = tf.image.resize_image_with_crop_or_pad( + image, args.image_size, args.image_size) if args.random_flip: image = tf.image.random_flip_left_right(image) - - #pylint: disable=no-member + + # pylint: disable=no-member image.set_shape((args.image_size, args.image_size, 3)) images.append(tf.image.per_image_standardization(image)) images_and_labels.append([images, label]) - + image_batch, labels_batch = tf.train.batch_join( - images_and_labels, batch_size=batch_size_placeholder, + images_and_labels, batch_size=batch_size_placeholder, shapes=[(args.image_size, args.image_size, 3), ()], enqueue_many=True, capacity=4 * nrof_preprocess_threads * args.batch_size, allow_smaller_final_batch=True) @@ -129,27 +139,45 @@ def main(args): labels_batch = tf.identity(labels_batch, 'label_batch') # Build the inference graph - prelogits, _ = network.inference(image_batch, args.keep_probability, - phase_train=phase_train_placeholder, bottleneck_layer_size=args.embedding_size, - weight_decay=args.weight_decay) - + prelogits, _ = network.inference(image_batch, args.keep_probability, + phase_train=phase_train_placeholder, bottleneck_layer_size=args.embedding_size, + weight_decay=args.weight_decay) + embeddings = tf.nn.l2_normalize(prelogits, 1, 1e-10, name='embeddings') - # Split embeddings into anchor, positive and negative and calculate triplet loss - anchor, positive, negative = tf.unstack(tf.reshape(embeddings, [-1,3,args.embedding_size]), 3, 1) - triplet_loss = facenet.triplet_loss(anchor, positive, negative, args.alpha) - - learning_rate = tf.train.exponential_decay(learning_rate_placeholder, global_step, - args.learning_rate_decay_epochs*args.epoch_size, args.learning_rate_decay_factor, staircase=True) + # Split embeddings into anchor, positive and negative and calculate + # triplet loss + anchor, positive, negative = tf.unstack(tf.reshape( + embeddings, [-1, 3, args.embedding_size]), 3, 1) + triplet_loss = facenet.triplet_loss( + anchor, positive, negative, args.alpha) + + learning_rate = tf.train.exponential_decay( + learning_rate_placeholder, + global_step, + args.learning_rate_decay_epochs * + args.epoch_size, + args.learning_rate_decay_factor, + staircase=True) tf.summary.scalar('learning_rate', learning_rate) # Calculate the total losses - regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) - total_loss = tf.add_n([triplet_loss] + regularization_losses, name='total_loss') + regularization_losses = tf.get_collection( + tf.GraphKeys.REGULARIZATION_LOSSES) + total_loss = tf.add_n( + [triplet_loss] + + regularization_losses, + name='total_loss') + + # Build a Graph 
that trains the model with one batch of examples and + # updates the model parameters + train_op = facenet.train( + total_loss, + global_step, + args.optimizer, + learning_rate, + args.moving_average_decay, + tf.global_variables()) - # Build a Graph that trains the model with one batch of examples and updates the model parameters - train_op = facenet.train(total_loss, global_step, args.optimizer, - learning_rate, args.moving_average_decay, tf.global_variables()) - # Create a saver saver = tf.train.Saver(tf.trainable_variables(), max_to_keep=3) @@ -157,12 +185,15 @@ def main(args): summary_op = tf.summary.merge_all() # Start running operations on the Graph. - gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_memory_fraction) - sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) + gpu_options = tf.GPUOptions( + per_process_gpu_memory_fraction=args.gpu_memory_fraction) + sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) # Initialize variables - sess.run(tf.global_variables_initializer(), feed_dict={phase_train_placeholder:True}) - sess.run(tf.local_variables_initializer(), feed_dict={phase_train_placeholder:True}) + sess.run(tf.global_variables_initializer(), + feed_dict={phase_train_placeholder: True}) + sess.run(tf.local_variables_initializer(), + feed_dict={phase_train_placeholder: True}) summary_writer = tf.summary.FileWriter(log_dir, sess.graph) coord = tf.train.Coordinator() @@ -180,66 +211,136 @@ def main(args): step = sess.run(global_step, feed_dict=None) epoch = step // args.epoch_size # Train for one epoch - train(args, sess, train_set, epoch, image_paths_placeholder, labels_placeholder, labels_batch, - batch_size_placeholder, learning_rate_placeholder, phase_train_placeholder, enqueue_op, input_queue, global_step, - embeddings, total_loss, train_op, summary_op, summary_writer, args.learning_rate_schedule_file, - args.embedding_size, anchor, positive, negative, triplet_loss) + train( + args, + sess, + train_set, + epoch, + image_paths_placeholder, + labels_placeholder, + labels_batch, + batch_size_placeholder, + learning_rate_placeholder, + phase_train_placeholder, + enqueue_op, + input_queue, + global_step, + embeddings, + total_loss, + train_op, + summary_op, + summary_writer, + args.learning_rate_schedule_file, + args.embedding_size, + anchor, + positive, + negative, + triplet_loss) # Save variables and the metagraph if it doesn't exist already - save_variables_and_metagraph(sess, saver, summary_writer, model_dir, subdir, step) + save_variables_and_metagraph( + sess, saver, summary_writer, model_dir, subdir, step) # Evaluate on LFW if args.lfw_dir: - evaluate(sess, lfw_paths, embeddings, labels_batch, image_paths_placeholder, labels_placeholder, - batch_size_placeholder, learning_rate_placeholder, phase_train_placeholder, enqueue_op, actual_issame, args.batch_size, - args.lfw_nrof_folds, log_dir, step, summary_writer, args.embedding_size) + evaluate( + sess, + lfw_paths, + embeddings, + labels_batch, + image_paths_placeholder, + labels_placeholder, + batch_size_placeholder, + learning_rate_placeholder, + phase_train_placeholder, + enqueue_op, + actual_issame, + args.batch_size, + args.lfw_nrof_folds, + log_dir, + step, + summary_writer, + args.embedding_size) return model_dir -def train(args, sess, dataset, epoch, image_paths_placeholder, labels_placeholder, labels_batch, - batch_size_placeholder, learning_rate_placeholder, phase_train_placeholder, enqueue_op, input_queue, global_step, - embeddings, loss, train_op, summary_op, 
summary_writer, learning_rate_schedule_file, - embedding_size, anchor, positive, negative, triplet_loss): +def train( + args, + sess, + dataset, + epoch, + image_paths_placeholder, + labels_placeholder, + labels_batch, + batch_size_placeholder, + learning_rate_placeholder, + phase_train_placeholder, + enqueue_op, + input_queue, + global_step, + embeddings, + loss, + train_op, + summary_op, + summary_writer, + learning_rate_schedule_file, + embedding_size, + anchor, + positive, + negative, + triplet_loss): batch_number = 0 - - if args.learning_rate>0.0: + + if args.learning_rate > 0.0: lr = args.learning_rate else: - lr = facenet.get_learning_rate_from_file(learning_rate_schedule_file, epoch) + lr = facenet.get_learning_rate_from_file( + learning_rate_schedule_file, epoch) while batch_number < args.epoch_size: # Sample people randomly from the dataset - image_paths, num_per_class = sample_people(dataset, args.people_per_batch, args.images_per_person) - + image_paths, num_per_class = sample_people( + dataset, args.people_per_batch, args.images_per_person) + print('Running forward pass on sampled images: ', end='') start_time = time.time() nrof_examples = args.people_per_batch * args.images_per_person - labels_array = np.reshape(np.arange(nrof_examples),(-1,3)) - image_paths_array = np.reshape(np.expand_dims(np.array(image_paths),1), (-1,3)) - sess.run(enqueue_op, {image_paths_placeholder: image_paths_array, labels_placeholder: labels_array}) + labels_array = np.reshape(np.arange(nrof_examples), (-1, 3)) + image_paths_array = np.reshape( + np.expand_dims( + np.array(image_paths), 1), (-1, 3)) + sess.run(enqueue_op, + {image_paths_placeholder: image_paths_array, + labels_placeholder: labels_array}) emb_array = np.zeros((nrof_examples, embedding_size)) nrof_batches = int(np.ceil(nrof_examples / args.batch_size)) for i in range(nrof_batches): - batch_size = min(nrof_examples-i*args.batch_size, args.batch_size) - emb, lab = sess.run([embeddings, labels_batch], feed_dict={batch_size_placeholder: batch_size, - learning_rate_placeholder: lr, phase_train_placeholder: True}) - emb_array[lab,:] = emb - print('%.3f' % (time.time()-start_time)) + batch_size = min( + nrof_examples - i * args.batch_size, + args.batch_size) + emb, lab = sess.run([embeddings, labels_batch], feed_dict={ + batch_size_placeholder: batch_size, learning_rate_placeholder: lr, phase_train_placeholder: True}) + emb_array[lab, :] = emb + print('%.3f' % (time.time() - start_time)) # Select triplets based on the embeddings print('Selecting suitable triplets for training') - triplets, nrof_random_negs, nrof_triplets = select_triplets(emb_array, num_per_class, - image_paths, args.people_per_batch, args.alpha) + triplets, nrof_random_negs, nrof_triplets = select_triplets( + emb_array, num_per_class, image_paths, args.people_per_batch, args.alpha) selection_time = time.time() - start_time - print('(nrof_random_negs, nrof_triplets) = (%d, %d): time=%.3f seconds' % + print( + '(nrof_random_negs, nrof_triplets) = (%d, %d): time=%.3f seconds' % (nrof_random_negs, nrof_triplets, selection_time)) # Perform training on the selected triplets - nrof_batches = int(np.ceil(nrof_triplets*3/args.batch_size)) + nrof_batches = int(np.ceil(nrof_triplets * 3 / args.batch_size)) triplet_paths = list(itertools.chain(*triplets)) - labels_array = np.reshape(np.arange(len(triplet_paths)),(-1,3)) - triplet_paths_array = np.reshape(np.expand_dims(np.array(triplet_paths),1), (-1,3)) - sess.run(enqueue_op, {image_paths_placeholder: triplet_paths_array, 
labels_placeholder: labels_array}) + labels_array = np.reshape(np.arange(len(triplet_paths)), (-1, 3)) + triplet_paths_array = np.reshape( + np.expand_dims(np.array(triplet_paths), 1), (-1, 3)) + sess.run(enqueue_op, + {image_paths_placeholder: triplet_paths_array, + labels_placeholder: labels_array}) nrof_examples = len(triplet_paths) train_time = 0 i = 0 @@ -249,57 +350,84 @@ def train(args, sess, dataset, epoch, image_paths_placeholder, labels_placeholde step = 0 while i < nrof_batches: start_time = time.time() - batch_size = min(nrof_examples-i*args.batch_size, args.batch_size) - feed_dict = {batch_size_placeholder: batch_size, learning_rate_placeholder: lr, phase_train_placeholder: True} - err, _, step, emb, lab = sess.run([loss, train_op, global_step, embeddings, labels_batch], feed_dict=feed_dict) - emb_array[lab,:] = emb + batch_size = min( + nrof_examples - i * args.batch_size, + args.batch_size) + feed_dict = { + batch_size_placeholder: batch_size, + learning_rate_placeholder: lr, + phase_train_placeholder: True} + err, _, step, emb, lab = sess.run( + [loss, train_op, global_step, embeddings, labels_batch], feed_dict=feed_dict) + emb_array[lab, :] = emb loss_array[i] = err duration = time.time() - start_time print('Epoch: [%d][%d/%d]\tTime %.3f\tLoss %2.3f' % - (epoch, batch_number+1, args.epoch_size, duration, err)) + (epoch, batch_number + 1, args.epoch_size, duration, err)) batch_number += 1 i += 1 train_time += duration summary.value.add(tag='loss', simple_value=err) - + # Add validation loss and accuracy to summary - #pylint: disable=maybe-no-member + # pylint: disable=maybe-no-member summary.value.add(tag='time/selection', simple_value=selection_time) summary_writer.add_summary(summary, step) return step - -def select_triplets(embeddings, nrof_images_per_class, image_paths, people_per_batch, alpha): + + +def select_triplets( + embeddings, + nrof_images_per_class, + image_paths, + people_per_batch, + alpha): """ Select the triplets for training """ trip_idx = 0 emb_start_idx = 0 num_trips = 0 triplets = [] - + # VGG Face: Choosing good triplets is crucial and should strike a balance between # selecting informative (i.e. challenging) examples and swamping training with examples that # are too hard. This is achieve by extending each pair (a, p) to a triplet (a, p, n) by sampling # the image n at random, but only between the ones that violate the triplet loss margin. The # latter is a form of hard-negative mining, but it is not as aggressive (and much cheaper) than - # choosing the maximally violating example, as often done in structured output learning. + # choosing the maximally violating example, as often done in structured + # output learning. for i in xrange(people_per_batch): nrof_images = int(nrof_images_per_class[i]) - for j in xrange(1,nrof_images): + for j in xrange(1, nrof_images): a_idx = emb_start_idx + j - 1 - neg_dists_sqr = np.sum(np.square(embeddings[a_idx] - embeddings), 1) - for pair in xrange(j, nrof_images): # For every possible positive pair. + neg_dists_sqr = np.sum( + np.square( + embeddings[a_idx] - + embeddings), + 1) + # For every possible positive pair. 
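# A minimal NumPy sketch of the semi-hard ("margin-violating") negative
# selection that the VGG Face comment above describes and that the hunk
# below implements. The function and argument names here are illustrative
# assumptions, not part of the patch:
#
#     import numpy as np
#
#     def pick_violating_negative(embeddings, a_idx, p_idx, class_range, alpha):
#         # Squared L2 distance from the anchor to every embedding.
#         neg_dists_sqr = np.sum(np.square(embeddings[a_idx] - embeddings), 1)
#         pos_dist_sqr = np.sum(np.square(embeddings[a_idx] - embeddings[p_idx]))
#         neg_dists_sqr[class_range] = np.nan  # mask out same-class images
#         # Negatives violating the margin: ||a-n||^2 - ||a-p||^2 < alpha.
#         all_neg = np.where(neg_dists_sqr - pos_dist_sqr < alpha)[0]
#         if all_neg.size == 0:
#             return None  # no semi-hard negative exists for this pair
#         return np.random.choice(all_neg)  # a random violator, not the hardest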
+            for pair in xrange(j, nrof_images):
                 p_idx = emb_start_idx + pair
-                pos_dist_sqr = np.sum(np.square(embeddings[a_idx]-embeddings[p_idx]))
-                neg_dists_sqr[emb_start_idx:emb_start_idx+nrof_images] = np.NaN
-                #all_neg = np.where(np.logical_and(neg_dists_sqr-pos_dist_sqr<alpha, pos_dist_sqr<neg_dists_sqr))[0]  # FaceNet selection
-                all_neg = np.where(neg_dists_sqr-pos_dist_sqr<alpha)[0]  # VGG Face selection
+                pos_dist_sqr = np.sum(
+                    np.square(embeddings[a_idx] - embeddings[p_idx]))
+                neg_dists_sqr[emb_start_idx:emb_start_idx + nrof_images] = np.NaN
+                # all_neg = np.where(np.logical_and(neg_dists_sqr-pos_dist_sqr<alpha, pos_dist_sqr<neg_dists_sqr))[0]  # FaceNet selection
+                # VGG Face selection
+                all_neg = np.where(neg_dists_sqr - pos_dist_sqr < alpha)[0]
                 nrof_random_negs = all_neg.shape[0]
-                if nrof_random_negs>0:
+                if nrof_random_negs > 0:
                     rnd_idx = np.random.randint(nrof_random_negs)
                     n_idx = all_neg[rnd_idx]
-                    triplets.append((image_paths[a_idx], image_paths[p_idx], image_paths[n_idx]))
-                    #print('Triplet %d: (%d, %d, %d), pos_dist=%2.6f, neg_dist=%2.6f (%d, %d, %d, %d, %d)' %
+                    triplets.append(
+                        (image_paths[a_idx], image_paths[p_idx], image_paths[n_idx]))
+                    # print('Triplet %d: (%d, %d, %d), pos_dist=%2.6f, neg_dist=%2.6f (%d, %d, %d, %d, %d)' %
                     #    (trip_idx, a_idx, p_idx, n_idx, pos_dist_sqr, neg_dists_sqr[n_idx], nrof_random_negs, rnd_idx, i, j, emb_start_idx))
                     trip_idx += 1

@@ -310,75 +438,108 @@ def select_triplets(embeddings, nrof_images_per_class, image_paths, people_per_b
     np.random.shuffle(triplets)
     return triplets, num_trips, len(triplets)

+
 def sample_people(dataset, people_per_batch, images_per_person):
     nrof_images = people_per_batch * images_per_person
-
+
     # Sample classes from the dataset
     nrof_classes = len(dataset)
     class_indices = np.arange(nrof_classes)
     np.random.shuffle(class_indices)
-
+
     i = 0
     image_paths = []
     num_per_class = []
     sampled_class_indices = []
     # Sample images from these classes until we have enough
-    while len(image_paths)<nrof_images:

Date: Mon, 13 Aug 2018 16:15:48 +0000
Subject: [PATCH 16/50] Script to generate pairs

---
 facenet_sandberg/generate_pairs.py | 191 +++++++++++++----------------
 1 file changed, 88 insertions(+), 103 deletions(-)

diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py
index cc5e6d7ae..2dea736af 100644
--- a/facenet_sandberg/generate_pairs.py
+++ b/facenet_sandberg/generate_pairs.py
@@ -4,30 +4,44 @@
 import os
 import random
-from typing import List, Tuple
-
+import sys
+import argparse
 import numpy as np
+from typing import List, Tuple, cast
+

-names = [
-    d for d in os.listdir(image_dir) if os.path.isdir(
-        os.path.join(image_dir, d))]
+Mismatch = Tuple[str, int, str, int]
+Match = Tuple[str, int, int]

-def split_people_into_sets(image_dir: str, k_num_sets: int) -> List[List[str]]:
-    names = [
-        d for d in os.listdir(image_dir) if os.path.isdir(
-            os.path.join(
-                image_dir,
-                d))]
+def write_pairs(fname: str,
+                match_folds: List[List[Match]],
+                mismatch_folds: List[List[Mismatch]],
+                k_num_sets: int,
+                total_matches_mismatches: int) -> None:
+    file_contents = f'{k_num_sets}\t{total_matches_mismatches}\n'
+    for match_fold, mismatch_fold in zip(match_folds, mismatch_folds):
+        for match in match_fold:
+            file_contents += f'{match[0]}\t{match[1]}\t{match[2]}\n'
+        for mismatch in mismatch_fold:
+            file_contents += f'{mismatch[0]}\t{mismatch[1]}\t\
+{mismatch[2]}\t{mismatch[3]}\n'
+    with open(fname, 'w') as fpairs:
+        fpairs.write(file_contents)
+
+
+def _split_people_into_folds(image_dir: str,
+                             k_num_sets: int) -> List[List[str]]:
+    names = [d for d in os.listdir(image_dir)
+             if os.path.isdir(os.path.join(image_dir, d))]
     random.shuffle(names)
+    return [list(arr) for arr in np.array_split(names, k_num_sets)]

-def make_matches(image_dir: str,
-                 people: List[str],
-                 total_matches: int) -> List[Tuple[str,
-                                                   int,
-                                                   int]]:
-    matches: List[Tuple[str, int, int]] = []
+def _make_matches(image_dir: str,
+                  people: List[str],
+                  total_matches: int) -> List[Match]:
+    matches = cast(List[Match], [])
     curr_matches = 0
     while curr_matches < total_matches:
         person =
random.choice(people) @@ -35,25 +49,23 @@ def make_matches(image_dir: str, if len(images) > 1: img1, img2 = sorted( [ - int(''.join([i for i in random.choice( - images) if i.isnumeric()]).lstrip('0')), - int(''.join([i for i in random.choice( - images) if i.isnumeric()]).lstrip('0')) + int(''.join([i for i in random.choice(images) + if i.isnumeric()]).lstrip('0')), + int(''.join([i for i in random.choice(images) + if i.isnumeric()]).lstrip('0')) ] ) match = (person, img1, img2) if (img1 != img2) and (match not in matches): matches.append(match) curr_matches += 1 + return sorted(matches, key=lambda x: x[0].lower()) -def make_mismatches(image_dir: str, - people: List[str], - total_matches: int) -> List[Tuple[str, - int, - str, - int]]: - mismatches: List[Tuple[str, int, str, int]] = [] +def _make_mismatches(image_dir: str, + people: List[str], + total_matches: int) -> List[Mismatch]: + mismatches = cast(List[Mismatch], []) curr_matches = 0 while curr_matches < total_matches: person1 = random.choice(people) @@ -61,82 +73,55 @@ def make_mismatches(image_dir: str, if person1 != person2: person1_images = os.listdir(os.path.join(image_dir, person1)) person2_images = os.listdir(os.path.join(image_dir, person2)) - img1 = int(''.join([i for i in random.choice( - person1_images) if i.isnumeric()]).lstrip('0')) - img2 = int(''.join([i for i in random.choice( - person2_images) if i.isnumeric()]).lstrip('0')) - img1 = int(''.join([i for i in random.choice( - person1_images) if i.isnumeric()]).lstrip('0')) - img2 = int(''.join([i for i in random.choice( - person2_images) if i.isnumeric()]).lstrip('0')) - - if person1.lower() > person2.lower(): - person1, img1, person2, img2 = person2, img2, person1, img1 - - mismatch = (person1, img1, person2, img2) - if mismatch not in mismatches: - mismatches.append(mismatch) - curr_matches += 1 - - -def write_pairs(fname: str, - match_sets: List[List[Tuple[str, - int, - int]]], - mismatch_sets: List[List[Tuple[str, - int, - str, - int]]], - k_num_sets: int, - total_matches_mismatches: int) -> None: - file_contents = f'{k_num_sets}\t{total_matches_mismatches}\n' - for match_set, mismatch_set in zip(match_sets, mismatch_sets): - for match in match_set: - file_contents += f'{match[0]}\t{match[1]}\t{match[2]}\n' - for mismatch in mismatch_set: - file_contents += f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t{mismatch[3]}\n' - - with open(fname, 'w') as fpairs: - - fpairs.write(file_contents) + if person1_images and person2_images: + img1 = int(''.join([i for i in random.choice(person1_images) + if i.isnumeric()]).lstrip('0')) + img2 = int(''.join([i for i in random.choice(person2_images) + if i.isnumeric()]).lstrip('0')) + if person1.lower() > person2.lower(): + person1, img1, person2, img2 = person2, img2, person1, img1 + mismatch = (person1, img1, person2, img2) + if mismatch not in mismatches: + mismatches.append(mismatch) + curr_matches += 1 + return sorted(mismatches, key=lambda x: x[0].lower()) + + +def _parse_arguments(argv): + parser = argparse.ArgumentParser() + parser.add_argument('--image_dir', + type=str, + required=True, + help='Path to the image directory.') + parser.add_argument('--pairs_file_name', + type=str, + required=True, + help='Filename of pairs.txt') + parser.add_argument('--num_folds', + type=int, + required=True, + help='Number of folds for k-fold cross validation.') + parser.add_argument('--num_matches_mismatches', + type=int, + required=True, + help='Number of matches/mismatches per fold.') + return parser.parse_args(sys.argv[1:]) if __name__ 
== '__main__': - # image_dir = os.path.join( - total_matches_mismatches = 15 - # image_dir = os.path.join( - # os.path.dirname( - # os.path.abspath(__file__) - # ), - # 'images') - image_dir = '/home/miperel/redcross/facenet/datasets/lfw/raw_mtcnn' - - people_lists = split_people_into_sets(image_dir, k_num_sets) + args = _parse_arguments(sys.argv[1:]) + people_folds = _split_people_into_folds(args.image_dir, args.num_folds) matches = [] - matches.append( - make_matches( - image_dir, - people, - total_matches_mismatches)) - mismatches.append( - make_mismatches( - image_dir, - people, - total_matches_mismatches)) - matches.append( - make_matches( - image_dir, - people, - total_matches_mismatches)) - mismatches.append( - make_mismatches( - image_dir, - people, - total_matches_mismatches)) - write_pairs( - fname, - matches, - mismatches, - k_num_sets, - total_matches_mismatches) - fname = '/home/miperel/redcross/facenet/data/pairs.txt' + mismatches = [] + for fold in people_folds: + matches.append(_make_matches(args.image_dir, + fold, + args.num_matches_mismatches)) + mismatches.append(_make_mismatches(args.image_dir, + fold, + args.num_matches_mismatches)) + write_pairs(args.pairs_file_name, + matches, + mismatches, + args.num_folds, + args.num_matches_mismatches) From 7c3e1ea8994372de396638563aa4228e84765052 Mon Sep 17 00:00:00 2001 From: Ubuntu Date: Mon, 13 Aug 2018 18:51:51 +0000 Subject: [PATCH 17/50] fixed comments from previous PR --- facenet_sandberg/generate_pairs.py | 97 +++++++++++++++++------------- 1 file changed, 56 insertions(+), 41 deletions(-) diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py index 2dea736af..b89c05b94 100644 --- a/facenet_sandberg/generate_pairs.py +++ b/facenet_sandberg/generate_pairs.py @@ -2,46 +2,53 @@ # Section f: http://vis-www.cs.umass.edu/lfw/lfw.pdf # More succint, less explicit: http://vis-www.cs.umass.edu/lfw/README.txt +import io import os import random -import sys -import argparse import numpy as np -from typing import List, Tuple, cast +from argparse import ArgumentParser, Namespace +from typing import List, Tuple, Set Mismatch = Tuple[str, int, str, int] Match = Tuple[str, int, int] +CommandLineArgs = Namespace def write_pairs(fname: str, match_folds: List[List[Match]], mismatch_folds: List[List[Mismatch]], - k_num_sets: int, - total_matches_mismatches: int) -> None: - file_contents = f'{k_num_sets}\t{total_matches_mismatches}\n' - for match_fold, mismatch_fold in zip(match_folds, mismatch_folds): - for match in match_fold: - file_contents += f'{match[0]}\t{match[1]}\t{match[2]}\n' - for mismatch in mismatch_fold: - file_contents += f'{mismatch[0]}\t{mismatch[1]}\t\ -{mismatch[2]}\t{mismatch[3]}\n' - with open(fname, 'w') as fpairs: - fpairs.write(file_contents) + num_folds: int, + num_matches_mismatches: int) -> None: + metadata = f'{num_folds}\t{num_matches_mismatches}\n' + with io.open(fname, + 'w', + io.DEFAULT_BUFFER_SIZE, + encoding='utf-8') as fpairs: + fpairs.write(metadata) + for match_fold, mismatch_fold in zip(match_folds, mismatch_folds): + for match in match_fold: + line = f'{match[0]}\t{match[1]}\t{match[2]}\n' + fpairs.write(line) + for mismatch in mismatch_fold: + line = f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t\ +{mismatch[3]}\n' + fpairs.write(line) + fpairs.flush() def _split_people_into_folds(image_dir: str, - k_num_sets: int) -> List[List[str]]: + num_folds: int) -> List[List[str]]: names = [d for d in os.listdir(image_dir) if os.path.isdir(os.path.join(image_dir, d))] 
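# For reference, the LFW-style pairs file that write_pairs() above emits has
# the layout sketched below (the names and counts are illustrative only; see
# the lfw.pdf and README links at the top of this file for the format spec):
#
#     10    300
#     Abel_Pacheco    1    4
#     Geoff_Hoon    5    Mel_Brooks    1
#
# The header line holds num_folds and the matches/mismatches per fold; a
# three-field line is a match (person, image index, image index), while a
# four-field line is a mismatch (person1, index1, person2, index2). A
# hypothetical invocation of this script:
#
#     python generate_pairs.py --image_dir ~/datasets/lfw \
#         --pairs_file_name pairs.txt --num_folds 10 --num_matches_mismatches 300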
random.shuffle(names) - return [list(arr) for arr in np.array_split(names, k_num_sets)] + return [list(arr) for arr in np.array_split(names, num_folds)] def _make_matches(image_dir: str, people: List[str], total_matches: int) -> List[Match]: - matches = cast(List[Match], []) + matches: Set[Match] = set() curr_matches = 0 while curr_matches < total_matches: person = random.choice(people) @@ -57,15 +64,15 @@ def _make_matches(image_dir: str, ) match = (person, img1, img2) if (img1 != img2) and (match not in matches): - matches.append(match) + matches.add(match) curr_matches += 1 - return sorted(matches, key=lambda x: x[0].lower()) + return sorted(list(matches), key=lambda x: x[0].lower()) def _make_mismatches(image_dir: str, people: List[str], total_matches: int) -> List[Mismatch]: - mismatches = cast(List[Mismatch], []) + mismatches: Set[Mismatch] = set() curr_matches = 0 while curr_matches < total_matches: person1 = random.choice(people) @@ -82,13 +89,36 @@ def _make_mismatches(image_dir: str, person1, img1, person2, img2 = person2, img2, person1, img1 mismatch = (person1, img1, person2, img2) if mismatch not in mismatches: - mismatches.append(mismatch) + mismatches.add(mismatch) curr_matches += 1 - return sorted(mismatches, key=lambda x: x[0].lower()) + return sorted(list(mismatches), key=lambda x: x[0].lower()) -def _parse_arguments(argv): - parser = argparse.ArgumentParser() +def _main(args: CommandLineArgs) -> None: + people_folds = _split_people_into_folds(args.image_dir, args.num_folds) + matches = [] + mismatches = [] + for fold in people_folds: + matches.append(_make_matches(args.image_dir, + fold, + args.num_matches_mismatches)) + mismatches.append(_make_mismatches(args.image_dir, + fold, + args.num_matches_mismatches)) + write_pairs(args.pairs_file_name, + matches, + mismatches, + args.num_folds, + args.num_matches_mismatches) + + +def _cli() -> None: + args = _parse_arguments() + _main(args) + + +def _parse_arguments() -> CommandLineArgs: + parser = ArgumentParser() parser.add_argument('--image_dir', type=str, required=True, @@ -105,23 +135,8 @@ def _parse_arguments(argv): type=int, required=True, help='Number of matches/mismatches per fold.') - return parser.parse_args(sys.argv[1:]) + return parser.parse_args() if __name__ == '__main__': - args = _parse_arguments(sys.argv[1:]) - people_folds = _split_people_into_folds(args.image_dir, args.num_folds) - matches = [] - mismatches = [] - for fold in people_folds: - matches.append(_make_matches(args.image_dir, - fold, - args.num_matches_mismatches)) - mismatches.append(_make_mismatches(args.image_dir, - fold, - args.num_matches_mismatches)) - write_pairs(args.pairs_file_name, - matches, - mismatches, - args.num_folds, - args.num_matches_mismatches) + _cli() From d2458e8451cd63ca0cbe21f69e473f55ad78c71d Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 14:08:36 -0500 Subject: [PATCH 18/50] reformatting --- facenet_sandberg/train_softmax.py | 273 +++++++++++++++++++++--------- 1 file changed, 189 insertions(+), 84 deletions(-) diff --git a/facenet_sandberg/train_softmax.py b/facenet_sandberg/train_softmax.py index d6cd2476c..cdffc499c 100644 --- a/facenet_sandberg/train_softmax.py +++ b/facenet_sandberg/train_softmax.py @@ -41,21 +41,128 @@ from tensorflow.python.framework import ops from tensorflow.python.ops import array_ops, data_flow_ops - -def main(args): - - network = importlib.import_module(args.model_def) - image_size = (args.image_size, args.image_size) +# parser.add_argument( +# '--optimizer', +# type=str, 
+# choices=[ +# 'ADAGRAD', +# 'ADADELTA', +# 'ADAM', +# 'RMSPROP', +# 'MOM'], +# help='The optimization algorithm to use', +# default='ADAGRAD') + + +def main( + pretrained_model: str, + logs_base_dir: str='~/logs/facenet', + models_base_dir: str ='~/models/facenet', + gpu_memory_fraction: float=1.0, + data_dir: str='~/datasets/casia/casia_maxpy_mtcnnalign_182_160', + model_def: str='models.inception_resnet_v1', + max_nrof_epochs: int=500, + batch_size: int=100, + image_size: int=160, + epoch_size: int=1000, + embedding_size: int=128, + random_crop: bool=False, + random_flip: bool=False, + random_rotate: bool=False, + use_fixed_image_standardization: bool=False, + keep_probability: float=1.0, + weight_decay: float=0.0, + center_loss_factor: float=0.0, + center_loss_alfa: float=0.95, + prelogits_norm_loss_factor: float=0.0, + prelogits_norm_p: float=1.0, + prelogits_hist_max: float=10.0, + optimizer: str='ADAGRAD', + learning_rate: float=0.1, + learning_rate_decay_epochs: int=100, + learning_rate_decay_factor: float=1.0, + moving_average_decay: float=0.9999, + seed: int=666, + nrof_preprocess_threads: int=4, + log_histograms: bool=False, + learning_rate_schedule_file: str='data/learning_rate_schedule.txt', + filter_filename: str='', + filter_percentile: float=100.0, + filter_min_nrof_images_per_class: int=0, + validate_every_n_epochs: int=5, + validation_set_split_ratio: float=0.0, + min_nrof_val_images_per_class: int=0, + lfw_pairs: str='data/pairs.txt', + lfw_dir: str='', + lfw_batch_size: int=100, + lfw_nrof_folds: int=10, + lfw_distance_metric: int=0, + lfw_use_flipped_images: bool=False, + lfw_subtract_mean: bool=False): + """Train with softmax + + Arguments: + pretrained_model {str} -- Load a pretrained model before training starts. + + Keyword Arguments: + logs_base_dir {str} -- Directory where to write event logs. (default: {'~/logs/facenet'}) + models_base_dir {str} -- Directory where to write trained models and checkpoints. (default: {'~/models/facenet'}) + gpu_memory_fraction {float} -- Upper bound on the amount of GPU memory that will be used by the process. (default: {1.0}) + data_dir {str} -- Path to the data directory containing aligned face patches. (default: {'~/datasets/casia/casia_maxpy_mtcnnalign_182_160'}) + model_def {str} -- Model definition. Points to a module containing the definition of the inference graph. (default: {'models.inception_resnet_v1'}) + max_nrof_epochs {int} -- Number of epochs to run. (default: {500}) + batch_size {int} -- Number of images to process in a batch. (default: {100}) + image_size {int} -- Image size (height, width) in pixels. (default: {160}) + epoch_size {int} -- Number of batches per epoch. (default: {1000}) + embedding_size {int} -- Dimensionality of the embedding. (default: {128}) + random_crop {bool} -- Performs random cropping of training images. If false, the center image_size pixels from the training images are used. If the size of the images in the data directory is equal to image_size no cropping is performed (default: {False}) + random_flip {bool} -- Performs random horizontal flipping of training images. (default: {False}) + random_rotate {bool} -- Performs random rotations of training images. (default: {False}) + use_fixed_image_standardization {bool} -- Performs fixed standardization of images. (default: {False}) + keep_probability {float} -- Keep probability of dropout for the fully connected layer(s). (default: {1.0}) + weight_decay {float} -- L2 weight regularization. 
(default: {0.0})
+        center_loss_factor {float} -- Center loss factor. (default: {0.0})
+        center_loss_alfa {float} -- Center update rate for center loss. (default: {0.95})
+        prelogits_norm_loss_factor {float} -- Loss based on the norm of the activations in the prelogits layer. (default: {0.0})
+        prelogits_norm_p {float} -- Norm to use for prelogits norm loss. (default: {1.0})
+        prelogits_hist_max {float} -- The max value for the prelogits histogram. (default: {10.0})
+        optimizer {str} -- The optimization algorithm to use. (default: {'ADAGRAD'})
+        learning_rate {float} -- Initial learning rate. If set to a negative value a learning rate schedule can be specified in the file "learning_rate_schedule.txt" (default: {0.1})
+        learning_rate_decay_epochs {int} -- Number of epochs between learning rate decay. (default: {100})
+        learning_rate_decay_factor {float} -- Learning rate decay factor. (default: {1.0})
+        moving_average_decay {float} -- Exponential decay for tracking of training parameters. (default: {0.9999})
+        seed {int} -- Random seed. (default: {666})
+        nrof_preprocess_threads {int} -- Number of preprocessing (data loading and augmentation) threads. (default: {4})
+        log_histograms {bool} -- Enables logging of weight/bias histograms in tensorboard. (default: {False})
+        learning_rate_schedule_file {str} -- File containing the learning rate schedule that is used when learning_rate is set to -1. (default: {'data/learning_rate_schedule.txt'})
+        filter_filename {str} -- File containing image data used for dataset filtering (default: {''})
+        filter_percentile {float} -- Keep only the percentile of images closest to its class center (default: {100.0})
+        filter_min_nrof_images_per_class {int} -- Keep only the classes with this number of examples or more (default: {0})
+        validate_every_n_epochs {int} -- Number of epochs between validation (default: {5})
+        validation_set_split_ratio {float} -- The ratio of the total dataset to use for validation (default: {0.0})
+        min_nrof_val_images_per_class {int} -- Classes with fewer images will be removed from the validation set (default: {0})
+        lfw_pairs {str} -- The file containing the pairs to use for validation. (default: {'data/pairs.txt'})
+        lfw_dir {str} -- Path to the data directory containing aligned face patches. (default: {''})
+        lfw_batch_size {int} -- Number of images to process in a batch in the LFW test set. (default: {100})
+        lfw_nrof_folds {int} -- Number of folds to use for cross validation. Mainly used for testing. (default: {10})
+        lfw_distance_metric {int} -- Type of distance metric to use. 0: Euclidean, 1: Cosine similarity distance. (default: {0})
+        lfw_use_flipped_images {bool} -- Concatenates embeddings for the image and its horizontally flipped counterpart. (default: {False})
+        lfw_subtract_mean {bool} -- Subtract feature mean before calculating distance.
(default: {False}) + + Returns: + [type] -- [description] + """ + + network = importlib.import_module(model_def) + image_size = (image_size, image_size) subdir = datetime.strftime(datetime.now(), '%Y%m%d-%H%M%S') - log_dir = os.path.join(os.path.expanduser(args.logs_base_dir), subdir) - if not os.path.isdir( - log_dir): # Create the log directory if it doesn't exist - os.makedirs(log_dir) - model_dir = os.path.join(os.path.expanduser(args.models_base_dir), subdir) - if not os.path.isdir( - model_dir): # Create the model directory if it doesn't exist - os.makedirs(model_dir) + log_dir = os.path.join(os.path.expanduser(logs_base_dir), subdir) + # Create the log directory if it doesn't exist + os.makedirs(log_dir, exist_ok=True) + model_dir = os.path.join(os.path.expanduser(models_base_dir), subdir) + # Create the model directory if it doesn't exist + os.makedirs(model_dir, exist_ok=True) stat_file_name = os.path.join(log_dir, 'stat.h5') @@ -67,20 +174,20 @@ def main(args): src_path, _ = os.path.split(os.path.realpath(__file__)) facenet.store_revision_info(src_path, log_dir, ' '.join(sys.argv)) - np.random.seed(seed=args.seed) - random.seed(args.seed) - dataset = facenet.get_dataset(args.data_dir) - if args.filter_filename: + np.random.seed(seed=seed) + random.seed(seed) + dataset = facenet.get_dataset(data_dir) + if filter_filename: dataset = filter_dataset( dataset, os.path.expanduser( - args.filter_filename), - args.filter_percentile, - args.filter_min_nrof_images_per_class) + filter_filename), + filter_percentile, + filter_min_nrof_images_per_class) - if args.validation_set_split_ratio > 0.0: + if validation_set_split_ratio > 0.0: train_set, val_set = facenet.split_dataset( - dataset, args.validation_set_split_ratio, args.min_nrof_val_images_per_class, 'SPLIT_IMAGES') + dataset, validation_set_split_ratio, min_nrof_val_images_per_class, 'SPLIT_IMAGES') else: train_set, val_set = dataset, [] @@ -89,20 +196,20 @@ def main(args): print('Model directory: %s' % model_dir) print('Log directory: %s' % log_dir) pretrained_model = None - if args.pretrained_model: - pretrained_model = os.path.expanduser(args.pretrained_model) + if pretrained_model: + pretrained_model = os.path.expanduser(pretrained_model) print('Pre-trained model: %s' % pretrained_model) - if args.lfw_dir: - print('LFW directory: %s' % args.lfw_dir) + if lfw_dir: + print('LFW directory: %s' % lfw_dir) # Read the file containing the pairs used for testing - pairs = lfw.read_pairs(os.path.expanduser(args.lfw_pairs)) + pairs = lfw.read_pairs(os.path.expanduser(lfw_pairs)) # Get the paths for the corresponding images lfw_paths, actual_issame = lfw.get_paths( - os.path.expanduser(args.lfw_dir), pairs) + os.path.expanduser(lfw_dir), pairs) with tf.Graph().as_default(): - tf.set_random_seed(args.seed) + tf.set_random_seed(seed) global_step = tf.Variable(0, trainable=False) # Get a list of image paths and their labels @@ -120,7 +227,7 @@ def main(args): range_size, num_epochs=None, shuffle=True, seed=None, capacity=32) index_dequeue_op = index_queue.dequeue_many( - args.batch_size * args.epoch_size, 'index_dequeue') + batch_size * epoch_size, 'index_dequeue') learning_rate_placeholder = tf.placeholder( tf.float32, name='learning_rate') @@ -157,16 +264,16 @@ def main(args): print('Building training graph') # Build the inference graph - prelogits, _ = network.inference(image_batch, args.keep_probability, - phase_train=phase_train_placeholder, bottleneck_layer_size=args.embedding_size, - weight_decay=args.weight_decay) + prelogits, _ = 
network.inference(image_batch, keep_probability, + phase_train=phase_train_placeholder, bottleneck_layer_size=embedding_size, + weight_decay=weight_decay) logits = slim.fully_connected( prelogits, len(train_set), activation_fn=None, weights_initializer=slim.initializers.xavier_initializer(), weights_regularizer=slim.l2_regularizer( - args.weight_decay), + weight_decay), scope='Logits', reuse=False) @@ -177,23 +284,23 @@ def main(args): prelogits_norm = tf.reduce_mean( tf.norm( tf.abs(prelogits) + eps, - ord=args.prelogits_norm_p, + ord=prelogits_norm_p, axis=1)) tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, - prelogits_norm * args.prelogits_norm_loss_factor) + prelogits_norm * prelogits_norm_loss_factor) # Add center loss prelogits_center_loss, _ = facenet.center_loss( - prelogits, label_batch, args.center_loss_alfa, nrof_classes) + prelogits, label_batch, center_loss_alfa, nrof_classes) tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, - prelogits_center_loss * args.center_loss_factor) + prelogits_center_loss * center_loss_factor) learning_rate = tf.train.exponential_decay( learning_rate_placeholder, global_step, - args.learning_rate_decay_epochs * - args.epoch_size, - args.learning_rate_decay_factor, + learning_rate_decay_epochs * + epoch_size, + learning_rate_decay_factor, staircase=True) tf.summary.scalar('learning_rate', learning_rate) @@ -222,11 +329,11 @@ def main(args): train_op = facenet.train( total_loss, global_step, - args.optimizer, + optimizer, learning_rate, - args.moving_average_decay, + moving_average_decay, tf.global_variables(), - args.log_histograms) + log_histograms) # Create a saver saver = tf.train.Saver(tf.trainable_variables(), max_to_keep=3) @@ -236,7 +343,7 @@ def main(args): # Start running operations on the Graph. 
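# A note on the session setup below: per_process_gpu_memory_fraction caps the
# share of GPU memory TensorFlow may pre-allocate, and the
# --gpu_memory_fraction default of 1.0 effectively reserves the whole device.
# A common alternative (an assumption, not something this patch uses) is to
# let the allocation grow on demand instead:
#
#     config = tf.ConfigProto()
#     config.gpu_options.allow_growth = True
#     sess = tf.Session(config=config)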
gpu_options = tf.GPUOptions( - per_process_gpu_memory_fraction=args.gpu_memory_fraction) + per_process_gpu_memory_fraction=gpu_memory_fraction) sess = tf.Session(config=tf.ConfigProto( gpu_options=gpu_options, log_device_placement=False)) sess.run(tf.global_variables_initializer()) @@ -253,11 +360,11 @@ def main(args): # Training and validation loop print('Running training') - nrof_steps = args.max_nrof_epochs * args.epoch_size + nrof_steps = max_nrof_epochs * epoch_size # Validate every validate_every_n_epochs as well as in the last # epoch nrof_val_samples = int( - math.ceil(args.max_nrof_epochs / args.validate_every_n_epochs)) + math.ceil(max_nrof_epochs / validate_every_n_epochs)) stat = { 'loss': np.zeros((nrof_steps,), np.float32), 'center_loss': np.zeros((nrof_steps,), np.float32), @@ -268,15 +375,15 @@ def main(args): 'val_loss': np.zeros((nrof_val_samples,), np.float32), 'val_xent_loss': np.zeros((nrof_val_samples,), np.float32), 'val_accuracy': np.zeros((nrof_val_samples,), np.float32), - 'lfw_accuracy': np.zeros((args.max_nrof_epochs,), np.float32), - 'lfw_valrate': np.zeros((args.max_nrof_epochs,), np.float32), - 'learning_rate': np.zeros((args.max_nrof_epochs,), np.float32), - 'time_train': np.zeros((args.max_nrof_epochs,), np.float32), - 'time_validate': np.zeros((args.max_nrof_epochs,), np.float32), - 'time_evaluate': np.zeros((args.max_nrof_epochs,), np.float32), - 'prelogits_hist': np.zeros((args.max_nrof_epochs, 1000), np.float32), + 'lfw_accuracy': np.zeros((max_nrof_epochs,), np.float32), + 'lfw_valrate': np.zeros((max_nrof_epochs,), np.float32), + 'learning_rate': np.zeros((max_nrof_epochs,), np.float32), + 'time_train': np.zeros((max_nrof_epochs,), np.float32), + 'time_validate': np.zeros((max_nrof_epochs,), np.float32), + 'time_evaluate': np.zeros((max_nrof_epochs,), np.float32), + 'prelogits_hist': np.zeros((max_nrof_epochs, 1000), np.float32), } - for epoch in range(1, args.max_nrof_epochs + 1): + for epoch in range(1, max_nrof_epochs + 1): step = sess.run(global_step, feed_dict=None) # Train for one epoch t = time.time() @@ -300,19 +407,19 @@ def main(args): summary_op, summary_writer, regularization_losses, - args.learning_rate_schedule_file, + learning_rate_schedule_file, stat, cross_entropy_mean, accuracy, learning_rate, prelogits, prelogits_center_loss, - args.random_rotate, - args.random_crop, - args.random_flip, + random_rotate, + random_crop, + random_flip, prelogits_norm, - args.prelogits_hist_max, - args.use_fixed_image_standardization) + prelogits_hist_max, + use_fixed_image_standardization) stat['time_train'][epoch - 1] = time.time() - t if not cont: @@ -320,10 +427,9 @@ def main(args): t = time.time() if len(val_image_list) > 0 and ( - (epoch - - 1) % - args.validate_every_n_epochs == args.validate_every_n_epochs - - 1 or epoch == args.max_nrof_epochs): + (epoch - 1) % + validate_every_n_epochs == validate_every_n_epochs - + 1 or epoch == max_nrof_epochs): validate( args, sess, @@ -341,8 +447,8 @@ def main(args): regularization_losses, cross_entropy_mean, accuracy, - args.validate_every_n_epochs, - args.use_fixed_image_standardization) + validate_every_n_epochs, + use_fixed_image_standardization) stat['time_validate'][epoch - 1] = time.time() - t # Save variables and the metagraph if it doesn't exist already @@ -351,7 +457,7 @@ def main(args): # Evaluate on LFW t = time.time() - if args.lfw_dir: + if lfw_dir: evaluate( sess, enqueue_op, @@ -364,17 +470,17 @@ def main(args): label_batch, lfw_paths, actual_issame, - args.lfw_batch_size, - 
args.lfw_nrof_folds, + lfw_batch_size, + lfw_nrof_folds, log_dir, step, summary_writer, stat, epoch, - args.lfw_distance_metric, - args.lfw_subtract_mean, - args.lfw_use_flipped_images, - args.use_fixed_image_standardization) + lfw_distance_metric, + lfw_subtract_mean, + lfw_use_flipped_images, + use_fixed_image_standardization) stat['time_evaluate'][epoch - 1] = time.time() - t print('Saving statistics') @@ -411,8 +517,7 @@ def filter_dataset(dataset, data_filename, percentile, image = image_list[i] if image in filtered_dataset[label].image_paths: filtered_dataset[label].image_paths.remove(image) - if len( - filtered_dataset[label].image_paths) < min_nrof_images_per_class: + if len(filtered_dataset[label].image_paths) < min_nrof_images_per_class: removelist.append(label) ix = sorted(list(set(removelist)), reverse=True) @@ -457,8 +562,8 @@ def train( use_fixed_image_standardization): batch_number = 0 - if args.learning_rate > 0.0: - lr = args.learning_rate + if learning_rate > 0.0: + lr = learning_rate else: lr = facenet.get_learning_rate_from_file( learning_rate_schedule_file, epoch) @@ -484,12 +589,12 @@ def train( # Training loop train_time = 0 - while batch_number < args.epoch_size: + while batch_number < epoch_size: start_time = time.time() feed_dict = { learning_rate_placeholder: lr, phase_train_placeholder: True, - batch_size_placeholder: args.batch_size} + batch_size_placeholder: batch_size} tensor_list = [ loss, train_op, @@ -530,7 +635,7 @@ def train( (epoch, batch_number + 1, - args.epoch_size, + epoch_size, duration, loss_, cross_entropy_mean_, @@ -570,8 +675,8 @@ def validate( print('Running forward pass on validation set') - nrof_batches = len(label_list) // args.lfw_batch_size - nrof_images = nrof_batches * args.lfw_batch_size + nrof_batches = len(label_list) // lfw_batch_size + nrof_images = nrof_batches * lfw_batch_size # Enqueue one epoch of image paths and labels labels_array = np.expand_dims(np.array(label_list[:nrof_images]), 1) @@ -592,7 +697,7 @@ def validate( start_time = time.time() for i in range(nrof_batches): feed_dict = {phase_train_placeholder: False, - batch_size_placeholder: args.lfw_batch_size} + batch_size_placeholder: lfw_batch_size} loss_, cross_entropy_mean_, accuracy_ = sess.run( [loss, cross_entropy_mean, accuracy], feed_dict=feed_dict) loss_array[i], xent_array[i], accuracy_array[i] = ( From 644d296015645e45a9e5ba313ea04e930d41407b Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 15:04:39 -0500 Subject: [PATCH 19/50] clean images list for gen pairs --- .DS_Store | Bin 0 -> 6148 bytes facenet_sandberg/face.py | 4 ++-- facenet_sandberg/generate_pairs.py | 32 ++++++++++++++--------------- 3 files changed, 18 insertions(+), 18 deletions(-) create mode 100644 .DS_Store diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..5008ddfcf53c02e82d7eee2e57c38e5672ef89f6 GIT binary patch literal 6148 zcmeH~Jr2S!425mzP>H1@V-^m;4Wg<&0T*E43hX&L&p$$qDprKhvt+--jT7}7np#A3 zem<@ulZcFPQ@L2!n>{z**++&mCkOWA81W14cNZlEfg7;MkzE(HCqgga^y>{tEnwC%0;vJ&^%eQ zLs35+`xjp>T0 1: img1, img2 = sorted( - [ - int(''.join([i for i in random.choice(images) - if i.isnumeric()]).lstrip('0')), - int(''.join([i for i in random.choice(images) - if i.isnumeric()]).lstrip('0')) - ] - ) + [images.index(random.choice(images)), + images.index(random.choice(images))]) match = (person, img1, img2) if (img1 != img2) and (match not in matches): matches.add(match) @@ -78,13 +73,11 @@ def _make_mismatches(image_dir: str, person1 = 
random.choice(people) person2 = random.choice(people) if person1 != person2: - person1_images = os.listdir(os.path.join(image_dir, person1)) - person2_images = os.listdir(os.path.join(image_dir, person2)) + person1_images = _clean_images(image_dir, person1) + person2_images = _clean_images(image_dir, person2) if person1_images and person2_images: - img1 = int(''.join([i for i in random.choice(person1_images) - if i.isnumeric()]).lstrip('0')) - img2 = int(''.join([i for i in random.choice(person2_images) - if i.isnumeric()]).lstrip('0')) + img1 = person1_images.index(random.choice(person1_images)) + img2 = person2_images.index(random.choice(person2_images)) if person1.lower() > person2.lower(): person1, img1, person2, img2 = person2, img2, person1, img1 mismatch = (person1, img1, person2, img2) @@ -94,6 +87,13 @@ def _make_mismatches(image_dir: str, return sorted(list(mismatches), key=lambda x: x[0].lower()) +def _clean_images(base: str, folder: str): + images = os.listdir(os.path.join(base, folder)) + images = [image for image in images if image.endswith( + ".jpg") or image.endswith(".png")] + return images + + def _main(args: CommandLineArgs) -> None: people_folds = _split_people_into_folds(args.image_dir, args.num_folds) matches = [] From 9cfdccab75c79ac6ad22fbedda80a704692cd9e3 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 15:19:58 -0500 Subject: [PATCH 20/50] added cleaning to lfw gen --- facenet_sandberg/lfw.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/facenet_sandberg/lfw.py b/facenet_sandberg/lfw.py index 653cdff94..2ccda8bfc 100644 --- a/facenet_sandberg/lfw.py +++ b/facenet_sandberg/lfw.py @@ -114,7 +114,9 @@ def rename(person_folder): person_folder {str} -- path to folder named after person """ - all_image_paths = glob.glob(os.path.join(person_folder, "*")) + all_image_paths = glob.glob(os.path.join(person_folder, "*.*")) + all_image_paths = [image for image in all_image_paths if image.endswith( + ".jpg") or image.endswith(".png")] person_name = os.path.basename(os.path.normpath(person_folder)) concat_name = '_'.join(person_name.split()) for index, image_path in enumerate(all_image_paths): From 20820605ebad4f5e7e69a439e3c9b8cdfdc5a548 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 21:14:44 -0500 Subject: [PATCH 21/50] fixes --- facenet_sandberg/face.py | 171 +++++++------------- facenet_sandberg/generate_pairs.py | 11 +- facenet_sandberg/train_softmax.py | 244 ++--------------------------- facenet_sandberg/train_utils.py | 236 ++++++++++++++++++++++++++++ setup.py | 4 +- 5 files changed, 306 insertions(+), 360 deletions(-) create mode 100644 facenet_sandberg/train_utils.py diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py index 7a07cddb3..9b565825d 100644 --- a/facenet_sandberg/face.py +++ b/facenet_sandberg/face.py @@ -79,15 +79,6 @@ def __init__(self, facenet_model_checkpoint: str, threshold: float = 1.10): @staticmethod def download_image(url: str) -> Image: - """Downloads an image from the url as a numpy array (opencv format) - - Arguments: - url {str} -- url of image - - Returns: - Image -- array representing image - """ - req = urlopen(url) arr = np.asarray(bytearray(req.read()), dtype=np.uint8) image = cv2.imdecode(arr, -1) @@ -95,31 +86,11 @@ def download_image(url: str) -> Image: @staticmethod def get_image_from_path(image_path: str) -> Image: - """Reads an image path to a numpy array (opencv format) - - Arguments: - image_path {str} -- path to image - - Returns: - Image -- array 
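# _clean_images above (and the same endswith() filter added to lfw.py in
# the next patch) only keeps .jpg/.png files. An equivalent filter that is
# case-insensitive and easier to extend -- a sketch, not what these
# patches actually use:
import os

def list_images(base, folder, extensions=('.jpg', '.png')):
    entries = os.listdir(os.path.join(base, folder))
    return [entry for entry in entries
            if os.path.splitext(entry)[1].lower() in extensions]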
representing image - """ - return Identifier.fix_image(cv2.imread(image_path)) @staticmethod def get_images_from_dir( directory: str, recursive: bool) -> ImageGenerator: - """Gets images in a directory - - Arguments: - directory {str} -- path to directory - recursive {bool} -- if True searches all subfolders for images. - else searches for images in folder only. - - Returns: - ImageGenerator -- generator of images - """ - if recursive: image_paths = iglob(os.path.join( directory, '**', '*.*'), recursive=recursive) @@ -139,22 +110,18 @@ def fix_image(image: Image): def vectorize(self, image: Image, prealigned: bool = False, - face_limit: int = 5) -> List[Image]: + detect_multiple_faces: bool=True, + face_limit: int = 5) -> List[Embedding]: """Gets face embeddings in a single image - - Arguments: - image {Image} -- Image to find embeddings - Keyword Arguments: prealigned {bool} -- is the image already aligned face_limit {int} -- max number of faces allowed before image is discarded. (default: {5}) - Returns: - List[Image] -- list of embeddings """ if not prealigned: - faces: List[Face] = self.detect_encode(image, face_limit) + faces = self.detect_encode( + image, detect_multiple_faces, face_limit) vectors = [face.embedding for face in faces] else: vectors = [self.encoder.generate_embedding(image)] @@ -162,40 +129,35 @@ def vectorize(self, image: Image, def vectorize_all(self, images: ImageGenerator, - face_limit: int = 5) -> EmbeddingsGenerator: + prealigned: bool = False, + detect_multiple_faces: bool=True, + face_limit: int = 5) -> List[List[Embedding]]: """Gets face embeddings from a generator of images - - Arguments: - images {ImageGenerator} -- Images to find embeddings for - Keyword Arguments: + prealigned {bool} -- is the image already aligned face_limit {int} -- max number of faces allowed before image is discarded. (default: {5}) - - Returns: - EmbeddingGenerator-- generator of lists of images found in - each photo """ - all_faces = self.detect_encode_all( - images=images, save_memory=True, face_limit=face_limit) - vectors = (face.embedding for faces in all_faces for face in faces) + if not prealigned: + all_faces = self.detect_encode_all( + images=images, + save_memory=True, + detect_multiple_faces=detect_multiple_faces, + face_limit=face_limit) + vectors = [face.embedding for faces in all_faces for face in faces] + else: + vectors = self.encoder.generate_embeddings(images) return vectors def detect_encode(self, image: Image, + detect_multiple_faces: bool=True, face_limit: int=5) -> List[Face]: """Detects faces in an image and encodes them - - Arguments: - image {Image} -- image to find faces and encode - face_limit {int} -- Maximum # of faces allowed in image. - If over limit returns empty list - - Returns: - List[Face] -- list of Face objects with embeddings attached """ - faces = self.detector.find_faces(image, face_limit) + faces = self.detector.find_faces( + image, detect_multiple_faces, face_limit) for face in faces: face.embedding = self.encoder.generate_embedding(face.image) return faces @@ -204,25 +166,19 @@ def detect_encode_all(self, images: ImageGenerator, urls: [str]=None, save_memory: bool=False, + detect_multiple_faces: bool=True, face_limit: int=5) -> FacesGenerator: """For a list of images finds and encodes all faces - Arguments: - images {ImageGenerator} -- images to encode - Keyword Arguments: - urls {str[]} -- Optional list of urls to attach to Face objects. - Should be same length as images if used. 
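# A minimal usage sketch for the Identifier class defined in this module;
# the model path and image file names are placeholders:
from facenet_sandberg.face import Identifier

identifier = Identifier('20180402-114759.pb', threshold=1.10)
image_1 = identifier.get_image_from_path('person_a.jpg')
image_2 = identifier.get_image_from_path('person_b.jpg')
match = identifier.compare_images(image_1, image_2)
print('match' if match.is_match else 'no match')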
(default: {None}) save_memory {bool} -- Saves memory by deleting image from Face objects. - Should only be used if with you have some other kind - of refference to the original image like a url. (default: {False}) - - Returns: - FaceGenerator -- Generator of lists of Face objects in each image + Should only be used if with you have some other kind + of refference to the original image like a url. (default: {False}) """ - all_faces = self.detector.bulk_find_face(images, urls, face_limit) - return self.encoder.get_all_embeddings(all_faces, save_memory) + all_faces = self.detector.bulk_find_face( + images, urls, detect_multiple_faces, face_limit) + return self.encoder.get_face_embeddings(all_faces, save_memory) def compare_embedding(self, embedding_1: Embedding, @@ -231,15 +187,9 @@ def compare_embedding(self, float): """Compares the distance between two embeddings - Arguments: - embedding_1 {numpy.ndarray} -- face embedding - embedding_2 {numpy.ndarray} -- face embedding - Keyword Arguments: distance_metric {int} -- 0 for Euclidian distance and 1 for Cosine similarity (default: {0}) - Returns: - (bool, float) -- returns True if match and distance """ distance = facenet.distance(embedding_1.reshape( @@ -249,21 +199,13 @@ def compare_embedding(self, is_match = True return is_match, distance - def compare_images(self, image_1: Image, - image_2: Image) -> Match: - """Compares two images for matching faces - - Arguments: - image_1 {cv2 image (np array)} -- openCV image - image_2 {cv2 image (np array)} -- openCV image - - Returns: - Match -- Match object which has the two images, is_match, and score - """ - + def compare_images(self, image_1: Image, image_2: Image, + detect_multiple_faces: bool=True, face_limit: int=5) -> Match: match = Match() - image_1_faces = self.detect_encode(image_1) - image_2_faces = self.detect_encode(image_2) + image_1_faces = self.detect_encode( + image_1, detect_multiple_faces, face_limit) + image_2_faces = self.detect_encode( + image_2, detect_multiple_faces, face_limit) if image_1_faces and image_2_faces: for face_1 in image_1_faces: for face_2 in image_2_faces: @@ -280,12 +222,6 @@ def compare_images(self, image_1: Image, def find_all_matches(self, image_directory: str, recursive: bool) -> List[Match]: """Finds all matches in a directory of images - - Arguments: - image_directory {str} -- directory of images - - Returns: - Match[] -- List of Match objects """ all_images = self.get_images_from_dir(image_directory, recursive) @@ -325,15 +261,6 @@ def __init__(self, facenet_model_checkpoint: str): ).get_tensor_by_name("phase_train:0") def generate_embedding(self, image: Image) -> Embedding: - """Generates embeddings for a Face object with image - - Arguments: - image {Image} -- Image of face. Should be aligned. 
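# facenet.prewhiten (used just below) is per-image standardization:
# subtract the image mean and divide by its standard deviation, with the
# deviation floored so a near-constant image does not divide by zero.
# Roughly:
import numpy as np

def prewhiten_sketch(image: np.ndarray) -> np.ndarray:
    mean, std = image.mean(), image.std()
    std_adj = np.maximum(std, 1.0 / np.sqrt(image.size))
    return (image - mean) / std_adj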
- - Returns: - Embedding -- a single vector representing a face embedding - """ - prewhiten_face = facenet.prewhiten(image) # Run forward pass to calculate embeddings @@ -341,21 +268,25 @@ def generate_embedding(self, image: Image) -> Embedding: prewhiten_face], self.phase_train_placeholder: False} return self.sess.run(self.embeddings, feed_dict=feed_dict)[0] - def get_all_embeddings(self, - all_faces: FacesGenerator, - save_memory: bool=False) -> FacesGenerator: - """Generates embeddings for list of images - - Arguments: - all_faces -- array of face images + def generate_embeddings(self, + all_images: ImageGenerator): + prewhitened_images = [ + facenet.prewhiten(image) for image in all_images] + embeddings = [] + if prewhitened_images: + feed_dict = {self.images_placeholder: prewhitened_images, + self.phase_train_placeholder: False} + embeddings = self.sess.run(self.embeddings, feed_dict=feed_dict) + return embeddings + def get_face_embeddings(self, + all_faces: FacesGenerator, + save_memory: bool=False) -> FacesGenerator: + """Generates embeddings from generator of Faces Keyword Arguments: save_memory -- save memory by deleting image from Face object (default: {False}) - - Returns: - Faces with embeddings """ - face_list: List[List[Face]] = list(all_faces) + face_list = list(all_faces) prewhitened_images = [ facenet.prewhiten( face.image) for faces in face_list for face in faces] @@ -403,9 +334,10 @@ def __init__( def bulk_find_face(self, images: ImageGenerator, urls: List[str] = None, + detect_multiple_faces: bool=True, face_limit: int=5) -> FacesGenerator: for index, image in enumerate(images): - faces = self.find_faces(image, face_limit) + faces = self.find_faces(image, detect_multiple_faces, face_limit) if urls and index < len(urls): for face in faces: face.url = urls[index] @@ -413,11 +345,14 @@ def bulk_find_face(self, else: yield faces - def find_faces(self, image: Image, face_limit: int=5) -> List[Face]: + def find_faces(self, image: Image, detect_multiple_faces: bool=True, + face_limit: int=5) -> List[Face]: faces = [] results = self.detector.detect_faces(image) img_size = np.asarray(image.shape)[0:2] if len(results) < face_limit: + if not detect_multiple_faces: + results = results[:1] for result in results: face = Face() # bb[x, y, dx, dy] diff --git a/facenet_sandberg/generate_pairs.py b/facenet_sandberg/generate_pairs.py index 9f67585de..67b64e853 100644 --- a/facenet_sandberg/generate_pairs.py +++ b/facenet_sandberg/generate_pairs.py @@ -31,8 +31,7 @@ def write_pairs(fname: str, line = f'{match[0]}\t{match[1]}\t{match[2]}\n' fpairs.write(line) for mismatch in mismatch_fold: - line = f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t\ -{mismatch[3]}\n' + line = f'{mismatch[0]}\t{mismatch[1]}\t{mismatch[2]}\t{mismatch[3]}\n' fpairs.write(line) fpairs.flush() @@ -55,8 +54,8 @@ def _make_matches(image_dir: str, images = _clean_images(image_dir, person) if len(images) > 1: img1, img2 = sorted( - [images.index(random.choice(images)), - images.index(random.choice(images))]) + [images.index(random.choice(images)) + 1, + images.index(random.choice(images)) + 1]) match = (person, img1, img2) if (img1 != img2) and (match not in matches): matches.add(match) @@ -76,8 +75,8 @@ def _make_mismatches(image_dir: str, person1_images = _clean_images(image_dir, person1) person2_images = _clean_images(image_dir, person2) if person1_images and person2_images: - img1 = person1_images.index(random.choice(person1_images)) - img2 = person2_images.index(random.choice(person2_images)) + img1 = 
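# The "+ 1" in these hunks matters: LFW-style files are numbered from 1
# (e.g. Name_0001.jpg) while list.index() is 0-based, so the numbers
# written into pairs.txt must be shifted up by one:
images = ['Name_0001.jpg', 'Name_0002.jpg']
assert images.index('Name_0002.jpg') + 1 == 2  # matches the file number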
person1_images.index(random.choice(person1_images)) + 1 + img2 = person2_images.index(random.choice(person2_images)) + 1 if person1.lower() > person2.lower(): person1, img1, person2, img2 = person2, img2, person1, img1 mismatch = (person1, img1, person2, img2) diff --git a/facenet_sandberg/train_softmax.py b/facenet_sandberg/train_softmax.py index cdffc499c..e982a431b 100644 --- a/facenet_sandberg/train_softmax.py +++ b/facenet_sandberg/train_softmax.py @@ -158,21 +158,19 @@ def main( subdir = datetime.strftime(datetime.now(), '%Y%m%d-%H%M%S') log_dir = os.path.join(os.path.expanduser(logs_base_dir), subdir) - # Create the log directory if it doesn't exist os.makedirs(log_dir, exist_ok=True) model_dir = os.path.join(os.path.expanduser(models_base_dir), subdir) - # Create the model directory if it doesn't exist os.makedirs(model_dir, exist_ok=True) stat_file_name = os.path.join(log_dir, 'stat.h5') # Write arguments to a text file - facenet.write_arguments_to_file( - args, os.path.join(log_dir, 'arguments.txt')) + # facenet.write_arguments_to_file( + # args, os.path.join(log_dir, 'arguments.txt')) # Store some git revision info in a text file in the log directory - src_path, _ = os.path.split(os.path.realpath(__file__)) - facenet.store_revision_info(src_path, log_dir, ' '.join(sys.argv)) + # src_path, _ = os.path.split(os.path.realpath(__file__)) + # facenet.store_revision_info(src_path, log_dir, ' '.join(sys.argv)) np.random.seed(seed=seed) random.seed(seed) @@ -190,6 +188,10 @@ def main( dataset, validation_set_split_ratio, min_nrof_val_images_per_class, 'SPLIT_IMAGES') else: train_set, val_set = dataset, [] + image_list, label_list = facenet.get_image_paths_and_labels(train_set) + val_image_list, val_label_list = facenet.get_image_paths_and_labels( + val_set) + assert len(image_list) > 0, 'The training set should not be empty' nrof_classes = len(train_set) @@ -205,20 +207,13 @@ def main( # Read the file containing the pairs used for testing pairs = lfw.read_pairs(os.path.expanduser(lfw_pairs)) # Get the paths for the corresponding images - lfw_paths, actual_issame = lfw.get_paths( + lfw_paths, lfw_labels = lfw.get_paths( os.path.expanduser(lfw_dir), pairs) with tf.Graph().as_default(): tf.set_random_seed(seed) global_step = tf.Variable(0, trainable=False) - # Get a list of image paths and their labels - image_list, label_list = facenet.get_image_paths_and_labels(train_set) - assert len(image_list) > 0, 'The training set should not be empty' - - val_image_list, val_label_list = facenet.get_image_paths_and_labels( - val_set) - # Create a queue that produces indices into the image_list and # label_list labels = ops.convert_to_tensor(label_list, dtype=tf.int32) @@ -469,7 +464,7 @@ def main( embeddings, label_batch, lfw_paths, - actual_issame, + lfw_labels, lfw_batch_size, lfw_nrof_folds, log_dir, @@ -491,42 +486,6 @@ def main( return model_dir -def find_threshold(var, percentile): - hist, bin_edges = np.histogram(var, 100) - cdf = np.float32(np.cumsum(hist)) / np.sum(hist) - bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2 - # plt.plot(bin_centers, cdf) - threshold = np.interp(percentile * 0.01, cdf, bin_centers) - return threshold - - -def filter_dataset(dataset, data_filename, percentile, - min_nrof_images_per_class): - with h5py.File(data_filename, 'r') as f: - distance_to_center = np.array(f.get('distance_to_center')) - label_list = np.array(f.get('label_list')) - image_list = np.array(f.get('image_list')) - distance_to_center_threshold = find_threshold( - distance_to_center, 
percentile) - indices = np.where(distance_to_center >= - distance_to_center_threshold)[0] - filtered_dataset = dataset - removelist = [] - for i in indices: - label = label_list[i] - image = image_list[i] - if image in filtered_dataset[label].image_paths: - filtered_dataset[label].image_paths.remove(image) - if len(filtered_dataset[label].image_paths) < min_nrof_images_per_class: - removelist.append(label) - - ix = sorted(list(set(removelist)), reverse=True) - for i in ix: - del(filtered_dataset[i]) - - return filtered_dataset - - def train( args, sess, @@ -653,189 +612,6 @@ def train( return True -def validate( - args, - sess, - epoch, - image_list, - label_list, - enqueue_op, - image_paths_placeholder, - labels_placeholder, - control_placeholder, - phase_train_placeholder, - batch_size_placeholder, - stat, - loss, - regularization_losses, - cross_entropy_mean, - accuracy, - validate_every_n_epochs, - use_fixed_image_standardization): - - print('Running forward pass on validation set') - - nrof_batches = len(label_list) // lfw_batch_size - nrof_images = nrof_batches * lfw_batch_size - - # Enqueue one epoch of image paths and labels - labels_array = np.expand_dims(np.array(label_list[:nrof_images]), 1) - image_paths_array = np.expand_dims(np.array(image_list[:nrof_images]), 1) - control_array = np.ones_like( - labels_array, - np.int32) * facenet.FIXED_STANDARDIZATION * use_fixed_image_standardization - sess.run(enqueue_op, - {image_paths_placeholder: image_paths_array, - labels_placeholder: labels_array, - control_placeholder: control_array}) - - loss_array = np.zeros((nrof_batches,), np.float32) - xent_array = np.zeros((nrof_batches,), np.float32) - accuracy_array = np.zeros((nrof_batches,), np.float32) - - # Training loop - start_time = time.time() - for i in range(nrof_batches): - feed_dict = {phase_train_placeholder: False, - batch_size_placeholder: lfw_batch_size} - loss_, cross_entropy_mean_, accuracy_ = sess.run( - [loss, cross_entropy_mean, accuracy], feed_dict=feed_dict) - loss_array[i], xent_array[i], accuracy_array[i] = ( - loss_, cross_entropy_mean_, accuracy_) - if i % 10 == 9: - print('.', end='') - sys.stdout.flush() - print('') - - duration = time.time() - start_time - - val_index = (epoch - 1) // validate_every_n_epochs - stat['val_loss'][val_index] = np.mean(loss_array) - stat['val_xent_loss'][val_index] = np.mean(xent_array) - stat['val_accuracy'][val_index] = np.mean(accuracy_array) - - print('Validation Epoch: %d\tTime %.3f\tLoss %2.3f\tXent %2.3f\tAccuracy %2.3f' % ( - epoch, duration, np.mean(loss_array), np.mean(xent_array), np.mean(accuracy_array))) - - -def evaluate( - sess, - enqueue_op, - image_paths_placeholder, - labels_placeholder, - phase_train_placeholder, - batch_size_placeholder, - control_placeholder, - embeddings, - labels, - image_paths, - actual_issame, - batch_size, - nrof_folds, - log_dir, - step, - summary_writer, - stat, - epoch, - distance_metric, - subtract_mean, - use_flipped_images, - use_fixed_image_standardization): - start_time = time.time() - # Run forward pass to calculate embeddings - print('Runnning forward pass on LFW images') - - # Enqueue one epoch of image paths and labels - # nrof_pairs * nrof_images_per_pair - nrof_embeddings = len(actual_issame) * 2 - nrof_flips = 2 if use_flipped_images else 1 - nrof_images = nrof_embeddings * nrof_flips - labels_array = np.expand_dims(np.arange(0, nrof_images), 1) - image_paths_array = np.expand_dims( - np.repeat(np.array(image_paths), nrof_flips), 1) - control_array = 
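# The input pipeline reads one integer of "control" per queued image and
# treats it as a bit field, so several preprocessing switches pack into a
# single value. A sketch of what the two "+=" statements here build, with
# placeholder bit values standing in for the real
# facenet.FIXED_STANDARDIZATION and facenet.FLIP constants:
import numpy as np

FIXED_STANDARDIZATION, FLIP = 1, 2   # placeholder bits
index = np.arange(6)
control = FIXED_STANDARDIZATION * 1 + FLIP * (index % 2)
# -> [1, 3, 1, 3, 1, 3]: every image standardized, every second one flipped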
np.zeros_like(labels_array, np.int32) - if use_fixed_image_standardization: - control_array += np.ones_like(labels_array) * \ - facenet.FIXED_STANDARDIZATION - if use_flipped_images: - # Flip every second image - control_array += (labels_array % 2) * facenet.FLIP - sess.run(enqueue_op, - {image_paths_placeholder: image_paths_array, - labels_placeholder: labels_array, - control_placeholder: control_array}) - - embedding_size = int(embeddings.get_shape()[1]) - assert nrof_images % batch_size == 0, 'The number of LFW images must be an integer multiple of the LFW batch size' - nrof_batches = nrof_images // batch_size - emb_array = np.zeros((nrof_images, embedding_size)) - lab_array = np.zeros((nrof_images,)) - for i in range(nrof_batches): - feed_dict = {phase_train_placeholder: False, - batch_size_placeholder: batch_size} - emb, lab = sess.run([embeddings, labels], feed_dict=feed_dict) - lab_array[lab] = lab - emb_array[lab, :] = emb - if i % 10 == 9: - print('.', end='') - sys.stdout.flush() - print('') - embeddings = np.zeros((nrof_embeddings, embedding_size * nrof_flips)) - if use_flipped_images: - # Concatenate embeddings for flipped and non flipped version of the - # images - embeddings[:, :embedding_size] = emb_array[0::2, :] - embeddings[:, embedding_size:] = emb_array[1::2, :] - else: - embeddings = emb_array - - assert np.array_equal(lab_array, np.arange( - nrof_images)), 'Wrong labels used for evaluation, possibly caused by training examples left in the input pipeline' - _, _, accuracy, val, val_std, far = lfw.evaluate( - embeddings, actual_issame, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) - - print('Accuracy: %2.5f+-%2.5f' % (np.mean(accuracy), np.std(accuracy))) - print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far)) - lfw_time = time.time() - start_time - # Add validation loss and accuracy to summary - summary = tf.Summary() - # pylint: disable=maybe-no-member - summary.value.add(tag='lfw/accuracy', simple_value=np.mean(accuracy)) - summary.value.add(tag='lfw/val_rate', simple_value=val) - summary.value.add(tag='time/lfw', simple_value=lfw_time) - summary_writer.add_summary(summary, step) - with open(os.path.join(log_dir, 'lfw_result.txt'), 'at') as f: - f.write('%d\t%.5f\t%.5f\n' % (step, np.mean(accuracy), val)) - stat['lfw_accuracy'][epoch - 1] = np.mean(accuracy) - stat['lfw_valrate'][epoch - 1] = val - - -def save_variables_and_metagraph( - sess, saver, summary_writer, model_dir, model_name, step): - # Save the model checkpoint - print('Saving variables') - start_time = time.time() - checkpoint_path = os.path.join(model_dir, 'model-%s.ckpt' % model_name) - saver.save(sess, checkpoint_path, global_step=step, write_meta_graph=False) - save_time_variables = time.time() - start_time - print('Variables saved in %.2f seconds' % save_time_variables) - metagraph_filename = os.path.join(model_dir, 'model-%s.meta' % model_name) - save_time_metagraph = 0 - if not os.path.exists(metagraph_filename): - print('Saving metagraph') - start_time = time.time() - saver.export_meta_graph(metagraph_filename) - save_time_metagraph = time.time() - start_time - print('Metagraph saved in %.2f seconds' % save_time_metagraph) - summary = tf.Summary() - # pylint: disable=maybe-no-member - summary.value.add(tag='time/save_variables', - simple_value=save_time_variables) - summary.value.add(tag='time/save_metagraph', - simple_value=save_time_metagraph) - summary_writer.add_summary(summary, step) - - def parse_arguments(argv): parser = 
argparse.ArgumentParser() diff --git a/facenet_sandberg/train_utils.py b/facenet_sandberg/train_utils.py new file mode 100644 index 000000000..4686d1fe2 --- /dev/null +++ b/facenet_sandberg/train_utils.py @@ -0,0 +1,236 @@ +import argparse +import importlib +import math +import os.path +import random +import sys +import time +from datetime import datetime + +import h5py +import numpy as np +import tensorflow as tf +import tensorflow.contrib.slim as slim +from facenet_sandberg import facenet, lfw +from tensorflow.python.framework import ops +from tensorflow.python.ops import array_ops, data_flow_ops + + +def filter_dataset(dataset, data_filename, percentile, + min_nrof_images_per_class): + with h5py.File(data_filename, 'r') as f: + distance_to_center = np.array(f.get('distance_to_center')) + label_list = np.array(f.get('label_list')) + image_list = np.array(f.get('image_list')) + distance_to_center_threshold = find_threshold( + distance_to_center, percentile) + indices = np.where(distance_to_center >= + distance_to_center_threshold)[0] + filtered_dataset = dataset + removelist = [] + for i in indices: + label = label_list[i] + image = image_list[i] + if image in filtered_dataset[label].image_paths: + filtered_dataset[label].image_paths.remove(image) + if len( + filtered_dataset[label].image_paths) < min_nrof_images_per_class: + removelist.append(label) + + ix = sorted(list(set(removelist)), reverse=True) + for i in ix: + del(filtered_dataset[i]) + + return filtered_dataset + + +def find_threshold(var, percentile): + hist, bin_edges = np.histogram(var, 100) + cdf = np.float32(np.cumsum(hist)) / np.sum(hist) + bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2 + # plt.plot(bin_centers, cdf) + threshold = np.interp(percentile * 0.01, cdf, bin_centers) + return threshold + + +def save_variables_and_metagraph( + sess, saver, summary_writer, model_dir: str, model_name: str, step: int): + # Save the model checkpoint + print('Saving variables') + start_time = time.time() + checkpoint_path = os.path.join(model_dir, 'model-%s.ckpt' % model_name) + saver.save(sess, checkpoint_path, global_step=step, write_meta_graph=False) + save_time_variables = time.time() - start_time + print('Variables saved in %.2f seconds' % save_time_variables) + metagraph_filename = os.path.join(model_dir, 'model-%s.meta' % model_name) + save_time_metagraph = 0 + if not os.path.exists(metagraph_filename): + print('Saving metagraph') + start_time = time.time() + saver.export_meta_graph(metagraph_filename) + save_time_metagraph = time.time() - start_time + print('Metagraph saved in %.2f seconds' % save_time_metagraph) + summary = tf.Summary() + # pylint: disable=maybe-no-member + summary.value.add(tag='time/save_variables', + simple_value=save_time_variables) + summary.value.add(tag='time/save_metagraph', + simple_value=save_time_metagraph) + summary_writer.add_summary(summary, step) + + +def validate( + args, + sess, + epoch, + image_list, + label_list, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + control_placeholder, + phase_train_placeholder, + batch_size_placeholder, + stat, + loss, + regularization_losses, + cross_entropy_mean, + accuracy, + validate_every_n_epochs, + use_fixed_image_standardization): + + print('Running forward pass on validation set') + + nrof_batches = len(label_list) // lfw_batch_size + nrof_images = nrof_batches * lfw_batch_size + + # Enqueue one epoch of image paths and labels + labels_array = np.expand_dims(np.array(label_list[:nrof_images]), 1) + image_paths_array = 
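# find_threshold above inverts the empirical CDF: it returns the distance
# below which `percentile` percent of the samples fall, via linear
# interpolation over the histogram bin centers. A tiny worked example:
import numpy as np

samples = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
hist, bin_edges = np.histogram(samples, 100)
cdf = np.float32(np.cumsum(hist)) / np.sum(hist)
bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
threshold = np.interp(0.80, cdf, bin_centers)
# ~80% of the samples lie below `threshold`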
np.expand_dims(np.array(image_list[:nrof_images]), 1) + control_array = np.ones_like( + labels_array, + np.int32) * facenet.FIXED_STANDARDIZATION * use_fixed_image_standardization + sess.run(enqueue_op, + {image_paths_placeholder: image_paths_array, + labels_placeholder: labels_array, + control_placeholder: control_array}) + + loss_array = np.zeros((nrof_batches,), np.float32) + xent_array = np.zeros((nrof_batches,), np.float32) + accuracy_array = np.zeros((nrof_batches,), np.float32) + + # Training loop + start_time = time.time() + for i in range(nrof_batches): + feed_dict = {phase_train_placeholder: False, + batch_size_placeholder: lfw_batch_size} + loss_, cross_entropy_mean_, accuracy_ = sess.run( + [loss, cross_entropy_mean, accuracy], feed_dict=feed_dict) + loss_array[i], xent_array[i], accuracy_array[i] = ( + loss_, cross_entropy_mean_, accuracy_) + if i % 10 == 9: + print('.', end='') + sys.stdout.flush() + print('') + + duration = time.time() - start_time + + val_index = (epoch - 1) // validate_every_n_epochs + stat['val_loss'][val_index] = np.mean(loss_array) + stat['val_xent_loss'][val_index] = np.mean(xent_array) + stat['val_accuracy'][val_index] = np.mean(accuracy_array) + + print('Validation Epoch: %d\tTime %.3f\tLoss %2.3f\tXent %2.3f\tAccuracy %2.3f' % ( + epoch, duration, np.mean(loss_array), np.mean(xent_array), np.mean(accuracy_array))) + + +def evaluate( + sess, + enqueue_op, + image_paths_placeholder, + labels_placeholder, + phase_train_placeholder, + batch_size_placeholder, + control_placeholder, + embeddings, + labels, + image_paths, + actual_issame, + batch_size, + nrof_folds, + log_dir, + step, + summary_writer, + stat, + epoch, + distance_metric, + subtract_mean, + use_flipped_images, + use_fixed_image_standardization): + start_time = time.time() + # Run forward pass to calculate embeddings + print('Runnning forward pass on LFW images') + + # Enqueue one epoch of image paths and labels + # nrof_pairs * nrof_images_per_pair + nrof_embeddings = len(actual_issame) * 2 + nrof_flips = 2 if use_flipped_images else 1 + nrof_images = nrof_embeddings * nrof_flips + labels_array = np.expand_dims(np.arange(0, nrof_images), 1) + image_paths_array = np.expand_dims( + np.repeat(np.array(image_paths), nrof_flips), 1) + control_array = np.zeros_like(labels_array, np.int32) + if use_fixed_image_standardization: + control_array += np.ones_like(labels_array) * \ + facenet.FIXED_STANDARDIZATION + if use_flipped_images: + # Flip every second image + control_array += (labels_array % 2) * facenet.FLIP + sess.run(enqueue_op, + {image_paths_placeholder: image_paths_array, + labels_placeholder: labels_array, + control_placeholder: control_array}) + + embedding_size = int(embeddings.get_shape()[1]) + assert nrof_images % batch_size == 0, 'The number of LFW images must be an integer multiple of the LFW batch size' + nrof_batches = nrof_images // batch_size + emb_array = np.zeros((nrof_images, embedding_size)) + lab_array = np.zeros((nrof_images,)) + for i in range(nrof_batches): + feed_dict = {phase_train_placeholder: False, + batch_size_placeholder: batch_size} + emb, lab = sess.run([embeddings, labels], feed_dict=feed_dict) + lab_array[lab] = lab + emb_array[lab, :] = emb + if i % 10 == 9: + print('.', end='') + sys.stdout.flush() + print('') + embeddings = np.zeros((nrof_embeddings, embedding_size * nrof_flips)) + if use_flipped_images: + # Concatenate embeddings for flipped and non flipped version of the + # images + embeddings[:, :embedding_size] = emb_array[0::2, :] + embeddings[:, 
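# With use_flipped_images the queue holds every image twice (original at
# even positions, horizontally flipped at odd ones), and the two halves
# are stitched into one descriptor of twice the width. In numpy terms:
import numpy as np

emb_array = np.arange(8).reshape(4, 2)  # 2 pairs of (original, flipped)
combined = np.concatenate([emb_array[0::2], emb_array[1::2]], axis=1)
# combined.shape == (2, 4): [original embedding | flipped embedding]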
embedding_size:] = emb_array[1::2, :] + else: + embeddings = emb_array + + assert np.array_equal(lab_array, np.arange( + nrof_images)), 'Wrong labels used for evaluation, possibly caused by training examples left in the input pipeline' + _, _, accuracy, val, val_std, far = lfw.evaluate( + embeddings, actual_issame, nrof_folds=nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean) + + print('Accuracy: %2.5f+-%2.5f' % (np.mean(accuracy), np.std(accuracy))) + print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far)) + lfw_time = time.time() - start_time + # Add validation loss and accuracy to summary + summary = tf.Summary() + # pylint: disable=maybe-no-member + summary.value.add(tag='lfw/accuracy', simple_value=np.mean(accuracy)) + summary.value.add(tag='lfw/val_rate', simple_value=val) + summary.value.add(tag='time/lfw', simple_value=lfw_time) + summary_writer.add_summary(summary, step) + with open(os.path.join(log_dir, 'lfw_result.txt'), 'at') as f: + f.write('%d\t%.5f\t%.5f\n' % (step, np.mean(accuracy), val)) + stat['lfw_accuracy'][epoch - 1] = np.mean(accuracy) + stat['lfw_valrate'][epoch - 1] = val diff --git a/setup.py b/setup.py index 4f04a004a..264f4a6f5 100644 --- a/setup.py +++ b/setup.py @@ -1,8 +1,8 @@ -from setuptools import setup, find_packages +from setuptools import find_packages, setup setup( name='facenet_sandberg', - version='1.0.9', + version='1.0.10', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From 2a542876aee8fc07487a6dccea8ba32d2c36a92a Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 21:21:53 -0500 Subject: [PATCH 22/50] fix return type --- facenet_sandberg/face.py | 4 +++- setup.py | 2 +- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py index 9b565825d..3ee4c3b37 100644 --- a/facenet_sandberg/face.py +++ b/facenet_sandberg/face.py @@ -145,7 +145,9 @@ def vectorize_all(self, save_memory=True, detect_multiple_faces=detect_multiple_faces, face_limit=face_limit) - vectors = [face.embedding for faces in all_faces for face in faces] + vectors = [] + for faces in all_faces: + vectors += [face.embedding for face in faces] else: vectors = self.encoder.generate_embeddings(images) return vectors diff --git a/setup.py b/setup.py index 264f4a6f5..621c1beb8 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.10', + version='1.0.11', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. 
Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From fe7b9c1b78ad8d38a2faa7ad09830b972e0aa9cb Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 21:32:04 -0500 Subject: [PATCH 23/50] fixed imports --- facenet_sandberg/face.py | 2 +- setup.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py index 3ee4c3b37..8430f0d24 100644 --- a/facenet_sandberg/face.py +++ b/facenet_sandberg/face.py @@ -11,7 +11,7 @@ import numpy as np import tensorflow as tf from facenet_sandberg import facenet, validate_on_lfw -from facenet_sandberg.align import align_dataset_mtcnn, detect_face +from facenet_sandberg.align import align_dataset_mtcnn from mtcnn.mtcnn import MTCNN from scipy import misc diff --git a/setup.py b/setup.py index 621c1beb8..e09576ed7 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.11', + version='1.0.12', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From d0498a194c926d4180a9513a356fda7a2580713d Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Mon, 13 Aug 2018 21:39:18 -0500 Subject: [PATCH 24/50] fixed class order --- facenet_sandberg/face.py | 13 +++++++------ setup.py | 2 +- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/facenet_sandberg/face.py b/facenet_sandberg/face.py index 8430f0d24..459a9d550 100644 --- a/facenet_sandberg/face.py +++ b/facenet_sandberg/face.py @@ -18,12 +18,6 @@ os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' tf.logging.set_verbosity(tf.logging.ERROR) -Image = np.ndarray -Embedding = np.ndarray -EmbeddingsGenerator = Generator[List[Embedding], None, None] -ImageGenerator = Generator[Image, None, None] -FacesGenerator = Generator[List[Face], None, None] - class Face: """Class representing a single face @@ -65,6 +59,13 @@ def __init__(self): self.is_match: bool = False +Image = np.ndarray +Embedding = np.ndarray +EmbeddingsGenerator = Generator[List[Embedding], None, None] +ImageGenerator = Generator[Image, None, None] +FacesGenerator = Generator[List[Face], None, None] + + class Identifier: """Class to detect, encode, and match faces diff --git a/setup.py b/setup.py index e09576ed7..29c82424b 100644 --- a/setup.py +++ b/setup.py @@ -2,7 +2,7 @@ setup( name='facenet_sandberg', - version='1.0.12', + version='1.0.13', description="Face recognition using TensorFlow", long_description="Face recognition with Google's FaceNet deep neural network & TensorFlow. 
Mirror of https://github.com/davidsandberg/facenet.", url='https://github.com/armanrahman22/facenet', From 3d74300d70925cb0c1866598281be72b711960b4 Mon Sep 17 00:00:00 2001 From: Arman Rahman Date: Thu, 16 Aug 2018 16:59:15 -0400 Subject: [PATCH 25/50] refactored a lot of code and added insightface --- facenet_sandberg/align/align_dataset_mtcnn.py | 291 +++---- facenet_sandberg/align/det1.npy | Bin 27368 -> 0 bytes facenet_sandberg/align/det2.npy | Bin 401681 -> 0 bytes facenet_sandberg/align/det3.npy | Bin 1557360 -> 0 bytes .../calculate_filtering_metrics.py | 3 +- facenet_sandberg/face.py | 430 ---------- facenet_sandberg/inference/__init__.py | 0 facenet_sandberg/inference/common_types.py | 50 ++ facenet_sandberg/inference/facenet_encoder.py | 119 +++ facenet_sandberg/inference/identifier.py | 187 ++++ .../inference/insightface_encoder.py | 157 ++++ facenet_sandberg/inference/insightface_old.py | 140 +++ facenet_sandberg/inference/mtcnn_detector.py | 80 ++ facenet_sandberg/inference/utils.py | 66 ++ .../models/L_Resnet_E_IR_fix_issue9.py | 797 ++++++++++++++++++ facenet_sandberg/train_utils.py | 7 +- setup.py | 5 +- 17 files changed, 1757 insertions(+), 575 deletions(-) delete mode 100644 facenet_sandberg/align/det1.npy delete mode 100644 facenet_sandberg/align/det2.npy delete mode 100644 facenet_sandberg/align/det3.npy delete mode 100644 facenet_sandberg/face.py create mode 100644 facenet_sandberg/inference/__init__.py create mode 100644 facenet_sandberg/inference/common_types.py create mode 100644 facenet_sandberg/inference/facenet_encoder.py create mode 100644 facenet_sandberg/inference/identifier.py create mode 100644 facenet_sandberg/inference/insightface_encoder.py create mode 100644 facenet_sandberg/inference/insightface_old.py create mode 100644 facenet_sandberg/inference/mtcnn_detector.py create mode 100644 facenet_sandberg/inference/utils.py create mode 100644 facenet_sandberg/models/L_Resnet_E_IR_fix_issue9.py diff --git a/facenet_sandberg/align/align_dataset_mtcnn.py b/facenet_sandberg/align/align_dataset_mtcnn.py index 19630fe32..70b17934e 100644 --- a/facenet_sandberg/align/align_dataset_mtcnn.py +++ b/facenet_sandberg/align/align_dataset_mtcnn.py @@ -35,13 +35,28 @@ import numpy as np import progressbar as pb import tensorflow as tf -from facenet_sandberg import face, facenet +from facenet_sandberg import facenet +from facenet_sandberg.inference import facenet_encoder, mtcnn_detector from pathos.multiprocessing import ProcessPool from scipy import misc os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' tf.logging.set_verbosity(tf.logging.ERROR) +widgets = ['Aligning Dataset', pb.Percentage(), ' ', + pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()] +global_image_size = None +global_margin = None +global_detect_multiple_faces = None +global_output_dir = None +global_random_order = None +global_facenet_model_checkpoint = None +timer = None +num_sucessful = Value(c_int) # defaults to 0 +num_sucessful_lock = Lock() +num_images_total = Value(c_int) +num_images_total_lock = Lock() + def main( input_dir: str, @@ -50,7 +65,8 @@ def main( image_size: int=182, margin: int=44, detect_multiple_faces: bool=False, - num_processes: int=1): + num_processes: int=1, + facenet_model_checkpoint: str=''): """Aligns an image dataset Arguments: @@ -66,155 +82,146 @@ def main( detect_multiple_faces {bool} -- Detect and align multiple faces per image. 
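# A typical invocation of this script after this patch, assuming it is run
# directly; all paths are placeholders (see parse_arguments below):
#
#   python align_dataset_mtcnn.py ~/datasets/raw ~/datasets/aligned \
#       20180402-114759.pb --image_size 182 --margin 44 \
#       --num_processes 4 --detect_multiple_faces
#
# The facenet_model_checkpoint argument is only consulted when
# detect_multiple_faces is set, to pick the best face found in each image.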
(default: {False}) num_processes {int} -- Number of processes to use (default: {1}) + facenet_model_checkpoint {str} -- path to facenet model if detecting mutiple faces (default: {''}) """ + global timer + global global_image_size + global global_margin + global global_detect_multiple_faces + global global_output_dir + global global_random_order + global global_facenet_model_checkpoint + global_image_size = image_size + global_margin = margin + global_detect_multiple_faces = detect_multiple_faces + global_output_dir = output_dir + global_random_order = random_order + global_facenet_model_checkpoint = facenet_model_checkpoint output_dir = os.path.expanduser(output_dir) os.makedirs(output_dir, exist_ok=True) - # Store some git revision info in a text file in the log directory - src_path, _ = os.path.split(os.path.realpath(__file__)) - facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv)) - dataset = facenet.get_dataset(input_dir) if random_order: random.shuffle(dataset) - input_dir_all = os.path.join(input_dir, '**', '*.*') - num_images = sum(1 for x in iglob( - input_dir_all, recursive=True)) + num_images = sum(len(i) for i in dataset) + timer = pb.ProgressBar(widgets=widgets, maxval=num_images).start() num_processes = min(num_processes, os.cpu_count()) - - aligner = Aligner( - image_size=image_size, - margin=margin, - detect_multiple_faces=detect_multiple_faces, - output_dir=output_dir, - random_order=random_order, - num_processes=num_processes, - num_images=num_images) - - aligner.align_multiprocess(dataset=dataset) - - print('Creating networks and loading parameters') - - -class Aligner: - - def __init__(self, image_size: int, margin: int, detect_multiple_faces: bool, - output_dir: str, random_order: bool, num_processes: int, num_images: int): - widgets = ['Aligning Dataset', pb.Percentage(), ' ', - pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()] - self.image_size = image_size - self.margin = margin - self.detect_multiple_faces = detect_multiple_faces - self.output_dir = output_dir - self.random_order = random_order - self.num_processes = num_processes - self.timer = pb.ProgressBar(widgets=widgets, maxval=num_images).start() - self.num_sucessful = Value(c_int) # defaults to 0 - self.num_sucessful_lock = Lock() - self.num_images_total = Value(c_int) - self.num_images_total_lock = Lock() - - def align_multiprocess(self, dataset: List[facenet.PersonClass]): - if self.num_processes > 1: - process_pool = ProcessPool(self.num_processes) - process_pool.imap(self.align, dataset) - process_pool.close() - process_pool.join() - else: - for person in dataset: - self.align(person) - print('Total number of images: %d' % int(self.num_images_total.value)) - print('Number of successfully aligned images: %d' % - int(self.num_sucessful.value)) - - def align(self, person: facenet.PersonClass): - # import pdb;pdb.set_trace() - detector = face.Detector( - face_crop_size=self.image_size, - face_crop_margin=self.margin, - detect_multiple_faces=self.detect_multiple_faces) - # Add a random key to the filename to allow alignment using multiple - # processes - random_key = np.random.randint(0, high=99999) - bounding_boxes_filename = os.path.join( - self.output_dir, 'bounding_boxes_%05d.txt' % random_key) - output_class_dir = os.path.join(self.output_dir, person.name) - - if not os.path.exists(output_class_dir): - os.makedirs(output_class_dir) - if self.random_order: - random.shuffle(person.image_paths) - - with open(bounding_boxes_filename, "w") as text_file: - for image_path in 
person.image_paths: - self.increment_total() - self.process_image(detector, image_path, - text_file, output_class_dir) - self.timer.update(int(self.num_sucessful.value)) - - def process_image(self, detector, image_path: str, - text_file: str, output_class_dir: str): - output_filename = self.get_file_name(image_path, output_class_dir) + if num_processes > 1: + process_pool = ProcessPool(num_processes) + process_pool.map(align, dataset) + process_pool.close() + process_pool.join() + else: + for person in dataset: + align(person) + + timer.finish() + print('Total number of images: %d' % int(num_images_total.value)) + print('Number of faces found and aligned: %d' % + int(num_sucessful.value)) + + +def align(person: facenet.PersonClass): + detector = mtcnn_detector.Detector( + detect_multiple_faces=global_detect_multiple_faces) + output_class_dir = os.path.join(global_output_dir, person.name) + + if not os.path.exists(output_class_dir): + os.makedirs(output_class_dir) + if global_random_order: + random.shuffle(person.image_paths) + + all_faces = [] + for image_path in person.image_paths: + increment_total() + output_filename = get_file_name(image_path, output_class_dir) if not os.path.exists(output_filename): - try: - image = misc.imread(image_path) - except (IOError, ValueError, IndexError) as error: - error_message = '{}: {}'.format(image_path, error) - print(error_message) - else: - image = self.fix_image( - image, image_path, output_filename, text_file) - faces = detector.find_faces(image) - for index, person in enumerate(faces): - self.increment_sucessful() - filename_base, file_extension = os.path.splitext( - output_filename) - if self.detect_multiple_faces: - output_filename_n = "{}_{}{}".format( - filename_base, index, file_extension) - else: - output_filename_n = "{}{}".format( - filename_base, file_extension) - misc.imsave(output_filename_n, person.image) - text_file.write( - '%s %d %d %d %d\n' % - (output_filename_n, - person.bounding_box[0], - person.bounding_box[1], - person.bounding_box[2], - person.bounding_box[3])) + faces = process_image(detector, image_path, output_filename) + if faces: + all_faces.append(faces) + + if global_detect_multiple_faces and global_facenet_model_checkpoint and all_faces: + encoder = facenet_encoder.Facenet(global_facenet_model_checkpoint) + anchor = get_anchor(all_faces) + if anchor: + final_face_paths = [] + for faces in all_faces: + if not faces: + pass + if len(faces) > 1: + best_face = encoder.get_best_match(anchor, faces) + misc.imsave(best_face.name, best_face.image) + elif len(faces) == 1: + misc.imsave(faces[0].name, faces[0].image) + else: + for faces in all_faces: + if faces: + for person in faces: + misc.imsave(person.name, person.image) + timer.update(int(num_images_total.value)) + + +def get_anchor(all_faces): + for faces in all_faces: + if faces and len(faces) == 1: + return faces[0] + if all_faces: + if all_faces[0]: + if all_faces[0][0]: + return all_faces[0][0] + return None + + +def process_image(detector, image_path: str, output_filename: str): + if not os.path.exists(output_filename): + try: + image = misc.imread(image_path) + except (IOError, ValueError, IndexError) as error: + # error_message = '{}: {}'.format(image_path, error) + # print(error_message) + return [] else: - print('Unable to align "%s"' % image_path) - text_file.write('%s\n' % (output_filename)) - - def increment_sucessful(self, add_amount: int=1): - with self.num_sucessful_lock: - self.num_sucessful.value += add_amount - - def increment_total(self, add_amount: 
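# The counters this class keeps (and that this patch moves to module
# level) follow the standard multiprocessing shared-counter pattern: a
# ctypes-backed Value guarded by a Lock, shared with worker processes by
# inheritance at fork time. The pattern in isolation:
from ctypes import c_int
from multiprocessing import Lock, Value

counter = Value(c_int)  # starts at 0
counter_lock = Lock()

def increment(amount=1):
    with counter_lock:
        counter.value += amount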
int=1): - with self.num_images_total_lock: - self.num_images_total.value += add_amount - - @staticmethod - def fix_image(image: np.ndarray, image_path: str, - output_filename: str, text_file: str): - if image.ndim < 2: - print('Unable to align "%s"' % image_path) - text_file.write('%s\n' % (output_filename)) - if image.ndim == 2: - image = facenet.to_rgb(image) - image = image[:, :, 0:3] - return image - - @staticmethod - def get_file_name(image_path: str, output_class_dir: str) -> str: - filename = os.path.splitext(os.path.split(image_path)[1])[0] - output_filename = os.path.join( - output_class_dir, filename + '.png') - return output_filename + image = fix_image(image, image_path) + faces = detector.find_faces(image) + for index, person in enumerate(faces): + increment_sucessful() + filename_base, file_extension = os.path.splitext( + output_filename) + output_filename_n = "{}{}".format( + filename_base, file_extension) + person.name = output_filename_n + return faces + else: + print('Unable to align "%s"' % image_path) + + +def increment_sucessful(add_amount: int=1): + with num_sucessful_lock: + num_sucessful.value += add_amount + + +def increment_total(add_amount: int=1): + with num_images_total_lock: + num_images_total.value += add_amount + + +def fix_image(image: np.ndarray, image_path: str): + if image.ndim < 2: + print('Unable to align "%s"' % image_path) + if image.ndim == 2: + image = facenet.to_rgb(image) + image = image[:, :, 0:3] + return image + + +def get_file_name(image_path: str, output_class_dir: str) -> str: + filename = os.path.splitext(os.path.split(image_path)[1])[0] + output_filename = os.path.join( + output_class_dir, filename + '.png') + return output_filename def parse_arguments(argv): @@ -224,6 +231,8 @@ def parse_arguments(argv): help='Directory with unaligned images.') parser.add_argument('output_dir', type=str, help='Directory with aligned face thumbnails.') + parser.add_argument('facenet_model_checkpoint', type=str, + help='Path to facenet model', default='') parser.add_argument( '--image_size', type=int, @@ -240,9 +249,8 @@ def parse_arguments(argv): action='store_true') parser.add_argument( '--detect_multiple_faces', - type=bool, help='Detect and align multiple faces per image.', - default=False) + action='store_true') parser.add_argument( '--num_processes', type=int, @@ -261,4 +269,5 @@ def parse_arguments(argv): args.image_size, args.margin, args.detect_multiple_faces, - args.num_processes) + args.num_processes, + args.facenet_model_checkpoint) diff --git a/facenet_sandberg/align/det1.npy b/facenet_sandberg/align/det1.npy deleted file mode 100644 index 7c05a2c5625e0f4e8c9f633b5ddef5e942b03032..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 27368 zcmZ6y2{cvV_dbry^At&$GbEKr<-Yrsq%x%eQBuYR4M;^JAyXn5N+g+5G?C)I`z4x` zN)w@h=2@vU*U#ts`>yr>eg5a(b=SJ*oOSkH&)x6Ydq2-}_RjWno8u!YGDBpuNl@^* zzz7ptJ(Df&CS&wWLe@q^M)X6_^(MbQAA~G0hgSEwkaOth#GNztV8-rIYTOPSCTz0wW^8YW8++d)= z_uxyNH z^03lql<>=*RL(mJsVQLs$~N6tZWh7%PkbvzB3j0U)04kG~m`ou>z^>F9j1_%LRH;-$>3O z1vXDbo0v~76Bv;1!ihuNY^9#Ru`P%QK#eeCSWtLDD0x1a-PYV83~8D`ju%?mzE?f~ zzw2D4@P97^1{p0vmA2{B=94I&-xtL;>D38dlHJ17my6l1S=vO_BUb2s?vG$&OC$Cq z3&Cq+ykPc_906J9ELeB~*-x=tLH5>MqBKvHJ+(C=p>K8y{;qv0IMtv?qGw*ie8<&f z^wV^K`0zw>EcmSOS6{KfG*OrA@i}fAFFa+tp=>zeU-!Vm^>S2fMV9c@qr*b|YYKv} zAaPQ0XSdLMQo7)PaWh$aP>CNny#&i5Pt|*A#|w96=hcrgoC=8_n*jL!^bzAn_hU3oQbh#bxzvkp6w^-6|J*9KJvJrd0C5~ z#CJ97kDi5bd)mkX|2|<$nlw>SJuRr%)-3c%vm~9~)x>z7BtOgaXw*UvLGqFJLPysm 
[base85 binary patch data for the deleted det1.npy, det2.npy and det3.npy weight files omitted]
zIf7*T(4<4xlyfRJltvA!5&3D9a#jJVqO~SPFyDU+J$So~X)Cz{D{sx>9o+;&$kwYw zwn1lSchvUeqr6G{rumWGZHqeD9k$Ybl4w=3FkRv6{)`=bC#IpzB zIJ1uUd{E_oXNFQwV-+m%&qJqSuect$mGEI@2`P$|VI}OZlI^DgAZAVp$T(4$;Z}kg z--56*l4#0ut}j21cA zUx9bcGssDSXb({0h`fF_)O`#?yUk71Wp_RK+5MZ&`g}yxx2>0xU0sO-+pWm?br$@m zeY0Uzu@ZPQjkt%8hq*ad$%`6qB4%#MeiwSG*r-HM-@X@Cy^6$xd@DUPcnd-s^;pL_ zX*k(Gj`u#91-eDL{LXJv@ljbkvFy_14_DlwMlMtF<7(mk-{hlgVmekR72s6q^XOHw z4^~hMbUE|8)@t`hJYHZ2caxpD-7YOmc4Gz6ul2_Py+ZD~gEIFs68Pl)y}Y|1G$_1a zfMUjS-F98@T*889doiBdI|oet(&*CD zrJ_DlCCq(v1zvyE$6amm?05ZJw5z@nm2C1!NW@p0=j}cqYvw@Q{zTjSQ}P&gr09{W zFHCqx>0?B5AeFl8&jr&99M0DH!8NL?@DnE};Q_%cQoK){f75fB7EIK|&s#O|;K+R_ z6E_z8s}|u~mq;jIe}ic{kP4$sW3XhxOz_gq$Dx+TVCBm|_`PTq++5jCt=-DVK#wJq zUljUov>SI^?HPAEeLv{UO~$mFb70Am_e}oqEHbbeaK2a#S<)xTPaI)S#qV~~^%_@s z`=s?mcT+lf#9qMS{teXaR-r%&3IqO|BP2wp3YjcV-1ciW^q(pupANsED<*v69vn!5 z5KqAF%t$KdmMGet_kqa|kA>vfo%Eq^7J3zoB+6DVnb1+iM5V|W3{A$u&GSdl+e8b9 z>0$WCK@uCLl)^jLIcQa2KuL)lSfyB_Ysq1n({zg@HrRmY=6QGdc8)T^_b0^JSZ!5!bSD_6zmucdAB^#le?GbSOdk_YJ%h*(ab&v0 zagmcyPtUA!p^nR+!K-_!7TeSuq3%i)WaJ4Z*|;p?m{vz73TBr`m)ADTxMuol_Hv@u?TM~S|I)X64#2H{ z0;sx57?FKkPuH}J!iZOoiI-YCz1Z48T+#+HyF-i@RXwu)q;iE(ujdHUdXG31GiZ2e z46F;k!PJ<1A$xO$aGcH7d{<~BNUoQG;I2+|x2OQq4KwlH<|ioHo`Z>fOQ0=MiD9)r z(o++Yz~X>CG=zVJ*E+sr#z#w-F^fUXgc9gz{z??a--jYIAyA9m!?)Mf;G!>*a4P?c z=u_1ZSXj897Xl&a)4w^Ob>}nK6*hxmSr%>)?!>>hC2>QAfTEIJ(ZYSD2tF8U@F`&$ z{Pwor)PL_fa%3h8<2)we-hdIX=-hZroFz*vMvL>3SLCtk?pdgLoB(Sl)H4B6iCA#A zQXq?ul56h$)a83UhOS-?cINkJntwi?`cg->pNOLa%Yw-hIY0g%F^*20>rE7&W5^@@!44=zAYelvk;c6kl%`Wnqa9k3^&>y@hl8^O9ItxRFFNKxNlc z*xz@TT6|xE$=e*@?xCS@;&BRHtDc4Z&tr&buNxf{>h&$PB%E@n94c>%s9DiE&>9#9 zdIh6-e(qyPTd|h;d1nsxWX6N-m3*+DQ%J@$L8yOkJpK6A6i+s8CeQ9SqeZF)+^$yP z*y(5JquC#D^sDpKI%^VcND3B$8_Y!;|E`7>VfIMfbArx>WEj6Efs|F4Fn{--pur`h zU}yLl%AATO0ha|r#Onw=8!isg5nX&hucnRcS66HG)GF?hVJtM6E+ub%@58vlRb*y~ zD!nq8LP>N1s+GsX{#Jc>VBHS`VM=7a{2Huz%j4w1A=GeF2JG>RMSWKz{NouM9WCnqhTB^MV@WHblXl&7)!5P*qaI$iR<}7qMg2Bby1iYS86pMNhaD z(rjZxUS>rk?BD8&67Pgr|Gu-x*m4mQRhdD`-CSVCos~4f_b3^0;W!!4eu66xKY_Tp ziNqlJ3jH}-76w-*LA_Qr)fPznsd)yBlf66E-!H>28Z%Jx?Qmc>r_hQQbI85`BYttt zJ`{gl$AnGUPf|lK5cd%oL`_8pR3%)&^s*-&_SXW>3}fEwzyrGKa0n?}VvA(OTT-$n zm?%2rgOkHD-bg)*&gbJrKQNQ1EY&1hSEqsgR(UEC#4SyoCeXM;6*mYW9+lq@(@)t7 z@F%92>_=^!-KvgNGY>;^{xQ*}<%>zL|9lMUltA}!Z@7v5s-$!EexR!k63IKjd)O;M zm&hLKwS{21@tLIl)($*k?Ztm-@#Ia_q%eM~6ioeLO#(&-(HG|@jnvg#4JY2eQi+Ii% z3G$8lV5j_@zBxXRD;ZQ1LK~#mKD$`>7f|KktBD`q%R=m?Y*Y@J!30LQ(Mv4kv1U+v^@nCnJi%z_Z6(G%MG$SR#m?AKkCBe$l=Z_sA!-lWH+kd7~jqi38mz}3J1ra!W<$CC^9tXSn<=D45GVnV%7eC#+O7bRl z(UHFo`!xto zi#NQe z{@k*BOyBsHrYoLATkB}LyCPNSm&2e3HJI2SJcGjulsZL}k_<5e<{NO2&AF(9)Rf+Ne8bU{j<^ zcC-}GS$@q_fA9~Tdb61prhF!AJ_o?#1~+Ckm1WXB63h^i+W_Ub2K zy|XFwyB$EsuidyOY=HQ<>?9V;tEk@gmry^^i=Vtvg6$H!gYFIIx!`e$a8X5t-Sc)h zUnvM4GAeT6{LUP3v}?shmE!EsZf!6=@{&%h?%|y4?m$|N0Ua%61r~QL@Z&E9eqrwv z8rnUH-g2seDe~(<^1VRDV27wrxCl2{HKDkcHeRY354pDzK(VNo+a4Lq1ugna^yd|G zZL<#Y#=S|j=G0+u%JsxK=h}!R5#f^Q4;cEkNLcL9Pv1AHp~v4Ik!7YdG>!Aau3`GH zHA4^`HKsFTS2UvUD;G$59}1n12+Q7 zfRPn#{T2hQ`_IubPj_J89Mm0)!@m*fP^UH@+xr$#2altma59I|w|>-KFc5w1j;8Ih zWALV9d{JI!& zqvRy-kiClAvA^0zB$z`LY!||{$vYv|l+s|uFXY;uI}q9ykIz?&conbTXgYobH8RN+ zEog`#xi>nv`Py@6L(oU^ZF4m>PwOY%{Rv<*S}?W?*zDwlPMlC}j~!{g%#4wZG%l{3 zHm-_<%^Up4#m|Y1?Xfie|A6Z1{{qz&{|&0!ekqJ%4_Ij1W?PK5U6%V0B2GrJrn9cG zCJySh)#+-sjz1Wd-9HPq&M1M)*DpZuy*Ad5>8EL-YOLY=iMIa*`jB*CcGC7`q4 zj1O~kWcv>;WrKY~`TO%O+ZYerC6{F)@Vkv7`{|PeZ?aO@6yj;X_xf()9pld+bV%{N zPW^j+%!SLy77m7|v}~HUWFiT*Gq{_Q9H} zt?xQU3NlniL*@Qt-uze_-nV0(QzJwM((O7g0&o0A`t%@&Qf0@UZj+-@N7}uitCO>+4Ts zUxW@}@9j%x@Aj9$sVgJduUYB*s#+;n!fqkAG%}Enl)%ZI@@&W#86p5@>}(;uw_s?n 
zuqs5J-&FSiW^5Fa$RB*d9bddqc6$_`XI{Zu1Y7Y=-<=^#AU7_h%;Wphr?c})R`I{G zn)&*sqpWn&a#)v}#Ak|2vBOL*fT%)=RhIaK+H?Z@bwx0==$*r3Lsi*`$*pw7ffaZm z$CRBYD!}r!$6)fbdg!%21XaIRp%ib-mp|DDkFUMM-KSLfdZ#Gh{I}wr=eOY4M?2ah z41rnbF=Q^7HOcwQ+O#Ac)O zKm>^W^xwh0UC1_lDkiP%_Jk{Vo>hFHbL=tht1zwGbgn;x3Wc;LgkSCrC zKMy^I4WpjH`NLB|=a>}V^LYsD-aZTH_xM^(P8W5gJ@6liEy6mM|9INeAj9yW@&D^- zM*n)+|EEy3rYs{mb6uzm<;a1~8jMsd!_9Z4(PG9iQPSyEWM$GZ?v6%12Ro~Y*2ip& z?GC}}U0U!<-WE)R(WLSKPTC^_9lQbWThR(`i6m0;en&3J4x>J^x5nk5*#S)E$sdOE+jLukZq)` zSiJYh{9beaZ+_YTnV~3*Ej*1g@7Nru;&9nJsPsI5jsX*=&EhN82MG%WXFK|jT z!}RnF2D>kl+liv=uAzCr8a9)9GYjm_uOv~b0{(u{B50laj{aNd!N0IXVi@)c;x0Ue zxKx%I+a<(1|4qT*hZpgR@g}A*z=i*KBN`jV7c%yijTmn{3xfSRAkpd<(kagDfDwn! zz^GPbR0W3CRq-!QrxT^_D)937OyXB(V&^-3P^cQsPJZr##+RqkN0n>1ErV7wN0wJd6RKZ@^F0)DO8dC=F~frc4TjK9}iP?^z<>rIb=RrVcM+}^uE6g^W zJWj<^uTmO)k_0L_kv=muUhj<|l&fBZWl2W-7yA>S*0YyRSuDj`Cm)91Q%3yFUKb`b zuO1ajGXSn%q($}!TQ+Cn+Hcdz>y7t_!=*92gP#Gv>+L6|$u=9CCtt%_PsFQpPxDn* z8XzRl4qLaypoyg=H0C{~d$T#Pl{CUzxgsFy`4X#V5xV+{EJH|Pp4|}bMBsoJZ%*K7dh0Dtw|#G#u9- z4`cme=yJDDAiL3;blh`fS8wIXG_>P4qYQs_Lk8v9>D$g)JL7c9MXZ*f5=pfJ=Fsls1+1Voj98%sa-qb-)F78Gzy`3o1qRz^G_(o>RG!V@e4Z_>6W_QeLfq?cgd`0#R z>>8C0LyPuv#jEUabdkMa;dnLJ`TZ92xa4%rc9F^OV6-x?ZYPB`>n=k6(k_~vQBMyv z3FFFQNj^wxh_;GPg)u$xRLpb_xPKqZ^YVej=GQ-*aMXtQv~&@Zj%IS?(P9$w+<`^E z2Q)Z7gBkywVHZnU^C`c*S(hkJ;+L=kZ0I=RSS7}fa_gn%vVRfPjN`=WzASIGLkPM} zeEGZ13AAvmFjsd^48Cu%q_bZuvW|~8;ucPokGgmT4!xTPsvj3K;;|dahjvq%v)q!6 z>5PP(Hy6U*2iM?!rU^*#+MJwD0|xmNkW|rwa6io-B3zW9W0f%5#hRgZNOG;5%KVV!0W#+KMrt8h1QIh> znJv3fL0jtv*zxS*x)WPdX!lVB>zsW4qkS7AGuw*w9NCS7;~nZ|e0>I*3(tUIVK($` zu3^=KV_ESzOF$~+6&pWhJ-=$92((`hApz%mne0azX2~;OL8Dg?{MJ!NsXj?QZucNb ztX;}?a9OC^^$+)DD4BUDuHtR@rEsL^ zXE)ZsOzntOopSkcc#(XYym%MDUS4~Y4jp|%Csc1VJ9u&*taA1S&4+#9t00d{v0qPx0UAx0=HgB^*9lv4)4Zo^X=Pf^z zJ9NdMZoTIKTiNZ&Ic)k&B7!}+_~uujUv!)1)>;F3dyOADTY>pENZl)qNj%+UGQuAG z$kbM4_7!)OCd4?AL-J?vhrz14X6;fkb$EccaawG4?RpI9@T|k_XY$#Z&q}!UcdYnj z{sFjmyn~s4@k=~hU5LvsWZ{>$)%^1_v3$GQV$>CX#b>x}U=3^~@a+$O8tndq`}jhm z&O+%0eya~bt?4SDULwUNiwWUY4_h|d{TO^2EyV7LP^!~Ayoyy+U4pzwCfoeM&1_$6 zI0}^iV#D1{u)*Lh^y^>XN0*GoH9p7LyW3RBxTASkpd|^J>m}H(v*u>kOZL&`I>Jv~ zy%5%&`OLrYttL7)-&nmGbvCPgiCN?gb+cjXU^M%;g)>bY;>SE#&z8*oKxa)wTqsmqgDjEn8|;!GPa7^LQAS333Ut|VHS?Ou?FFT}6Ym(oG}-A_40OS+G|Jif8+ z_WL+=GxR{K^gfgok%yggBY6|MKDzdG489f8$B(1#g9~SiL{m9}tbz%P{28 zHFm;+%kbqr8BsLe|BWz;|3L_m|3Mh3j<|Y!5LXZXK^F1<(>q3iO5uTH|Npo$Zsb?f zZAC_2vu({oM&N0s&=nyehn+)AlGAv)`m!)Bvf<%^aRK{1;wgB@=fIWFPms2D2}~3c z!4ef^T<&MVCvH^5p2L%%==&d15oO2Tp5H(mFFnMshmPXdX@y*`Xen;*E5)AQsbt8j zye66}fQ^B-m`3ld5aHXYCFEVTGzA)lB z4`JLEU$Rx^By%m(o=n$w2IKvu*dHBF{~Ar9`6m`rAL$=-T8#?yowLOC5gg&;pMpA( z#T)tAwR*G+xWrayh}+W$ zh8lKoS@k`B88sHLKaN_8M>3tP7_1+72s4*^(@m!}z_p_tTKatO$Wbp?t02W*%ZtXf z7G(@}UId}I4o)>go*I2i1^MMFBc-tz%eI-bjzLR+=`6u*UUk&B>o^JQ{fo=v^+EUM zF`Skw&AVJFB$r#TXBq^~ za04U#MC#Qu4`%*zW{wF)Btdiv#GKg;KESjy zr6Eb!f3cpPiYs6yHb-LV$FuNRR2pWxCc*Z*PNusZSgL+BgDRY9;ns+4L+u-v=oLvd zc6qDP%Cb zBJRkFs;y6dNOuZL<6x%*@BhpfwWWqh_33QdyG)pfMNUEUD>LESp}%zd#?xfl(OY!? 
zd=oM?^)&tz^izRi0(^B4LC?AEbndz+&hgeIau&v+wRIhNl_3FB-^!sya}#8AJ}1+) zPLPVqBbZdB#Oej6LsZ*({MwO^&c9c{+MZqTedjWIXM!)>KPZfqi&Hpq))1r)SW}zC zV91*p0QcUeVp(_+{A5*$pg0D1xPRtG?wb6GF(eN^{3V(`X5exE9IcYp6g=trP2F%j z%iY?8dv8U8q>l-P@1Dq?=SRby_kQ$i@Npa+tc!Q9G%-&X+2B~`HoCv1h~8=3PqG?+ zlJ%0gxP5sXlP!~vQp!1`Z=f7LNlJ1e^GxC4@@Slo0||Im8Ra6dGZV{v}vpatPCYy#*ODIkXp8pzw?adUK~Dl*Px9B@I0O*pNg5B~oZH zZpN9@63FJ*%k)E-HWY4Qxf6#kGNO-N=;9I!I5Mx0%=(rBUY4RE8ipFNRQ#dWN+K6E!GxnPA#>4QyjysQ5uNZEtyVZw=bH}X-;Ym3anEtG zZf237rC%6y-b52ODGkBZ%Nlgmn?(5Ls?PXG+d;YIJUDyv4ospcxIY>3dafz z_-Qlq$#gbN{pStV`7X3>8bHWkD=n~VC9_h5S^xT#q{4O`v>!2slb53K@%14{Ff)Y2 z2V*chFp17I@1(OIUt;VvyV4G+CgyOs zKm^a+o5tm?9OCjOm(!c;!qG6Xor+Xu)^>`YA+2}EA#=f)*q=B7V$&;O?J_m^na<$R z+z7Ns!#TR})`1DdX5X8u+6% zmbi^I1ucF#=C7a2HRaqU3GJKUo!}DQ<)zqxfdc5pC>l6zCmC{6fys-CiPwZMvR)w% z2fH1?Lj4jI92%e(uC4&b*$s^F2o{p-#kl{CEa^C&i$DFh;H|#7Si35n{{6NY!bS>t z>1%&XSiTw}%!GKq#B9{xqzP$3$LaYS#-MUxH*{pS(z|*$XsP-+{OhfQt(sbJ>NIKRv_bSRM1y{294hL0>`zBpu5r$@^(ztxS|@&XUm{P1A&jL6&Poy?bP$OE-3bo!)Fa!nG5!JVcb0*azVxi zc4 zn^kp(-pzKvrYs|nn<9rol|po#WfK*j70cP}GNTg~*yH>Ow&3us0_KGkVx>eBst)CH z1}k#8151ul_FN(D#9`u)egvRPf?rd74Cy)prAM>8|12 z@AF{hBun@iG5Gr8ch1-2CU_MmV_1GJ&UNUd>tsCH750^MPIVSEM+o89j7ZcMjRBE3 zLvp=QgBAxb1f|QmSl%EF5^@(zQmZa9j{cooNKZJt7SrOsohZTT9Glv2K2q#+A-`J2 zrwx5)ekaY28sL?&9@1i@aPy;Aos!ZaQA&H7(r?v`_Ri{e96JLwW>Y>DR`%{*I?KNfcKr zpP*i$Q}Mtp7f_v-3lo>Kf*^JpE=n>54R#xzU;UDPnAt6Pq12*#02_AJkaj^a?2mT^`1qG1KIq+3yDpm(3*BtOn_jEUuj3 z4{^_fA$ns8of)M{e(VT>Lhv_Q{kCpJdtq5FxwH}`YwbPsC(*#n5@8IL&cTBJ2ES#rrM4f(n z;-1m{C^_pVJ&?8o-qnYqvH5poR!@cB+eh}>nl>6f=>_ZzE+n6&-5CcFA-ch@lXM%n z!LPh*{97!5xpuMWa(N9d{GN&qk|}7XsEvPBEQqS?c=Djr4t?1w($?XQOWiKQZINh_ zd}t>1$f#xZglm9SSs88LDMBi`7^qG9L@@3QZrC>t_o++)DIIyVm^L2AOx7mxYSCn% z=OBsemg18)x02rZU35S*0@}kqq4SnRG#H*r4Ghdlw8nl&-f)`OYRyEq!M&s%43Ta!xT`poI1iQK!TKXe*Kff!=1cKZdK65Q9pR5v zt01SR2U~A6U^IUP&jb{p0m?w)@^tb_!HU^rHibQP;1VS->KT{DW5h31KouP`1lAkI z!8g92gxQzkl%Ys)JKfLJJt$zdrH(?<7=xZ~dun-w(XiO1n@(%dLwD1sWb4icRIZ9r zx#%Wp+q?)=9Kx}{wuKr!62PU_lU(-9a%`a8xbS5s%n^^mRd=g!cytfF(_9Oel%(+E z>S|KGjX_sr37;H$r_zO#|2x#Z4+-aY;eR)8;-!GBY(Jt8J=+Lx(`Sn`UIhg*RlQzNB)kR1RE}z z)50JLMjxzkwOBLf_OZ^iyBwM5lV4$lxDMKlKa0<=@f43<#Nm+Hcxqkz2AP} zuA(*MappampZAw;IyHyA{6ZV|wywd63&%rKegnDiq?U|wwBrQR6?yBT6QIAi829?B z@tx!pJ3($fs=N9itt^NB0V`b4`v&rF#f{Vl0C;R(>UvKzyF#?T2haqM~Prds^mOq?r~p{>XOTDn`A)MjZuuEq>zMJ1CD zwukXsgg^Q5;WSB-&V#G}N}!@bh`l3KLG@bI7=JB$ewDL06dxC5ODbKlXD7wC3YVyT zjy;{zq=bjp-zA^4w}APaC{jWznMFl(pfiTM zB+>`HS0g#&cMs9nNt7Bh_ld;jnV5Pj70jzE@!v=b*(v4{jn;b2D7l}7J@cZVRJ{Pt zF4H2K z&Ug;v=10-Z@^k4xls{Z?2_+4V6ZosC%OLkgG$S`=6tJ?1@OR34Cg@H)yy%aGhzskX ztkeR8szx}^=t`u`De!KcCfX0?kW03a)KhB%B<&ZY{eN$u8r4OU?csFNNc|femSE32 z>C;A6aX8j)i1WKtY9DRP!rcx-wJu*0YtKK92CF@b0FAS$Q{f3%IHFZ+#_Yj=qX+54 zKNF#4$c*kODaN4}UvRzm9s0xI6YMPaz~!O!WL}dl8C#!9zlk}Zx?dKS1>~cK-YAs+ zD$mzSw?kmwB)sK)hbtN3r=lI=ta)fYym1`GHkI4~$47y%0HdjdLmW=~a2M1&qeu@b zlAXItWhDtpT@1>@pK43;+F{3qILN%<${ol9$n0w+qHHHAoVl79+sLzD3N&%} z#v#0M*$oEmgKLYvi_uFjmNT!Wq*B~&3U|69L3^SvRnr>*aL zRN8SSa>Z;+Kk=JxnR0|Q8n1vEdx}x!+evIf9yCWW)OB|{obdTbYzsGG)BOq(#_qwU zamMg-dl4=)=mM$UUD(b%C+oJk(mluO;56+guCAI~(74%fXSF;ZP`a49oB8tj7tF}9 zpTBVQrePUL;2!=*tB}YN;Oml;lCT`U#ns~Q~Zf|mY<=c`W@g+ zd_HF)=K{_7lUU<;6Z|>vEK0PFL7U#;TGDKQroO9*qni>+RPLoWHaLO7a(&p6VosMF z_(?juH=%0bRJc)mkjAUo!f`!e_H}iEAWO4_+#V$Y#oHaok$>@wQoJLM|11VkmvvA} z^a4KooJpI*XHh|fDvh=tv0qme!8V;x96LUgnJ};sl<6%dp+tdQG315cquvURw8;Rc zo(vm58)3)x%@{kV4`P2dVR(}a-bo!tg=2N7e{d{OSpEn~qgag2y^N<56v=hJ)7a1} zPin2C@$4>7)QznoPn;9*-vm*x`soJq(xPC;mZvc1@ORku*BhsNcz}z}f8>U=Zqlcz z2Z+kWXk74JnD2FsV%8*?lgD6y`mz)0=aK#5cm6%Ic2y@BzI2Dyk^@xS@ig~Q+l)U^ 
zyAX#y&ILwQ8D0FYaMmqGD1IY^rn*c($!c*J(AzU+qmDTk?Hk3T6$T}GyP z_~N7XFHECk6olWsBS`ccM`!F_38%IN!4BU!IOe1pq{&8bAN2N7%fUu&SL`FodMsvs zEXXBV-yGqvtRwDNUC2FD@WuyY9bxs_>1eFw2l{>o!Eas+M#;BebCMFRbxXmkZ!h8u zCv$35FNY__HjvlK=gIm*ifGg)17BCDfr#%gc6IM%Vnk1ZjD!Vh8C`>zib&e79**D7 zKVkm*1cC9FU@$tHfbyFYN9sxd1|P5mp=a~KYSv`@S}R9rOEpSJ=)rGJ8;^Z28^N5n zR66n-*6Qv@2OWEIZ&@!rGIb+%CR)J-m36@Fv$8lBmgh3~by$gRzaID6V* zsxL7Yj6Y1p%%P32tD=c|H9f|s>;p*co&@r4(UVO?&I|InoC4>I1aPt>cV>I8~D8ECOpuRhOJU|*gi>` zC^uim4k(1c%8BroGW?F8=V9j$Jv`bNkE%gdAZr#wMwM-WQI)e`dc{-Q_FSyc@y&B{r1vo{!}^fL~=2B_%lPTCsRW0Lk? zQff2<3lAKI*Gs+W3{hoJz3WX2s0H(2`E=BcnL_R4r%dq$7M?5NH~l}wM3Ja_)NO5OaLbGlJrzE!dxbr9AA}8A2iNoPMvMR8(C+Vg%8zH z(*6wX?9IS8`a;};FRM^=y#=#h_XGx76jJ%57zj@3hL+Q*;G1QQYLYK#y!dG}n=1)r zEq_VP_z>V~CUJd#SK)m1J6!!9OB{VV9scSGxUiXRq##X-k9o5X5*_xDPno&+Z@w;G zP?|z3s-?K0Stsa*3Re>FVIQ{ZsDfF;23We|94txa$dP9q+)ME>kWxDxqE6?bikd5Q zg_pv^$V5~Zu7HP$SBR-t1-&C;Ow_F8*!2t2K)>q~wkuAe?x|+{p|pCOH1Q;!%~nFo zy(eg;@I<_o6^YAQg`hxcJe%U~4l=qLn0j6m=WS@ltrEpd*BMFZNqP?3#|y#h{50DA z^&Z{2PMe>6?=UthEXSO46&SoDnp*i*z(IFeylH~aHti@@-%rPWO?4dWc?133;^7>x zhmqEciNy*^r$4Y1JSy?R#P{n#G1wiRx4frCYmdXb=O=;tcM}4Al51}*G=g1xI??I# z#E+rN;hpT`nzt`=NXO+v^!LelB6l*HyCQgvzuJY_EYk|Kh>k~t*j=a;?u+nfQLX87 zIZT%MLK>B%i0u8#Ouo@ZxWxz1yN1H(q4tG9j!9R zhx4x%bCCr>IREEAC~i$a)#x>taYGaQ-sVEzo;~OqkpO>GGU$DrkHq*geg7&Ejn1Uf zFG{b`Df}FHG^LXiS(jj$a6GYZYC;>?Wb$H9I8M+Ng?+I$xOs&w8IbEI=^I_y36B#Q z$BE@sJK-hq8Xkq3PtFmOxwaU(D2e-G>O!{+O@jlzQKapJ0W|ASu>G(T%a{UOmLb9P z{vCkAXS#TZ|3zd6?g~c9I&kryDO2X#!t?Rko8OdpPol5;dr@kU0@O`82mvlZba2E|iAY>QC6|wa z(KRtdNq#St)cDC<`}UTW*A-*X8cVc&I|82;|U8c%A9!k~HYWz>m}rcZU_a8S)0edD!(uLkg+7tL(9&7fsx zyrBPIJkfWmC(NepG+^^OoIYaEZ^b+23iF6; zfhE-anlSu0tPbu|7I2wDp-lY5VzSZwE;UYWCc2-qn14Uw!F+d(L&5GY-g_XXJFpSUGRLe1v!*?j!}C*!rNEvW5#aVhnk}9jK$t`Y*^eN&|Z3x z%TU-3pQa~a!x>?`J5daUHcuzfHlg(5;$Fy?{Y!6|Hw(_JNZ|IXFTg9F?wFZZPVH2q zVdbw(2+^EPN+%ycixVqI%AM3V<4X~1e9Mj@fmSafj_Sw!XNfZsuk&RPyGQ?;P{M?3BHIYQU0+)1_V z6PZ&g9eBD=k;_2@arQHKWOO1r?ax90SvizvhG#jlexm zsn)Ri1J$|XPMb2!@$^U^JZVoQE}h&6Ugv+nQG*P6X_YCeEzN{$5>3Qe^CVj8n$h)j zp16KN6*MX|kqEs2ddp6fC0EYTOq&X3g;)czn4b*mCYK7DCrV+`a0AL-K8}@=^_Cg&!BR~b9G*Ce{Lr6H`+uLJ_^gHa zapy>fqY=z;b?0ifRnv`AZjuVA`(XU$8r^8BiUk!fXtKvxTHdt^ZSUvN9gmRv)%1#Z zb-o~d3DTf5a$mBt9l? 
zz8A%pXkbruCA0m}4LYmv9;Qer;$G!tU|IMG(tLcOpe37%*B7EEN;1CJo$$!DIT+>^ z43S9(aph$nF2^pA*n4jw-m8;9<%Brjbwic-y%a%79ZS&bHiBz=)yda!yO>|b6;xZ) zXCxZePVGb^aZ;W#ez;n~xm*oHqX*wc;&0l}?jjG4Ld$8h<6aDEGohVk0LRL1;qlxK zGfNdd}gtUAD0O#|hHl8%|%oSEAyheDO%37WNK#uve~J!cCVi z5V^U@Sn4z4S#6wzvsQO<;%Xc*{xbnK=eLv9hw|tN5jS}M{*R#Ad;)RhMM;x;H646p zNDSSsqQ1pm2>ZGL4yez>&6er(=?;DV(BM`)R9FBW8NM`YYc6@3-$p)S8qM;#4Xy!& z^i5YZ-dgBJrk@POhMgZEFzzPFh*(9R#U8@AwdJrtF9T$+oQ4nKQ($PKH2aTCh6!qp z{OdKM)ZNnv$JrnXv?ST3aw=qP<6-dp)WEG#tEXju>M*$80n)NYG6TyUaiWHW{d49cjjc|?2l5&?x4eZb2)j+jthzz}hH>ck(gmeFBe{beHn6$c z2@<`x(EVX&$W~@7?1_$tL-IoOES|?|v+=k!?-BiwRRDcYnQk>TK{7 zJZ;?YkZmGarY%pqHl8Ak&~BW4EFDA79l^NchD?jwMS9wPGJd>T0`{?a0=0wsjDPJ# zh*&xezLxudam!U&I9x4QzTym+Db!MBi)32Vvlh!X`p}cgq7Zud8TaJ00t)*@W4q@t zb!Rq#+3Sp9Pgas@v8m+Tzs1n_YC8yeMe%800+uzXqQ>)caP?+6_(L>JZ7gC> z-HL^bOZhnMfj@EcEg(0X4ndXHG4M2=i1#W~Ml#|?WYNPIG@l$tuJm4lwf?o(X0ig~ zH$0>^z5b+W(n6{+ei<}|7QmCU2Wj&;85HBMf`rm(uflu>Eo`gzrzLiyAHP{MDKGHeQOJ z(2FETO%JfD!P8+(>Hy4j$b^fbG7$doAgn*xP4WxX$=Zrsn$~)N^Hx4ZWyf2Sr%A5p zaLf?4pDLuAA_loRtNx)(Spz=vHAMRyUtGTV3Z^BMGQu0HV9~X^ribSyqN%wh*NBz$ zX=xW7`|1P@=(q>X{e!eK`2fTlXA|?mEc)n(1D4)R2iJ&gXi%02X8WX}u<$A}!5JvC z>@X0kk<7)te%f<47Ta=0`VWTZ(Bjd4G@O)#d zbKrQB43r%#BVxPE@shs-v=wK9@!S1mQp8>IGwT@|jJuBJQn8@oVTCU{>gk?TbzEAT z2O>^NT)X{!!S63D8F7k;M}{9N3g+_T4kfTjX4qC&~p$~Ya-VOG@nMVz$ z^wf%O z!IWbfbp7CGjHvU4?!A^o@k1`j_RmJ4Ee?W~{Ak=&{hJ2HeIliKi|H!lXd!VDZ1fA@Qz)SG~DNwh; z^Y=_KIsY(O-KvklUntzOd;(w6XW--alCaM8`Uq#vg4y-C(0Zv2iCi|E@KD7IX8U0H znJAsr+lPPZ%IUxwFU)v2hjD(oAD%?=^o@`_Ef_t!)@P{|9$(jlk*TxrY5jS`GwYG> zsv?iapWs?*3h9&8IiMw{ELde*g_4b?FlnL`SfB2${WbFiNG&@J@rM_4BC5GHXc&GR(`26`y z`wq82>;{jKQ0Wl}cNgFlv3$a%4badJWmq{ko!$_q5MuojjV~F)xX~?mvOt`!oD)m- z-$=&ByF*0oNj>gd(1D$hT%Uc*+K!?&0z85pSIBoevizZ__kh4YGzWpv|hNk)9pmi?!QevU z9#WT^p}0&n3B_i~u><89sG=tDiGN7WsRLiVEWzJK61djsiM zE5|lJTL2Z??=Xr(JXvw6fRjAy2KKjAacay13=rA|cb+7G`}In0Qz451qq?{*<9MiE z$HU~-%e8m5j)tGMTT#}{i}x({MP>UVAi@;`b10vDu3iFhW9=c~UU$u7TyNtDR;@pXf#DK7z+WLsx;fCi zI)`?dr$BUW4rkHkh;L?p0rkn6_=7g$iD$YnFQ%DUnHP>fG}54GE`V8w0Xh0`7D?~B zfCA}fG(l|-?$nXt%QSv7>A`0)cc7NWxURzC;d4~s-x^xJSAltDa8S^pt_ukzLG;U{ zA_(mtBvQd2QNndUaqU`7y+(ReBGM!LN+KBak1VCn?q4Bu5+~z|f-L&|UOIJIb&nHq zw?v<{+o@CH4{F<+1l_48F{LLOu8d;geC2s~y(gMnE*teVU3D?PkH%oEr${nQo z+9m8&PNVHwWl%UzhH6Z4qER|3*cQ12(|)$lv*rzG=X;wNEqqRLRh&TDB^QmnBk{rv zM>^n^OOv`zg56SOh}t{`Wt3U$-94FoB-TWo7cameovGx;mOJF1xfy86E04Id6;e++xX(#53LyhQdG7@V+gKX3ToL^Z9eF`?2}8S|4uF z;La9e^CE#t>{noCeZ58;J@nX~l`QmL=)kLwa`DZ25&q-jEo7d_a!`{!M}vj^Y1|zr z+TTuK(NPc5vSR_&U3d_i42#fag)~+@zE2Z#UC7FvA*AP?42{^m1@%8&CzltjpnK>R zyizqEeY2L3O>7t|I?{Xm`C}4lk2^~?6^_AanRDq2s}2+^8qFU(JBp;KZi2D3!?h*a zZsgBpC-{A-4=3k+qpP(;MjjoUjpx%AGKb3*@QJKE|7E@$JOBC=%I{Lg!1Z&PA>r-h z{`{*r+CUo7C-3+Rp3ESlSwQ6+a)g#Ud&85OOPp~LG8c5kgG9Wo<1>(t-) z^yVQrqdXDb4>wTLWlKOLWf~js=~QiIq94NTf4JBPW=AwvqVco+jV96u3286dyu6MnCnz z$@|B{)M=Z!xKFEaY%e?s9>f1Ry!mbFkhyKn;W6 z;fA}#Oz<&Z1_YYCPN@pGDt)0gY!0oN^Mac^A)6f7SxZw_uSEqF0bKhR09_v20Uxf# z*-K}TW-}*Tc;Yts+LD9Yb03mWwEe%vFjXTs3no<8jPun5n;aw58?Wl<~qF-^KP7InnD z$SIWsa*V#D&eMHH_U=PsqxF)o%2|+GrGS%`jK?RYi=g=KZqjkx61RyRr&oUmfV7$k z=~vf=_a_9v>L!uF2SD!55JhEm8}N(Gprdz<_(Gq<>9|z^AoS@Jqg!mq70iA}JY@+U zT4h1*Xljuy#rsK>4M*)dBXNy@@P+FF@s7L^8Te7mxHMJKQ)8!)u{Se7N^&{;r}xMo z2oktFZKAz4?KI_e1-*T?k8TO7pf)}^)I6R=@ohS&Fw=(~I<5wl6Hk!iZn;qUd?KFw znS*Ao5Ac@IGeIYlNvcb-h?Uh^@~qQ5&X< z4D8U=hcEnJazI9zZnNuWKHXNqyQ8j=(91t|<5gmWGA{0gkT%&e1 zEXm2YBSy1>VeCjI``a~3oCl5=QFwv4uf0#B*2Kefp#)4DX9Cf&p5(;iqs-oil~^Ko z3g*Uk!dpTM)}hX@dE7^8oH$#Olr4@!rx@+SmHT z)a7a^xge#^m~XpDMDJ$6*HQCW+m*+u@TYXrabE?selFvEHvPqil_f;_>lkjpvw-~a zkwBVROP1ZqAd@YcOy=kwBC$7);rzN0|3y9lJuO@4k0l&@xH%b(+SK^UbGOjS$9*Jw 
zT~BXX>;!kIowZv;&f|uF3hHx5oG$X53F5n-(*CBK_{f7Nof*fVT;)maibeT2PW%t` zvKuBc%P&*4l@i$Z^b$xt)kJ1wt=zz~RQ2;~x_5y$6kS$@*G8Azjh_?hxZp}>n0P%3Kg*0@M{hR7M`+>;l`0ad_YA*0P9!^w57XV| z%iy`X258qY`fBtbaj2I8CI+t=rz7gB<@H_qb@owuE%{eXWYbByqwouToSDpYEeOI-vR)vuAOr%+Qy`@? z2#=Wb(J@oi7?J;np)-xAs*B>VA!JI1kPwm-QK@jxURM-NQfWXWX;4a~l8Q2B%q&Ah zGN+VOa?f6uGWDuZQKW$eC6$tj#`k>p;qtro-fOMr|Ez!^CiQA6xNWV)`8p9~_kK@U zyH!Y5Db1h>hq|gK#>8XcyKu7kBljlSCJrs@W-(Tkm!N;46#v$pe0&f&03Lb2>Bj8@ zRsFm0(3VmxUlK!m{9e%2mfo;trYok6)UXQvK6QDGrxpDxAGit?dWH6XTKp@Vng`CA_E^c-9jIfr9~6nK(E3PFMbLq z-{0+`TYbFgqhnvmJraly)~%uUT`%F0hBdT2C6#>i+5j?t6-mGTcdC)*4&ocG(7F4? za7C9pdZszyucN_m@v$GX>s%~t%^rX|W9IW|#Y%jBCK&O}3cPi~4W3>*O-)Dih~Rc2 zOv~uT8y{TZ+M*8h=LONg|0EE)63LXvt28-I3G;hwscCxvexH=gCKTAi;Tv~hlGjR{ zB+G(rrybKvfStfc9}s=p)R)=w%B@m?)1d{a1h$2e#tf zZ%d(VnG~s97>Wm)78A{LDWv7$0XlT?6bh`m*uxLwcoWRhK@!x!jddrNRYU1iiw>%p zwF_?Zq;dM~O~Tx&VKPfm$}j7 zORnuLB5y;3>Cp$Hbn3?(I=ogL6{m`Vj$JB74UQ8!rzR4aU#HQbrx8m|JmMXi=Z)24 zjUgjmiGNk@H2LXXi+Q0wcs^|r`6xS;T1iBLdj*0?69IRP9vLa-JMS^UazKImz3~FkvoJ`zs9-VH?X!n zh8S*YLu+{xP&a%+1Y4Iv=$0`0&b^R%vY?*$OjRaN>i5&I;46Z$Tc(hU%FC!`Q4{<1 z!WvRJD}h<`^C*h%GhXX7u`XseMHZ*~8T&>%`W9 z-cLtf!CGnf^SYP*A{FG^;}&9hR|FsKQ=wb04AH9Cb|z@jdz$BGg$^!>s3Et7xIK`C zElL46*QQU9(VdhGxb_!>9@ADy3{6^qTNG+|F(}I|^MrwXE4IPUwp|zL@w#%jQhEDCK z)jwCENuxHpzjx$u<72RiO2Sh%(U93D3Z0FUU}lXLhIc!`;DIkNrZooB6z-AQM_sse zXcqcOZUw&(Yw9cCO4J(#aOBcTdiKgA+#HaGZTs!+vxcdR&~kK$pPBrmp(+?7EaCBPX^pGi1X817WenwUHtv>g8k|Jr$gY_`6Sr?vZ zU0@UAvY<7YrH4A+(xFG5=-d=ZP(+s4@ZON#aeU0X?#=rk5sfBV`_SUTY;xkdHCPlT z;uJS;Dd=Ag{Z>E(Q74Ros8ABp;sQv3{_*H%u`fRU&Vddp?=NonKQ=0(d zz1ld&DV59GsNnV$MMN*+D13h7Mck5SV``Es28J}lnsRSqb)b;`^ivb=o2E+&y*!}o zVgee5zacw#SBaEXA>PjYO+FFV9~+T8-~l zH4p}ufZH)$+A`M`ua}$S+ZAt7^KLrM6ms>g);O3PZHO|vRN(d>b5`wUGGoEL;kS>T z2czf1Xs!{0PgRWYSC&1E4NRpyK9Vr7HUo6sJ^5$+ec)yE0yL5xBr6ZBMT<}={_~oJ zAQBnIUlVtO%-w$;A}${kuFKQn&0cm`Se%`YyT`?1@YNmUrS)QffOT}@i(I@*TtND@ zi_ls!7KQeMG?=T24DDx-Y7JAiCvi9Jx?D?2o7|zVw+f3EG~g_mFg)Dp0BN1}__ap_ zcZ9~m2a7rIY2yLNIhsv>+D#KSSjwOZ#|Jd7vcs~=KKOj_1kBtKh-7m<==e5a&_!9H z?=vg7C4Cz{ZW4i+(>rONWg7Bj%h^5D8s&O#!1>gl%yGv!#_8%;s&MEYEnj>UJ-9o6 z=Qk(Js>^}c4?1{x&2Rd~WP}#1T7sK$F0!rPEZ9b=C%n?2*?6enAn+8v!1wxW+-8T&YgfWvG!mwl8* zl=)NRxj9f?@Rc591_f%nG+|jFS3epUqjO&rhM44XGmTI@{sHOhZHCbIS(mNz55Wm{ zwD9~MBjIhkO0rE*%VtjAjIkO_7@0nWEHKHYH_CK~gLx$_U4Ifj^G4aUIV!X} zNfxEL^=|BFS&d71$pngG$iQBJ!}CUo?0pUNoHG_wmTS|bQyfPlZcT;lyQz_K1hJLu z1*Zp-`0C+HDfRsa(hi@gZi^;N^?S;8eaOdM&9>m>$7RN*c!JcmLKFusp!#VKsb6G{ z4!2G5ykZCKvG#|#w)5aJb)nbPCBWqDWMG1m1&K|I>E^E0RH2fkvQ<@7V7>Oqzke<& zNKX^aZcxD*cP`sf_ZEZyyuv$Iq(Gr@nCqe1!qdofaP8(}!Kfc!O?`-V_EpogX-aUY zJrT1n+#qV&t<-yF@q_h5zP-hKvXBmtGq=~H2j78YZZm})FBZ~c5m!KF`Y5?Rt%i#H z=HYtjt(g8Mjz~IlHCFBgqQ8QBmurLwG-L%NamFtE{wb0U9yVigj;2z>ry8(H+6ZnP zsDa$kXlQo!1D&WH?8W;D^jX(460iG-bXtq?O%r56q2WBWicY}df3djJsECma?1v{? 
zmHE-R_sQ0(Bn-`qAaPeC;k?8>qId0Gd!K!5#4XdI^^y@`nvcHbeyO%+=efH9gl?kxG{55%N9L6zl-|2@1 zCip`11AESAF$C(AvG4!uB)@h|L}8m4C|hkLswRJkt^Hwi)>~4Ke;>qL`NID-OU9yUGst{TulO2N3t#Fcf;N}(a>&kg7BZ1 z@HE$!P!49%-u?^syd z=Y!f|+PrN;ihSFa)1;*~0e_x4gnC&#>a?pKHXSlzex^w=2B~K7f#Xs7f2-34Gxy>o zWmy{G5r&rMtjW~Vee{Ws6da$gi7R{>@xqBO)TC_#lk5AOyq;S~^5nTb>UtTz&#_ft z)iX$M@#JCLk&EO`%QpC%@tp)qFQa!n=7H1eW@gqVKB^i>(MF|KDz!a}o&R4i*%H;o zj(*GHm02~ht+sRNw$eYCeQzf=mp-P`K1M)vYzO1L*@U2w3l8QQ}b+5s1tm!6GSeKj1m*f!9DG4b^F z1zkpi%UkfRACvP_<`cWCg=E~Uu^{$G68|0UA**)t$=5N((4zkVCBGLia#jmztYkHn z&2nWuHqQr=>w)4wTxrVdB<|i%XJl<=BmNf(&PGSkCa(l<@_FR*%K<@`!7{R;Y&^Z+ zv=Q?ZENFk4FG+0`#m-C_X8*)o$oLfkXYEguz}0scpD-RBZu9`v zGIxf}#D4LIbooijd%ICUWk0W@@;|lkL2D*Bxxc}&b_+0f$7ZZ4bYwi&OaO~TPM}_q zO@gwIvGc{w^Q4|0#&c;)KsH@KJ;t1-^LH)>>o9B7>sn4qa07Tg4&=`Lcsy7u3i6Es ztXGjAxn`*i^l}LP;fWEcPkdUZI*kNQUd<-2iG_t#7ijk3Y?9S>f*G898eNp%ll1iU zjDE;{EPldZkc5cvq1I)ZcV+?kIjlr8g0f(sZ6ZuUIig;fK$dTp;YQ%c=@%PIDpKK0 zyww_sA?yZ?dCtV`UlRk@lA+`vm#1*MLUfANsF$oS#s)Spt%?SuH_<{^yKp)%zed<8 zZ?#b7Zz(aznTt7dr6Eu)g;-R4BKp_U=)8Y7NQBW8);`67xz?r$uY6bHcF#HBukxKV z&Rl~fol)fFyA1lTG9A|!&I6TYGnv<6Q^-}Rd8DJY1)B~mzySYt_;p#0P2V;|CWJ}h zIWrk9k8Xw$u6@kQ84vLRSF_nD|Dj)J^ikonRpfG2IAqQyC=p#n9c-0Y_R?q8b4(oF zHoXbfDT<)}B|f`C?in4434j`{VU~Zi9FA^ZhxcOSAZa#|)V4G{Jn0X6WQ!(DKjFoD zclHA2)ehk7{!2JqvXt&iKL!4Ui@?4avY5nE6MwoK{9# zCO5+LlNs=8=mWiQNEL=ozGr?alz?#EAdTvx{F`};Bc`w^)M_tad_sZHS}6Hkd#{qbo!cTeCTS6ZCh8P+Kb;b zp;Q*d)84_49DrVvk3?E5P;hURG(0&s4p+w;ptbir_)#?nyVlHtOCe=gxk^Q-TY*wm7tc9b9v0-v!9l%C;4LZvYd;3z$$Q_yc84^&B^aQ~@pPP>^Bna=3fPs) zW9U?`dOAMao$ zPgCg^PMm}3F3Ak+U<>@0;ba9z_)(Te$8)nO#q{T7U}+zlSdNz* zQ|ZiG4W#T|12l{r$7}XC=_iFgz^``+Sw9~Oe;Y#Qm<<@cQxo?|$qHAT$wS4u$*?-t z506}OH6P{p=yf4q$cEE${8TSbDDGZ@TR8qsWY;}*|5KKCZPFm5Ke|OL_QqrF1S5Rg zy@uFJDd2&ug;*r+gR6I#;}hXt_$D_47a88fNe*X7;GG~e)>wpkiq1Ihj0e`Z?S;|W zN@{C-mhzLU=qSyx!oaT+gwYGW6_lz|mfAJfN~m+9^H z?NGgZ6Inl+hGo}E7(3R5<6(_p+D{Gg*zY7`q-B8b4%yHTyf&;bT_&6zQH`6+!pQ90 za5|Jbi$7+~UHoC6g+5vt z2Nnv-tKsJSt;~h_GQwvZAFTAtgTLLy1iPLmL0NAwI`dn>+{}wMN!3#Efi`CIv_?G8 zW&wU_dbt0IE>1SH=C?3**xcMf64$UO?Hta%iEZeBRycL|9g8L>jtX;l*ZQ+_4H{LdQL+j0wE&fuV#Uar5IT*ze4_{&CTouwCj zmypi+Z<%PHIHHy>F5G-y0TQi)>81ByNa8jruuZ#+-alp0L|O`OoZbfyk~>IA`dMt6 zuSFAcLvSgHCqusaSTqud)2{u66JxGW7m)~D|7n8d=FYFI@;X_1Ol~#4o_r2c=_kvB zFQ>p4-vILIB*&uf`opva-o{5xGPtcsfsDD+i68rtiJAUbc*stJ1#>*$htCD*HLZ(1iu}%5aAh(x^COZ!o4Lh#rQRObTApdAE=@+ z$qG1k*n#5%M~I|#6MWvj2);)wht&ESbmRI9w%ao?p<^@byCuzv>wl#Wi#O0U&7EXN zxgxmv$3cT%7F{a87{95>Mo9exn-4g(;eHj+{5?O#0Mjp4QMrB+X)Y^JgWF%3XvB6Wz$(n1$%0yPON! zaC7&C^FXbx0GsJHt0&S+yaU9dEetG$5RK&$j~Jbpf5= z)It_Fq(a8uU>c>8M!#(;WjZF0(w*j7_)hLT^FuKl2IWFQCe{Y~o3rpxuRTUg)y3?t zU)Vg%{eJ}VQEUGOd{v6(kY*T7seE;$N6(X+e{$&d{TOT4|l^D>}2i z9l_xqZK<73XLjh~e>QRGCX@yv#r;fg%_I_XF^9sL9dy}C8Pr)=!((F^RxAW-9eeDs5`j}OozSnc7#w@gP-CxBR$|>8>{w@nW0;fV@3$Hv zAL{}#qN+slZ!(#psU~!n^<_h%`|0mu1^&!$<6t~8eIetM4t z{o?BOXGgJqb~A(?alq#a_Lx00uF5TZGttW!%b2G-nPelP)jzBGNE%Xuq$#=uS6 zBwA*3g>-yiX}Zc2azTD9PUC!fvg@O0v~(St{)pf`3up4(I2Gz{D?zVY0qvy(s#M&; zxG|CZUMr*zK6&B}Us>F9C=}<4I1&4MrIea~0@*<$?D8|mvAL~8ZI2^$mwrc{-QI%+ zJ39oS>0O}KkdI||m4!x6B}n8zD25u^(b81z^S{1M*qc5uV`(R;60F38=yvj_w*(qW zCHX|W4aG*B`8rX}WY6(8=(Y73;nhdj3;_-C$Cif zNKn>a+&rZc%#0K8zCkWLvTbKKIUK~iZD#PzY#Kg#oPraY$Du+pc<6o|3pP;FGRvFs>0Ch0N>AWl?Vm*k-7Sz7J))VJg+z9xHu{QQ6j(j@OL8sx znYBT>;C5pUU3uy+nYBEcuj>AdT&>(ombcol4mG+E=6eM4t`u_~f_x(QU_*TS85+a6 z(d-uova6hwQT_f5xY=+M2Bik*%e{rDS9csAoSJ}T!%Fz@o4jz`-Vr+Q^Hlx|)qdz` z9*Y*PH^I8thz8(Wl2TufhT1L=zxx5n_REFePxT=)T?Ho(4O4~t<1pUUfoNMVg#hU$ z#(cU3UbcG{>WES8Y%?b-a3|ITm>5f5{PinzX=GDrH$ z821_-u0OmKYd>+^S!oU{tNj{J+j-J;#c`zUj<;~A*$p~Hjtg3&6iLsZ3uO2l!DD*! 
zNT1Uy_Q!sCzQ)WQYZE&TYRy+2vYf-;-AMfT9ex=S5+nT@2N%d`$ka2XKbpb9mSo%(%k` za;8Y0fBHfc5nHGXJ>42;azF{gT$HdRO+vVKc0DV{zX*$>04XY3OrihQrq-I zoG~KDyy%M&yz7>JMdLb4h=850NP?% z&=zwGcXV8#NB!rZteP$y#yD7KP);WH>l4Fc<3Ym06t}zB!O!s3%+ucI^hxD8yys8I zu=^JJV`L^SKQ{&U*d-Ee7kPSxn=y%h9U!-qH>2`(H^%uxCHY~~Pqs?DXJdQPNW4Nm zwRj>znQ?m<=~72*RDHu2FyKH!%Nm4pdChh0VpIq_q7mjF@UO zf9`nFvR#YNcVj6nHVK4dx9dqw;!Hdkei8<+$pfpz<<;#~h(GP*W(OT)iuPd;L}kFo zFXrUoT0I(+QwHl)ez84U#f1-5MQ|VI;LXxh;6INOG7l3cpv}uaq|z)9!gsyJfaZU6 z^s^7e>9Y9iTP()hS%R6R-E<-NfK*QoQGf9t8N+!57rwj%D|Hp&r07J#(>eoM^DAL6 z`7x1ru?yxsng*h}1^D={96rnm5d7UJ4~^?8K()^bxI36;PCA4)Pf3yQ?@e%dH4j}h z4Dj$AZOmC(&+1-UNpwHhf_C8|ywrb(h%8d0R{KNnR1W7HbNNFFBaPxyQ`wDwLg4tD zBC^1`hP-7v1#*%WD6Uja#U7ZFqrb*erJ43PFJKWky_cpbzVDfTcTbT_@lz6p@i}yc3szD5VQP40=vz~a0$a#sJ#KY=})oX{C%9Q!SyWk zY)IW4Gw?L!e0>oK=yW^}w@rP|xWzS+|FR^Z@5Fa{{M9AF^-VR9T*9rGkmEztzDU}c zaRPE%8QiH~#H#SQRTVOgBw1bvSMR%F8><0*i~eD7WhKnG83Y-=XJGHQwYYJ; zy6Ya}s47R#j=m;iyl29FyJ~#hHb%JKx00$a?h*>i^H6;82=vWT70$k{1(xr0AkU}> z8VbHb)s8FhW2P!u{?rA0Eg#aQ{5We>y>GrE*8^%e) ziK7a9cX3tlTB(VZ7b9t6x2Q0J=TAZ4jEdJg$-nRk=rv6XrW9C{b7KP-yM1Xiz}ATl zsi%{q(H*d8gA#1N9|nULualeq?r@C2E)d&ojy=OyF=gI(Sdde|%CGBWb2@+1t|2S< zZhjS~-ndTxj&jb2*$>$Ti|pyW*rjw5GasgY3&GfLlb}GUjT=U!&?=)l^yM!u!&M{? z%Pi$#{g`%qoo_*k{@kW%3Wr$(7xLk+!RLW zFSiX?-Lw;8X$IV~T?e1aLy6S|q~;qs=>7a{)brFa;^L@`A4j+0{e!8Z!sNaeVt& z3n#9=iC&@?={4d9yNXuBLaS!N#;$=I_4#O*Tt^RQUc?NuXuO?r1|1F0{G&w z=Nu@NkFQhPs5BB9l|YBRPU4z5Y2?`GRCK-aksJ$pO@ABD!^|VWxcrtm+on&?9%op_~ly@HavMw zA6))IRb92v`i6q#sL?pO`1op)HCKdRX=8}*pAOS-2}2B-Rsj#DS7LZ}3_QZAIP`ZK zlRO-N#vdE$=f^6XTk;_pcSIHLMf1^P4|kuf{>jVI+sE{GKV_atF;M??G85iVPd}$P zL7e&|tdW}q9_Qb&9U3kmHC_rA{D~od_vVll?eo#@Y$eX+Tt0=zXR!Oc9>MN|y1caV zI2s{v#}ZL_R5trc$5e950KB;j2ta_=X{|>>b+sk2NX%lJEPonCAm+bT( zHS9ocB>vJj0e_!!%(Q?&sOWl6FJ-il{|^2E-S+81!+W}<`Zkm}r0f;mQ&q%lSk6;mc~3kkNO z-kGEDZw{BgG?zlrfc6^(PmQr`KndNab8P6+5On^6%=1%MIEMZ@DHgF}cO7SGXCM!3 z45!h%E*fOP-yF(YB@ROd&iJ+^o7CL9i1w-Vq@v^=ImbT2CxXL_+qFw%<>iBnk7xn? zgb$dcSz_3`rHQP&H4C1EZX%ZkdLYeYBAvHKiO$;0yDC}co{P7I+q7&4~8br8*rwvlc^c6gahxiph8v#R)2`& z9ZhPZcFV6aKXg72Lw*R{zP|(dMYl5FH+&*X%qNhAU0Wc?{3L5I_W~U=Y(?CP?=qzp z(ctze>-syZ^`x()j@rc4lF72m=t{L0q-tX~{Pqms`#fxg!$UUwU8~)p@aATa8?z0s zc%Gtz6N>P(-(RA#f0zkd9D-Z#<`LU*oa>EC|L#FfLvVdBgcsh!=O+_!L8d9V@16() zH4(H*N{GeM_Sjt5OUA9%!#CdNL4UV68Q-}c*IMla?Mz9kogUtm zoCy>4eX&EdgSx+Z4MTN`XnJ%R{-=MBjt!WIkH6G6& z%M{SBo1IACU=BPlkV5@Ew{T168jw5q109&1=#UUjcO7yC{%rRd6kvGL9eDFe1Bz1OFgGj~$2DY=$Q9>lZTkdp>-tT92&*l3=4Yaq z+fLYPC?-7QYYcy1<-xQ@9|%ghfWlG+v@^a!^1M@=_w_mF^EF|QOMeDQp#ZGO zESNJ+2U2Iqkz@Tj{KWJ;SZXv6y;VeT`>|K-+v#7)!e|*`@iSeTwnB{m@7XEg3C&Qt z=gL!J{wo2!$`7M>V>Nlg^~xqbvFFKk210&fGq;0a0%5MT@`k&**~XbE+#K#6e)>@a z$pL93>WCGNU-5`;b+iGa(ol3rxd}W7`sy); zkw;|An{K*6HJf9ihw11cFW|jsA+%HkN{+9guRdu*(}&NHtU41n{4PS3pP4waM3tJ} zk)kgeM(K*t3E&rKK~i=J1nnj}Vbc;j`rxl8b&A>xh9REVC9jJA)#y^syBwo_! 
z?ELM_VezX&*G!I&yfdD z2&?;?%g0%h^-aN`E+&DdJWE!me*?~+H<`=hzhbxjwxHY)8sx_flJb?D$#d>5eDJV~ zsh(Q|bEk5iUmjOKv_;}|y)261L=DGwhe}#`4aosIzGvb>aG58)A~#X#?)$?Bi-`t0#{sTgt)WVHP}) zvd1O1D&TH00T+u-M#hj|)(>FntDEa3joNPcfzBc&vbDZLV5{#z*QDw{9NWaYG z2Wqx3iwvqU?V%AJIiA6J5POMmwk3JJOcWI)Oh6PgQ89Z1X?dZ-DsGL%I?r-Q%1L8~ zJWry*g6&XyP(j!^PmA*?R6)W>k09^S4KRLXiesgn@YXsL9Mf!oDx}4-E-MWenmOX5 zs7!d4xs-N`%J5HLdyVag{>`)pCE>lo{% z5{;3DeBf!$g|qU;FxDlM=?MNqC%vA5l~*_(>)b^A@va1vJr-hv;X1epCwOPp+blFD|60Pzx-JROL7j0!|cffXab4bf?!oVGY@S|r77K>_Q^yg4^Y05Y1uqvL3^*zAx z_0w@-rVLb{O@URv-=mAj6#VkbkBz8q)IEE%6jKg`$p>SeS8ufR(N7XCU@u2E!biSw!`jV&c_?|4vOY@@BMdC2O`vzA_ zCt~-~=_Jm;3>wXZB=fjF9n{KzOm|ZVIM+hO^TvV1usk=H6@z(4zQN)voGZ|)o!s+t zho_TtV8`+Vfxj)cR?Q{@MRpFeHm%N_gHHlBO}$Tw`*NW3z$NDEa}kgUn~%L}DkO1M z3pE}a1MR=$LE=^u8DI38ggOz@e)le_{x-rMYZi)kM&a7sZ)m{$ej=)LnFN2-q0Lq& zarw(AxZ8IKme}0ljcxL$hMCr|h{vPp)w3Y7&K%D*Bw^x%5jvXW3^ykEz@@d{h`sVd z`XjXxmmf8O)&(=Lz@ZJ>TIBfkr{lr0NCl^ybOievC%ip96tp|ViE7_OROQ@z6O|>wE?ps?yV_mgiPoE;tOKQaOekv_1vLLDRZejVj%k=&0P4HxUKFL~tS};pV0a7m& zk)W}U$o9EnSP=gdgTkU9;HMAPy^obYIm#4JYlRdxzJ;`n>tfzUR^O+LUDa##~6{R*mI6NufmG zCBy%scmdYN+VLS>nk;Doc*w2V{BmXsJAQ?#aD8Po-Qqr$ZFu>Tiq^cgym)zlmh6sa za*dy!kV3si*B&+Zb{u|?=1TT2c!Ef)-_HejmxFY4KHon5`v z19z=9A?>4{kg~K5{vDh|;&x8M;r>$c?Sm35QF5ir*7@OY1w&z7#B{Rfog$hUU4*jf zOL3Qa2ksvqgDpMv>`#FrU9X&oK^_v|Rp*at#YgdMp(6i6egyJ1+$C!bZ0Vfmk6~kj zI($2`lq6ps&n%N1Vmd*Jc$W+^3*B{?89iy#%So2IZ$p8si~_KWMT^38)=PgJdC_?S zEK|a9&PE%U<20SVxHAlvBn(&g{9sQJEfRiZl*&A-pkeN3LF}O^-(tfl9C+PDyxDAs zHoZquJf-NxWolG%tvE!eN>kxVEw~td7Umu*rpGMB`OTTzaWg-XI@e8Q9S`g$@0Xl` z>(@?kJbyi|3LK)vuPdmdzPYeA?It~(E>CLH-t&wninIQ!o|9dz#*C)EF0>u0BU-YL z$l8QxY&PWlITr)5^-V8+@f3lCBph6?|w z;^}9l=rHFqt@tg0vt6rUP`V67jkzpnP8jOA%aT(v>KNQr05vBj69>mt!n&Xj7`dnr z+|*uC-h~s?yRjFHYzK$}V*ruvYa!nEER$A}O@!4lq$_I)%$sV0dmsNrGcKQ#mXc3Q z>SXY*&s*a2V;fxRn1?2l#i%c;FmbbzaaPexsFZUi8@%U`#-e=cR@}(zY%zo@vxJOj zzaQ?)tjF+iT#of|6skrHp!KE6Fcmey5mI69>`szuR7BRSy~K!_uEJc~u{dc;Fm0Nn zg>6!OWN1SqZAmaaGrwS$Lgp-(-vDWF+y@(@^S^7X%d+COEnHHt2{~kiMgl z_`&KZE;2X>ABE|dIL3tQe~I&t#hxKf_YUBrjqizZf(_P3MPkMvx71GbXtl;K9%wph zLe^#(;eD|<##w0_TzlS0`imXWa3q;#&x(dEG8$N=KTOM3&4saC&R}*>ERmb^1JATh zgbyaaQA3HrVDq=6$Rvb@?>~qa=beVw22n;oNE;L*3?X!2HHe;_0wo1|aJfI_*RK~-wXZ7Ub+kd3#UML)@uf#*Q6(G6D z5ThLKgX{ocVA4EPA8$+lWNfk2Ke(GJRm_6!RXQwkecs?L5u6id2Sn5_$H1;U64KL7 zU(zi&r|&!c{rdwNN7h66(Owj(lAyC^C!pFQHCPmUfxH;s1g#qf+0WmjiRgPnBKa+e z?A`bQvy1F#cf)JKcqTGuy1YpK{d?@e_IOa@OJSVSUM8+z1GUx!P}g;0U|M+rKb|zj zSh08HK+h-oUwbZ&T;=?4_N}n#vmQFya;)khCywPUqjxkDAotjRWZ=~bkQf{bj#<0# zuwfpQmxtq&(J83>Nfa&&kFVZ6Clb{UNueg^;e0&*B{kE1gx2m=)c*Qu+T~CL;`N=( ztkY4{O9sH2O=T9poD6o~Wm!Ww6BfvDacC-!v0V z(+bFWJ3p#^kHPJQd`w>bjy@kQhvN^G(eQ~gaq{Wo*3iaMyOIFGxuudYx$F|}=J)-O z$!5^^ZhDZraWmN6*Fo>YmuX{29wcUs&}U+ym>?>`U;phSxjm-?PE71TrxnBG*_$Jn z@_j8lj62W7Y5iilUhK!DePP({(L|o?UCn=L7)~dMnu5sD2V|wDD$M3y$X}0Fp#8@V z5;3IVtBTpv4viOzoWwIaizVZOSVF_~Q)e)N2(FP0r?Z9<y zsP2aa%L>S+ZwJ9Sl+P@wlfb{O3_93~lQ&P@iNmE@MsHjuNpgHoCYMfvxNl$RNbPbo zw9kfT31Re%j45i=wV_D47D}@V;fl;1sYMu%n?<2% z*$lEj%nZjBeuXh?DtuQ|2aj$&dXBUa!$g@y-D(%e+9Zh?LW==r~&B+Du0~Ik*4HCv1MyJiN7L9?>s5i@Ro5;y*uK zlq)l#n~yky?)W(Xt=yS1ybC3d>cI@lBD(6eDwz_#7Cz@D!e5_Ryu`$J@L&6Rl#rsN z{aiA2+$~_IT2_!&Up2bRC5U;G%6Z{@r(osJ7v#HP0mLto#vYr=BvdAa$Qy^kY3(p* zJE}zoRyo0I4*^kgVWA{37H3K7oKkIlRcXk=msvLFPwls$x zX`KU4COTr;sW|G}Yz@u$YwJK|<7DQo zi6cH;V~6KHN<;VJXFTim`S@ei9lCMj6IyuN6IVVy%|xokVWF}TF1#H;{@am*9{vC5 z$;UC!S96hx8>j|(n-@5DrZ|YH2ZA1xO}8varhVMJ`0#~zx<2b9{d(#LUCKF6s@J-+ zf1d7z1|@6M|8B)jaQ{P99*Dv|lRtFb95s3(=_{VCaUyoZG0?)Xnvce)6Pa2`=G99V zsy!nGcU{>Jvya*1)(CFaXL^MMDCQBn&B~~2J3`(!nSjz|GYo#1fKBR2n6tNpe$f%2 z$Z7}pkhd5vmHx$zV%9^Rf<( 
zP5sV<8l8Z!3#oL?d27m0>_fk6^B^2T|CZOXl=8!Q4lhw7x%ycHcZj zgE-DDlGUQNk2A?%^(DNr6%qLP3->(|%3wm76)@pTaammqN^yCa$j!2N;(ILl z@Rs1?giEwn?thBT!=K9cjpGrDjErm|DoTTfoaerd1`#D?q-aQ6X?)WVC3_VSQHZQa zGP2HdUq@3E+EFsnQcAK?lHc?D6VB^A=Q;O%U7yeU-GYarn2*cWX2kaZa3Ety&l(xHyYm?8Tm3$Zm%in~h+Z9lC zs~FSbl8q5_?$aZlMm&eya1_8l*0^aaOdqvEjiP9hBASUSl>)JAMI$Mi9*=7d|0a1Y zHT36tXISyf2=ssEkwI>EctHL($Uo$`Wok#rIE$UIJI)V9MHK0~ee1D!tQ!ty=nCIC zO~aRKJz22lSWXdqzN(Zym^>@yN8gqd>hO#?k6Sl3{$~h&g9B(C7X?qYtAg&M2`JvT zxzb6?8u(+B!6<(Uzt!g**q-DV+v2xyig1-8kyR3?1*P5L-3c*-d&P8h7l-xNGexaX?? zj;*x?_s?75+}Jm`V99B`c9jouT3%RuRTArEC*mKYWkl4chvfAr!oHzgx}r=TmQAkY zIZg3|+HQGaPn!ilKz9?A`Fub-2XEn&(tFGwg;CO}{fjAglp(V3#PR)RWsv)~f~tO< zM)r05#=^1DP=6wWIsfMxG)hMZDs#Bq$yaYwnimNfQDHFMI1LI%_TfC8cv!Z~o2W^; z;!(U!o{PL7ec_4_zodysPn(IOYjn|gSdBJiKO+IdCZtq*7m6MA#&!Nag!izE5&U|D z(z871sPa#`-6IW3*B`)rO|QtKiDPJM`wHTmwjAu#kD{^RRh$+-4{x+h$J;}B_^zHS z?W*mE!Z*i=cp{LYP6ITs`AnZ$aBhzf0sMjvCS^_lZGW!@VB>*L;xZ6jx8s+NSk!Hk zVlTf~2g0XC5R-Eqc{)E?$KF8dKjsED2c1PsT*k1mg;dfMu;=LxGA_x9yxddH2ttFo zu05B0{&j@XHEx@UDNUlybW`YviN%7`Aq(3Pa6K=HAm{Z(caDoU*bw-f5 z&3D3&;4aq}SSlnNXP#Fg5)|e=Z226CAhGea)JXyk;FP z;AT9%U!|eLl zM^}+Wk`44!%_n$&XEo>Sd_Yewl;L}F{X&V2P3U>J4c2x|6wa<*jMq(4X<@S!{-lZg zX(kVG`rRE=cG6n@!|pNg#k_|6r!f;(%2N_7U-ayV~UC*G1$z(w3L=~UPQoO!B^`KT4f zzb@(qipdNLzv+Yi`P0-`>%r%e`2a>DR@UGl{H4KDZnLkbRTz{s|0`gHmt+&(bK8i^LdT#yj#d7-%Dd08DE0^>sV>`4ve@y4^EFbqBgwZ&Tz}A zY}GlGa_FMUsSoIalmy0pq7;+AVmcNbX{L%duVQ8KKH6szMsn7a(Qnn1k&IUWi-Y;Z zC*U{@%HX_K%Z|gej%0WxqD0S5l_0w1<7vjFEBHVuluVH_5*l6!Lf4a0Onb^AOz2D? zyB#=hsmVq5*x&**y}5z@@*Df-Qhj>RpR$V!cER)cYvJ`}z_M*|PWfH3B#+4XHk z;Pm+!Na;rL9{e3;R?KGjoxbxq-mVn{ad*tzzgM9ufWcJ1=WJ{AQjYa&0=GKN$<}aF z67u66bC93Mn<{sWbVfX&YhE6pW6Rr6XPr3Sc^*hoEURda+I~DgMVzlQ#|+HpOVapc zP14%F7#<%wNoP1+riFXI(h9y0dHK1CUiq1ijupu9qnE*FF9l>ABT3$yrF1AX03YRt z2^f>zVEHzHU1F4h8yCrANkt7#(CnrgxXgUnQ8!#sIuj%s{xGJZ8}W==2>pu@Xt~7- zlq4_l4*k)`=D(BZ=fBeMt8gp^ogK?;zN>}rmV{v%$Iuox55hT5eDPe)X&N?ijZ+(j zlHUuig8lr(ur#@i7H>SnRJ)%C%V_TXwoHTM#Ws*$f7 z2{7yLT6mP10RP%n;_owC=~v%ma+fy zcedQb5cb6>;uJ+?+;lyHMr@u0hdb)XrZd~%S~thfdHV~e$xh+(y$*9%8fAR8a}r!V zF_k@T-wgXXZhPvz%jC18GP%!ViGTS+GH4hG<#Ev@P-_#)?7ac*ndNBy!jN%?+okmHw>?U#lK_F z_pU9Oc`BTGD$k&;A9m5><0avH&r|6=-1UHo}L@?1x9rB)s?l9!T1QyE%xG5r&3ohjI|nJNpbag|3IbY;Wvm?~wCfK7&)}5EHj&27cUp z1V(aW;cnJXcxt3g-#$DK(b>x(GP{d*MSP{@V;<48U*h1=oR1-f&zWNeN2s9ENO&#P z8UH?!6{a10Nsd5UXT zPh!l;5p{#)PG!L%DD9Y{9Joe+`BSmn#$r!hNSPFN~ zSdpvSmZFb!7HUfjK;5xYxRt?qs26O+d)p!~U`s4MX6y0#jUqbl?ImI)Ua&x-TDn=7mtBj==c1ZCnY;x$Tyd>y%4afaUEb;!#BA9BojyB3# z;=R^&kTE`$d@2~EozVmI;+hd6{aOvTwEBVXWqbIu`6Joe_yV_{n1r2$5qMtaJdVyq zc$v6?TxgGmpF;sKh3mAx4+y6IQr`G#-v~KqR}Ou0dkFhUfC)2vX~eW)`qfiNTN^~s zUTXzxiL0a1bS!FgPKPO{mVrUuXB;qCF4XKahKc(O`DIh1&{@L^#qGLrf1VvyKh9vc zU%UyI?8nmeDGDHeQwsbo4$3&Y-U33V~}}J}cgSmaLe%4fXXez#%$aOV|SE8a(K#_7NZi&Ey@&KdMu+#Aw}9K-UN2kVi2pW~~T zfkf0nYGAm8q&KaG+?{WDFGZH2^t2LO9XXYFA6Z8-{@V)audJv}2_I@-P9WlCE6K0? zO0KJZ1~t<)>ErurF>%gZu&mUleZA{oxd->1|2>37H6r|QKQs9KW;N-T=gxKBe`r>$ z4$AC~#f;1HTrT)EPApr6-}YFb>eNNhaO*OW)x1Ne_aq9|C2ZnvSMemR=^~iC!kn8V z9)MFfVn9zk4>Ies+4C<}VeI?-=P+b)9#K&bBl77V$ph=faBWT>J-AJq<9ydc+?2JLdi@HoLOKFN zT%IsWSD)gLz6*2Wm=zi3m@saau7P@-1x{T2otVekz_?yj?7I4jxMtd;?8h|HrnwO| zSNY)8N;`B5`$W3+PLZECMoIM7ZK%<_9j-i6ps{z?Lx9Qw^6IGquqFpWT`vHO~2G;OdTSfv10dXTrLrSeJf+o?#nd#rKlFwpocCS{z|G_zEO*c zRamZ5K-GP`=($u&C<{5q3M&PiCo73Q|E0uKZfKzK*<~allI5*5EW#^hr(ul#L%P(? 
z1`ow#>#1m3M=+2Is-=E|>cX zdmnx*Q2!s6{W|lm~$Tq@45`(QZvDJb~r4{>PE}O zpRvD&`>m9(#tp}7>0{1KrGEE1x1ZO*j_epzcrur+S*ZdaUN_McwG>)3&Jz4`TtRL0 z3prfJr~FA(RA})ZPAS<>^VZf=_o_si)pD8~t^7>X&VeRW9c4#O>P!b!oKF(Bw$0C)WQ z@QleX)+4ozXj)pqSe327!KA2Jr#k5 zI{ruk1YZ`Q+vUSlV6{I6v**J>aMewHxz<{AYX7k+5*FP=kZE< z1a6{X$ew8;E>b(7Jf?|bbiAgf?qk4CViJ^E?I*?>yP0t}pV2|DOR#8E9gD)$;fF{A zG_F&}h6`ha<5j+peIuHvpjQm4V#dUMc?c3&Z(x0XQ)OuhEX=f|14kCzrD# z@5^w(w|qFHGFy0b?+7V!41!UolPFV{1}5X3VWfO4-#cnA<82`i+j^T(<5&mXe^4Cu zsJ^E6_?7g))-`yi*A{;-P=SZlvCjvYxwJ{~+^Jl<97bGobWU4RV98;ckr&BxAM-F0>vm#K(0s)Z9-Gdvr* z?K%FJgbi#by)Z{t4&H>S;DP5CQGe%C+N>}S&0cE~p;shK{Q8{^eVIycKG}c=PUTW_ zqbK-N>MBU~MqsFV8x8$W5)6)*1OK-zOuUb1z%l(+%{#-y{7R(@=DG+||MO)^dh;;p z%TxMbLp@pkWf#Um3YuTY#q{ti#2Pt9+Myl*`{n*{)dvrhD;pNM2c3B3kPm@@`KVncI($8GoTt$s+ z9B7_|0e(6nLRULoL|2z6l-sQVUwXciW8+qmQ}?!$m-98j885?uxHDz)H^0#48=FvN zi9PdCw-`={AFdRr%RqCtB8(ZQMd$st#PJXQ&##<{;&q;IZh?>`gNjsp_9;>j9gCeg z*I{c;D7^is19GiA?k<*2=Q&?MsmM0YQ+SL{&74cGPW6QGk00<>{gi;)*C$ewwU04u z*9>6q?nJFq8KC)2OgI_eVUrc-TrD=m#fe#L;Dm<|`fvpa^$frwMGGuissmZoJMgt) z9yYdpqE264k+WlO;M~%yRQ-Nb46)q%>A6>xFAUdn+)S=g`PGHGyc@# zLLhFD&P450b2QdjMyi8vld;`?pxU$@onyqwuAoL%WWNGWw!9xA@+Om&>!*R#%+|Tjrvsncu`gR9-{@9kLaU8ol+L^>UevlS_-bion z+(BM_bVNa)F7{=ZWA2_H$X|C8U)C$)q8eQko)clZV+ugy;zOdkdpmV_JBjz>TrF7j zUckJgB3!J!fR2~(rF&<5pgz+RX}W_yKWtbGr9b9T9_PINA*BY9=jWrHk|rbm>>FLV z@|W2v!twHkRB5Mv2JT-Lj(*)|v4^u@F5^7IxfA4}o$K{%F#AYUijC1PpafTJ@Ih8- zHXip#CTvR-PT3?vGZ&lTTIGB2J+=z$>k0U3sk4W7q`957pn!hN+ zx+E6<+KfT>OZ}uNsu_|NMR9pq7AGq-sH7n^y z?yf9f)=LM=6==DpDr1lrNALdr2{x-EK{wqq8+jEj+EUX(pvyxB-H+Vs(ESDjh? z#y~J$=SlN@#NgoCA8cr^Eu9k-0bN>Oh|7&B_#R(6GCO=Xy6 zOB^BW9>HCiv2$=&9bPylydcJ$;sLZOW!|Q#iDd`wpDz zttvDTzMupC>gd?qMf*2WTFr51{7j+FOJl&wXMJU8rX5RKsb`jzcKl zFdn_Xce5ca7fC?PJa{?bDYd;gos4t8LZo}dXuH@?+EiTvLCYU9b?efIM~54p`6oh_ zuS%fH@>4lah9RKFXX?^w42^lWxIIA%84%9~pPFE5w=@hI0zVN=4_7YdV2oXfqIA`% zHqvHf3pU?^sryVVZf0?(^4o-?7-TmMC3Xzc*%k|Niq3x!8Rw7X(Gu9AZGy`mbPDd7 z&V&a8YM^@SIwSPG3~SZ2@x}ZE7>?Buen0jM`CW%`XNV5je=3xWEV@PWn^$sPGjo{B zJ*&GtpJLj`ecnXr*R;LbkmDsD!1Nyq^h#eIeLeJ@+*BN3&%V!v4G|Hrb=x4jj5`bZ zjGcgGzq{$)np)Ic;Dt6d*Vwjl4IEP+fD^tZU=)`}T=*=9k$?3MHWx->s<63wHtacjE|o%~A^Hk%xV<|uI*sh`QZCHJ9R&vClM zCY%JnjwbtN?#EZ(m*aE;9bto{H4S^0h1Xn^Kv`D<)8bCtoRCY_9Y zj)dTvad7qz!lLn3%Qx`b~JsDozjK3aMG13FHcit;*( zxw*bGNiZ`LO24^5c2tW&6rRBwqj4~lkpr0nHSpZCkjj^e^Abcn;acuhEVVS|Kl98% z`v*^e|DpuEZ4u z^ye(qs1K!2f9|D$Sq6AeeUQjT9pj7RE;64mxcMxXZPa#xg{@^!z;zhAZ(95(0Bg8hsi#hS|XEYar$F{JLvJ5v)5253n z;buryjkr?sz#7RC2OKZZw>97hB% zj9;<>o*~od7EO~X#4)j32K!&%AimiQQMkadeU7`a&BO&OCMPp@@FzYV+zFXs#q_3u zGLcT7jptrOz{XG^yopL9!{_CpYG(rT`HwhW56LFwl}qrcX#}3uI*EzLeh^!~JuvZO zAK9N6jn0;f!D5j%`(MEx%oep}uQ=`n#}|kTGgq;Vp&MyVup<22wh^P-CqmMWokYof zDGE%AsLzaf=&^qV{JOOdPrDi+4JjovQGql(Jq$p5=@a`R81p`tOqPvil+JVzPpdn4 z^rji4W*DN^!X%<6rAIuzK4i!BFM_8Edx4BU%c$R|U__P;(4CL&kzUal!qXli3Y=r& zbnYN^6OW>2ud?L9&(q{O=R{~3r-#KWAE4Ka`Pj;<e__dJ8gCD3ylnpjAui4LaF6^u&9rVB5he^F$4!KGV=;W85*@vkt&AKU% z-#&3(McW)a&)O79kxK)**KeRgay!+c-LOkwxoB(5)C ztnvcQZ57b{cL=?`qlXUnT63JPNcuU~77Cth#rb?)$QAoT%Bw8lRoq;-DLPbHYF5Xa zt#J<{Pdp~aCD$`&L;J`i&t$L-Plec?MWjjkHpyuI!usw#2E=JBB%hJMpBE31f!Qt? 
z|23b-U+V$JvJ1geuY+gL1kkvb^I_=tGwQJCFD%S0WWu~p;J%(JqP_DYw!JK%KOe@B z(jSwsFg=xOf1p(Lbu)|_%JA#MUXWW7zj3E+8&%IT!=bX@bkdqwvO)PQZHPSrUiRg< zOmqU@d{G$rVi8A#_Oh5wqro6t7sW<7pQJ6r9zSP9Kc0O;W4FYE@oy*mZqtoMbvNiC zy%#v+ZB3573Ur50H+hfS{ z57%+u;7wTMa3A}s5xT@g!P{z`pmkvaUDtk>JWcCm|0dsOU$^GNi2;A6-9-c|-6vzt zudQ(S=TbQFc`5al=^(%RouTZ-6;gL4jmyoxA$7O=nKR4{I@x~}Zb^5e6+{{ar6f7$ zQwy5@@qxO(dTc{a4VB~eUJF%Z+3Y({$f*~nQQ5AMK2nqxo+|Eu&>1s%c1ka4+iP(g zzaT&$(NG5O7v$rXp4FTi-Im-~Xo*3Jag14T7!70W!G7m$_WSI9^7oTB2!`jwV8Rr7 zpyabaY?6g=P^EyH#FxU6DIsKBQzNT*UqmR+Yr$bvQ>?W~A%cH@X@-j_&a>Y_y8fu( zi=hU%tpE$^m|pEkn1 z1N%Y!Vi@U8OoTNy2bt*UkBB1o->d#Wkz6ra4k?l>`w-_~N$+QL{asI%#>ogxuQtHm z12LF$IuMTi+>Ae)H!+`!I49P6j^p3F7e-czlGWupXe^($s98$_519mjbf*HWPK>4B z*LcFx*@q$DGJ$c-tfP7+`iw=$TDTi>n$3)UjA_X++;>t>bhce)w%S|L2LZA0RbmX~ zS9Ab{T3jzM!0Z)jI4$)yddkcf{t+{wf$`2L)*8SJmv6YY6LNq>8&70cNNuUrB3U#zMm3WJuI1L3XbTx-2jh{MA%M>+jBH<+pB< z)XsdkaqkpLeRxUNHsz6JPK)Vj4O`mzPFg6s>kaChU(f#CUW~&hRAFL%4>@9RjNkos zUS&!eZGnuy5T>X{ zf&XFZahyZ~=m2*fy0F@cOybGo=lDX}#JS@%{(U0XV+Tm*TQ3Mo{=@#rJWD?`A1BNu z3E^zHN?4WV58|qdsG~Ch4^%&69u9DsbEk2*)YuYxI#l6~v@*X{;S9XLuY#uBZ10Zo zJ}k4l0%u=Ik|lw6uyJfYS#jwy+0@=k-g|4&7CTd%)iQ;ClG;sBN)B8Wq_Dwdo8f^^ z4E&L=MOT&iXxVTWi%O57=(YzWC75$Z?^+KH4sy6>+-kvF7)r}F+<>j&HB4fl7CAEI4Dmkei&w^~ zL#6TtqA<6IF67<=BN^$SYCn@vlMM!~3uYLhpH1b(xn0Upb?D@ouxb+~2+McOrshc} z$qEI3R(PzBOxaWjzduBoeH61o=20z9-c*h?F*hOoxHIa-u#o@!Ca7-4=|m^0Wc?MEMs`;8d zH?xdef=;=sh?FzfFsQQ`tG_=0))DxM1+)2!+cAE~43KQRd=8dHy7iL+nsQ5B<41 z90ILE@n>~1Yv;hd_a{8Gqf3_1w8- z54{#1M5LC^fc*&v_^xj`4^Xl;+?v9%g&+42d)qyj(wu@}8~!7ZpoRxLBiNa_z9=hg zK>V@{F_oJ=>Uujc`gn(TSHc2@g4EH#IfKdRwE;n9Hl9#bC8L{?xSXmubU`C=`F9L7 z*Gr>G%L0^$wqx|Bj1vxi5hG2XS<-uaHI%i#LH{mk2>oSK-`gfiDr@DgU z5$eH6V*z@sJOn|GNnH2iAbsU_8P>;yp{~hB{CC-x`L16_p44aI(L4)S@;4OvzN%yB z?p*ll<^{(?G?==GNg!XXj&GxKcooWNv}M(56cl_Vm70|_#f$44+Qs0)gfkf4x}J$m z`c2H{^wWErq{*^BDXR@#_uAgV zt;{2ODOr^t@30S7EDfVBK@{pvhIY-XzHDF&4tl|92EhMfn>Yrz?D5JUD&*hTap-%afOb3! 
z#*?9O=yWRrR!8p!!^4`m%O(mM`)f!{Mh(20st%%_t>nPMU&Lus5ormXh7KS41k;`O zK}AXwvpDbp_Vw?g%TIDX*uPy^?y#EGcUTXmESJr=|A$p+2P{e-2c4%aFo(?`0fzC>gB3ku2bHgRInTR|^MTthZ@KU=NVwytwnR;3WO9C5$%jT z-p~FhDt;rHC^`-a3LIxZ%d1Sh>JcD+WCFPlB?UO0agm zGu}2VVQ(jICASW{LyTn-omS8BEaVmV5|?9m`<#;mHw|k@h|5)ypzs~u`z?Z~1rMn8 z;g@9n%*7}->n-&t>mr)m49&oeVIAzExFXRRsw_1TR5~axAoM(*50N0zo44ROZh!2T zumz8|ltaGiG~!;}$W&#^@;ihd2|+u|eW#4uj?KU$B^w!u+eR?#J4nzz8{f)JfVcL~ zNQ7@C%#56X-!&)WKW?t2kR@b9*G_@6U_(ag)+krwRgyauLPtDrP3iEJ+WL2V`1 z3Rb@KM~yi_WJv?J1&G$?cItz)#6K8ZMw{q5m+<PaNb5R9 zBi&>$xa1T5CuPNm2F@a}4=*u8U8*or8;AL~IA@H08l+zKhx*0>A}Brwd&8!YTQyl^ zB5@+BbwRW7{U7w^V}ZHwuh-|g7u+eG)v zBoe3M)2zEe2brI2PL9NXMXP7`1)s`S@Ox$+n^lDcOY}%Z z&@=L+`8>LoE`sqfJ%pS+hEKWdT&uMa1}Fcb-?+@F{8BMYJinNJk(dr{!gVu8T6N&K$FN}I>f$yir(b=O~I2RKUZPl>B;WD}FzYu598uC{n0o#4j z!7^NytTmD*Lj{kR!{`Jno^wu-|FrqX#T7BD+7$%9rlESdG#OEh!1QyO!2e{29(NYO z^;v#cr*It)avcK6iH4|F`i{Og^~aa1w?WvmHDDpjlF(iHAd7uOt!_H#B;O;9nGwvn zBa5cGws5x+h}f_uO&E=b?|IYl(X?e~?-2?YZu*eVhK`tGuY`9>gFz)mn@n8wl|D6J zf)U4a1j%MbIO9+O4A0d=!$?I;am&VM`&Z)H$>Z^LuOIRJAx2F69dVYXjk{cqFX( zKyigrGIR7}9XYvJ53VIh5cul}f2L%>+6@}0<{~0Iu&;xB60J8|q(@%3Ob%8iO=Nzv=-KzzSxt)wwg&2&Vl!&DcTsG{R z8(4~$fL_2LnfPrCUMvWu>Z!hT)BY`y7kggUBpbX>aNh2XP~KXBG36p4%e^zcyy3X(-bQfw zX*+atMc<^X-wdxPl&PJ)9Zp7=V@8TA91<+VB)dGm{TgNA#zSH3^hezOX~Qr2`icmd zw@enl?R|t{LER+#eJ52a4IpQ4Cm^hz$R?)LK;6PRi2BH}tuKV5>qbTJ)#Lamg=gS{ zRsr*dIfEzaPr|aK1sE-LnVYN6#%(?dD7u8;j^JY$nf(xB*ao_|G@Cq`(MjtS>QF?x z8oQU?Wz7}F@M#eDJJ#bkNHb2t+`-3$cY7z~9a<@z+9hP2{r%bS)J9~-HEhJ8)EjA}KU8#Ms_PX~zRXe}v`eo5m?-htB` zp`hV}9#)=Qz=#H&=lTJS@Pq5E6~-T8lcoPu$WK}iB}EeG$nCrS^xmS{627Rz_7bgs z?`cjB4}E_O;*Xi*`ChlPaK!pB1nXP@k@+1o_mKjB_NJ?N+&C1?K09FJw5yzFA_#Z3 zf1*DOUa=qSjQHt)yuixj9((+`91h&iML(km7~8mr?)TIXT68?-)eXKSdtPmarP<3- zH?azLt!N=jJJnHNZw>3UJ_%>&KB1FF+Of%ND&TY_;a3|?lEL?fv4>bo%DHB2=ybs=OMHK@kL%zN#jw`9BZ+``Y?GZUTuBHS_6(wQkf)g;m^BTQZ=|(p05`#%eo6vPk z85aGy#ZC%Y3^FCY`0>>c4lFg~1#PLNGC9qpq4@*>>1_@;;QC zhT=zzfeE$=XjZiaf9G4`NdFr~ZpvcFc^)YcH?xLIRXG-w?vf7Osm8&qA?GlhI7yNC9TKf15)^Gt}XP3-@&Nvn`D~$O)`b+ z;w~K)TXhLMYAq zv=csA1tR~X2mUza43kVYgXO9!D&OA<6EB!B^UkfpY@1R15H(p?=`az#bgaZ5zm%b) zRSWq4axty+EXsFT(C(?z$z18*^ud8zdd|0&C4pm1zU)}?xJVeC^b zw^Gyk917=NK`i9lVwQtwf&L zTjq1YPBct4`*r_BA+fBbwEoBRkANh~|cK`*S={ATJ{l><3-+|oG%CHwRo#eI&-_o@`{!92_IH z8l!BiQ9j{0S#fS5mVZ~^xXVjWLx+b4CI7I?V~RoM4A7nW455c3AM$kUbP+k{QC$J}b2iO#blFnYNoX`gqW z4RSYu)5ctn?B81a2PbJ7H)p;c^c83Ce2v{V3W!(OTzp||K~Ag{Kvx#WI#{9w=80cn za~Zd16IVfz*WX$FZaz8sGZ@bvy^ou0$AaX0C;nGy&atd)h)Vk1Fu7GxIAr~gp8LTk zZ16s2%FD^T0{I)D)hJ7=^7n%7q8RQ!_B7SAGz%wTLsI_G(G{=U( z#WgR%J|`X;L<{J&wh|_~s0(toM4^3(HN7qsh<2S(puJcI#j9F3?1JnM*k*Wt=9&<+w)y+oQtgFtpymlb(TzVlg<21(}&SJGgbD@%Xig%dp zC@LC`hgXc4b<DS2;FW2!?iSC5z+_z(8>g4R7wFQu?zX zbw1~mcixJY(`bTG&WGuJ-Sk7#4UC<}vFk@JLf!NIAigq`ez^Y#)5&D!mRGPq^y>ty zJz&T2fr9B;r2y1C@{(C_Mhg1Q_K-y@fRV~P2SI=P5mxBw+#t^99t z+c5?VylmkY?<)D(S`CUh&P3NtYlfVY%$d!Z~1Vc#7=8 z8|b@g5#Eb8V3)s&#M^uR!`MqsV3??1`SN8O@mbsZJ!qbw;qC+bvbt)-cxE zAK*w`5UK|ghe89bhEu+ocjTx{pxx|R%68ZTqqs<+rMF@ZW*%Lz5+ z{if@-&I2nV0v2-;=*n|BMCJ|;Hmnq*Jh*}8{co9Chey1zuPzWxZf2O0W`hT&`E$Ie zlk}gI7Zgd?Lel+%;Cd&EW|FxEh8C z-05_K4KQ)n5LH#!h7WQb!Nw&U&FAhWCLtkk!2dCXN}T6dH)nB2Zx=}%R)Vea=CsSg zmQ1g=1y#pO?1Zi3IEKu5y8dSXJ_$cUGouoT=SMxb++Trb>)vC5P7;p1{!X?#IpY_X zGh{>WOYE1O8)wYgRo}a4rcRMKUmK%hS{H**kOcxLNu>;=-*D=@&UatS6>P*9G zdZRwvph0PnB83!b(11#H_OnhIN^?RY6zNY?h(v}c%_$T`Dxwe?C>r*&wn_hiy}nu)qFMRdb1p!xXy?9+FfMH z&RCQwm7>cHdAGs#YWBRtE?WiJ(+LtCQ@G&RVBH0~d!9&_TMJjNcf99PgcK~hlv zvJ#Ew=~06;2~vC6k#wu`Spj*TyYdZSb>CwuI8lwUn@+RRxmU^3vJJ3*K$dL2tIDX= zR@1$|K7#r0^%#6!7VT=+l2U~@l)ZnJxW{xbmOaMk+vtex&l_Qf+h}f$%3QSnHWAGa 
zj)7aA30Ub%neldeFqqE-P4hWQYz_I(pZ-%?GtnR3tubb&CpOVNnMa8K^)T41D_?O; z=Lrp3cLJ1jV{ygYbao@()6+M23a1y{gQ9|SWZAA|V4FP?a~xw~)t|NW^6~+}pN>ut zhjdahWgcyeoQhrcJ$NDTGO^<4PJWKPRM}k(L=0wwf7Hba`->7-F8`0}eR9P#k@s}r z$Gh~Ck^|G_$WfIlAd4;4h|X&xn%`B(NT2fq)zV1T%TgaYcn`FMRt8r0m(%G*gUsnQ zCVYk>o%|I{z%`{MwD5fdE+{c#bcOEt#Z`&t%5aR2O*{E8W)sV8_*XIV<}Njskw>%Q~ zWE4vpU@8P@MB#n`{7s#POq2k;CugH(YazR1d^#>2zK&ymEQhn(zLK{$m(eq&znE}S zYhtu23AQ-@qcxlU!`CZ}>0N^)tom6>#V%A)$6ha-shCGbf*zpb@4r+gp^F^jIVWjV z>v-RV54$!%3$l%Q?^^-yw3?p^-1BcV#PtNT?N=*Xo|%aA-7drSi)UckENd_*5rtcJ zNysPFSe<-j#?(@je2q@PYT0+-xXA+pWonrxz0G9s;At9J>`%{ZK8NP}rxJt3YGm6E zjtpC@#6A1Wacp9Sh5dm`*xRd5x9f1kRoVbP@XUnk(>2K7z687#Y6H{K^kGeJJ>9Be z1Jn1vArEhL!M|gB@%E`Fcy!4R@;c5QYa*Ax{_2kMmr)j={YnTw%g+-zKX=2D zLyPg;Pfa@Z(d&w>?dxGwxjn`OU92ehaU0$kHsicx2Lbfg!Lhtl=j9=q` zR?p7i?(k&Nxpg|9rCG&z3+CXu%C{IL)<*U8=aCfNty6nnKva#oVDY0$vgEcOB$V#O z8m@=zey0lq-#^p0Ka;TUd@gNTt&dmkJCPS6V>!jCW8u413GZQX!}*K95bKd>cGE#I zn1ACnTf(d&Ki4&}J_%0DwR2)@p5IoOR5?<9{=_?SEB+RW?O%lr9p6AE(jfChVj^=`*dc$ zC^>aBg2-tZZ+>)gD!>ddOR<5j_F*lNMoHXfBN@FhJ45GDy8%$zJoc zrb;_P=ubm0qO7$AU)-)Dfo<0a9bQfU&00)*BXuz)Kmk*lc4F$?3aZ?Amj)OM(b=ql zq#av|p@tSBkX7*$ ze^riS9ld@r>5ZZE&M6h)8j&?{%q^7uzOV#!oS)M`dw;5vFx@h8lmh5|_$qiXJRWsi zDrl;=1$?sL`z*l&^!W)>@aYqP$s7jWb+zE$QAI?n=RIAqrkAZyI>IXPJ}H$ayb1BG zGwok%0TXMIVaeStvOLld+3;s%JfBgDICh$?ZF)==)<2@76Xv6iNDNuEb1qeCyv-Ol z9wg24Vri#o6?ZzubvG*I%S|f z^|hsBk}nhU-JG7k7KizdKQP)$%X8L`6j6W9DL1@M0LM zm;h1fH%XUOJ_)Z+gXfC~L@HYdMSZ?O+3SNs_ZjnG%bHxF_ZnGlQSe_dQ0pf;fyLlt zDkGdKKM3|iCYUX^Pq<0cTv$%R1zL*tp|^LLuxpYpZ05QB_J7AxE-zG=WV8Z;_#FF= zJKKfF=Sv7Br;Bka9_qpk-B*dy?ok-lkPGMjq{Fq|xx%oP4B<>wcSx!~53eqUk}Hd4 zg+{u4pj>hkKIyL!CU?5Smm%Jblbk8snH?_lU8#b0i7~=;z0ELqKtw2ZX^qga+fw-K z^9;FF)4LSyCt-Tg!iMg4r}W@}OAqCJ7#<88SO z67IAzej(|+x0=U2Kfv>^o$tt5@w#(JTczdhZ9mCT$am+`LbNt!qu4HNYQ=+u7}`ddZ0o~OPT9{ikk zwu&*n>JxExx+=_K@5A{|iIy`?^WD9SO}rCx8tneF4HKu9;^Xw^RCJXXcX(DPv_4+L zGt$M-hDfp^t0Yp%=in*$Br=}#{IW&Ko%uAV=osudriT9BtAIKOfQI^8 zoUN;leI2<>Xq+MYW7Qw3v4!_%xAUFQh;2-N-7v1-n8;_5YiUsU8CYPHg8O=tF|(uu z$JA!Pj@_QHdcCgj(0y?(*{P7KiV%v>A`$|x@abzSUis_DZL1)rc$8%`XHNVum+>f#n3w3)RQCi86ZQg|{6=;MN1 z{4}4XGM{CHJ3pnZA+LC02-YbmJbKEHWxHAUyC-NQ0JNsb9HFM%KG8^q* zwBvH}ni0%M0rfR!*=cFADvQO<+uPbj3K9a4}HR*pB) zQfboe0PGjLOP8K*B<5LZ7$(N&_YSu3PMb_HYi(slO{-)~9`&-FkII;Lx;!stfepB< z)qvc&JS(?kIy~Ct0bYMcbB|i}p!j_gTeWe3{iR_}x>u{Sd%Ordu{l!Fd^?ZS#bw|e ziSe+eNt|x#RKRoH-ej|f3`S)8Fn6Ei(LWD0v8V7BGj5g`ZFrc1`omTHUPpvWG2>uw zdp&88wZkaYTZHXS#vL=3()d&1T&Ha}8RrHV%j_l=JZoCBX%ct*F^5J(#h0||pcgKo*K6(rGSM4Dof;Rm5 z*PI*bv4!%ZmT;&;f}Edyjp+YA1Xe;17@TTPYJ!lK>9s3p^f`>BC$P|if_)xt=r8oBy2T!W^3Uun=`QYiH@-OO$XH7O%pih&%vso z0I)lh%4c;%=}DJ3T#MdtQ`4LA6Bm#jnVYbq@H*bxQ-blz>!51G44M>v7^CYBqUf0Y zpe}iXoT(b92vWRF4OWH-Eti(SfT;=h@2wvu?9dRrOPmkYB0p*F`~e{jx><|LkS>} zQV-HO;=(E6&tUE0dpPGL&-wIBV}E5!3(vm$gzFy{lGWj|pr~{nqC<5l${xUrME4h}z?U2|de}sXWVKf#Tfc(KZ;@a&bdDj5o{s_Xk_Orm z_Z4fOIP#vA|A4nYg0x2{y6;?03;Fxj{43W{1kJgxuP#up0aJ48B%gCEiU6h2ofSh96w^qKBkwHvZi_c(y@C1zlAN}E1u>HdhUC?jbpO6U;qCa-C^f|A zFNKF8E$V7T+U*PMqgp;EH+?p}?Cga(H=mH&w?+se(iJab4zd5{_mE=$*R<{TUAQeW zfwk#vM6HG^wAk7Yrd0+BRvKR+7mIXpUW6gGi^~XCPdI_A9)6+DHQMF-Q!mr}D<|;a z*q1ckqL<9<_dwUi6q@omo!QSB^Zz#rK7Nqq*;5@A{omK4t#u@{2G!8+H@th!u$FAw zI2jb9M`8JC5t@1CFMXe74KMya<>xYSVb zmrS0ylvLag0o&b+h95_&iM!)v)+&D>pp zN46HyDdhu<-V#|F;kguxGM5MkmK~#fN(!ES=wzd}9YWt7*+lunWBPkb0NG+3jCKdw zNX4mNAQKqR^IjRIsiz&zan&?&A&UkYEO}~Ui_`ZVM3ZOgfF9$x7pE?Ode(eu)i4uQ zthqwp-V4T%oZa}`Wf6D#b3F-NGeA@HTyXit32^$CIJYqF5gJuhk;!~+e%#1b+PrKZ zd@_?{A1p0|y|vT0obWj0_zpqT)g|~l&m4O4he+ku_X3Z)S6CB~jn?Tii7+D#)B<-9 z*?q&1zG^!WiA;i=V>zV##d`9Qb0yUh*Ko|x9$3it25!xc7WSl^g${nEpkgxyQ$nKP 