
harry-muzart.github.io

My Website: https://harry-muzart.github.io/

Enable cookies and Flash.

Display and functionality will differ slightly between browsers: try Google Chrome, MS Edge, Mozilla Firefox, Internet Explorer, Opera, Safari, or Silk.

Also see: https://www.bioneurotech.com/iwa-dml

This is a project I am currently working on (Jan - Aug 2018).

I build Deep Convolutional Neural Networks, train them with large datasets, and use them for object recognition in scenes containing people and other items. The visual field is screened with sets of 2-dim matrices: first, simple edges are detected, then deeper layers combine these into more complex shapes. The objects are classified into their respective labels, and the positional x,y information for each label, as well as the percentage confidence, is calculated and output.

The packaged libraries, module dependencies, code, and datasets have been adapted from: https://github.com/thtrieu/darkflow (using Python 3.6; Anaconda Cmd/Cloud; TensorFlow 1; OpenCV 3; numpy; cython/darknet; YOLO cfg weights; ImageNet; .json).
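
As a rough, minimal sketch of how detections are obtained through darkflow from Python (the cfg, weights, and image paths are placeholders, not my actual files):

```python
# Minimal sketch: running YOLO object detection through darkflow (https://github.com/thtrieu/darkflow).
# The cfg, weights, and image paths below are placeholders for whichever model has been downloaded.
import cv2
from darkflow.net.build import TFNet

options = {
    "model": "cfg/yolo.cfg",       # network definition
    "load": "bin/yolo.weights",    # pre-trained weights
    "threshold": 0.4,              # minimum confidence to report a detection
}
tfnet = TFNet(options)

img = cv2.imread("sample_scene.jpg")       # image loaded as a stack of 2-dim matrices (one per channel)
results = tfnet.return_predict(img)        # list of dicts: label, confidence, topleft, bottomright

for det in results:
    print(det["label"], det["confidence"], det["topleft"], det["bottomright"])
```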

I then relate those results to neuroimaging data, based on the research literature.

I will also be using ML on neuroimaging data itself - with nilearn (Python) and 3D Slicer (Python/C++), using .nii files from openfmri.com.

This can then be used to generate models of Biological Neural Networks.
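
A minimal sketch of the kind of nilearn decoding this involves, assuming a downloaded 4-D functional .nii file and a per-volume condition-label file (both filenames below are placeholders, not a specific dataset):

```python
# Sketch: mask a 4-D fMRI .nii file into a (volumes x voxels) matrix and decode condition labels.
# "subject_bold.nii.gz" and "condition_labels.txt" are placeholders, not a specific dataset.
import numpy as np
from nilearn.input_data import NiftiMasker
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

masker = NiftiMasker(mask_strategy="epi", standardize=True, smoothing_fwhm=6)
X = masker.fit_transform("subject_bold.nii.gz")     # shape: (n_volumes, n_voxels)

y = np.loadtxt("condition_labels.txt", dtype=str)   # one condition label per volume

clf = LinearSVC()
scores = cross_val_score(clf, X, y, cv=5)
print("Mean decoding accuracy:", scores.mean())
```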

#1 --- Deep Convolutional Neural Networks for object recognition, as described above. Packaged libraries, module dependencies, code, and datasets have also been adapted from https://github.com/pjreddie/darknet (using Python 3.6; Anaconda Cmd/Cloud; TensorFlow 1; OpenCV 3; numpy; cython/darknet; YOLO cfg weights; ImageNet; .json).

This will also include a user-facing, interactive, web-based machine-learning application that uses deep convolutional neural networks to classify fMRI & dtMRI hippocampus-neocortex data into connectivity-strength groups with inference, as a computational model for clinical decisions. The source code will be made open-source on GitHub, with push/pull commits welcome.
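
A hypothetical sketch of the planned connectivity-strength classification; the ROI time-series files, group labels, and parameters below are placeholders, not real clinical data, and the exact features will depend on the final parcellation:

```python
# Sketch: turn per-subject hippocampus/neocortex ROI time-series into connectivity features,
# then classify subjects into groups. All file names and labels are hypothetical placeholders.
import glob
import numpy as np
from nilearn.connectome import ConnectivityMeasure
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

files = sorted(glob.glob("rois/subj*_timeseries.npy"))       # each array: (n_timepoints, n_regions)
timeseries = [np.load(f) for f in files]
groups = np.loadtxt("rois/group_labels.txt", dtype=str)       # one clinical group label per subject

conn = ConnectivityMeasure(kind="correlation", vectorize=True)
X = conn.fit_transform(timeseries)        # one vectorised connectivity matrix per subject

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, groups, cv=5)
print("Cross-validated group-classification accuracy:", scores.mean())
```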

Code source from:

- https://github.com/tensorflow/tensorflow
- https://github.com/conda/conda
- https://github.com/python/cpython
- https://github.com/numpy/numpy
- https://github.com/opencv/opencv
- https://github.com/pjreddie/darknet
- https://github.com/thtrieu/darkflow
- https://github.com/llSourcell/YOLO_Object_Detection
- https://github.com/fizyr/keras-retinanet
- https://github.com/facebookresearch/detectron

ImageNet Classification with Deep Convolutional Neural Networks. Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (University of Toronto, Canada; Google DeepMind). Advances in Neural Information Processing Systems 25 (NIPS 2012). http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

Attention Is All You Need. Vaswani et al. (Google Brain; Google Research), NIPS 2017. https://arxiv.org/pdf/1706.03762.pdf. Replaces the complex recurrent or convolutional neural networks that include an encoder and a decoder with an attention-only architecture.

Very Deep Convolutional Networks for Large-Scale Image Recognition. Karen Simonyan & Andrew Zisserman (Department of Engineering Science, University of Oxford, UK). ICLR 2015. https://arxiv.org/pdf/1409.1556.pdf

You Only Look Once: Unified, Real-Time Object Detection. Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi (University of Washington, WA, USA; Allen Institute for AI, USA; Facebook AI Research). CVPR 2016. https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Redmon_You_Only_Look_CVPR_2016_paper.pdf

Browser-based ConvNetJS https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html

Large-Scale Video Classification with Convolutional Neural Networks. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, Li Fei-Fei (Stanford, CA, USA; Google Brain). IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014). https://ai.google/research/pubs/pub42455

Focal Loss for Dense Object Detection. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollár (Facebook AI Research). arXiv:1708.02002 [cs.CV], Feb 2018.

==========================

I also used the Google Cloud ML Vision API for emotional-face detection, PoseNet (via TF.js) for head-limb relational movement detection, and DeepMind Lab for agent-based self-learning 3D paradigms.

- https://cloud.google.com/products/machine-learning/
- https://github.com/tensorflow/tfjs-models/tree/master/posenet
- https://github.com/deepmind/lab
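
A minimal sketch of querying the Cloud Vision API for face-emotion likelihoods from Python (assumes the google-cloud-vision client library and credentials are already set up; the image path is a placeholder):

```python
# Sketch: face detection with emotion likelihoods via the Google Cloud Vision API.
# Requires GOOGLE_APPLICATION_CREDENTIALS to point at a service-account key.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("face_photo.jpg", "rb") as f:        # placeholder image path
    content = f.read()

image = vision.Image(content=content)          # vision.types.Image in older client-library versions
response = client.face_detection(image=image)

for face in response.face_annotations:
    print("joy:", face.joy_likelihood,
          "sorrow:", face.sorrow_likelihood,
          "anger:", face.anger_likelihood,
          "surprise:", face.surprise_likelihood)
```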

NB: x

===========================

The general-purpose AI can be extended horizontally in its functions: for example, I will also be setting up an internet-browser-based, user-input-driven NLP (natural language processing) system for sentiment analysis, that is, the emotionality of speech content. (Using Python, PythonAnywhere, Flask, SQLite, PythonML, PHP, MS Access, SKLearn, Html5/Css, G-WForms, Tensorflow.js.)

See http://harrymuzart02.pythonanywhere.com/

#2 --- Neural Networks for sentiment-analysis classification of linguistic textual info, as described above. There will be a system for linguistic sentiment analysis similar to https://github.com/rasbt/python-machine-learning-book/tree/master/code/ch09 and http://raschkas.pythonanywhere.com/results. My PythonAnywhere scripts will be linked to Plesk, PHP/SQL scripts, and the Flask micro-framework system. Google Cloud Platform TPUs (tensor processing units) will be used to train and test on neuroimaging data.
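
A minimal sketch of the planned sentiment back end, in the spirit of the Raschka ch08/ch09 example: a scikit-learn text classifier behind a small Flask endpoint (the training sentences below are toy placeholders, not the real corpus):

```python
# Sketch: TF-IDF + logistic-regression sentiment classifier served from a tiny Flask app.
# The four training sentences are toy placeholders; a real deployment would load a model
# trained on a proper review/tweet corpus (e.g. pickled, as in Raschka's ch09 example).
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I love this", "absolutely wonderful", "I hate this", "utterly terrible"]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    text = request.form.get("text", "")
    label = model.predict([text])[0]
    confidence = float(model.predict_proba([text]).max())
    return jsonify({"sentiment": label, "confidence": round(confidence, 3)})

if __name__ == "__main__":
    app.run()
```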

https://github.com/adeshpande3/LSTM-Sentiment-Analysis https://github.com/thisandagain/sentiment https://github.com/vivekn/sentiment

https://github.com/rasbt/python-machine-learning-book/tree/master/code/ch07 https://github.com/rasbt/python-machine-learning-book/tree/master/code/ch08
https://github.com/rasbt/python-machine-learning-book/tree/master/code/ch09 Sebastian Raschka, 2015

https://github.com/python https://github.com/pallets/flask https://github.com/pythonanywhere https://github.com/php https://github.com/scikit-learn/scikit-learn https://github.com/tensorflow/tfjs

Keywords: image and sentiment-analysis neural networks; embeddings; recurrent and convolutional layers; filters; pooling; lateral chaining of nodes in hidden layers; supervised learning.

Google Cloud Machine Learning Platform

Related work by Sejnowski, Dolan, Sutton, et al.

Chomsky Linguistic Hierarchy Neural Net

Intelligent Opinion Mining and Sentiment Analysis Using Artificial Neural Networks. Conference paper, Nov 2015, International Conference on Neural Information Processing. Fig.: proposed intelligent system for opinion mining and sentiment analysis.

Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 19-24 June 2011, Portland, Oregon, USA.

SemEval-2017 Task 4: Sentiment Analysis in Twitter. S. Rosenthal, N. Farra, P. Nakov. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), 2017. aclweb.org. Describes the fifth year of the Sentiment Analysis in Twitter task; Task 4 reruns the subtasks of SemEval-2016 Task 4, including identifying the overall sentiment of a tweet and sentiment towards a topic.

Opinion Mining and Sentiment Analysis. B. Pang, L. Lee. Foundations and Trends in Information Retrieval, 2008. nowpublishers.com. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, finding out what other people think has become a central part of information-gathering behaviour.

Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis. T. Wilson, J. Wiebe, P. Hoffmann. Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), 2005. dl.acm.org. Presents a phrase-level approach that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions.

A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. B. Pang, L. Lee. Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), 2004. dl.acm.org / arxiv.org. Sentiment analysis seeks to identify the viewpoint(s) underlying a text span, e.g. classifying a movie review as "thumbs up" or "thumbs down"; the paper proposes a machine-learning method that applies text-categorization techniques to the subjective portions of the document.

=========================

Below are some examples (see https://harry-muzart.github.io/ for my other visual demos):

[{"label": "person", "confidence": 0.92, "topleft": {"x": 82, "y": 30}, "bottomright": {"x": 733, "y": 525}}, {"label": "tie", "confidence": 0.91, "topleft": {"x": 331, "y": 411}, "bottomright": {"x": 421, "y": 525}}] [{"label": "person", "confidence": 0.92, "topleft": {"x": 82, "y": 30}, "bottomright": {"x": 733, "y": 525}}, {"label": "tie", "confidence": 0.91, "topleft": {"x": 331, "y": 411}, "bottomright": {"x": 421, "y": 525}}] [{"label": "person", "confidence": 0.92, "topleft": {"x": 82, "y": 30}, "bottomright": {"x": 733, "y": 525}}, {"label": "tie", "confidence": 0.91, "topleft": {"x": 331, "y": 411}, "bottomright": {"x": 421, "y": 525}}]

<video src = "video - Copy (6).avi"controls>

<video src = "video-brain-23e.mp4"controls>