This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

Commit d2cd873: Merge pull request #4179 from NervanaSystems/cyphers/m28 ("Cyphers/m28")
Authored by diyessi on Jan 14, 2020
2 parents: ac27139 + bbe4e08
Showing 81 changed files with 4,031 additions and 774 deletions.
7 changes: 2 additions & 5 deletions CMakeLists.txt
@@ -169,7 +169,7 @@ option(NGRAPH_USE_LEGACY_MKLDNN "Use legacy MKLDNN" FALSE)
option(NGRAPH_MLIR_ENABLE "Control the building of MLIR backend" FALSE)
option(NGRAPH_INTERPRETER_ENABLE "Control the building of the INTERPRETER backend" TRUE)
option(NGRAPH_NOP_ENABLE "Control the building of the NOP backend" TRUE)
-option(NGRAPH_GENERIC_CPU_ENABLE "Enable build nGraph for generic CPU backend" FALSE)
+option(NGRAPH_GENERIC_CPU_ENABLE "Enable build nGraph for generic CPU backend" TRUE)
option(NGRAPH_DEBUG_ENABLE "Enable output for NGRAPH_DEBUG statements" FALSE)
option(NGRAPH_DEPRECATED_ENABLE "Enable compiler deprecation pragmas for deprecated APIs (recommended only for development use)" FALSE)
option(NGRAPH_ONNX_IMPORT_ENABLE "Enable ONNX importer" FALSE)
@@ -199,10 +199,7 @@ if (NGRAPH_STATIC_LIB_ENABLE)
set(NGRAPH_EXPORT_TARGETS_ENABLE OFF)
endif()

-if (NGRAPH_CPU_ENABLE
-    AND
-    (NOT NGRAPH_GENERIC_CPU_ENABLE)
-   )
+if (NGRAPH_CPU_ENABLE)
set(NGRAPH_INTEL_CPU_ONLY_ENABLE ON)
endif()

5 changes: 3 additions & 2 deletions doc/sphinx/source/conf.py
@@ -73,11 +73,11 @@
# built documents.
#
# The short X.Y version.
-version = '0.27'
+version = '0.28'

# The Documentation full version, including alpha/beta/rc tags. Some features
# available in the latest code will not necessarily be documented first
-release = '0.27.1'
+release = '0.28.0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -143,6 +143,7 @@
]
}

+html_last_updated_fmt= ''

# -- Options for HTMLHelp output ------------------------------------------

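For context on the two version strings above: in Sphinx's conf.py, "version" is the short X.Y form behind the |version| substitution used in the release notes below, while "release" is the full tag including alpha/beta/rc suffixes. An annotated restatement of the new values; the comment on html_last_updated_fmt follows Sphinx's documented behavior for an empty string:

    # Sphinx conf.py settings as of this commit
    version = '0.28'    # short X.Y form, rendered by the |version| substitution
    release = '0.28.0'  # full version string, rendered by |release|

    # An empty format string turns on the "Last updated on" page footer with
    # Sphinx's default date format (equivalent to '%b %d, %Y').
    html_last_updated_fmt = ''
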
2 changes: 1 addition & 1 deletion doc/sphinx/source/ops/constant.rst
@@ -43,6 +43,6 @@ Outputs
C++ Interface
=============

-.. doxygenclass:: ngraph::op::Constant
+.. doxygenclass:: ngraph::op::v0::Constant
:project: ngraph
:members:
8 changes: 5 additions & 3 deletions doc/sphinx/source/ops/parameter.rst
@@ -4,7 +4,7 @@
Parameter
#########

-.. code-block: cpp
+.. code-block:: cpp
Parameter // A function parameter.
@@ -38,7 +38,9 @@ Outputs
| ``output`` | ``element_type`` | ``shape`` |
+------------+------------------+------------+

-A ``Parameter`` produces the value of the tensor passed to the function in the position of the parameter in the function's arguments. The passed tensor must have the element type and shape specified by the parameter.
+A ``Parameter`` produces the value of the tensor passed to the function
+in the position of the parameter in the function's arguments. The passed
+tensor must have the element type and shape specified by the parameter.

Backprop
========
@@ -51,6 +53,6 @@ Backprop
C++ Interface
=============

-.. doxygenclass:: ngraph::op::Parameter
+.. doxygenclass:: ngraph::op::v0::Parameter
:project: ngraph
:members:
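
As a concrete illustration of the Parameter semantics described above, here is a minimal sketch using the bindings in this repository's python/ directory. The ng.parameter and ng.runtime names follow the 0.28-era Python API; treat the exact signatures as assumptions if your version differs.

    import numpy as np
    import ngraph as ng

    # Two parameters: placeholders for the tensors supplied at call time.
    # The passed tensors must match this element type and shape.
    a = ng.parameter([2, 2], name='A', dtype=np.float32)
    b = ng.parameter([2, 2], name='B', dtype=np.float32)
    model = a + b

    # Positional arguments bind to parameters in declaration order.
    runtime = ng.runtime(backend_name='INTERPRETER')
    computation = runtime.computation(model, a, b)
    result = computation(np.ones((2, 2), dtype=np.float32),
                         np.full((2, 2), 2.0, dtype=np.float32))
    print(result)  # [[3. 3.]
                   #  [3. 3.]]
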
2 changes: 1 addition & 1 deletion doc/sphinx/source/ops/result.rst
@@ -45,6 +45,6 @@ Mathematical Definition
C++ Interface
=============

-.. doxygenclass:: ngraph::op::Result
+.. doxygenclass:: ngraph::op::v0::Result
:project: ngraph
:members:
22 changes: 14 additions & 8 deletions doc/sphinx/source/project/release-notes.rst
@@ -21,19 +21,15 @@ We are pleased to announce the release of version |version|.
Core updates for |version|
--------------------------

-+ New ops
-+ Provenance improvements from 0.25.1
-+ More dynamic shape ops
-+ More informative errors



Latest documentation updates
----------------------------

-+ Additional details on quantization
-+ Index updates
-+ API updates
++ Dynamic Shapes and APIs
++ Provenance
++ Add linkages and overview for quantization APIs
++ New ngraph.ai themed illustrations

.. important:: Pre-releases (``-rc-0.*``) have newer features, and are less stable.

@@ -42,6 +38,16 @@ Latest documentation updates
Changelog on Previous Releases
==============================

+0.27.1
+
++ Fixes broken serializer for Sum and Product
++ New ops
++ Provenance improvements from 0.25.1
++ More dynamic shape ops
++ More informative errors
++ Additional details on quantization
++ Index updates
++ API updates
+ All ops support ``Output<Node>`` arguments
+ Additional ops
+ ONNX handling unknown domains
6 changes: 5 additions & 1 deletion doc/sphinx/source/training/index.rst
@@ -5,10 +5,14 @@
Distributed Training
####################

+.. important:: Distributed training for CPU backend is not supported. Distributed
+   training support is provided only with the Intel® Nervana™ Neural Network Processor
+   for Training (NNP-T).

.. toctree::
:maxdepth: 1

overview.rst
data_ingest.rst



27 changes: 2 additions & 25 deletions doc/sphinx/source/training/overview.rst
@@ -1,32 +1,9 @@
:orphan:

-.. training/overview.rst:
+.. _overview:

Basic Concepts
==============

-.. important:: Distributed training is not officially supported as of version
-   |version|; however, some configuration options have worked for nGraph
-   devices in testing environments.
-
-Data scientists with locally-scalable rack or cloud-based resources will likely
-find it worthwhile to experiment with different modes or variations of
-distributed training. Deployments using nGraph Library with supported backends
-can be configured to train with data parallelism and will soon work with model
-parallelism. Distributing workloads is increasingly important, as more data and
-bigger models mean the ability to :doc:`../core/constructing-graphs/distribute-train`
-work with larger and larger datasets, or to work with models having many layers
-that aren't designed to fit to a single device.
-
-Distributed training with data parallelism splits the data and each worker
-node has the same model; during each iteration, the gradients are aggregated
-across all workers with an op that performs "allreduce", and applied to update
-the weights.
-
-Using multiple machines helps to scale and speed up deep learning. With large
-mini-batch training, one could train ResNet-50 with Imagenet-1k data to the
-*Top 5* classifier in minutes using thousands of CPU nodes. See
-`arxiv.org/abs/1709.05011`_.
-
-.. _arxiv.org/abs/1709.05011: https://arxiv.org/format/1709.05011
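
The removed overview compressed the key idea of data-parallel training into one sentence: every worker holds the same model, and per-iteration gradients are combined with an "allreduce" before the weight update. A toy numpy sketch of that update, not nGraph's actual AllReduce op:

    import numpy as np

    def allreduce_mean(per_worker_grads):
        """Reduce per-worker gradients to their mean; every worker applies
        the same result, so all model replicas stay identical."""
        return np.mean(per_worker_grads, axis=0)

    weights = np.zeros(4)                           # one replica per worker
    grads = [np.random.randn(4) for _ in range(3)]  # gradients from 3 workers
    weights -= 0.1 * allreduce_mean(grads)          # synchronized SGD step
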
11 changes: 10 additions & 1 deletion python/setup.py
@@ -26,6 +26,7 @@
PYNGRAPH_ROOT_DIR = os.path.abspath(os.path.dirname(__file__))
NGRAPH_DEFAULT_INSTALL_DIR = os.environ.get('HOME')
NGRAPH_ONNX_IMPORT_ENABLE = os.environ.get('NGRAPH_ONNX_IMPORT_ENABLE')
+NGRAPH_PYTHON_DEBUG = os.environ.get('NGRAPH_PYTHON_DEBUG')


def find_ngraph_dist_dir():
@@ -367,6 +368,13 @@ def _add_extra_compile_arg(self, flag, compile_args):
return True
return False

+    def add_debug_or_release_flags(self):
+        """Return compiler flags for Release and Debug build types."""
+        if NGRAPH_PYTHON_DEBUG in ['TRUE', 'ON', True]:
+            return ['-O0', '-g']
+        else:
+            return ['-O2', '-D_FORTIFY_SOURCE=2']

def build_extensions(self):
"""Build extension providing extra compiler flags."""
if sys.platform == 'win32':
@@ -388,7 +396,8 @@ def build_extensions(self):
add_platform_specific_link_args(ext.extra_link_args)

ext.extra_compile_args += ['-Wformat', '-Wformat-security']
-ext.extra_compile_args += ['-O2', '-D_FORTIFY_SOURCE=2']
+ext.extra_compile_args += self.add_debug_or_release_flags()

if sys.platform == 'darwin':
ext.extra_compile_args += ['-stdlib=libc++']
build_ext.build_extensions(self)
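
With this change a debug build of the Python extension can be requested through the environment instead of by editing setup.py. A usage sketch; the accepted values come from the diff above, and build_ext --inplace is standard setuptools:

    # Shell usage (assumed to run from the repository's python/ directory):
    #   NGRAPH_PYTHON_DEBUG=TRUE python setup.py build_ext --inplace

    # Standalone illustration of the flag selection:
    import os

    os.environ['NGRAPH_PYTHON_DEBUG'] = 'TRUE'  # simulate the switch
    debug = os.environ.get('NGRAPH_PYTHON_DEBUG') in ['TRUE', 'ON']
    flags = ['-O0', '-g'] if debug else ['-O2', '-D_FORTIFY_SOURCE=2']
    print(flags)  # ['-O0', '-g'] -> unoptimized build with debug symbols
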
2 changes: 1 addition & 1 deletion src/contrib/mlir/CMakeLists.txt
@@ -25,7 +25,7 @@ add_subdirectory(tools/ngraph-opt)
set(SRC
backend/cpu/cpu_backend.cpp
backend/pass/affine_lowerer.cpp
-backend/pass/memory_optimization.cpp
+backend/analysis/memory_analysis.cpp
core/compiler.cpp
core/ngraph_dialect/dialect.cpp
core/ngraph_dialect/type.cpp
(The remaining changed files in this commit are not shown here.)
