This repository was archived by the owner on Nov 7, 2024. It is now read-only.

fix: typo spelling grammar #930

Open
wants to merge 1 commit into base: master
2 changes: 1 addition & 1 deletion docs/block_sparse_tutorial.rst
@@ -232,7 +232,7 @@ notable difference to numpy arrays. For example, while reshaping of
`a1` into a shape `(19,2,10,21)` would be possible if `a1` was a
dense `numpy.ndarray`, it is no longer possible for
`BlockSparseTensor` because we don't have the neccessary information
to split up `i1` into two seperate legs. If you try anyway, we'll
to split up `i1` into two separate legs. If you try anyway, we'll
raise a `ValueError`:

.. code-block:: python3
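The dense-array counterpart of the reshape discussed in the hunk above can be sketched with plain numpy (this is not the `BlockSparseTensor` API, just the dense case the tutorial contrasts against):

```python
import numpy as np

# A dense array whose first axis fuses the sizes 19 and 2.
a1 = np.zeros((38, 10, 21))

# For a dense ndarray, splitting the fused leg is always possible:
b = a1.reshape(19, 2, 10, 21)
assert b.shape == (19, 2, 10, 21)

# A BlockSparseTensor cannot do this split: the charge information
# needed to divide the fused leg `i1` into two legs is gone, so it
# raises a ValueError instead.
```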
12 changes: 6 additions & 6 deletions docs/copy_contract.rst
@@ -13,11 +13,11 @@ Using a contraction algorithm is very easy.

.. figure:: _static/contractors.png

We have several contraction algorithms avaliable as of April 2020.
We have several contraction algorithms available as of April 2020.

- `optimal`: Find the true optimal path via brute force. It can be extremly slow for more than ~10 nodes.
- `optimal`: Find the true optimal path via brute force. It can be extremely slow for more than ~10 nodes.
- `greedy`: Continuously do the cheapest contraction possible. Works well as a default for networks with many nodes.
- `branch`: Brute search, but only check the top `n` possiblities per step.
- `branch`: Brute search, but only check the top `n` possibilities per step.
- `auto`: Automatically decide which of the above 3 algorithms to use.

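numpy's `einsum` exposes analogous path-finding strategies (`'optimal'` and `'greedy'`), which illustrate the trade-off described in the list above; this is a numpy sketch, not the `tn.contractors` API:

```python
import numpy as np

a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
c = np.random.rand(8, 8)

# Brute-force optimal path (fine for a handful of tensors, slow for many).
r1 = np.einsum('ij,jk,kl->il', a, b, c, optimize='optimal')

# Greedy path: perform the cheapest pairwise contraction at each step.
r2 = np.einsum('ij,jk,kl->il', a, b, c, optimize='greedy')

assert np.allclose(r1, r2)
assert np.allclose(r1, a @ b @ c)
```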
When contracting a network with more than one dangling leg, you must specify the output order of the dangling legs.
@@ -31,11 +31,11 @@ When contracting a network with more than one dangling leg, you must specify the

.. figure:: _static/dangling_contract.png

If you do not care about the final output order (for instance, if you are only doing a partial network contraction and the intermidiate order doesn't matter), then you can set `ignore_edge_order=True` and you won't need to supply an `output_edge_order`.
If you do not care about the final output order (for instance, if you are only doing a partial network contraction and the intermediate order doesn't matter), then you can set `ignore_edge_order=True` and you won't need to supply an `output_edge_order`.

Contracting subgraph
---------------------
There are many instances when you want to contract only a subset of your network. Perhaps you know good intermidiate states, but not how to get there efficiently. You can still very easily get a good contraction order by using the subnetwork contraction feature of the `contractors`.
There are many instances when you want to contract only a subset of your network. Perhaps you know good intermediate states, but not how to get there efficiently. You can still very easily get a good contraction order by using the subnetwork contraction feature of the `contractors`.

.. code-block:: python3

@@ -54,7 +54,7 @@ When building tensor networks, it's very common to want to use a single tensorne

.. code-block:: python3

# Calcualte the inner product of two MPS/Product state networks.
# Calculate the inner product of two MPS/Product state networks.
def inner_product(x: List[tn.Node], y: List[tn.Node]) -> tn.Node:
for a, b in zip(x, y):
# Assume all of the dangling edges are mapped to the name "dangling"
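For product states, the overlap in the snippet above factors into per-site overlaps; here is a self-contained numpy sketch of the same idea (a hypothetical helper, not the `tn.Node` version shown in the diff):

```python
import numpy as np
from typing import List

def inner_product(x: List[np.ndarray], y: List[np.ndarray]) -> complex:
    # Overlap of two product states = product of per-site overlaps.
    result = 1.0 + 0j
    for a, b in zip(x, y):
        result *= np.vdot(a, b)  # vdot conjugates its first argument
    return result

x = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
assert inner_product(x, x) == 1.0
```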
2 changes: 1 addition & 1 deletion tensornetwork/backends/symmetric/symmetric_backend.py
@@ -488,7 +488,7 @@ def gmres(self,#pylint: disable=arguments-differ
Reshaping to and from a matrix problem is handled internally.

The numpy backend version of GMRES is simply an interface to
`scipy.sparse.linalg.gmres`, itself an interace to ARPACK.
`scipy.sparse.linalg.gmres`, itself an interface to ARPACK.
SciPy 1.1.0 or newer (May 05 2018) is required.

Args:
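The `scipy.sparse.linalg.gmres` interface mentioned in the docstring above can be exercised directly; a minimal sketch on a small dense system:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Small, well-conditioned system A x = b; GMRES converges in at most
# two iterations here since the Krylov space has dimension 2.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x, info = gmres(A, b)  # info == 0 signals convergence
assert info == 0
assert np.allclose(A @ x, b)
```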
2 changes: 1 addition & 1 deletion tensornetwork/block_sparse/linalg.py
@@ -320,7 +320,7 @@ def qr(matrix: BlockSparseTensor, mode: Text = 'reduced') -> Any:
if mode not in ('reduced', 'complete', 'raw', 'r'):
raise ValueError('unknown value {} for input `mode`'.format(mode))
if mode == 'raw':
raise NotImplementedError('mode `raw` currenntly not supported')
raise NotImplementedError('mode `raw` currently not supported')

flat_charges = matrix._charges
flat_flows = matrix._flows
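The `mode` values checked above mirror `numpy.linalg.qr`; a quick numpy illustration of the supported modes (`raw` being the one the block-sparse version rejects):

```python
import numpy as np

m = np.random.rand(4, 3)

q, r = np.linalg.qr(m, mode='reduced')   # q: (4, 3), r: (3, 3)
assert q.shape == (4, 3) and r.shape == (3, 3)

q_full, r_full = np.linalg.qr(m, mode='complete')  # q: (4, 4)
assert q_full.shape == (4, 4) and r_full.shape == (4, 3)

r_only = np.linalg.qr(m, mode='r')       # only the R factor
assert r_only.shape == (3, 3)

assert np.allclose(q @ r, m)
```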
4 changes: 2 additions & 2 deletions tensornetwork/linalg/initialization.py
@@ -111,7 +111,7 @@ def ones_like(tensor: Union[Any],
backend: Optional[Union[Text, AbstractBackend]] = None) -> Tensor:
"""Return a Tensor shape full of ones the same shape as input
Args:
tensor : Object to recieve shape from
tensor : Object to receive shape from
dtype (optional) : dtype of object
backend(optional): The backend or its name."""
if backend is None:
@@ -139,7 +139,7 @@ def zeros_like(tensor: Union[Any],
AbstractBackend]] = None) -> Tensor:
"""Return a Tensor shape full of zeros the same shape as input
Args:
tensor : Object to recieve shape from
tensor : Object to receive shape from
dtype (optional) : dtype of object
backend(optional): The backend or its name."""
if backend is None:
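The numpy counterparts of the two functions documented above behave the same way with respect to shape and dtype; a small sketch:

```python
import numpy as np

t = np.arange(6, dtype=np.float64).reshape(2, 3)

ones = np.ones_like(t)               # shape and dtype taken from t
zeros = np.zeros_like(t, dtype=int)  # dtype can be overridden

assert ones.shape == t.shape and ones.dtype == t.dtype
assert zeros.shape == t.shape and zeros.dtype == np.dtype(int)
```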
2 changes: 1 addition & 1 deletion tensornetwork/network_components.py
@@ -2078,7 +2078,7 @@ def contract_between(
np.mean(node1_output_axes) > np.mean(node2_output_axes)):
node1, node2 = node2, node1
axes1, axes2 = axes2, axes1
# Sorting the indicies improves performance.
# Sorting the indices improves performance.
ind_sort = [axes1.index(l) for l in sorted(axes1)]
axes1 = [axes1[i] for i in ind_sort]
axes2 = [axes2[i] for i in ind_sort]
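The paired index sort in the hunk above can be checked in isolation; the same reordering is applied to both axis lists, so each `axes1[i]`/`axes2[i]` contraction pair stays aligned (the axis values below are hypothetical):

```python
# Hypothetical pairings: axes1[i] on node1 contracts with axes2[i] on node2.
axes1 = [2, 0, 1]
axes2 = [5, 3, 4]

# Reorder both lists so that axes1 becomes sorted, keeping pairs aligned.
ind_sort = [axes1.index(l) for l in sorted(axes1)]
axes1 = [axes1[i] for i in ind_sort]
axes2 = [axes2[i] for i in ind_sort]

assert axes1 == [0, 1, 2]
assert axes2 == [3, 4, 5]
```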
2 changes: 1 addition & 1 deletion tensornetwork/network_operations.py
@@ -800,7 +800,7 @@ def switch_backend(nodes: Iterable[AbstractNode], new_backend: Text) -> None:
nodes: iterable of nodes
new_backend (str): The new backend.
dtype (datatype): The dtype of the backend. If `None`,
a defautl dtype according to config.py will be chosen.
a default dtype according to config.py will be chosen.

Returns:
None
2 changes: 1 addition & 1 deletion tensornetwork/tn_keras/dense.py
@@ -65,7 +65,7 @@ def __init__(self,
**kwargs) -> None:

# Allow specification of input_dim instead of input_shape,
# for compatability with Keras layers that support this
# for compatibility with Keras layers that support this
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)

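The `input_dim` → `input_shape` translation in the hunk above is a plain kwargs rewrite and can be sketched on its own (`normalize_kwargs` is a hypothetical name for illustration):

```python
def normalize_kwargs(**kwargs):
    # Keras-style compatibility shim: accept input_dim as a
    # shorthand for a one-dimensional input_shape.
    if 'input_shape' not in kwargs and 'input_dim' in kwargs:
        kwargs['input_shape'] = (kwargs.pop('input_dim'),)
    return kwargs

assert normalize_kwargs(input_dim=5) == {'input_shape': (5,)}
assert normalize_kwargs(input_shape=(2, 3)) == {'input_shape': (2, 3)}
```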
2 changes: 1 addition & 1 deletion tensornetwork/tn_keras/mpo.py
@@ -65,7 +65,7 @@ def __init__(self,
**kwargs) -> None:

# Allow specification of input_dim instead of input_shape,
# for compatability with Keras layers that support this
# for compatibility with Keras layers that support this
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
