
Conversation

@kenjimarshall
Contributor

The primary change adjusts the memory workflow structure to work with the parametrization resname='MSSNetwork'. To achieve this, some small changes were made to the memory_capacity API, and the return type of MemristiveReservoir.simulate was slightly altered to align more closely with EchoStateNetwork.simulate. Some small stability changes were also made in MSSNetwork.

Sample output from running the workflow is shown below.

[screenshot of sample workflow output]

A_II = A[np.ix_(self._I, self._I)]
# print(matrix_rank(A_II, hermitian=check_symmetric(A_II)))
- A_II_inv = inv(A_II)
+ A_II_inv = pinv(A_II)
Contributor Author

I was getting errors from trying to invert a singular matrix (possibly caused by multiple all-zero rows), so I switched to the pseudoinverse to avoid the problem.
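
For context, a minimal standalone sketch (with a made-up singular matrix, not the actual A_II) of why inv fails here while pinv does not:

import numpy as np
from numpy.linalg import inv, pinv, LinAlgError

# Hypothetical example: an all-zero row makes the matrix singular.
A_II = np.array([[2.0, 1.0],
                 [0.0, 0.0]])

try:
    inv(A_II)              # raises LinAlgError: Singular matrix
except LinAlgError as err:
    print("inv failed:", err)

A_II_inv = pinv(A_II)      # Moore-Penrose pseudoinverse always exists
print(A_II_inv)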

Comment on lines 448 to 451
V = np.zeros((len(Vi), self._n_nodes))
V[:, self._I] = np.asarray(Vi)
V[:, self._E] = Vext
return V
Contributor Author

EchoStateNetwork returns a matrix of size (n_times, n_nodes), while MemristiveReservoir was returning (n_times, n_internal_nodes). This breaks the indexing for extracting readout nodes, for example, because those indices were created relative to all existing nodes, not just the internal ones.
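
A small sketch of the shape change (toy sizes and index sets, not the real network attributes): because the returned matrix now spans all nodes, global node indices can be used directly for the readout.

import numpy as np

n_times, n_nodes = 4, 5          # toy sizes for illustration
internal = np.array([0, 2, 3])   # hypothetical internal-node indices
external = np.array([1, 4])      # hypothetical external-node indices

Vi = np.random.rand(n_times, len(internal))    # internal voltages
Vext = np.random.rand(n_times, len(external))  # external (driven) voltages

# Assemble the full (n_times, n_nodes) matrix, as the new return value does.
V = np.zeros((n_times, n_nodes))
V[:, internal] = Vi
V[:, external] = Vext

output_nodes = np.array([2, 4])  # global indices work without remapping
readout_states = V[:, output_nodes]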

Comment on lines +623 to +626
self._Ga = mask(self, np.divide(self.Woff, self.NMSS,
where=self.NMSS != 0)) # constant
self._Gb = mask(self, np.divide(self.Won, self.NMSS,
where=self.NMSS != 0)) # constant
Contributor Author

I was getting divide-by-zero errors and infinities in the resulting matrices because of this. I added a condition to avoid dividing by zero here and in a few other places.
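
A minimal sketch of the where= guard with a toy NMSS containing zeros (the out= buffer is my addition here, so that skipped entries are well-defined zeros rather than uninitialized values):

import numpy as np

Woff = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
NMSS = np.array([[5.0, 0.0],     # zero entries would otherwise give inf/nan
                 [0.0, 2.0]])

# Division only happens where NMSS != 0; elsewhere the output keeps the
# value supplied via out= (zeros here).
Ga = np.divide(Woff, NMSS, out=np.zeros_like(Woff), where=NMSS != 0)
print(Ga)   # [[0.2 0. ] [0.  2. ]]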


- def memory_capacity(conn, input_nodes, output_nodes, readout_modules=None,
+ def memory_capacity(conn, input_nodes, output_nodes, rsn_mapping=None,
Contributor Author

I changed this to take in the raw mapping and index it with the output nodes later. This is because the output nodes sometimes get adjusted inside this method when the ground node is selected randomly.
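
Roughly the new flow inside memory_capacity (variable names are illustrative, not the exact implementation): the module labels are resolved only after the ground nodes have been removed from the readout set.

import numpy as np

# Hypothetical per-node module labels for every node in the network.
rsn_mapping = np.array(['VIS', 'VIS', 'SM', 'SM', 'DMN', 'DMN'])
readout_nodes = np.array([1, 2, 3, 5])
gr_nodes = np.array([3])                       # ground node chosen at random

# Drop ground nodes first, then index the raw mapping with what remains.
readout_nodes = np.setdiff1d(readout_nodes, gr_nodes)
readout_modules = rsn_mapping[readout_nodes]   # ['VIS', 'SM', 'DMN']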

readout_nodes = np.setdiff1d(readout_nodes, gr_nodes)

# second dimension should be along the input nodes
x = np.tile(x, (1, len(input_nodes)))
Contributor Author

MemristiveReservoir.simulate expects an input of size (n_examples, n_external_nodes). This is different from EchoStateNetwork, which only requires that the input's second dimension match the first dimension of w_in.
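
A sketch of what the tiling above accomplishes (toy signal and node count, not the workflow's actual data): the same 1-D signal is copied across every input node so the array matches the (n_examples, n_external_nodes) shape that MemristiveReservoir.simulate expects.

import numpy as np

x = np.random.rand(10, 1)          # toy input signal: (n_examples, 1)
input_nodes = np.array([4, 7, 9])  # hypothetical external (input) nodes

# Repeat the signal along the second axis, one copy per input node.
x_tiled = np.tile(x, (1, len(input_nodes)))
print(x_tiled.shape)               # (10, 3)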


# simulate reservoir states; select only output nodes
- rs = network.simulate(ext_input=x)[:,output_nodes]
+ rs = network.simulate(x)[:, output_nodes]
Contributor Author

In MSSNetwork this input is called Vext, not ext_input. I'm not sure whether it's better to just avoid passing the argument as an explicit parameter-value pair, or to change the MSSNetwork API to match EchoStateNetwork; I chose the former for now. Also, there's currently no way to choose 'forward' or 'backward'. Let me know if that's important to add.
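
For reference, the two call styles being weighed, written against a toy stand-in class rather than the real MSSNetwork:

import numpy as np

class MSSNetworkStub:
    """Toy stand-in whose simulate() uses the current parameter name, Vext."""
    def simulate(self, Vext):
        return np.zeros((len(Vext), 8))

network = MSSNetworkStub()
x = np.random.rand(10, 3)
output_nodes = np.array([2, 5])

# Chosen approach: pass the input positionally, so the differing parameter
# names (Vext here vs. ext_input in EchoStateNetwork) never appear at the
# call site.
rs = network.simulate(x)[:, output_nodes]

# Alternative (not taken): rename the parameter in MSSNetwork to ext_input,
# which would allow network.simulate(ext_input=x) for both classes.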
