memory leak in executeAsync? #151

Open
@shamoqianting

Description

"dependencies": {
"@tensorflow/tfjs-backend-cpu": "^3.5.0",
"@tensorflow/tfjs-backend-webgl": "^3.5.0",
"@tensorflow/tfjs-converter": "^3.5.0",
"@tensorflow/tfjs-core": "^3.5.0",
"fetch-wechat": "^0.0.3",
"regenerator-runtime": "^0.14.1"
},
"plugins": {
"tfjsPlugin": {
"version": "0.2.0",
"provider": "wx6afed118d9e81df9"
}
},
WeChat version: 8.0.50
WeChat base API version: 3.5.0
WeChat IDE version: 1.06.2405020 win32-x64

I am trying to use a model with dynamic axes so it can adapt to different input sizes, such as [1, 3, 255, 255] or [1, 3, 127, 127]. So I have to use executeAsync for model inference, because the 'predict' method fails with the following error:
nano_tracker.js:36 Preheat failed: Error: This execution contains the node 'StatefulPartitionedCall/assert_equal_5/Assert/AssertGuard/branch_executed/_114', which has the dynamic op 'Merge'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [Identity]
    at e.compile (index.js:17)
    at e.execute (index.js:17)
    at e.execute (index.js:17)
    at e.predict (index.js:17)
    at ModelBuilder._callee3$ (model_builder.js:54)
    at s (regeneratorRuntime.js:1)
    at Generator.<anonymous> (regeneratorRuntime.js:1)
    at Generator.next (regeneratorRuntime.js:1)
    at asyncGeneratorStep (asyncToGenerator.js:1)
    at c (asyncToGenerator.js:1)
(env: Windows,mp,1.06.2405020; lib: 3.5.3)
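
For context, the inference call looks roughly like this; the model URL, variable names, and preprocessing are simplified placeholders rather than my exact source:

```js
// Rough shape of my inference code (the model URL, variable names, and
// preprocessing are placeholders, not the exact source). The input size
// changes between calls, e.g. [1, 3, 255, 255] or [1, 3, 127, 127].
const tf = require('@tensorflow/tfjs-core');
require('@tensorflow/tfjs-backend-webgl'); // registers the WebGL backend
const tfconv = require('@tensorflow/tfjs-converter');

async function runOnce(model, pixels, size) {
  // Build an NCHW input of the requested dynamic size from a flat Float32Array.
  const input = tf.tensor4d(pixels, [1, 3, size, size]);
  // predict() throws on the dynamic 'Merge' op, so executeAsync() is used instead.
  const output = await model.executeAsync(input);
  input.dispose();
  return output;
}

async function preheat() {
  const model = await tfconv.loadGraphModel('https://example.com/nano_tracker/model.json');
  // Warm up with the template size; per-frame calls then use the search size.
  tf.dispose(await runOnce(model, new Float32Array(3 * 127 * 127), 127));
  return model;
}
```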

Using executeAsync produces correct inference results, but memory keeps increasing with every invocation until it eventually fails.
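
Is explicitly disposing the input and the executeAsync() outputs supposed to be enough? A minimal sketch of what I mean, assuming the outputs are what leaks (tf.tidy() cannot wrap an async call, so disposal has to be explicit):

```js
const tf = require('@tensorflow/tfjs-core');

async function inferAndClean(model, input) {
  const output = await model.executeAsync(input);
  try {
    // executeAsync() can return a single tensor or an array of tensors.
    const tensors = Array.isArray(output) ? output : [output];
    // Read the values out before disposing anything.
    return await Promise.all(tensors.map((t) => t.data()));
  } finally {
    tf.dispose(output); // accepts a tensor or an array of tensors
    tf.dispose(input);
  }
}
```

Logging tf.memory().numTensors before and after each call should show whether anything still accumulates even with explicit disposal.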

Does anyone know how to fix this? Thank you so much.
