The full sync at service startup does invoke e_pipeline, but when a MongoDB document is modified at runtime, only the specified m_collectionname is synced and e_pipeline is never executed.

P.S. In my custom e_pipeline I make several copies of specific attributes of m_collectionname and regex-replace them into the new document, yet during automatic sync none of those fields get updated. Logs from bulkDataAndPip:
-- bulk at startup:

```json
[ { "index": { "_index": "corpus", "_type": "contents", "_id": "ImQs6IdHp" } },
  { "title": "doc2019-03-24-2", "comments": "11111" } ]
```

-- bulk on update:

```json
[ { "update": { "_index": "corpus", "_type": "contents", "_id": "ImQs6IdHp" } },
  { "doc": { "title": "doc2019-03-24-2", "comments": "22222" } } ]
```
Workaround: modify getUpdateMasterDocBulk:

```javascript
return new Promise(function (resolve, reject) {
  var bulk = [];
  // Push an "index" action with the full document instead of a partial
  // "update", so the Elasticsearch document is replaced rather than merged.
  bulk.push(
    {
      index: {
        _index: watcher.Content.elasticsearch.e_index,
        _type: watcher.Content.elasticsearch.e_type,
        _id: id
      }
    },
    opDoc
  );
  return resolve(bulk);
});
```
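Since the original complaint is that e_pipeline is skipped on updates, a variant of the workaround could also re-run the pipeline before building the bulk. The sketch below is only an illustration: it assumes e_pipeline is reachable as a synchronous transform on `watcher.Content.elasticsearch`, which may not match the library's actual wiring.

```javascript
// Sketch only: assumes e_pipeline is a synchronous doc -> doc transform
// hanging off watcher.Content.elasticsearch. The real library may differ.
function getUpdateMasterDocBulkWithPipeline(watcher, id, opDoc) {
  return new Promise(function (resolve) {
    var es = watcher.Content.elasticsearch;
    // Re-run the custom pipeline so derived fields are rebuilt on every
    // update, not only during the initial full sync.
    var doc = typeof es.e_pipeline === 'function' ? es.e_pipeline(opDoc) : opDoc;
    var bulk = [];
    bulk.push(
      { index: { _index: es.e_index, _type: es.e_type, _id: id } },
      doc
    );
    resolve(bulk);
  });
}
```

This keeps the "index" (full replace) semantics of the workaround above, but guarantees the pipeline-derived fields are present in the replacement document.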
This method exists mainly to handle MongoDB atomic updates: the data returned in the oplog is not necessarily the complete document. Changing it as above turns every update into a full replacement of the Elasticsearch document. If in your scenario each update's oplog entry does carry the full document, the change above is fine; otherwise you will find fields missing from the documents in Elasticsearch.
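The trade-off the maintainer describes can be shown with a minimal in-memory model of the two bulk actions (these helpers are hypothetical, not the Elasticsearch client API):

```javascript
// Minimal in-memory model of Elasticsearch's two bulk actions:
// "update" merges the partial doc into the stored one, "index" replaces it.
function applyUpdate(stored, partial) {
  return Object.assign({}, stored, partial); // field-level merge
}
function applyIndex(stored, full) {
  return full; // whole-document replacement
}

var stored = { title: 'doc2019-03-24-2', comments: '11111', derived: 'from e_pipeline' };
var oplogPartial = { comments: '22222' }; // the oplog may carry only changed fields

// "update" keeps fields absent from the partial doc (like "derived").
var merged = applyUpdate(stored, oplogPartial);

// "index" with a partial doc drops every field not in it: the data loss
// ("丢节点") the maintainer warns about.
var replaced = applyIndex(stored, oplogPartial);
```

So the workaround is only safe when the oplog entry always contains the full document.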