npm @chat21/[email protected]
available on:
- debug version
- debug version
- queues are always "durable". "DURABLE_ENABLED" option removed.
- renamed CHAT21OBSERVER_CACHE_ENABLED, CHAT21OBSERVER_REDIS_HOST, CHAT21OBSERVER_REDIS_PORT, CHAT21OBSERVER_REDIS_PASSWORD
- presence fix
- debug version
- presence is back!
- DURABLE_ENABLED fixed
- DISABLED ch.prefetch(prefetch_messages);
- presence fully disabled
- amqplib updated v0.8.0 => v0.10.3
- persistent: true on publish()
- Refactored testing (webhooks tests separated from conversations tests)
- BUG FIX: Webhooks now use the PREFETCH_MESSAGES setting from .env
- Introduced DURABLE_ENABLED: true|false in .env
- persistent: false
- Improved performance management. To scale better, you can now:
- Create an instance to only process the webhooks queue:
ACTIVE_QUEUES=none PRESENCE_ENABLED=false WEBHOOK_ENABLED=true node chatservermq.js
- Create an instance to only process "messages" queues (there are many):
ACTIVE_QUEUES=messages PRESENCE_ENABLED=false WEBHOOK_ENABLED=false node chatservermq.js
- Create an instance to only process 'persist' queue:
ACTIVE_QUEUES=persist PRESENCE_ENABLED=false WEBHOOK_ENABLED=false node chatservermq.js
- Create an instance to only process presence:
ACTIVE_QUEUES=none PRESENCE_ENABLED=true WEBHOOK_ENABLED=false node chatservermq.js
- Or run everything in a single process:
ACTIVE_QUEUES=messages,persist PRESENCE_ENABLED=true WEBHOOK_ENABLED=true node chatservermq.js
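The startup commands above can be summarized as a small sketch of how a process might decide which consumers to start from its environment. Function and consumer names here are illustrative, not the actual observer code:

```javascript
// Sketch: which consumers does this instance start, given the env options
// shown above? (ACTIVE_QUEUES is a comma-separated list or "none".)
function consumersToStart(env) {
  const queues = (env.ACTIVE_QUEUES || "")
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s && s !== "none");
  const consumers = [...queues];
  if (env.PRESENCE_ENABLED === "true") consumers.push("presence");
  if (env.WEBHOOK_ENABLED === "true") consumers.push("webhooks");
  return consumers;
}

// Example: a webhooks-only instance
console.log(consumersToStart({
  ACTIVE_QUEUES: "none",
  PRESENCE_ENABLED: "false",
  WEBHOOK_ENABLED: "true"
})); // → [ 'webhooks' ]
```

Splitting consumers this way lets each concern (message fan-out, persistence, webhooks, presence) scale independently.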
- durable: false on every queue
- persistent: true in publish AGAIN
- noAck: true
- persistent: false in publish NOW ONLINE
- persistent: false in publish
- Adds log info for Prefetch messages
- Adds support to disable Presence
- Log defaults to INFO level
- presence webhook and observer.webhooks...presence introduced
- updated chat21client.js => v0.1.12.4 with 'presence' publish on.connect()
- updated chat21client.js => v0.1.12.4 with ImHere()
- RESTORED if (savedMessage.attributes && savedMessage.attributes.updateconversation == false) {update_conversation = false}. See v0.2.26
- always setting/forcing creation of index { 'timelineOf': 1, 'conversWith': 1 }, { unique: 1 } on "conversations" collection
- removed env options: UNIQUE_CONVERSATIONS_INDEX and UNIQUE_AND_DROP_DUPS_CONVERSATIONS_INDEX
- introduced UNIQUE_AND_DROP_DUPS_CONVERSATIONS_INDEX=1 in .env to enable the "unique" index, forcing removal of duplicates. Please follow the instructions 'enable the "unique" index' at the end of CHANGELOG.md to correctly enable this feature.
- introduced unique index on conversations collection to fix the duplication of conversations. Added UNIQUE_CONVERSATIONS_INDEX=1 in .env to enable the "unique" index. Please follow the instructions 'enable the "unique" index' at the end of CHANGELOG.md to correctly enable this feature.
- removed if (savedMessage.attributes && savedMessage.attributes.updateconversation == false) {update_conversation = false}. Now conversations are always updated. Same modification also on chat21client.js
- all negative acks removed. All callback(false) => callback(true) to avoid queue blocks
- minor fixes
- updated dev dependency: "@chat21/chat21-http-server": "^0.2.15",
- start_all.sh logging from ERROR to INFO
- logging fixes
- version updated: "version": "0.2.24"
- log updates
- Docker image updated to node 16
- amqplib ^0.7.1 => ^0.8.0
- node 12 => 16.17.1
- added process.exit(0) on "[AMQP] channel error". It lets the server silently restart on blocking AMQP errors.
- ack in sendMessageToGroupMembers() sent immediately.
- added group_id as member of inlineGroup.
- Added inlineGroup management. You can create a group on the fly just by sending a message with the "group.members" attribute.
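As an illustration of the inlineGroup entry above, a hypothetical message payload might look like this. The exact field names (beyond the "group.members" attribute mentioned in the changelog) are assumptions, not taken from the actual observer code:

```javascript
// Hypothetical outgoing message that creates an "inline group" on the fly.
// The observer reads the "group" attribute and delivers the message to
// every listed member, without a prior explicit group-creation call.
const message = {
  text: "Hello team!",
  attributes: {
    group: {
      name: "Support team",   // assumed display-name field
      members: {              // members as a uid map (assumption)
        "user-a": 1,
        "user-b": 1,
        "user-c": 1
      }
    }
  }
};

console.log(Object.keys(message.attributes.group.members).length); // → 3
```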
- "ack" management improvements
- Deployed new version
- Added check on "routingKey invalid length (> 255). Publish canceled."
- added logs for better debug "routingKey" error
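The routing-key guard described in the two entries above can be sketched as follows (AMQP 0-9-1 limits routing keys to 255 bytes). The function name is illustrative:

```javascript
// Sketch of the guard: refuse to publish when the AMQP routing key
// exceeds 255 bytes, logging the error instead of crashing the channel.
function canPublish(routingKey) {
  if (Buffer.byteLength(routingKey, "utf8") > 255) {
    console.error("routingKey invalid length (> 255). Publish canceled.",
      routingKey.length);
    return false;
  }
  return true;
}

console.log(canPublish("apps.app1.outgoing")); // → true
console.log(canPublish("x".repeat(300)));      // → false
```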
- archive-conversation payload now publishes on MQTT the full conversation data, not only the conversation patch
- added test #17 - conversation/archivedConversation detail
- minor fixes on testing: added assert.fail() in test #16
- added test #16, verifying that only the "message-delivered" webhook event receives history notifications, and that "message-sent" NEVER receives history message notifications
- added support for the new outgoing path apps.appId.outgoing
- Webhooks: moved process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0" in .env
- Webhooks: the process.env.WEBHOOK_ENDPOINTS separator "," now supports spaces
- Testing: added configuration in .env. See 'example.env' for a complete list of test properties (starting with TEST_)
- Testing: bug fix
- replaced uuidv4 with uuid
- removed process.exit(1) from "close" event in observer's AMQP connection handlers
- refactored testing
- added test 14, 15 for webhooks
- added multiple webhooks support
- added selective queues for performance improvements. E.g. start the observer with command: "ACTIVE_QUEUES=messages node chatservermq.js MSG" to only enable "messages" queue.
- removed function: function joinGroup()
- exported logger from observer.js
- bugfix: if (inbox_of === outgoing_message.sender) { became: if (inbox_of === group.uid) { logger.debug("inbox_of === outgoing_message.sender. status=SENT system YES?", inbox_of); outgoing_message.status = MessageConstants.CHAT_MESSAGE_STATUS_CODE.SENT; } // one member, the group itself (the "volatile" member), is chosen for "status=SENT", which drives the "message-sent" webhook. This is achieved by changing the delivered status to SENT on the fly when the message is delivered to the group id, so webhookSentOrDelivered is triggered with "Sent" only once
If "system" sends info messages and it is not a member of the group, webhooks are never called. The "message-sent" webhook is called only once: when, iterating over all the members, the selected one is the group itself. This is because "message-sent" must be called only once per message. The "sender" can't be used, because the sender is not always a group member (e.g. info messages sent by "system" while "system" is not a member of the group).
UNIQUE INDEX FOR CONVERSATIONS
The "Query" to get all the duplicated conversations:
db.getCollection('conversations').aggregate([ { "$group": { "_id": { "timelineOf": "$timelineOf", "conversWith": "$conversWith" }, "uniqueIds": { "$addToSet": "$_id" }, "count": { "$sum": 1 } } }, { "$match": { "count": { "$gt": 1 } } } ])
From uniqueIds, get one of the duplicated ObjectIds.
Delete one of the N duplicates with the following query:
db.getCollection('conversations').deleteOne({ "_id": ObjectId("636e7c11035d0b0599563f87") } )
After you have deleted all the duplicated conversations, you can run the server with the following option in .env:
UNIQUE_CONVERSATIONS_INDEX=1
The unique index is created and you will no longer have duplicated conversations.
UNIQUE_AND_DROP_DUPS_CONVERSATIONS_INDEX=1
This option will also automatically remove duplicates (with the simple rule: keep the first, delete all the others).
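The "keep the first, delete all the others" rule can be sketched as a pure function over plain conversation objects, for illustration only (the real work is done by the unique index on { timelineOf, conversWith }):

```javascript
// Given conversations in order, return the _ids of every duplicate of an
// already-seen { timelineOf, conversWith } pair — i.e. keep the first,
// delete all the others.
function duplicateIdsToDelete(conversations) {
  const seen = new Set();
  const toDelete = [];
  for (const conv of conversations) {
    const key = conv.timelineOf + "\u0000" + conv.conversWith;
    if (seen.has(key)) toDelete.push(conv._id); // not the first: delete it
    else seen.add(key);                         // first occurrence: keep it
  }
  return toDelete;
}

const ids = duplicateIdsToDelete([
  { _id: "a", timelineOf: "u1", conversWith: "u2" },
  { _id: "b", timelineOf: "u1", conversWith: "u2" }, // duplicate of "a"
  { _id: "c", timelineOf: "u1", conversWith: "u3" }
]);
console.log(ids); // → [ 'b' ]
```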
Well done.