Replies: 1 comment
Thanks for your contribution and insights. I think the current way to express this is through service-level objectives in data contracts, so you can specify the non-functional guarantees that you can give for your data product.
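For illustration, service-level objectives might be declared in a data contract roughly like this. This is a minimal sketch only; the field names and values are hypothetical and not taken from any particular data contract specification:

```python
# A hypothetical data contract with service-level objectives attached.
# All field names and values are illustrative assumptions, not a normative schema.
data_contract = {
    "dataProduct": "customer-knowledge-graph",
    "serviceLevels": {
        "availability": "99.9%",    # uptime guarantee for the query endpoint
        "freshness": "PT15M",       # data at most 15 minutes behind the source
        "latency": "p95 < 500ms",   # query response-time objective
        "retention": "P2Y",         # how long history is kept
    },
}

# A consumer (or a CI check) can then verify that the guarantees it needs
# are actually declared before depending on the product.
required = {"availability", "freshness", "latency"}
declared = set(data_contract["serviceLevels"])
missing = required - declared
```

The point is that the guarantees become machine-readable, so tooling can check a consumer's requirements against the producer's declared SLOs.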
Hi,
First of all, thanks for the great work on https://www.datamesh-architecture.com/ ! Really cool stuff :-)
As you call for contributions on the tech stacks used, and I slightly disagree with the points made about "Ingest", I thought I would describe the tech stack I use and what I've done with it.
Essentially, I am working on a data product that is an integrated Knowledge Graph of medical data (something like CKG). The product is implemented using Stardog, which is a data fabric solution. To scale and keep the information up to date, most of the graph is virtualised over the data coming from the other domains feeding it.

As such, I do not agree with your point that data mesh is not fit for low-latency data requirements. The product I built with Stardog is only as slow as the slowest data domain feeding it, and it can be quite low-latency if every domain serves its data in a low-latency way. So I would say that data mesh is compatible with low-latency use cases if data virtualisation is considered. On this point, I was surprised not to see any mention of data virtualisation in the part about data ingest, but I understand from the FAQ question on fabric versus mesh that this may have been a deliberate choice to keep the two concepts apart.
All in all, I think it's possible to leverage data fabrics inside a data mesh as a low-latency approach to providing the integrated semantic layer (the part about entities and events). The result is a data product that provides integrated, ready-to-use data.
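The latency argument above (a virtualised product is only as slow as its slowest source domain) can be sketched as a concurrent query fan-out. The domain names and latencies below are made up for illustration; a real virtualisation layer like Stardog's would push queries down to the sources rather than call them like this:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source domains feeding the virtualised graph.
# Each sleep simulates that domain's query latency in seconds.
DOMAINS = {"labs": 0.05, "imaging": 0.10, "admissions": 0.20}

def query_domain(name, latency):
    time.sleep(latency)
    return (name, latency)

def federated_query():
    """Fan one virtualised query out to all source domains concurrently."""
    with ThreadPoolExecutor(max_workers=len(DOMAINS)) as pool:
        futures = [pool.submit(query_domain, n, l) for n, l in DOMAINS.items()]
        return [f.result() for f in futures]

start = time.perf_counter()
results = federated_query()
elapsed = time.perf_counter() - start
# elapsed tracks the slowest domain (~0.20s), not the sum of all
# latencies (~0.35s): the product is as slow as its slowest source.
```

So if every feeding domain serves data with low latency, the virtualised product inherits that property, which is the compatibility claim made above.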