Decomposing the System #5
-
(1) This warrants a wider discussion, certainly. My opinion, and I'm willing to be convinced otherwise, is that if we do not start with hubs we're headed down a path that will be difficult to get off of. Changing course a few months later as we add in more diverse use cases may leave "future us" holding a lot of tech debt. I'd love to see us build a minimal hub we can build upon in terms of functionality and decentralization (running on different platforms, service providers, languages) from the get-go to avoid later headache. At the same time, I see risk in coupling protocols (tbDEX) to specific technology implementations (hubs), and it's a useful exercise to discuss how they can be sufficiently decoupled, and how each can be strengthened by the use of the other.

(2) brings up a really good point that I'd like to get a bit deeper into (either here or another discussion). What is decentralization? Where do we want to be on the spectrum of decentralization for each component of our infrastructure/protocols? Can we start less decentralized and go more decentralized? How can we map/visualize/agree on our decentralization, the pros/cons, and areas we may change as we progress?

(3) I'm super interested in this one. I'm not certain, but as soon as we have a wide array of use cases for credentials and their accompanying infrastructure, we can move towards building a more robust system and testing our assumptions.
-
I'm going to answer this by starting fully centralized and then introducing means to achieve decentralization incrementally. Keep in mind that decentralization is not binary; it's a spectrum. How much decentralization can we achieve without hubs? Here's what I imagine to be fully centralized: this is very close to the concept of a "marketplace" where:
OK, let's start to decentralize. What's the main centralizing aspect here? It's that Block/TBD is the necessary centralized intermediary. We need to remove that and ensure that:
How can we achieve this? DIDs 💡 A DID can be looked at as a decentralized routing table. Concretely:

```json
{
  "service": [{
    "id": "pfi1#service1",
    "type": "tbDEX/PFI",
    "serviceEndpoint": "https://tbdex.pfi1.com"
  }]
}
```

Every PFI has a swagger endpoint like this, and they're all running the same API. Wallet developers refer to this PFI API documentation to figure out what endpoints to hit and in what order. The same would have to be the case for Issuers: again, all of them have to expose the same API endpoints for wallet developers to know what endpoints to hit. Different API endpoints per Issuer or PFI would be a nightmare for wallet developers; in order to support a new PFI or Issuer, a wallet developer would have to learn and implement support for that PFI's or Issuer's REST API.
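To make this concrete, here's a minimal sketch of how a wallet might use a DID document as a routing table. The `resolveDid` function is a hypothetical stand-in for whatever DID resolver is used; only the `tbDEX/PFI` service type comes from the example above.

```typescript
// Minimal sketch: resolve a PFI's DID and pull out its tbDEX endpoint.
// `resolveDid` is a hypothetical stand-in for a real DID resolver.

interface DidService {
  id: string;
  type: string;
  serviceEndpoint: string;
}

interface DidDocument {
  id: string;
  service?: DidService[];
}

declare function resolveDid(did: string): Promise<DidDocument>;

async function getPfiEndpoint(pfiDid: string): Promise<string> {
  const didDocument = await resolveDid(pfiDid);

  // Find the service entry that advertises the common tbDEX PFI API.
  const pfiService = didDocument.service?.find(s => s.type === "tbDEX/PFI");
  if (!pfiService) {
    throw new Error(`${pfiDid} does not advertise a tbDEX/PFI service`);
  }

  // e.g. "https://tbdex.pfi1.com"; the wallet hits the shared API there.
  return pfiService.serviceEndpoint;
}
```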
OK, so we're using DIDs. What does the diagram look like now? Note: I didn't include PFI/Issuer Discovery in this diagram. We're going to assume that discovery happens prior to any tbDEX messages being sent, which I'm considering out-of-band.

OK, that's progress. We've removed the centralized intermediary. Alice can reach PFIs and Issuers directly. But wait... how do PFIs/Issuers respond to Alice? There are two major points to note before we get into potential approaches:

Let's walk through a few potential approaches:

**Polling**

Polling is the simplest approach to get responses back to Alice "asynchronously" (a wallet-side sketch follows the diagrams below). The general idea would be:
Here are some sequence diagrams to illustrate Request/Reply flows:

**Apply / Issue VC Flow**

```mermaid
sequenceDiagram
participant A as Alice
participant W as Wallet
participant I as Issuer
W ->> I: What VCs can I Apply for?
I ->> W: Cred Manifest
W ->> I: Send me the form for VC X
I ->> W: JSON Form
W ->> W: Render Form
A ->> W: Fill out form and submit
W ->> I: Filled out Form
I ->> W: Request ID + JWT
loop every 10s
W ->> I: Do you have a reply for REQ ID?
I ->> W: REPLY || Still Processing
end
```

**ASK / BID Flow**

```mermaid
sequenceDiagram
participant AL as Alice
participant W as Wallet
participant A as PFI A
participant B as PFI B
participant C as PFI C
AL ->> W: Fill out ASK form
W ->> A: ASK
A ->> W: REQ ID + JWT
W ->> B: ASK
B ->> W: REQ ID + JWT
W ->> C: ASK
C ->> W: REQ ID + JWT
loop every 10s
W ->> A: Do you have a reply for REQ ID X?
A ->> W: REPLY || STILL PROCESSING
end
loop every 10s
W ->> B: Do you have a reply for REQ ID X?
B ->> W: REPLY || STILL PROCESSING
end
loop every 10s
W ->> C: Do you have a reply for REQ ID X?
C ->> W: REPLY || STILL PROCESSING
end
```
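To ground the diagrams, here's a minimal sketch of the wallet-side polling loop. The endpoint paths, response shapes, and status codes are illustrative assumptions, not a defined tbDEX API; only the 10-second interval and the REQ ID + JWT handshake come from the flows above.

```typescript
// Hypothetical sketch of the wallet-side ASK + polling flow.
// Endpoint paths and response shapes are assumptions for illustration only.

interface AskReceipt {
  requestId: string; // REQ ID from the diagrams
  jwt: string;       // proves this wallet made the request when polling later
}

async function submitAsk(pfiEndpoint: string, ask: unknown): Promise<AskReceipt> {
  const response = await fetch(`${pfiEndpoint}/asks`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(ask),
  });
  return response.json();
}

async function pollForReply(pfiEndpoint: string, receipt: AskReceipt): Promise<unknown> {
  // "Do you have a reply for REQ ID?" every 10s until we get one.
  while (true) {
    const response = await fetch(`${pfiEndpoint}/replies/${receipt.requestId}`, {
      headers: { Authorization: `Bearer ${receipt.jwt}` },
    });
    if (response.status === 200) {
      return response.json(); // REPLY
    }
    // STILL PROCESSING: wait 10s and ask again.
    await new Promise(resolve => setTimeout(resolve, 10_000));
  }
}
```

The wallet would run one of these loops per PFI it sent the ASK to, which is exactly the fan-out that makes this approach chatty.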
There are two major drawbacks to this approach:
Let's evaluate how decentralized this approach is:
This approach seems to be sufficiently decentralized. Application data adhering to a standards-based format reduces portability friction. Portability friction, however, does not outright prevent someone from accessing the financial utility provided by tbDEX.

I don't think we're willing to give up push notifications given the crucial role they play for UX, so we need to modify this approach in order to support them. A push notification can only be delivered to a mobile device by the application developer. In this case it's the wallet developer. You can read about how push notifications work for iOS specifically, here. The gist is that the app developer can opt to receive a user's device token, which is a unique token specific to a user's device AND the application. Bare minimum, the wallet would need to store a mapping of user DID → device token.

So, this means that PFIs / Issuers will have to send a request to some RESTful API hosted by the wallet developer indicating that a reply is ready. Here's how the flow would look (a sketch of the wallet server's endpoint follows the diagram):
Here's a diagram illustrating the above flow:

```mermaid
sequenceDiagram
participant A as Alice
participant WA as Wallet App
participant WS as Wallet Server
participant PFI
A ->> WA: Fill out Ask and Hit Submit
WA ->> PFI: ASK
PFI ->> WA: Req ID + JWT
PFI ->> PFI: process
PFI ->> WS: Reply ready
WS ->> WA: Push Notification
A ->> WA: Open App
WA ->> PFI: Give me the reply for REQ ID
PFI ->> WA: Reply
WA ->> WA: Render
```
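To make the "Reply ready" step concrete, here's a minimal sketch of the wallet server's notify endpoint. The route, payload shape, and `sendPushNotification` helper are all hypothetical; a real implementation would also need to authenticate the calling PFI, which ties into the notification-bombing concern noted below.

```typescript
// Hypothetical sketch of the wallet server's "reply ready" endpoint, using Express.
// Route, payload shape, and the push helper are illustrative assumptions.

import express from "express";

const app = express();
app.use(express.json());

// Bare minimum state: the mapping of user DID -> device token, populated
// when the wallet app registers the device for push notifications.
const deviceTokens = new Map<string, string>();

// Stand-in for actual APNs/FCM delivery; not a real library call.
declare function sendPushNotification(deviceToken: string, message: string): Promise<void>;

app.post("/notifications", async (req, res) => {
  const { userDid, requestId } = req.body as { userDid: string; requestId: string };

  const deviceToken = deviceTokens.get(userDid);
  if (!deviceToken) {
    res.status(404).json({ error: "unknown user" });
    return;
  }

  // Only a nudge: the app fetches the actual reply from the PFI when opened.
  await sendPushNotification(deviceToken, `A reply is ready for request ${requestId}`);
  res.status(202).end();
});

app.listen(3000);
```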
🎉 We now have push notifications and we also don't have to violently poll anymore. 🎉

Note: This response is already absurdly long, so I'm intentionally leaving out things we'll need to consider like "Wait, can't anyone just notification-bomb any user now?"

Let's evaluate how decentralized this modified approach is in the same way we did earlier:
Alice's reliance on a wallet developer as a means to inform her that a reply is waiting for her opens up room for censorship. A wallet could decide to only inform users when they receive replies from PFIs that they prefer. Moreover, a wallet developer could choose not to inform a user of any replies at all. This, however, is not due to the architecture described; it's the unavoidable cost of supporting push notifications. Any architecture will run into the same potential for censorship. It becomes a trade-off between UX and decentralization. We already addressed the benefits of data adhering to a standards-based format. The point still stands that this is more a quality-of-life matter, not a means of restraint. So, what's lacking here? What do Identity Hubs provide that this approach doesn't?
Orthogonal to tbDEX specifically, Identity Hubs bring with them a whole world of interoperability potential by allowing individuals to directly access, or request access to, any data stored in Alice's Identity Hub. This could be anything from Netflix preferences and music playlists to Alice's home address or employment status. Again, having or not having this potential does not directly impact tbDEX as it currently stands. This also doesn't discredit the potential that Identity Hubs could bring to the table if they gain adoption.

The last question that comes to mind then is: well, if we started by introducing tbDEX without hubs, could we eventually move to a hub-based architecture? Introducing any change, however small, to a decentralized system is significantly difficult. Despite having incentives to do so, upgrades to blockchain miner/validator software take months to years to gain widespread adoption. Practically speaking, a full-sweep migration to Identity Hubs for all wallets, PFIs, and Issuers seems unlikely if there's an approach that's already gained widespread use. From this perspective, not using Identity Hubs from the beginning is more likely to negatively impact the adoption of hubs than anything else. Admittedly, this take is fully speculative.

I intended to lay out two additional non-hub approaches, but I've likely already lost 80% of readers by now, so I think it's safe to leave those for a different time; the approach discussed above is the simplest and seems to achieve sufficient decentralization. In summary:
Kind of. Not sure about the 'centralized set' aspect.
I believe you may have meant 'sufficiently decentralized' instead of 'necessarily decentralized'. If so, IMO, yep absolutely.
deferring to @csuwildcat for plugs
-
Moe's decomposition of approaches is an important exercise. I noticed there were a few considerations that could be fleshed out further, which leads to the higher-level question of what functionality, components, and system guarantees we must support regardless of what we output. To that end, I have put together the following set of ecosystem promises and technical requirements we must deliver on to produce a viable output we can safely, confidently expose to users.

**Promises to the Ecosystem**
**Technical Requirements**

Absent any prescription for the How, the following table represents the What our deliverables must account for:
**Resource Assessment**

Moe, Gabe: please reflect on the promises and technical requirements detailed above to assess the approximate development time required for a standardized personal data storage/relay approach vs. a 'traditional' use-case-specific server implementation. Please also highlight the timing difference between the two, to the best of your ability.
-
We've already identified multiple things that would need new standards and the same or greater effort if done as a random REST service, but there are a few more that haven't been fully discussed yet: encryption, signing, and authorization.

Various data in the system must be signed and/or encrypted with the DIDs of participants to ensure it is verifiably linked to a given DID and/or only accessible to designated parties. This signing and encryption model must be message-level and composable so the construction of the objects isn't dictated by random provider formats. If you don't use Hubs, you will be required to produce a new standard that provides for all 4 message-level data states: raw, signed, encrypted, signed + encrypted. Hubs, of course, already define a data model that meets these requirements (https://identity.foundation/identity-hub/spec/#messages).

Messages, and the data model used, must also incorporate composable authorization based on DID-signed capabilities. The authorization mechanism must integrate into the same data model used for the signing and encryption components. This has already been fleshed out for Hubs, and we have PRs imminent.

I could prepare folks for the mountain of effort it will take to secure a standard Working Group item and repeat this work to meet the requirements, but this is simply the wrong course: it will harm the larger effort, confuse the community, and cost us a significant amount of additional time, so it's not something I will support.
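For a concrete picture of what "all 4 message-level data states" means, here's a hypothetical TypeScript rendering of the requirement. This is not the Identity Hub data model (see the spec link above for that); it uses JWS and JWE merely as stand-ins for the signing and encryption layers.

```typescript
// Hypothetical rendering of the four message-level data states.
// JWS/JWE are stand-ins; the actual Hub data model is defined in the spec above.

type Jws = { payload: string; signatures: { protected: string; signature: string }[] };
type Jwe = { protected: string; recipients: unknown[]; ciphertext: string };

type Message =
  | { state: "raw"; data: unknown }             // neither signed nor encrypted
  | { state: "signed"; data: Jws }              // verifiably linked to a DID
  | { state: "encrypted"; data: Jwe }           // accessible only to designated parties
  | { state: "signed+encrypted"; data: Jwe };   // a JWS nested inside the JWE ciphertext
```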
-
Appreciate the care and thought you've all put into this so far. We're getting closer. This looks like a good point to check in with the main goals of this discussion and see where we're at: Our first use case requires currency exchange, decentralization, and data portability. It does not require identity. This is why I encourage us to decompose the system. In my first post, I asked if we could make a:
From @csuwildcat, I'm hearing: "nope, that ain't a good idea":
And I've understood your reasoning to be: the Identity Platform should bring decentralization and data portability to tbDEX. Got it. And @decentralgabe advocates for our avoiding tech debt by starting with a small hubs model:
Gabe:
What I've understood from @mistermoe: there are other ways to bring decentralization and data portability to tbDEX, for instance the DIDs and push notification flow. Does this approach make our job tougher later? That leaves me to balance two competing ideals:
Remember: this isn't about whether to make Hubs. We're invested and it's happening. This is about how we focus our near-term efforts. How much can we scope down to deliver smaller things faster? I hope we all understand why I'm encouraging us to question ourselves. We have use cases for currency exchange but not identity. I don't want to require folks to bring in a full identity platform to get that done. That's weird. Like, let's all admit how weird that is, even if it's what we decide to get decentralization and portability. It's super weird. The Big Question left for us, which will inform how we focus our time in the near term:
@csuwildcat's sentiment either has to be shared by the team, or we evolve it together:
Let's commit to an answer on these by Friday, the 4th.
-
We've designed a powerful system. It lets users talk to financial institutions (PFIs) and identity providers (VC issuers). Where we once would have had to draw direct, custom connections from the user to PFIs/VC issuers, now the paradigm is inverted. PFIs and VC issuers receive a common data format over a common protocol, and the user can talk to any compliant participant in the system.
It looks a little like this, with some handwaving to simplify the picture:
User > Wallet > User's Identity Hub > PFI Identity Hub > PFI Implementation > PFI's business systems
I sense we're all generally on board with this design. For PFIs and VC Issuers, it provides a clean interface connecting them to users. For users, it's a smooth experience for exchanging value and proving identity.
I'd like us to consider how we may pull this apart. Reasoning: let the system come together in smaller pieces without having to wait on the whole thing to be built end to end. None of this changes the importance of the full system. This is only about our focus and timing. I prefer iterative improvement over Big Bang releases.
In concrete terms, I think this means:
If we can decouple these work streams, I believe we may get the following benefits:
But we have some open questions to get there. Could we talk through:
S,
ALR