-
I made a few notes in my annotated copy. Probably not all that interesting, but I'm sharing here anyway.
-
Not sure how my annotations will translate to this upload. @dckc it could be cool to spend some time discussing some of the paper together if you have any spare time in the coming days.
-
March 15 discussion recording: raw materials; edited transcript below...
-
edited transcript, part 1, thru 25:03
Intro: Agoric Dev Office Hours, Robust Composition

Dan Connolly: Hi, everybody! This is Dan Connolly, with Mark Miller and company. We're here at office hours, and especially today we get to talk about some foundational stuff from the Agoric architecture -- in particular, Mark's thesis, Robust Composition, about access control and concurrency control.

Dan Connolly: MarkM is at slightly diminished capacity due to weather and such in California. So advance notice in case something weird happens with this connectivity. For that reason, we did not use camera video.

Dan Connolly: Let's see. There was another discussion I heard about where folks asked you a little bit about history, Mark. I don't know how much of that is useful in this setting... Thomas or Patrick, any interest in the historical context?

Thomas Greco: Definitely a lot of interest.

MarkM: Yeah, I'm certainly very happy to talk about the history. I care a lot about the history.

Dan Connolly: Well, now is the time!

time: 01:26 KeyKOS Operating System meets Agoric Open Systems Papers

MarkM: Okay. So the general history is that there was a golden age of operating system architecture pretty much through the late '60s and 1970s. The most famous research operating system, Multics, happened around that time. The first capability operating system proposal was 1965 by Dennis and van Horn. That was implemented approximately as laid out in the paper in 1969 on the PDP-1, called the Supervisor. And KeyKOS, which is really the one that I learned the most from, and that most inspired the perspectives that Dean and I had -- Dean and I have -- that was started as GNOSIS in 1973... "GNOSIS" standing for a Great New Operating System In the Sky. And the three main architects of that were Norm Hardy, Bill Frantz, and Charlie Landau. Charlie Landau had also worked on the PDP-1 Supervisor. All of this happened well before I came into the picture. I saw a computer for the first time in '75.

MarkM: The Agoric Open Systems papers, the papers that our company, of course, is named after -- Eric Drexler and I started working on those in '83, and they were published in '88. In them we lay out a vision of what we would today call cryptographic, decentralized, permissionless, worldwide commerce. And it was presuming -- it was assuming -- that the world was about to build a distributed, cryptographic, capability, actor-language-based fabric, because those ideas were out there, and of course they made the most sense. That was when I was still a sort of technological optimist who just believed that whatever made the most sense is what the world would clearly then do next.

MarkM: And it was those papers... Norm Hardy, chief architect of KeyKOS, found the papers, and saw us state what we saw as the requirements for a foundation for building that kind of system. And we stated, as was true, that we knew of no system that met all those requirements, and we didn't even know if those requirements could be met simultaneously. So Norm calls us up and says, "We've built a system" -- he's referring to KeyKOS -- "that solves all the problems simultaneously." We met him over lunch, and he convinced us that yeah, actually, KeyKOS was a beautiful architecture that met all of those requirements simultaneously. So that really influenced how Dean and I were thinking about building out these decentralized, cryptographic, actor-language-based architectures for doing this stuff.
time: 05:57 Agorics, Electric Communities, and the Joule programming language

MarkM: Norm Hardy then started working with us, and in the mid '90s we formed a startup named "Agorics" -- with an "s" on the end. And we started working on agoric systems -- market-based, distributed, language-based, cryptographic computational systems -- for Sun Labs and for Electric Communities. We designed the Joule language, which they were using to build Electric Communities Habitats, which was a decentralized, graphical, social, virtual reality system. Due to a contract dispute with the management of Agorics at the time, Electric Communities -- having built up all of these ideas and plans around the Joule language architecture for how they were going to build their world, with Agorics providing Joule language support -- was suddenly left without the language support from Agorics. Joule was open source, but Electric Communities did not want to go into the language business themselves. So that's when a really miraculous event happened, which was...

time: 08:00 Concurrency Control Approaches: Threads vs. Fine Grain Actors/CLP

MarkM: Up till then -- this is on the concurrency control side -- up till this point there were basically 2 dominant approaches to concurrency that we were aware of. There was the multi-threaded approach -- shared memory, multi-threading, locks... -- where brilliant, brilliant people like Tony Hoare and Edsger Dijkstra had done tremendous work creating those systems. From '85 through '88 I was at Xerox PARC together with Dean -- Dean was already there when I joined in '85; I don't know when he started. But at Xerox PARC, both of these families of concurrency were represented. There was the Cedar language, which is also shared-memory, fine-grain multi-threading, that had many brilliant people working on it. And the other kind of concurrency architecture that we were aware of was the fine-grained architecture like Actors -- Joule, later, was very much in the tradition of fine-grained actors -- and also concurrent logic programming. Flat Concurrent Prolog is the particular language that was most inspiring us. At Xerox PARC, we put together a language called Vulcan, which was logical actors, mixing logic programming and actor computation. That was the ancestor of the Joule project that Dean was the main architect of.

MarkM: And the thing about the dichotomy between those languages is that the actor languages and the concurrent logic programming languages started off without sequential programming; they said: forget sequential computation; start off with an inherently fine-grained concurrent world and build all of that up from there, and then sequence is just a special case of concurrency. And those languages were beautiful. -- And in the Joule language we mixed all of those with all of our ideas about decentralized capability computation for security as well. -- And we had a big adoption and explanation problem with those languages, because, of course, almost everybody first learns to program in a sequential language. And what you're asking them to do in switching into a fine-grained concurrent language is to basically learn to program from scratch all over again. You can basically take very little of what you already know into the new world. And we thought we were kind of stuck with this dichotomy.
MarkM: The way we were thinking is: well, if you start from sequential programming and add concurrency, then even brilliant people like Tony Hoare, Edsger Dijkstra, and all the people in the Cedar group... you end up with this mess. Every one of these shared-memory, multi-threading-with-fine-grained-locking languages, despite all this brilliant effort, was just terrible: it was a terrible concurrency experience. Nobody knew how to program these things reliably. So we felt like we had to start over by giving up on sequentiality to get to good concurrency.

time: 12:07 E: Good Concurrency does not mean giving up sequential programming

MarkM: Then Electric Communities, once they had appreciated all of the virtues of Joule but were then left without the Joule support, were kind of stuck, because they knew they needed what was good about Joule. So Doug Barnes at Electric Communities, you know, closes his office door for 2 weeks and wrote a preprocessor for the language that at the time, of course, was called E, but which we now refer to as Original-E. That was the first embodiment of the E ideas; and that was basically taking the concurrency constructs from Joule and just adding them on top of the sequential subset of Java. And it worked. And it was beautiful, and it completely falsified the hypothesis that we all had, which is that in order to get to good concurrency, you have to give up on sequentiality. That turned out to just be completely wrong. The issue was not whether you started with sequentiality. The issue was only what concurrency paradigm you're adding to the sequentiality to get concurrency. And shared-memory multi-threading was -- and remains, I would say; you know, nobody has made that paradigm work -- remains a disaster with regard to reliable programming. But Original-E was the first language that had a sequential base, the sequential subset of Java, added the actor-style message passing on top of that, and did it with local capability security using the language basis, and with distributed cryptographic capabilities with the CapTP protocol, the Capability Transport Protocol, which has since become a family of protocols. We're still using essentially that same protocol design today for doing language-friendly distributed cryptographic capabilities with actor-style promise-based message passing with promise pipelining. See also: E's History.

MarkM: So a year later, due to a different contract dispute, this time between myself and the management of Agorics, I then left Agorics and ended up joining Electric Communities... took over being the main architect of the Original-E language, together with Dan Bornstein. And after a bunch of experience with Original-E, I then realized there was the possibility of a much, much simpler language that's not a superset of Java, but just a from-scratch language in its own right -- but still a curly bracket language in kind of the C syntactic tradition, true to sort of the C programmer's understanding of sequentiality, but that brings in capability-based objects. So I got very excited about all this, and created that language, originally called EZ. Then Electric Communities changed direction -- this was later, sort of mid to late nineties, '97, '98 -- Electric Communities changed direction. They gave up on doing any further work on the Original-E or EZ languages. I begged them to open source EZ, and the board agreed: sort of the second miracle from my own personal perspective.
MarkM: And also we agreed that we'd gotten some uptake of the name E. The board was not willing, for the normal, short-sighted reasons, to open source the Original-E language, which was at the time called E. So I convinced them to let me rename EZ to E, and in all of our discussions going forward, for all of us to refer to what was then the E language as Original-E. That was the renaming. And thus the E language was born. And so I left Electric Communities to turn E into an open source project and form an open source community, which grew up around the erights.org website, which I see somebody is projecting. And that was quite a vibrant open source community where we brought in many other interested people from across the Internet, and we had great discussions. And the archived e-lang and cap-talk mailing lists still remain a great treasure trove of great formative discussions. And so, retracing a bit...

time: 18:28 E, Smart Contracts, and Cypherpunks in the open and underground

MarkM: The Agoric papers were before we had met Nick Szabo. But in retrospect you would also say that the Agoric papers -- the economic level of those was basically an in-depth exploration of many smart contracts. They were the first significant papers on smart contracting, showing various language-based smart contracting ideas. The purpose of the smart contracting there was computational resources -- memory space, processor time, network bandwidth -- to apply the invisible hand to the decentralized allocation decisions of computational resources. Then in the early '90s... maybe very late '80s... I'm not sure, but at least in the early '90s, we met Nick Szabo. There was also the work on AMIX, the AMerican Information eXchange, which was the first smart contracting system where computerized smart contracts were used to intermediate between people. So smart contracting just grew and grew in importance in our heads. The work Agorics was doing for Sun Labs was smart contracts on top of our distributed capability infrastructure.

MarkM: So then, when I left Electric Communities to form erights.org, the community around E... E is a general-purpose distributed cryptographic actor-based programming language. But even though it's general purpose, the motivating application from the beginning was smart contracting. And we really started exploring how to express smart contracts well in a programming language. And we did great work, and my first paper on that was published in the year 2000, which was Capability-Based Financial Instruments. And that paper, I'll say, holds up very, very well today. And that got some very nice attention within the cypherpunk / crypto finance communities; it was presented at Financial Cryptography 2000 on Anguilla. Export controls had ended in 1998. So all of the cypherpunk energy suddenly became very open and could be pursued in a mainstream way. And there was this brief window between the ending of export controls in 1998 and the Patriot Act that came in response to 9-11 in 2001... a brief window of like 3 years where all of the cypherpunk energy... we were all, you know, openly dreaming of simply, technically building out and going forward this world that we'd been imagining without impediment -- without, you know, any government resistance, since there wasn't any during that 3-year window.
MarkM: And then, after 9-11, the Patriot Act happened, and the government got very aggressive at shutting down anything that they felt smelled suspicious of anything, without much due process or constitutional constraints. The head of E-Gold was convicted of enabling money laundering, which was a completely ridiculous charge. There was no anonymity in E-Gold. Everything was tracked. They were actually working to help report anything that was... I mean... it was just a completely ridiculous charge. But under the Patriot Act, and after 9-11, they ended up going after him and convicting him. And all of this cypherpunk work went underground. We all continued working on it as we had before the export controls were lifted, but finding ways to do it without necessarily being obvious that that's what we were doing.

time: 23:25 Capability-based vs. Identity-based Access Control in Academia

MarkM: So in any case, my paper comes out in 2000, and in promoting the ideas in the academic literature, I really become quite aware of the degree to which capabilities as a paradigm had come to be dismissed in academia, dismissed by the computer security theorists. It was viewed as a failed paradigm. I had been aware of this conflict through the 1990s, and it just was so obvious to me that capabilities were on the right foot, and all of our work in the capability paradigm did so much to build great systems and verify for us that capabilities were the right approach. And our experiences with ACLs were quite awful. That is: with the dominant security paradigm, access control lists, or, more generally, identity-based access control. And Norm had already written his famous Confused Deputy paper, laying out kind of the fundamental, irreducible problem with the whole identity-based paradigm...

time: 25:03
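To make the eventual-send and promise-pipelining style MarkM describes above concrete, here is a minimal sketch in modern JavaScript, assuming the `E` helper from Endo's `@endo/eventual-send` package; the `bankP` promise and its `getPurse`/`withdraw` methods are hypothetical names for the example:

```js
import { E } from '@endo/eventual-send';

// `bankP` is assumed to be a (promise for a) remote bank object,
// obtained from some earlier CapTP interaction.
const pay = async (bankP) => {
  // Eventual send: E(x).method(...) queues an asynchronous message to x,
  // which may live in another vat or on another machine, and
  // immediately returns a promise for the result.
  const purseP = E(bankP).getPurse('quatloos');

  // Promise pipelining: withdraw() is sent to the *promised* purse
  // before getPurse() has even resolved; the protocol forwards it to
  // wherever the purse turns out to live.
  const paymentP = E(purseP).withdraw(10);

  return paymentP;
};
```

The point of pipelining is that both messages can travel toward the remote machine together, rather than waiting a full network round trip for `purseP` to resolve before sending `withdraw`.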
-
edited transcript, part 2 of 2, thru 1:01:45.
The part after MarkM was excused remains unedited.

Origins of MarkM's Agoric Systems Vision

Thomas Greco: I have a question. Coming back to your early work, it's really amazing to learn that it was 1975 when you touched your first computer, and then by the mid '80s you were already able to envision really the future that we're trying to build today. And more importantly, building this out securely. So I'm wondering... do you think that it was KeyKOS? Or to what do you attribute your ability to view the world through this lens, differently than how the software engineering sector progressed since that time towards using access control lists? Because, as somebody who began programming 10 plus years or so ago... there's so much information out there, and I think access control lists have just become the default... and capabilities are so not widely known, which is so surprising when you see the clear benefits. And so, yeah, I have a loaded question there. But the first part was: what do you attribute... how are you able to envision... like, predict the future?

MarkM: Okay. So I love this question... Several key elements were quite formative in my being able to see forward in these ways... One thing is really very generic, but I think should not be underestimated in its importance, which is: I was a big reader of science fiction, and I took very seriously, from having been exposed to so much science fiction, that technological progress shapes the future more so than anything else -- which I still believe -- and that it can take the future in surprising directions, and that the future can be very, very different from the present, and it can happen fast. So just sort of generally approaching all these matters with that seriousness about technology and progress and a change of the future.

Computer Lib, Xanadu

MarkM: And in 1975, a friend of mine handed me Ted Nelson's book Computer Lib/Dream Machines, which was Ted Nelson -- a really great visionary -- his dreams about the future of computing, which was largely focused on the future of interactive systems. But it wasn't a shallow focus on the screen. It was really quite a deep focus, driven by the notion of people interacting with information on computer screens. He was one of the 2 great hypertext visionaries. Brief, very brief history of hypertext, biased towards highlighting how it influences the main story I'm telling today: hypertext itself started in some sense with Vannevar Bush's 1945 paper, As We May Think. Vannevar Bush was President Roosevelt's science adviser. He published an article in the Atlantic imagining what we would now call a user experience of hypertext documents and linked documents. But he was imagining it all. He was painting the picture of the future with microfilm and microfiche technology, because that's all that they had to imagine it with. Doug Engelbart and Ted Nelson both remember reading that article as children. They grew up to form, in the '60s, the two great hypertext visions. Doug Engelbart: the Augment system, NLS -- which, by the way... all of the KeyKOS design documents were done in Augment, and a lot of the beauty of KeyKOS they attribute to some of what was different in the design process that was possible for them because they were writing their documents in Augment. And then also Ted Nelson, who in the early '60s formed the Xanadu vision, started building that out, and then really explained it very, very well in Computer Lib/Dream Machines.
MarkM: When I read Computer Lib/Dream Machines, I was really grabbed by the whole Xanadu vision, and ended up deciding that this whole vision of computation that's laid out here was very much the computation that I wanted to learn about. So I ended up... in the summers between school, I apprenticed myself for free to Ted Nelson. I contacted Ted Nelson, apprenticed myself to him, and ended up working on the Xanadu project, becoming the chief architect of what became known as the Xanadu Green architecture. Part of Ted's vision was -- and this is why the book is called "Computer Lib" -- computers and decentralized networks and worldwide hypertext publishing as a liberating force. We were both -- Ted and I -- quite terrified of the vision laid out by George Orwell in "1984". By being Ted's apprentice, I very much absorbed this notion of the future as a choice: we can either end up in a world of 1984-style oppression, or we can end up in this world of hypertext publishing, of electronic publishing on computer networks, if we can figure out how to build those things out in a way that is free of censorship, free of modifying of history -- which was one of the main forms of oppression in 1984 -- and free of monitoring; you know, something that was inherently private.

time: 33:10 Privacy Aspirations, Xanadu, and the Web

MarkM: One of the things that was an essential part of the Xanadu architecture was for nobody to be able to know what you're reading. That's the thing that I found most bizarre about the hypertext system that then took over the world, the web, which is: with the web architecture, not only can the server that serves the document know what document you're reading, but with, you know, the web 2 architecture, they can know where you are in the document, how much time you're spending looking at which paragraph, how your mouse is moving... The degree of violation of privacy of people reading things to take in information is just something that, not in my wildest nightmares, would I ever have thought that masses of the public would have accepted -- that kind of loss of privacy. So in any case, I very much absorbed from Ted this notion that we're at a choice point, that humanity is at a choice point: these electronic networks are coming, and it could be a 1984-style nightmare, or it could be a great liberating force. And I also very much took up the idea that it was our responsibility to figure out how to build the great liberating force; that how it turns out depends on what we build.

Thomas Greco: How do you think we... you said never in your wildest dreams would you imagine the current state of web 2, because you guys had such great visions, right, for building this private...

MarkM: So part of why it went awry was actually, I'll say, my fault, or primarily my fault... which is: in 1989, Dean and I both left Xerox PARC to form the newly funded Xanadu -- Xanadu had been going on unfunded, on a shoestring, all of this time until 1989, when Autodesk decided to fund it. We then formed this really wonderful startup to really build out the Xanadu hypertext system. And there were 2 things we got wrong. One was: we had a notion of what features you needed simultaneously to have a hypertext system that would create good social emergent effects. There were basically 7 fundamental requirements... and it's laid out in my paper, The Open Society and Its Media, what those 7 fundamental requirements were.
MarkM: And we built a system that did those, but it took us longer than was expected. We kept having these triage meetings where we tried to figure out if we could drop something in order to get to market faster. And we kept talking ourselves into the fact that, well, if you drop these things you get social pathologies. So you really need all 7 in order to get the kind of beneficial emergent social effects on the evolution of society's knowledge that we were looking for, that were motivating us. And as a result, the project went long enough that Autodesk, our funder, ran out of patience -- well, partially due to a management change at Autodesk. We were already in beta with the product. Well, not in beta: we already had the features working. We were demonstrating. We had all 7 features working... but it was quite a long way from a commercially viable system by the time we ran out of funding. The other thing we got wrong is: none of us appreciated what we would now call open source, which was then called free software, because Richard Stallman's way of explaining the virtues of free software just didn't make sense to us. It wasn't until later, with the open source movement, that we really came to understand the power of it. But the result of those 2 things is that we built something where the technology itself was quite intricate and needed a bunch of work before it could be used commercially. And it was proprietary. So, without funding, it was hard to figure out how to continue to advance the system. The web came out with 2 and a half of our 7 elements, and took over the world... and was a simple architecture, and was open source... simple enough, with text-based protocols -- which is still kind of insane -- that people were able to put together web servers with very, very little software, which itself is, you know, quite wonderful, that you could do it so simply. But with 2 and a half of the 7 elements, it proceeded to have all of the social pathologies that we were worried about. And in my paper, The Open Society and Its Media, there are 2 paragraphs that I would say really explain pretty damn well what we then later came to call filter bubbles and echo chambers and all of those things. So that's how Xanadu died, and it was in light of the death of Xanadu that Dean and I then left Xanadu and formed, with Norm Hardy and others, Agorics, in the mid '90s -- dropping the hypertext part, but taking forward the liberation goals: the decentralized cryptographic software systems, but now general-purpose computational systems that could support hypertext and other things... could support just general-purpose decentralized, secure, permissionless programming. If you can support that in general, then any particular decentralized application, like hypertext, you can, of course, build out of that. So that was one of the formative elements.

time: 41:38 Programming and Economics: Encapsulation and Property Rights

MarkM: Another formative element is... what was the timeframe here... I'd say probably early '80s... I was a really big fan of object-oriented programming, and then of actors, and separately -- we were thinking it was completely separate -- I was also becoming very, very interested in economics, and in particular in the writings of the economist Friedrich Hayek.
MarkM: And I was explaining -- I think around 1983 -- I was explaining to my friend Eric Drexler the virtues of encapsulation in object-oriented programming: how, for every piece of data that's encapsulated within an object, there's only a finite amount of code that you have to look at to know whether the invariants in the data are being maintained... this coupling of encapsulated data together with code. And Eric said, "Oh, that's like Hayek's explanation of why property rights are important." And that was the big aha experience that brought the thinking about computation and the thinking about economics together in my head... realizing that they were, at an abstract enough level, solving the same set of problems in the same sets of ways. The way I've been putting it more recently is: both large object programs and the human economy as a whole are networks of entities making requests of other entities. And the nature of the request making is that the request contains both an informational component and a rights-and-incentives component... there is the transfer of information in the request, but also the transfer of rights. So this really led to... so it's from '83 to '88 that Eric and I worked on the Agoric Open Systems papers, published in '88. And that very much caused me to see the foundations of the economic issues just very directly in language-based terms. And that's what led to the appreciation of capabilities, which is: capabilities are the rights that are transferred in a request from one sovereign entity to another sovereign entity. And seeing that from the object-oriented perspective, and seeing that from the economic perspective, the only kind of economics that you can map to object-oriented computation is one that recognizes the object reference as the right. So it very much led us to a much deeper appreciation of the capability perspective as sort of the only perspective that supports decentralized rights, ownership, and rights transfer as the foundation of computation.

Thomas Greco: Very interesting. A lot of aha moments, clearly, over the last few decades. On the topic of property rights, is it just because they're more fine-grained, or could you elaborate? Why do they fit so well, as opposed to, like, just regular contractual obligations?

MarkM: So the way we saw it is that capabilities were a foundational rights theory, and that there were all sorts of other forms of rights that you needed to build in order to get economic phenomena, or to build smart contracts and economic institutions. But every time we tried to figure out how to build those starting from capabilities as the foundation, we ended up happy. And every time we tried to build them on any other foundation, we ended up sad. So it just kept reinforcing for us that capabilities were a great foundational rights theory from which to build out the others. And then, once we met Norm and understood the KeyKOS system, KeyKOS really took this foundational property rights perspective to computational resources themselves... the first system to really do that in a principled manner... rights to memory, rights to processor time... all of these things were managed as capability-based rights. One of the exercises in the introductory KeyKOS programming course was to build a money system.
MarkM: The mint-maker that I made famous in Capability-Based Financial Instruments is kind of inspired partially by the KeyKOS meter abstraction for CPU time, and partially by the way in which that exercise had you build money out of capabilities as a KeyKOS programming exercise. So by the time I got to Capability-Based Financial Instruments in 2000, I had this notion that there were a bunch of different dimensions on which rights could differ... capabilities themselves are rights that are transferred by sharing: when you send the right, the sender still has the right, and the receiver has to assume that the sender still has the right. So there's whether the rights are shared or exclusive; whether the rights are specific or fungible -- specific we took as the base case, and fungible was the weird one that had to be explained. So the idea of calling the specific "non-fungible", taking fungible as the normal case, always seemed like odd terminology to me -- symbolic versus exercisable... Rights generally are something where, if you have the right, it's the right to do something. Money is peculiar in that it's just the right to transfer the right. Holding the right does not enable you to do anything; it just enables you to transfer it. So in any case, at that point, with Capability-Based Financial Instruments, we realized that there was this whole taxonomy by which rights could differ.
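For readers who haven't seen it, the mint-maker is compact enough to sketch here. This is a rough, simplified JavaScript rendering of the pattern from Capability-Based Financial Instruments (the original is in E; sealed purses/payments and most error handling are omitted):

```js
const makeMint = () => {
  // Only purses created by this mint appear in this WeakMap, so the
  // ledger itself is the mint's encapsulated authority: the invariant
  // "total balance is conserved" depends only on the code in this function.
  const ledger = new WeakMap();

  const makePurse = (initialBalance = 0) => {
    const purse = {
      getBalance: () => ledger.get(purse),
      // Deposit transfers exclusively: the source purse is debited
      // and this purse credited in the same turn.
      deposit: (amount, src) => {
        if (!ledger.has(src)) throw Error('not a purse of this mint');
        const srcBalance = ledger.get(src);
        if (!Number.isSafeInteger(amount) || amount < 0 || srcBalance < amount) {
          throw Error('bad or unpayable amount');
        }
        ledger.set(src, srcBalance - amount);
        ledger.set(purse, ledger.get(purse) + amount);
      },
    };
    ledger.set(purse, initialBalance);
    return purse;
  };

  return { makePurse };
};

// Holding a purse *is* holding the right: Alice pays Bob by letting
// Bob's purse deposit from a purse she controls.
const mint = makeMint();
const alice = mint.makePurse(100);
const bob = mint.makePurse(0);
bob.deposit(10, alice); // alice: 90, bob: 10
```

The WeakMap plays the role of the paper's sealed ledger: nothing outside `makeMint` can forge a purse or alter a balance, which is the encapsulation-as-property-rights point from the Hayek discussion above.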
MarkM: But we were very satisfied that the right foundation for building out the taxonomy was to make capabilities the foundation, and then to do all of the others at higher levels of abstraction. And that remains the case, and that's what we're doing.

time: 50:13 Blockchain: Shared High Integrity Compute Infrastructure for Smart Contracts

MarkM: I should mention: all of this was without blockchain. Blockchain was not part of our thinking. We were not anticipating anything like blockchain. This whole paradigm of decentralized, permissionless smart contracts was decentralized in the sense that the web is decentralized: just lots of little sovereign parties interacting with each other securely through cryptographic protocols. And blockchains, when they did come about, were just kind of bizarre to me. And then, when Ethereum came out, I realized that blockchain-based smart contracts were actually very much along the lines of the paradigm that Nick Szabo had been trying to explain to me in the '90s... that I just was never able to understand what he was trying to explain to me until I saw Ethereum. But even after that, I then couldn't figure out how to bring together my way of thinking about smart contracts with the smart contracts on Ethereum. I just couldn't see how to reconcile those 2. And the Ethereum-based smart contracts were then taking off... It was Jorge Lopez, who was very much in the blockchain world, but who had also discovered my papers and was reading up on that, and he had come up with his own vision of just how to combine our non-blockchain-based vision of decentralized smart contracts with blockchain-based smart contracts... he found me and explained to me that combination... and it was really that, more than anything, that led me to realize that there was a commercial opportunity for me to leave Google and form a blockchain-based startup... to bring my kind of language-based, decentralized smart contracting into this new growing world of blockchain-based smart contracting. And that's what led to the creation of Agoric.

Thomas Greco: There you go. Amazing.

time: 52:54 Scalable Threat Modelling

Thomas Greco: I know our time is a little bit short here, but I do just have 2 questions. One about blockchain. So, in order for secure multi-party interaction -- like true rich composability -- to occur, is this going to require that all systems adopt an ocap architecture?

MarkM: So I never want to claim that no other architectural basis is possible. It's always possible that we discover yet another paradigm that's superior to ocaps and not built on ocaps. But ocaps have been with us from '65 till today, and have remained superior to all of the paradigms that have gotten invented since then.

Dan Connolly: Which part of the thesis talks about how reasoning about parties far away is just reasoning about objects?

MarkM: Yeah, you can reason about all suspicion as if you are suspicious only of objects. That's a really key thing that was not anticipated in '65 when capabilities were invented. That was really a realization by Jed Donnelly in the late '70s, I think... realizing that the capability security notion transparently distributes over a network with this property... The way I explain it now is that if you're suspicious of another machine, but you're interacting with the other machine over a capability-based protocol, then the other machine might be misbehaving.
MarkM: It might be speaking the protocol badly, or -- let's say you're running a language-based local capability system on that machine -- it might be violating the capability rules at that location. So it might be misbehaving in all sorts of ways. But due to Jed Donnelly's insight, it became clear that if you had the right kind of cryptographic capability protocol, then the total damage to me from a remote misbehaving machine is equivalent to the damage that could be done by a correctly behaving machine -- correctly in the sense of speaking the protocol correctly, and correctly in the sense of running its own local object-capability language or operating system correctly -- running a malicious group of objects. If I'm ignorant of what the actual object graph is that's running on the correct platform, then the dangers to me of that group of malicious objects are equivalent to the dangers to me of the machine those objects are running on itself being malicious. And as a result, I can model my worries... I can do all of my reasoning about my hazards from suspicion of other things as if it's only object misbehavior that I'm worried about.

MarkM: Now, there's a whole bunch of qualifiers I need to put on that. The main one is that those statements are really specifically about integrity. It's not about confidentiality, and it's not about availability. It's not about resource use. Those things need to be revisited, to see how close we can get. But the strong statement is only a statement about integrity. And it's that insight from Jed Donnelly that really led to the nature of the CapTP protocol family that we're currently building out on.

Dan Connolly: That's kind of what's interesting to me: when you look at things through this lens, you can talk about just objects being suspicious of each other -- inside the same JavaScript runtime, or across JavaScript runtimes, or across unix processes, or across machines, or across blockchains, or whatever.

Michael FIG: And what I like so much about the properties, too, is that you think in terms of what damage can be caused to you. The only damage that can be caused to you is if somebody sends a message to one of your objects. So if you expose an object to some other object, then they can send a message to it.

MarkM: Using that, I can explain this equivalence in a fairly direct manner, I think. Which is: the cryptographic capability protocol says that a misbehaving platform can still only exercise capabilities remote to itself -- that is, it can only send messages to objects remote to itself via capabilities that had been granted by a message passed to that machine, which meant that somebody thought they were sending a message to some object on the machine. So the worst-case risk from a misbehaving machine, in terms of what messages it can send, or who it can send messages to, is the union of all capabilities that have been sent to any of the alleged objects on the machine. And if you just have a misbehaving object graph running on a correct platform, then the worst-case misbehaving object graph is just a big grand conspiracy where they all pool all the capabilities sent to any of the objects. That should be a fairly straightforward, intuitive explanation.

Thomas Greco: Yeah, that was very helpful.
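Michael FIG's observation -- that your exposure is exactly the set of object references you have handed out -- is also what makes attenuation and revocation patterns work. Here is a minimal, illustrative JavaScript sketch of the classic caretaker pattern from the ocap literature; the `makeCaretaker` and `file` names are made up for this example:

```js
// Caretaker pattern: instead of sharing `target` directly, share a
// facet that forwards messages, and keep a separate revoke() authority.
const makeCaretaker = (target) => {
  let underlying = target;
  const facet = {
    // Forward only the messages we choose to expose; `write` and
    // anything else on the target simply don't exist on this facet.
    read: (...args) => {
      if (underlying === null) throw Error('revoked');
      return underlying.read(...args);
    },
  };
  const revoke = () => {
    underlying = null; // drop the facet's only path to the target
  };
  return { facet, revoke };
};

// Usage: Alice gives Bob `facet` and keeps `revoke` for herself.
// Bob's worst-case damage is bounded by what `facet` can do, and
// Alice can withdraw even that at any time.
const file = { read: () => 'contents', write: (_s) => { /* ... */ } };
const { facet, revoke } = makeCaretaker(file);
facet.read(); // => 'contents'
revoke();
// facet.read() now throws: the granted right has been withdrawn.
```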
time: 01:00:43 Conclusion of Robust Composition discussion

MarkM: I could do this for many, many hours, but I do have a meeting that I should not be more late to...

Dan Connolly: Right. Okay. Well, we should do something like this one week, or N weeks hence, or something like that. It's been interesting listening to all the history and stuff. So I'll let you go, Mark, and then I'll see if anybody else has any questions that I should take care of for other stuff.

Thomas Greco: Thank you, Mark.

MarkM: Yeah, thank you. I just love being able to discuss this, having people who are interested in this. So I really appreciate it.

JD: Yeah, thanks for your time, Mark.

Fred Radford: Thanks for your time, Mark. I appreciate it.

carlos: Thank you.
-
@dckc here is the video from our office hours conversation. YouTube has tried to censor us, but we won't be held back! Kidding; however, we did have to deal with our original channel being taken down for reasons unknown to us. A really great discussion can be heard in the video below. And here is a link to our playlist from this session: https://youtube.com/playlist?list=PLe0HNKSqPxRmUgFR2ZjEQ1CRECwN6GYIP Stay tuned, as we have some more great content planned for release in the coming weeks. 🙂
-
Many of us in the Agoric team are starting a review of foundational works, and I invite others in the community to read along with us.
This week, we're starting with Robust Composition,
thru (and including) 5.7 A Practical Standard for Defensive Programming
That's pages numbered 1-35, which is pages 19-53 of the PDF.