Very large amounts of data. So hopefully this conversation has shown how the amount of verification you might need to do changes with the amount of data parties are trying to store, and the amount of risk that implies. Again, it's very clear that you can trust a lot of people with very small amounts of data, but once you start getting into a lot of data, you start getting into the high-risk zone.

So, who verifies these clients? This is a key question, and it's where we can look at other systems. Internet-wide systems today that are foundational, things like DNS and TLS and so on, already use registries, registrars, and similar services to provide a bunch of different kinds of decentralized services that require certain kinds of verification. Now, those might be too burdensome; those kinds of verifications have limitations, they're not programmer-friendly, and so on, so perhaps we don't quite want to do that. But then we can look at things like stake-oriented systems: blockchain systems that use proof of stake for voting or even for consensus, or groups that have a full on-chain governance system with proof of stake around chain upgrades. So perhaps there's some work that could be done around this. And then we have other Web3 systems, things like Maker, Aragon, and ENS, that use smart contracts and then delegate verification to specific parties. You can look at things like Aragon Court and OpenBazaar, which form networks of decentralized verifiers and jurors to decide on the outcomes of events. This is a very interesting way of mixing some amount of human verification into the system to validate certain things, to prevent Sybil attacks, and so on, in a very Web3-native way.
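The idea that required verification scales with the amount of data, and therefore risk, could be sketched roughly as follows. The tier names and thresholds here are purely illustrative assumptions, not protocol parameters from the talk.

```python
# Illustrative sketch: the verification effort required grows with how much
# storage (and therefore risk) a client asks to be trusted for.
# Thresholds and tier names below are hypothetical.

TIB = 1 << 40  # one tebibyte in bytes

def required_verification(requested_bytes: int) -> str:
    """Map a client's requested storage allowance to a verification tier."""
    if requested_bytes <= 1 * TIB:
        return "none"          # small amounts: trust almost anyone
    if requested_bytes <= 100 * TIB:
        return "lightweight"   # e.g. a basic identity or reputation check
    return "full"              # high-risk zone: thorough human verification
```

The exact cutoffs don't matter for the argument; what matters is that the check becomes stricter as the requested amount, and therefore the risk, grows.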
And one of the important things about all these systems is that they usually rely on a few well-chosen parties to recover from whole-system failures. As soon as you start adding humans into the loop, you might need to add more humans, or remove some, and at that point you enter a world where you need a good governance process for showing that a party should stop being part of the set of verifiers, and a way for the community to come to a decision that that party should no longer be a verifier. So, for example, ENS uses a set of root key holders for the contract, where if something goes wrong, the community has a whole process for alerting on this and making decisions on having to change the contract. And if the community decides to change the contract, then these parties are authorized to go make that change. The change they make is purely execution-oriented: they're just told "do this transaction," and they do it. Now, if those parties tried to go do something rogue, like changing the contracts without checking with the community, that would be immediately obvious, there would be a major governance problem in that system, and it would force it to change very quickly. And we've already seen cases like this in blockchain systems.

So look, all of this stuff is not ideal, but it is pragmatic Web3 stuff that works today, and it tends to be a stepping stone to some other, more algorithmic solution in the future. We're looking at systems like this to solve this really critical problem without having to spend a lot more time doing technical development to find deeper, more algorithmic solutions, because that would push out the network launch and the product, and we want to launch as soon as possible. So we need some kind of pragmatic solution here to solve this problem in the short term. In the long term, we need a decentralized and fully distributed network of verifiers. It might start small; it might even start smaller than this.
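The "execution-only" role of the root key holders described above could be sketched like this. This is a hypothetical model, not any system's actual contract: governance approves a concrete change, and key holders can only carry out a payload whose hash matches an approved one, so any rogue action fails loudly and visibly.

```python
import hashlib

# Hypothetical sketch of execution-only key holders: the community approves a
# concrete change, and key holders may execute only exactly that change.
# Class and method names are illustrative.

class ExecutionOnlyKeyHolders:
    def __init__(self):
        self.approved = set()   # hashes of community-approved changes
        self.executed = []      # public log, so deviations are immediately obvious

    def approve(self, payload: bytes) -> None:
        """Community governance approves a specific transaction payload."""
        self.approved.add(hashlib.sha256(payload).hexdigest())

    def execute(self, payload: bytes) -> bool:
        """Key holders execute exactly what was approved; anything else is refused."""
        digest = hashlib.sha256(payload).hexdigest()
        if digest not in self.approved:
            raise PermissionError("payload was not approved by governance")
        self.executed.append(digest)
        return True
```

The point of the design is that the key holders carry no decision-making power: the decision lives in the governance process, and the keys only turn an already-made decision into an on-chain action.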

Unknown 5:15
But in the long term we want something that is properly geographically distributed and actually has a lot of verifiers. Ideally we would get to a network with something on the order of tens to hundreds of verifiers; maybe 100 is a really good number. But that's probably not the short term, that's more long term. We also want these verifiers to be able to understand different kinds of use cases for Filecoin, so being able to match verifiers to certain use cases, so they understand how to verify those clients, would be useful. And of course, this can expand over time. And it's important that these verifiers are not bribable, that they're really trustable and dependable parties in the network whose reputation is worth more than bribes. So these are the kinds of criteria that we might choose.

But one very important piece here is that we can apply the same principle to verifiers that we have with clients: we can cap how much we trust them for. So again, the key question is not "do I trust this verifier?" but rather "how much do I trust this verifier?" and "what kind of error rate do they have?" If they have something like a 5% or 10% error rate, hey, maybe that's fine, and you can trust them to verify parties up to petabytes. But if they're kind of loose and maybe have a 50% error rate, maybe you only let them verify parties in small amounts, and you only give them half a petabyte or something like that to verify. And this principle of capping the risk is what makes this entire system work. The last, final question is: who verifies the verifiers? And really, the whole point here is that this requires network governance. This requires discussions in forums, evidence of breaches, things like that.
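The error-rate-based cap described above could be sketched as a simple scaling rule. The linear formula and the 1 PiB baseline are illustrative assumptions; they just reproduce the talk's example of a 50% error rate earning roughly half a petabyte.

```python
# Illustrative sketch: cap how much data a verifier may verify based on their
# observed error rate. The 1 PiB baseline and the linear rule are hypothetical.

PIB = 1 << 50  # one pebibyte in bytes

def verifier_cap(error_rate: float, base_cap: int = PIB) -> int:
    """Scale a verifier's allowance down as their observed error rate grows.

    A 5-10% error rate keeps most of the baseline; a loose 50% error rate
    halves it; a verifier who is always wrong gets nothing.
    """
    if not 0.0 <= error_rate <= 1.0:
        raise ValueError("error rate must be in [0, 1]")
    return int(base_cap * (1.0 - error_rate))
```

A real system would presumably use something more careful than a linear rule (and would need a trustworthy way to measure the error rate), but the capping principle is the same: trust is a quantity, not a boolean.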
And then at that point, once every decision is made in a well-governed way, a kind of group of executives can carry out the decision to remove a verifier. That's the way to think about it, and this is similar to how ENS and other systems work. You can sort of think about it like this: there are these verifiers that are verifying clients, distinguishing between unverified clients and verified clients. We want most of the verification to be really easy, so that clients that have a lot of storage can just become verified, and then these are the kinds of transactions that can happen. Now, of course, a lot of clients may still be unverified, especially smaller clients, or some large clients that are programs and can't go through the verification process, and that's totally fine too. It's just a matter of being able to at least bound the risk of what is a potentially malicious deal or not.

Great, so I'm out of time for today. More details coming soon; hopefully that helped answer some questions. We'll turn this into something that you can digest, and will then later turn it into documents. Thank you very much.
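The closing idea, that unverified deals remain allowed but their total risk is bounded, could be sketched as follows. All names and limits here are hypothetical; this is just a model of "bound the risk" rather than any actual protocol.

```python
# Hypothetical sketch: verified clients spend an allowance granted by a
# verifier, while unverified clients can still make deals up to a bounded
# total. The 32 GiB bound is illustrative.

GIB = 1 << 30  # one gibibyte in bytes

class Client:
    UNVERIFIED_BOUND = 32 * GIB  # illustrative per-client risk bound

    def __init__(self, allowance: int = 0):
        self.allowance = allowance   # > 0 once a verifier has vouched
        self.unverified_used = 0

    def make_deal(self, size: int) -> str:
        if self.allowance >= size:
            self.allowance -= size   # spend verified allowance
            return "verified"
        if self.unverified_used + size <= self.UNVERIFIED_BOUND:
            self.unverified_used += size
            return "unverified"      # still fine, but the exposure is bounded
        raise ValueError("deal exceeds this client's bounded risk")
```

Small clients, or programs that can't go through verification, simply stay in the bounded unverified lane; the system never has to decide whether each individual deal is malicious, only how much potential damage it will tolerate.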