
Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel, but I handle a lot of the liaison with the miners.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old, but over the past year or so what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi's Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the backend of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah, so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing - particularly not so much at the unit test level but more at the system test level, so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren't any other side effects. Because, you know, it was quite a rush to release the first version, so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we're not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the Testnet Three that everyone's probably used to - that's just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one's set to reset every couple of days. The other one [Testnet Two] was set to post-activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on - having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - but the other one (which I think might still be a work in progress, so Daniel might be able to answer that) is one where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question. I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain's help, and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are now, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team's been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team - we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance-related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were gonna start working on for the performance-related stuff. Now that work is still in progress - for some of the items that we identified the code is done and it's going through the QA process, but it's not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level of the software. There's kind of two groups of them, mainly. Ones that are internal to the software - to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus-changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely, or increasing it on a larger scale, is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the "infinite block attack"? Is it something that really exists, is it something that miners themselves should be more proactive on preventing, or, I guess, what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something where there's probably two schools of thought about it. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now, so that the limit is already raised when the software improves and can handle it. Obviously we're from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer-to-peer message, there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it's kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit from the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - is that a large block takes a lot of time to validate, and that we've gotten around by having parallel pipelines for blocks to come in. So you've got a block that's coming in and it's stuck for two hours or whatever downloading and validating; at some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
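To make that message-size mitigation concrete, here is a minimal sketch in Python. The 4-byte length header and the function are illustrative stand-ins (the real Bitcoin wire header also carries magic bytes, a command name and a checksum, and this assumes one message in flight at a time) - not the Bitcoin SV implementation:

```python
import socket
import struct

MAX_BLOCK_SIZE = 128 * 1000 * 1000  # the consensus limit this node enforces

def recv_message(sock: socket.socket) -> bytes:
    """Receive one peer-to-peer message, enforcing its declared length."""
    # Simplified stand-in header: just a 4-byte little-endian payload length.
    header = sock.recv(4, socket.MSG_WAITALL)
    (declared_len,) = struct.unpack("<I", header)

    # Mitigation 1: a message declared larger than the block size limit is
    # pointless to download at all - drop the peer immediately.
    if declared_len > MAX_BLOCK_SIZE:
        sock.close()
        raise ValueError(f"peer declared {declared_len} bytes, over the limit")

    # Mitigation 2: the "30MB message that reaches 33MB" case - count what
    # actually arrives and drop the connection if it exceeds the declaration.
    chunks, received = [], 0
    while received < declared_len:
        chunk = sock.recv(65536)
        if not chunk:
            raise ConnectionError("peer hung up mid-message")
        received += len(chunk)
        if received > declared_len:
            sock.close()
            raise ValueError("message grew past its declared size")
        chunks.append(chunk)
    return b"".join(chunks)
```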
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around, you know, what practical size Bitcoin SV could scale to right now, and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not mean sending 32MB of data - in most cases, almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that rate can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side, send it over splitters - so over multiple links - and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But I mean, getting back to the core of your question - yes, there is a theoretical limit to block size from propagation time, and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128MB blocks are going to be an issue, though, with the speed of the internet that we have nowadays.
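For a rough sense of the maths Steve says he doesn't have off the top of his head, here is a back-of-envelope sketch. The 50 Mbit/s figure is the low end of the rates quoted above; the 2% compact-block fraction is purely an assumed number for illustration:

```python
def transfer_seconds(block_mb: float, link_mbps: float, fraction_sent: float = 1.0) -> float:
    """Seconds to push one block across a link.

    fraction_sent: share of the block actually transmitted - compact blocks /
    xThin send short transaction IDs for transactions the peer already has,
    so in practice this is far below 1.0.
    """
    return (block_mb * 8 * fraction_sent) / link_mbps

full = transfer_seconds(128, 50)           # raw 128 MB block at 50 Mbit/s
compact = transfer_seconds(128, 50, 0.02)  # assuming only ~2% crosses the wire

print(f"raw block: {full:.0f} s")               # ~20 s
print(f"compact-style relay: {compact:.1f} s")  # ~0.4 s
```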
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it's going from 201 to 500 [opcodes]. So a few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically, we had a question about how certain you are that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism I think that was leveled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these n squared bugs - sorry, attacks - that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do - and this is something we've got an engineer actively working on right now - is once that script validation code path is properly parallelized (parts of it already are), we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates, and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
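As a rough illustration of those "lanes of the highway" - a sketch only, with the template check and all names invented for the example, not the actual Bitcoin SV design:

```python
import queue
import threading

def validate(tx):
    """Stand-in for the script interpreter - the part that can be slow
    for a nasty non-standard script."""
    pass

def is_well_known_template(script: bytes) -> bool:
    # Toy check for the pay-to-pubkey-hash template only; a real node
    # would match against a whitelist of templates.
    OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xA9, 0x88, 0xAC
    return (len(script) == 25 and script[0] == OP_DUP
            and script[1] == OP_HASH160 and script[2] == 20
            and script[23] == OP_EQUALVERIFY and script[24] == OP_CHECKSIG)

# Two kinds of lanes: threads reserved for well-known templates,
# and threads that take anything.
standard_lane, anything_lane = queue.Queue(), queue.Queue()

def worker(lane):
    while True:
        tx = lane.get()
        validate(tx)      # a slow script only clogs this lane;
        lane.task_done()  # the reserved lanes keep ticking along

def submit(tx):
    lane = standard_lane if is_well_known_template(tx["script"]) else anything_lane
    lane.put(tx)

# e.g. three threads reserved for standard templates, one for the rest
for lane, n in ((standard_lane, 3), (anything_lane, 1)):
    for _ in range(n):
        threading.Thread(target=worker, args=(lane,), daemon=True).start()
```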
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer-to-peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node, it can just say, you know, we're not going to deal with that; or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities that could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming - OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, I mean, one of the most significant things is that, other than two (which are minor variants of DIV and MUL), they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they currently have imposed upon them. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo - it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago - but the new opcodes use logical shifts where Satoshi's version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly? What are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there's two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes, we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values - the values that represent numbers. They're little-endian, which means they're swapped around compared to what many other systems - what I'd consider normal - use, which is big-endian. And if you start shifting that as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with arithmetic. So we chose to make them bitwise operators - that's what we proposed.
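A small Python sketch may make this concrete. The shift semantics below (same-length result, first byte most significant, overflow discarded) are a plausible reading of a bitwise LSHIFT, not the exact SV specification:

```python
def lshift(data: bytes, n: int) -> bytes:
    """Bitwise left shift over a raw byte string, first byte most significant.

    Same-length result; bits shifted off the top are discarded.
    """
    width = 8 * len(data)
    value = int.from_bytes(data, "big")
    return ((value << n) & ((1 << width) - 1)).to_bytes(len(data), "big")

# Script *numbers* are little-endian: 128 stored in two bytes is b"\x80\x00".
# Shifting those bytes directly does NOT double the number - the carry runs
# into the wrong neighbouring byte and falls off the end:
assert lshift(b"\x80\x00", 1) == b"\x00\x00"         # 128 "doubled" -> 0

# But an arithmetic shift CAN be built from the bitwise one: reverse the
# bytes so the carry propagates correctly, shift, then reverse back.
def double_scriptnum(num_le: bytes) -> bytes:
    return lshift(num_le[::-1], 1)[::-1]

assert double_scriptnum(b"\x80\x00") == b"\x00\x01"  # 128 * 2 = 256
```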
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was made in May - or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally - it was a decision that was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - that's a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it. So we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) to treat the data as big-endian data streams (well, sorry, big-endian isn't really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with. So yeah. I mean, it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones, then it gets really complicated, you know, big number implementations, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there, because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have. The big number thing is tricky, so another option might be... yeah, I don't know what the options are - it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we'd actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts" as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, and so what are your thoughts about non-standard scripts and the entirety of the IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template", there's already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean, standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, you know, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments through and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
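For readers unfamiliar with the check being discussed, here is a toy version of what an IsStandard-style classifier does. The real logic in node software covers more templates and policy limits; this is just the shape of it:

```python
OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xA9, 0x88, 0xAC
OP_RETURN, OP_EQUAL = 0x6A, 0x87

def template_of(script: bytes) -> str:
    """Classify an output script against a whitelist of well-known templates."""
    if (len(script) == 25 and script[0] == OP_DUP and script[1] == OP_HASH160
            and script[2] == 20 and script[23] == OP_EQUALVERIFY
            and script[24] == OP_CHECKSIG):
        return "pubkeyhash"              # the base cash-payment case
    if (len(script) == 23 and script[0] == OP_HASH160 and script[1] == 20
            and script[22] == OP_EQUAL):
        return "scripthash"              # P2SH
    if script[:1] == bytes([OP_RETURN]):
        return "nulldata"                # data-carrier output
    return "nonstandard"                 # everything else

# Relay policy then reduces to a miner-configurable filter:
def accept_for_relay(script: bytes, accept_nonstandard: bool = False) -> bool:
    return accept_nonstandard or template_of(script) != "nonstandard"
```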
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there's some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - people think that maybe with a six-month delay there might not be a split. So first off, what do you think about the idea of a potential split, and I guess, what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change - I think on August 16th or 17th, something like that - and their client as well, and it included CTOR and it included DSV. Now for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR maybe you could reverse at a later date, but DSV - once someone's put a P2SH transaction, or even a non-P2SH transaction, into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean, Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like Segwit. Once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with no changes, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around by Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is less - do you kind of share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this - there's different ways to look at that, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones that say it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - my main consideration at the beginning was, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and, you know, it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor, you know - why would you do that if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (ie where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - you can use a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of "let's keep it minimal", and at this sort of (?) philosophy it's all "let's just add a new opcode for every primitive function" - and Daniel's right, it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction - so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller - so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
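Steve's OP_SUBSTR point can be made concrete with a little stack simulation. The opcode sequence here is a reconstruction of the SPLIT/SWAP/DROP combination, for illustration only:

```python
# Modelled sequence: <start> OP_SPLIT  OP_SWAP OP_DROP  <length> OP_SPLIT  OP_DROP

def op_split(stack):
    n = stack.pop()                 # split position
    data = stack.pop()
    stack += [data[:n], data[n:]]   # push both pieces

def op_swap(stack):
    stack[-1], stack[-2] = stack[-2], stack[-1]

def op_drop(stack):
    stack.pop()

def substr(data: bytes, start: int, length: int) -> bytes:
    stack = [data]
    stack.append(start);  op_split(stack)  # -> [head, tail]
    op_swap(stack); op_drop(stack)         # -> [tail], head discarded
    stack.append(length); op_split(stack)  # -> [substr, rest]
    op_drop(stack)                         # -> [substr]
    return stack[0]

assert substr(b"bitcoin script", 8, 6) == b"script"
```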
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. I mean, it didn't have to be: instead of having a transaction with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I actually asked a couple of people in our research team that have been working on the Rabin signature stuff this morning, and I wasn't sure where they were up to with it, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV). So I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there's a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into - with the plan that Bitcoin SV is on, do you guys see a potential one final release, you know, where there's gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that), or are you guys more of the idea that it stays open-ended, and new opcodes can be introduced with appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion the cryptographic primitive functions, for example - CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.30
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, you know, that with the full scripting language some solution is implemented and we discover that this is really useful, and over a period of, like, you know measured in years not days, we find a lot of transactions are using this feature, then maybe, you know, maybe we should look at introducing an opcode to optimize it, but optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view: does it make more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff that question and challenge all of the changes - study them and produce their own reports. We've been lucky with actually being able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to r/btc

HPB (High-Performance Blockchain) Whitepaper breakdown

If you'd like to read the first article I published on reddit on HPB, please take a look here
https://redd.it/7qt54x
 
People often skim over white papers as they simply cannot be bothered to read through them. Let’s be honest, most of them are as dull as dishwater and even more so when full of technical blockchain related buzzwords that most people new to cryptocurrencies simply don’t understand.
 
Well as someone now invested in High-Performance Blockchain (HPB), I want to know and understand what the company is trying to achieve, so I’ve spent some time dissecting the white paper and actually gathering the information behind the buzzwords to determine if the company offers real key differentiators and unique selling points that allow the proposal to stand separately from the competition.
 
So here is my breakdown of some of the key sections from the soon-to-be-updated HPB whitepaper
 
TPS
 
Ok so TPS stands for “transactions per second” and is reasonably well recognised in the world of blockchain but often misunderstood or under-appreciated. Essentially HPB are stating in their white paper that TPS is a bottleneck for all current blockchain solutions and this bottleneck restricts development and simply will not meet future business needs.
 
So let's just explore this for a minute. Anyone who knows Bitcoin and Ethereum and has tried to transfer their coins from a wallet to an exchange or vice-versa may at some point have experienced slow transfer or "transaction" times. This is usually when the network is congested, and transactions which usually take a few minutes are suddenly slowed down considerably. Let's say you are transferring some Eth to an online exchange to buy another coin, as you've noticed that this other coin's price is dropping and you want to catch the low price to buy in before the bounce……so you set up the transfer, increase your gas price to 50 Gwei to get things moving quicker, and then you wait for your 12 block confirmations before the Eth appears in your exchange wallet. You wait 10-15 minutes and the Eth suddenly appears, only to find the price has already bounced on the coin you wanted to buy and it's already up 10% on what it happened to be 15 minutes ago! That delay has just cost you $500!
 
Delay can be extremely frustrating, and can often be expensive. Now whilst individuals tend to tolerate slight delays on occasion, (for now!) It will simply be unacceptable moving forward. Imagine typing in your pin at a cashpoint/ATM and having to wait 4-5 minutes to get your money! No way!
 
So TPS is important….in fact it’s more than important, it’s fundamental to the success of blockchain technology that TPS speeds improve, and blockchain networks scale accordingly! So how fast are current TPS rates of the big crypto’s?
 
Here is the estimated TPS of the Top 10 cryptos. I should point out that this is the CURRENT TPS speed. Almost all of the cryptos mentioned have plans in the pipeline to scale up and improve TPS using various ingenious solutions, but as of TODAY this is the average speed.
 
  1. Bitcoin ~7 TPS
  2. Ethereum ~15 TPS
  3. Ripple ~1000 TPS
  4. Bitcoin Cash ~40 TPS
  5. Cardano ~10 TPS
  6. Litecoin ~56 TPS
  7. Stellar ~3700 TPS
  8. NEM ~4 TPS
  9. EOS ~0 TPS
  10. NEO ~1000 TPS
 
Like I say, almost all of these have plans to increase transaction speed and plans to address scalability, but these are the numbers I have researched as of this particular moment in time.
 
Let’s compare this to Visa, the global payment processor, which has an “average” daily peak of around 4,500 TPS and is capable of 56,000 TPS.
 
Some of you may say, “Well that doesn’t matter, as in a few months’ time [insert crypto I own here] will be releasing [insert scalability plan of my crypto here] which means it will be capable of running [insert claimed future TPS speed of my crypto here] so my crypto will be the best in the world!”
 
But this isn’t the whole story….. far from it. You see this doesn’t address a fundamental element of blockchain…..and that is the PHYSICAL transference of information from one node to another to allow for block validation and consensus. You know….the point where the data processed moves up and down the OSI stack and hits the physical layer on the network card and gets transported through the physical Ethernet cable or fibre that takes it off to somewhere else.
 
Also, you have to factor in the actual transaction size (measured in bytes or kilobytes) that is being transferred. VISA transactions vary in size from about 0.2 kilobytes to a little over 1 kilobyte. In order to maintain 4500 TPS, if we use an average of 0.5KB (512 bytes) per transaction, then you need to be physically transporting approximately 2.3MB of data per second. OK so this seems tiny! We all have 100Mb broadband at home and the NIC network cards in your computers are capable of running 10Gb……so 2.3MB per second is nothing…… for now!
 
If we go back to actual blocks on the blockchain, let's first look at bitcoin. It has a fixed 1MB block size (1,000,000 bytes) arriving roughly every ten minutes, so even at its ~7 TPS ceiling we only need to be physically transporting an average of around 1.7KB of data per second. Still pretty small and easy to cope with….Well if that's the case then why is bitcoin so slow?
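Working those figures through (the 250-byte average payment transaction used for the ceiling calculation is an assumed figure):

```python
# Back-of-envelope data rates behind the two paragraphs above.

VISA_TPS = 4_500
AVG_TX_BYTES = 512                 # assumed average Visa transaction size

print(f"Visa: {VISA_TPS * AVG_TX_BYTES / 1e6:.2f} MB/s")   # ~2.30 MB/s

BLOCK_BYTES = 1_000_000            # bitcoin's fixed 1MB block
BLOCK_INTERVAL_S = 600             # one block roughly every 10 minutes

print(f"Bitcoin average: {BLOCK_BYTES / BLOCK_INTERVAL_S / 1e3:.2f} KB/s")  # ~1.67 KB/s

# Throughput ceiling implied by the 1MB cap, assuming ~250-byte payment txs:
print(f"Max TPS: {BLOCK_BYTES / 250 / BLOCK_INTERVAL_S:.1f}")  # ~6.7 TPS
```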
 
Well if you consider the millions of transactions being requested every day, and that you can only fit 1mb of data into a single block, then if you imagine the first block in the queue gets processed first (max 1mb of data), but the rest of the transactions have to wait, to see if they hopefully are in the next block, or maybe the next one? Or maybe the next one? Or the next one?
 
Now the whole point of “decentralization” is that every node on the blockchain network is in agreement that the block is valid…this consensus typically takes around 10 minutes for the blockchain network to fully “sync” on the broadcasted block. Once the entire network is in agreement, they start to “sync” the next block. Unfortunately if your transaction isn’t at the front of the queue, then you can see how it might take a while for your transaction to get processed. So is there a way of moving to the front of the queue, similar to the way you can get a “queue jump pass” at a theme park? Sure there is….you can pay a higher-than-average transaction fee to get prioritized….but if the transaction fees are relative to the cryptocurrency itself, then the greater the value of the crypto becomes (i.e. the more popular it becomes), the higher the transaction fee becomes in order to allow you to make your transactions.
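That "queue jump pass" is, mechanically, just fee-rate ordering. A minimal sketch (real block assembly also weighs chains of dependent transactions, but the effect is the same):

```python
def build_block(mempool, block_limit=1_000_000):
    """Order waiting transactions by fee per byte and pack the block
    front-to-back until the 1MB cap is hit."""
    ordered = sorted(mempool, key=lambda tx: tx["fee"] / tx["size"], reverse=True)
    block, used = [], 0
    for tx in ordered:
        if used + tx["size"] <= block_limit:
            block.append(tx)
            used += tx["size"]
    return block

mempool = [
    {"id": "cheap", "size": 500, "fee": 500},      # 1 sat/byte
    {"id": "urgent", "size": 500, "fee": 25_000},  # 50 sat/byte - "queue jump"
]
assert build_block(mempool)[0]["id"] == "urgent"
```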
 
Once again using the cashpoint ATM analogy, it's like going to withdraw your money, and being presented with some options on screen similar to, “You can have your money in around 10 minutes for $50, or you can wait 20 minutes for $20, or you can camp out on the street and wait until tomorrow and get your money for $5”
 
So it’s clear to see the issue…..as blockchain scales up and more people use it, the value of it rises, the cost to use it goes up, and the speed of actually using it gets slower. This is not progress, and will not be acceptable as more people and businesses use blockchain to transact with money, information, data, whatever.
 
So what can be done? …Well you could increase the block size……more data held in a block means that you have a greater chance of being in a block at the front of the queue……Well that kind of works, but then you still have to factor in how long it takes for all nodes on the blockchain network to “sync” and reach consensus.
 
The more data per block, the more data there is that needs to be fully distributed.
 
I used visa as an example earlier as this processes very small amounts of transactional data. Essentially this average 512 bytes will hold the following information: transaction amount, transaction number, transaction date and time, transaction type (deposits, withdrawal, purchase or refund), type of account being debited or credited, card number, identity of the card acceptor (organization/store address) as well as the identity of the terminal (company name from which the machine operates). That’s pretty much all there is to a transaction. I’m sure you will agree that it’s a very small amount of data.
 
Moving forward, as more people and businesses use block-chain technology, the information transacted across blockchain will grow.
 
Let’s say, (just for a very simplistic example) that a blockchain network is being used to store details on a property deed via an Ethereum Dapp, and there is the equivalent of 32 pages of information in the deed. Well one ascii character stored as data represents one byte.
 
This “A” right here is one byte.
 
So if an A4 page holds let’s say 4000 ascii characters, then that’s 4000 bytes per page, or 4000x32= 128,000 bytes of data. Now if a 1mb block size can hold 1,000,000 bytes of data, then my single document alone has just consumed (128,000/1,000,000)*100 = 12.8% of a 1mb block!
 
Now going further, what if 50,000 people globally decide to transfer their mortgage deeds? Alongside those are another 50,000 people transferring their Will in another Dapp, alongside 50,000 other people transferring their sale-of-business documents in another Dapp, alongside 50,000 people transferring other “lengthy” documents in yet another Dapp? All of a sudden the network comes to a complete and utter standstill! That’s not to mention all the other “big data” being thrown around from company to company, city to city, and continent to continent!
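Running the numbers on that scenario, assuming each document is around the same 32 pages:

```python
PAGE_BYTES = 4_000                      # ~4000 ascii characters per A4 page
DOC_BYTES = 32 * PAGE_BYTES             # 128,000 bytes - 12.8% of a 1MB block
BLOCK_BYTES = 1_000_000
BLOCK_INTERVAL_S = 600

users = 4 * 50_000                      # deeds + wills + sale docs + other
total_bytes = users * DOC_BYTES
blocks_needed = total_bytes / BLOCK_BYTES

print(f"{total_bytes / 1e9:.1f} GB of documents")        # 25.6 GB
print(f"{blocks_needed:,.0f} one-megabyte blocks")       # 25,600 blocks
print(f"~{blocks_needed * BLOCK_INTERVAL_S / 86_400:.0f} days of capacity "
      f"at one block every 10 minutes")                  # ~178 days
```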
 
Ok in some respects that's not really a fair example, as I mentioned the 1mb block limit with bitcoin, and we know that bitcoin was never designed to be anything more than a currency.
 
But that’s Bitcoin. Other blockchains are hoping/expecting people to embrace and adopt their network for all of their decentralized needs, and as time goes by and more data is sent around, then many (if not all) of the suggested scalability solutions will not be able to cope…..why?
 
Because sooner or later we won’t be talking about megabytes of data….we’ll be talking about GB of data….possibly TB of data on the blockchain network! Now at this stage, addressing this level of scalability will definitely not be purely a software issue….we have to go back to hardware!
 
So…finally coming to my point about TPS…… as time goes by, in order for block chains to truly succeed, the networking HARDWARE needs to be developed to physically move the data quickly enough to be able to cope with the processing of the transactions…..and quite frankly this is something that has not been addressed…..it’s just been swept under the carpet.
 
That is, until now. High-Performance Blockchain (HPB) want to address this issue…..they want to give blockchain the opportunity to scale up to meet customer demand, which may not be there right at this moment, but is likely to be there soon.
 
According to this website from just over a year ago, more data will be produced in 2017, then in the entire history of civilization spanning 5000 years!
https://appdevelopermagazine.com/4773/2016/12/23/more-data-will-be-created-in-2017-than-the-previous-5,000-years-of-humanity-/
That puts things into perspective when it comes to data generation and expected data transference.
 
Ok so visa can handle 56,000 TINY transactions per second….Will that be enough for block chain TPS in 5 years’ time? Well I’ll simply leave that for you to decide.
 
So what are HPB doing about this? They have been developing a specialist hardware accelerated network card known as a TOE card (TOE stands for TCP/IP Offload Engine) which is capable of supporting MILLIONS of transactions per second. Now there are plenty of blockchains out there looking to address speed and scaling, and some of them are truly fascinating, and they will most likely address scalability in the short term….but at some point HARDWARE will still be the bottleneck and this will still need to be addressed like the bad smell in the room that won’t go away. As far as I know (and I am honestly happy to stand corrected here) HPB are the ONLY Company right now who see hardware acceleration as fundamental to blockchain scalability.
 
No doubt more companies will follow over time, but if you appreciate “first mover advantage” you will see how critical this is from a crypto investment perspective.
 
Here are some images of the HPB board
HPB board
HPB board running
Wang Xiaoming holding the HPB board
 
GVM (General Virtual Machine mechanism)
The HPB General virtual machine is currently being developed to allow the HPB solution to work with other blockchains to enhance them and help them to scale. Currently the GVM is being developed for the NEOVM (NEO Virtual Machine) and The EVM (Ethereum Virtual Machine) with others planned for the future.
 
Now a lot of people feel that if Ethereum were not hampered with scalability issues, then it would be THE de-facto blockchain globally (possibly outside of Asia due to things like Chinese regulation) and that NEO is the “Ethereum of China” developed specifically to accommodate things like Chinese regulation. So if HPB is working on a hardware solution to help both Ethereum and NEO, then in my opinion this could add serious value to both blockchains.
 
Claim of Union Pay partnership
To quote directly (verbatim) from the whitepaper:
After listening to the design concept of HPB, China's largest financial data company UnionPay has joined as a partner with HPB, with the common goal of technological practice and exploration of financial big data and high-performance blockchain platform. UnionPay Wisdom currently handles 80% of China's banking transaction data, with an annual turnover of 80 trillion yuan. HPB will join hands with China UnionPay to serve all industry partners, including large banks, insurance, retail enterprises, fintech companies and so on.
 
Why is this significant? Have a read of this webpage to get an idea of the scale of this company:
http://usa.chinadaily.com.cn/business/2017-10/10/content_33060535.htm
 
Now some people will say, there’s no proof of this alliance, and trust me I am one of the biggest sceptics you will come across….I question everything!
 
Now at this stage I have no concrete evidence to support HPB’s claim, however let me offer you my train of thought. Whilst HPB hasn’t really been marketed in the West (a good thing in my opinion!) The leader of HPB Wang Xiaoming is literally attending every single major Asian blockchain event to personally present his solution to major audiences. HPB also has the backing of NEO, who angel invested the project.
 
Take a look at this YouTube video of Da Hongfei talking about NEO, and bringing up a slide at the recent “BlockChain Revolution Conference” on January 18th 2018 – If you don't want to watch the entire video (it's obviously all about NEO) then skip forward to exactly 9m13s into the video and take a look at the slide he brings up. You will see it shows HPB. Do you honestly think Da Hongfei, the leader of NEO, would bring up details of a company that he felt to be untrustworthy to share with a global audience?
Blockchain Revolution 2018 video
 
Here are further pictures of numerous events that HPB’s very own Wang Xiaoming has presented HPB…..in the blockchain world he is very respected after releasing multiple whitepapers and publishing several books over the years on blockchain technology. This is a “techie” with a very public profile…..this is not some guy who knows nothing about blockchain looking to scam people with a dodgy website full of lies and exaggerations!
Wang Xiao Ming presentation at Lujiazui Blockchain event
Wang Xiao Ming presenting at the BTAS2017 summit
Wang Xiao Ming Blockchain presentation
 
I won't go into some of the other “dubious” altcoins on the markets who claim to be in bed with companies like IBM, Huawei, Apple etc, but when you do some digging they have a registered address at a drop-mail and you can only find 3-4 Baidu links about the company on the internet - you have to question their trustworthiness.
 
So do I believe in HPB…..very much so :-)
 
Currently the HPB price sits at $6.00 on www.bibox.com and isn’t really moving. I believe this is due to a number of factors.
 
Firstly, the entire crypto market has gone bonkers this last week or so, although this apparently happens every January.
 
Secondly the coin is still on relatively obscure exchanges that most people have never heard of.
 
Thirdly, because of the current lack of expose, the coin trades at low volume, which means (in my opinion...I can’t actually prove it) that crypto “bots” are effectively controlling the price range as it goes up to around $9.00 and then back down to $6.00, then back up to $9.00, then back down to $6.00 and over and over again.
 
Finally the testnet proof of concept hasn’t been launched yet. We’ve been told that it’s Q1 this year, so it’s imminent, and as soon as it launches I think the company will get a lot more press coverage.
 
UPDATE - It has now been officially confirmed that HPB will be listed on Kucoin
The tentative date is February 5th
 
So, for the investors out there….. It’s trading at $6.00 per coin, and with a circulating supply of 28 million coins, it gives the company an mcap of $168,000,000
 
So what could the price go to? I have no idea as unfortunately I do not have a crystal ball in my possession….however some are referring to HPB as the EOS of China (only HPB has an actual working, hardware-focussed product as opposed to plans for the future) and EOS currently has an mcap of $8.30 billion dollars…… so for HPB to match that mcap, the price of HPB would have to effectively increase almost 50-fold to $296.4 - Now that's obviously on the optimistic side, but even still, it shows its potential. :-)
 
I believe hardware acceleration alongside software optimization is the key to blockchain success moving forward. I guess it’s up to you to decide if you agree or disagree with me.
 
Whatever you do though, remember the most important thing of all: DYOR!
 
My wallet address, if you found this useful and would like to donate is: 0xd7FAbB675D9401931CefE9E633Ef525BfBa7a139
submitted by jpowell79 to u/jpowell79 [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
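To make the thought experiment concrete, here is a minimal sketch in Python (the shard rule and the helper names are assumptions of this thought experiment, not part of any real implementation):

    # Base58 alphabet used by Bitcoin addresses (58 characters)
    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_of(address: str) -> int:
        """Assign an address to one of 58 shards by its last character."""
        return BASE58_ALPHABET.index(address[-1])

    def follows_convention(send_from: str, send_to: str) -> bool:
        """The "instant sharding" rule: both addresses must be in the same shard."""
        return shard_of(send_from) == shard_of(send_to)

    # A miner who picked shard 17 would only mine transactions whose
    # addresses all map to shard 17.
    print(shard_of("1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"))  # prints 1 (the shard of "2")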
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction (see the sketch after this list). The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current situation where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
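As a sketch of the "slightly more sophisticated decision criterion" mentioned in the first bullet (the hashing rule here is an illustrative assumption, not a concrete proposal - any deterministic function of the transaction's addresses would do):

    import hashlib

    NUM_SHARDS = 58

    def shard_of_tx(from_addrs: list[str], to_addrs: list[str]) -> int:
        """Deterministically assign a multi-input / multi-output transaction
        to a shard by hashing its sorted address sets, so that every node
        computes the same shard for the same transaction."""
        preimage = "|".join(sorted(from_addrs)) + "||" + "|".join(sorted(to_addrs))
        digest = hashlib.sha256(preimage.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_SHARDS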
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
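A toy illustration of that "grinding" process, in the spirit of vanity address generation (note: the address derivation below is a hash-based stand-in, since deriving real Bitcoin addresses would need an ECDSA library):

    import os, hashlib

    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def toy_address(privkey: bytes) -> str:
        """Stand-in for real key-to-address derivation: hash the key
        and render the digest as 33 base58 characters."""
        n = int.from_bytes(hashlib.sha256(privkey).digest(), "big")
        chars = []
        for _ in range(33):
            n, r = divmod(n, 58)
            chars.append(BASE58_ALPHABET[r])
        return "1" + "".join(chars)

    def grind(last_char: str):
        """Generate keys until the derived address ends in last_char
        (on average about 58 attempts)."""
        while True:
            priv = os.urandom(32)
            addr = toy_address(priv)
            if addr.endswith(last_char):
                return priv, addr

    priv, addr = grind("z")
    print(addr)  # an address in the "z" shard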
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel, and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
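Here is a tiny self-contained illustration of that property, using word counting as a stand-in "search" problem (the problem choice is mine, purely to show the shape of the operations):

    from collections import Counter
    from functools import reduce

    def solve(text):
        """Solve the whole problem directly: count every word."""
        return Counter(text.split())

    def decompose(text, n):
        """Split the search space into roughly n independent chunks."""
        words = text.split()
        step = max(1, len(words) // n)
        return [" ".join(words[i:i + step]) for i in range(0, len(words), step)]

    def sub_solve(chunk):
        """Solve one sub-problem in isolation."""
        return Counter(chunk.split())

    def recompose(partials):
        """Merge the sub-solutions back into the overall solution."""
        return reduce(lambda a, b: a + b, partials, Counter())

    text = "to be or not to be"
    # The essential property: (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
    assert recompose([sub_solve(c) for c in decompose(text, 3)]) == solve(text)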
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Gregory Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
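That description maps directly onto a few lines of Python (the student data is invented for illustration):

    from collections import defaultdict

    students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]

    # Map(): emit a (first_name, 1) pair per student - sorting into queues
    mapped = [(name, 1) for name in students]

    # Shuffle: group the pairs by key - one queue for each name
    queues = defaultdict(list)
    for name, one in mapped:
        queues[name].append(one)

    # Reduce(): summarize each queue - counting the students in it
    name_frequencies = {name: sum(ones) for name, ones in queues.items()}
    print(name_frequencies)  # {'Alice': 3, 'Bob': 2, 'Carol': 1}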
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
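Framed this way, the core check is just a membership search over previously seen spends. A minimal per-shard sketch (the data shapes are assumptions for illustration - real nodes index unspent outputs, but the search framing is the same):

    # Each shard keeps an index of outpoints (txid, output_index) already
    # spent within that shard; double-spend prevention is a search of it.
    spent_outpoints = set()

    def try_append(spends):
        """Search the shard's history for a previous spend of the same
        coins; record the new transaction only if none is found."""
        if any(op in spent_outpoints for op in spends):
            return False  # double-spend detected
        spent_outpoints.update(spends)
        return True

    assert try_append([("aa11", 0)]) is True    # first spend: accepted
    assert try_append([("aa11", 0)]) is False   # same coin again: rejected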
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (the BOINC-based, permissionless, decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's permissioned, centralized MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, far more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc [link] [comments]

Subreddit Stats: btc posts from 2018-05-14 to 2018-05-19 12:59 PDT

Period: 5.31 days
                  Submissions   Comments
Total                     783      12622
Rate (per day)         147.47    2006.25
Unique Redditors          432       1955
Combined Score          23860      47871

Top Submitters' Top Submissions

  1. 1470 points, 7 submissions: hunk_quark
    1. Purse.io is paying its employees in Bitcoin Cash. (441 points, 63 comments)
    2. Forbes Author Frances Coppola takes blockstream to task. (359 points, 35 comments)
    3. Purse CEO Andrew Lee confirms they are paying employees in BCH and native BCH integration update will be coming soon! (334 points, 43 comments)
    4. After today's BCH Upgrade, longer posts are now enabled on memo.cash! (245 points, 31 comments)
    5. Bitcoin cash fund is providing cashback and prizes for using Bitcoin (BCH) on purse.io next month. (76 points, 4 comments)
    6. As an existential threat to his criminal enterprise Wells Fargo, Bitcoin is rat poison for Warren Buffet. (15 points, 1 comment)
    7. Craig Wright in Rwanda- "I've got more money than your country". With advocates like these, no wonder BCH has a PR problem. (0 points, 6 comments)
  2. 1419 points, 6 submissions: tralxz
    1. Breaking News: Winklevoss Brothers Bitcoin Exchange Adds Bitcoin Cash support! (510 points, 115 comments)
    2. Jihan Wu was asked "Why are the miners still supporting Bitcoin Core? Is it just a short term profitability play?", he answered: "Yes, exactly." (273 points, 214 comments)
    3. Cobra:"That feeling when Blockstream, [...] release Liquid, a completely centralized sidechain run only by trusted nodes and designed for banks, financial institutions and exchanges." (240 points, 145 comments)
    4. Jihan Wu on Bloomberg predicting Bitcoin Cash at $100,000 USD in 5 years. (169 points, 65 comments)
    5. CNBC's Fast Money: Ran NeuNer says he would HODL Bitcoin Cash and sell Bitcoin Core. (168 points, 58 comments)
    6. Coindesk: "Florida Tax Collector to Accept Bitcoin, Bitcoin Cash Payments" (59 points, 8 comments)
  3. 1221 points, 14 submissions: Kain_niaK
    1. I am getting flashbacks from when I tried to close my Bank of America account ... (348 points, 155 comments)
    2. moneybutton.com is a configurable client-side Bitcoin Cash (BCH) wallet in an iframe. When the user makes a payment, a webhook URL is called allowing your app to respond to the payment, such as displaying content behind a pay wall. (189 points, 37 comments)
    3. Bitcoin Cash can turn in to the biggest non violent protest against the establishment ever : "We simply stop using their money." Which is a great way of getting edgy teenagers to join us. There is an almost infinite supply of edgy teenagers in the world. (153 points, 42 comments)
    4. Purse.io at the Coingeek conference in HK just announced native BCH support!!! They are also launching a new software implementation called "bcash" (111 points, 6 comments)
    5. Who is all doing stuff like this on Reddit? Do we realize that we can make the Bitcoin Cash economy easily 10 times as big just by getting Reddit users on board? All they need is a good first user experience. Bitcoin needs to be experienced above everything else before you even talk about it. (109 points, 53 comments)
    6. /cryptocurrency in meltdown (88 points, 16 comments)
    7. Ryan X Charles from Yours.org had an amazing to the point presentation about the future of content creation on the internet. (85 points, 12 comments)
    8. So now that we have had tippr and chaintip for a while, what are you guys favourite and why? Or do you use both? (43 points, 25 comments)
    9. John Moriarty about why you can't separate Bitcoin from Blockchain. (37 points, 12 comments)
    10. The next wave of attack will be all the big internet giants supporting Bitcoin Core and LN. Facebook, Microsoft, Twitter, I bet you that the more successful Bitcoin Cash becomes the more you will see big cooperation’s be forced to go with compromised Bitcoin. (25 points, 28 comments)
  4. 623 points, 5 submissions: BitcoinXio
    1. Frances Coppola on Twitter: “Congratulations, Blockstream, you have just reinvented the interbank lending market.” (414 points, 139 comments)
    2. We have a new alternative public mod logs (101 points, 35 comments)
    3. Bitcoin Cash (BCH) sponsored Mei Yamaguchi's championship fight will be live on YouTube in an hour or so (2 fights left before hers - Livestream) (53 points, 22 comments)
    4. Uncensored: /t/Bitcoin (reddit without the censorship) (49 points, 43 comments)
    5. Information post about the recent suspension and re-activation of publicmodlogs (Update) (6 points, 0 comments)
  5. 582 points, 1 submission: VanquishAudio
    1. Can’t believe this was available. My new license plate.. (582 points, 113 comments)
  6. 493 points, 8 submissions: MemoryDealers
    1. Bitcoin Cash supporting Bitmain is leading a $110M investment in Circle. This is super bullish for BCH on Circle! (122 points, 24 comments)
    2. Bitcoin Core supporter who scammed his way into consensus without a ticket is busy calling Bitcoin.com and others scammers at the event. (98 points, 140 comments)
    3. I see lots of people coming here every day asking why we think Bitcoin is BCH. Here is why I think so: (79 points, 73 comments)
    4. The Bitcoin.com CTO made a fun little transaction puzzle with one of the new op-codes: (79 points, 11 comments)
    5. Bitcoin Cash is the fighter that everyone loves. (42 points, 86 comments)
    6. This graphic aged well over the last 3 months. (34 points, 64 comments)
    7. An example of the sophisticated arguments BTC supporters use against BCH supporters. (20 points, 12 comments)
    8. Tired of staying up all night looking at CoinMarket Cap? Give Bitcoin.com's Satoshi Pulse a try in night mode! (19 points, 11 comments)
  7. 475 points, 4 submissions: rdar1999
    1. Consensus 2018 sucked hard. Superficial talks, ridiculous ticket price, overcrowded venue. (235 points, 78 comments)
    2. See in this twitter thread Luke Jr actually arguing that PayPal is cheaper than BCH!! Is this guy in full delirium? Or just spouts misinformation on purpose? (173 points, 227 comments)
    3. Upgrade completed at height 530356! (59 points, 2 comments)
    4. On decentralization and archival nodes. (8 points, 5 comments)
  8. 465 points, 17 submissions: Windowly
    1. Yeah!! "We are pleased to announce that the new Bitcoin Cash address format has been implemented on QuadrigaCX. This will help our users to easily distinguish Bitcoin and Bitcoin Cash addresses when funding/withdrawing their account. The BCH legacy addresses will still be supported." (164 points, 8 comments)
    2. "Friendly reminder: If you pay more than the bare minimum (1/sat per byte) to send a #BitcoinCash BCH transaction - you paid too much. 👍🏻"~James Howells (99 points, 12 comments)
    3. Bitpay Enables Bitcoin Cash (BCH) and Bitcoin Core (BTC) for Tax Payments - Bitcoin News (59 points, 31 comments)
    4. "I like the symbology of 1,000,000 ␢ = 1 ₿ for #BitcoinCash What the 'little b' units are called I don't care that much, it will settle in whether it remains 'bits', or 'cash', or 'credits' ... " (55 points, 54 comments)
    5. ~Public Service Announcement~ Please be extra careful using Bitcoin Cash on the Trezor! They have not yet implemented CashAddr Security. Make sure to covert your address with cashaddr.bitcoincash.org and double check with a block explorer to make sure the address is the same. (39 points, 12 comments)
    6. "WRT telling others what to do or not to do (as opposed to asking them) on the point of making proposals or petitioning others - I hope we can take the time to re-read and take to heart @Falkvinge 's excellent dispute resolution advice in . ." [email protected] (33 points, 0 comments)
    7. Why I support Bitcoin Cash (BCH). And why I support cash-denominated wallets. 1$ is inconsequential pocket change to some. To others it is their livelihood. Thank you @BitcoinUnlimit & @Bitcoin_ABC for your work in this regard. (7 points, 16 comments)
    8. If anyone feels that they are forced or imposed to do anything, or threatened by any other person or group’s initiative, he doesn’t understand Bitcoin Cash (BCH). The beauty of Bitcoin Cash is that innovation & creativity is permissionless. Let’s celebrate new ideas together! (5 points, 1 comment)
    9. "Bits as a unit right now (100sat), no matter named bits or cash or whatever, is extremely useless at this time and in the near future : Its worth 1/11 of a CENT right now. Even it suddenly 10x, its still only 1 single cent."~Reina Nakamoto (2 points, 7 comments)
    10. Love this converter! Thank you @rogerkver ! At present 778.17 ␢ = 1 USD (1,000,000 ␢ = 1 ₿) Tools.bitcoin.com (2 points, 0 comments)
  9. 443 points, 33 submissions: kairostech99
    1. Purse.io Adds Native BCH Support and Launches 'Bcash' (116 points, 40 comments)
    2. Openbazaar Enables Decentralized Peer-To-Peer Trading of 44 Cryptocurrencies (93 points, 21 comments)
    3. Thailand Waives 7% VAT for Individual Cryptocurrency Investors (84 points, 1 comment)
    4. Switzerland Formally Considers State Backed Cryptocurrency (26 points, 8 comments)
    5. Research Paper Finds Transaction Patterns Can Degrade Zcash Privacy (24 points, 2 comments)
    6. Japan's GMO Gets Ready to Start Selling 7nm Bitcoin Mining Chips (21 points, 0 comments)
    7. MMA Fighter Mei Yamaguchi Comes Out Swinging for Bitcoin.com (18 points, 5 comments)
    8. Bitmain Hits Back at “Dirty Tricks” Accusations (15 points, 4 comments)
    9. Circle Raises $110Mn With Plans to Launch USD-Backed Coin (6 points, 2 comments)
    10. Coinbase Remains the Most Successful and Important Company in the Crypto Industry (5 points, 7 comments)
  10. 420 points, 4 submissions: crypto_advocate
    1. Jihan on Roger: "I learnt a lot about being open and passionate about what you believe in from him[Roger]" (161 points, 45 comments)
    2. Bitcoin.com's first officially sponsored MMA fighter head to toe in Bitcoin Cash gear on her walkout - "She didn't win but won the hearts of a lot of new fans" (150 points, 14 comments)
    3. "Bitcoin Community is thriving again" Roger Ver at CoinGeek (98 points, 8 comments)
    4. Today is a historic day. [Twitter] (11 points, 1 comment)
  11. 376 points, 2 submissions: singularity87
    1. Bitcoin Cash Fund has partnered with Purse.io to launch their suite of BCH services and tools. (212 points, 15 comments)
    2. Proposal - Makes 'bits' (1 millionth BCH) the standard denomination and 'BIT' the ticker. (164 points, 328 comments)
  12. 349 points, 1 submission: bearjewpacabra
    1. UPGRADE COMPLETE (349 points, 378 comments)
  13. 342 points, 1 submission: Devar0
    1. Congrats! Bitcoin Cash is now capable of a 32MB block size, and new OP_CODES are reactivated! (342 points, 113 comments)
  14. 330 points, 3 submissions: btcnewsupdates
    1. Amaury Sechet in HK: "We want to be as boring as possible... If we do our job well, you won't even notice us." (173 points, 29 comments)
    2. This is the way forward: Miners Consider Using Bitcoin Cash Block Reward to Fund Development (136 points, 86 comments)
    3. Merchant adoption: unexpected success. Perhaps the community should now put more of its focus on canvassing end users. (21 points, 7 comments)
  15. 318 points, 3 submissions: HostFat
    1. From One to Two: Bitcoin Cash – Purse: Save 20%+ on Amazon [2018] (173 points, 25 comments)
    2. Open Bazzar v2.2.0 - P2P market and P2P exchange now! (92 points, 15 comments)
    3. Tree Signature Variations using Commutative Hash Trees - Andrew Stone (53 points, 5 comments)
  16. 287 points, 1 submission: Libertymark
    1. Congrats BCH developers, we appreciate your work here and continued innovation (287 points, 79 comments)
  17. 260 points, 9 submissions: unitedstatian
    1. The guy had 350 bucks received via Lightning Network but he can't even close the channels to actually withdraw the bitcoins. (135 points, 188 comments)
    2. The first megabytes are far more crucial than the 100th. Not every MB was born equal and by giving up on adoption for years Core may have given up on adoption forever. (69 points, 20 comments)
    3. Looks like fork.lol is misleading users on purpose into thinking the fees on BTC and BCH are the same (28 points, 32 comments)
    4. Just because the nChain patents aren't on the base protocol level doesn't mean it's a good idea, BCH could end up with patents which are so part of its normal use it will effectively be part of it. (13 points, 33 comments)
    5. [Not a meme] This is what the TxHighway BTC road should look like when the memepool is large. The unconfirmed tx's should be represented with cars waiting in the toll lines. (9 points, 2 comments)
    6. Lighthouse should have a small button to easily integrate it with any web page where a task is required (4 points, 1 comment)
    7. Poland Becomes World's First to put Banking Records on the Blockchain (2 points, 3 comments)
    8. If I were Core and wanted to spam BCH, and since spamming with multiple tx's will be counterproductive, I'd pay unnecessarily high fees instead (0 points, 32 comments)
    9. What happens when "the man" starts blocking nodes in China now that they function as mass media? (0 points, 1 comment)
  18. 259 points, 2 submissions: outofsync42
    1. Sportsbook.com now accepting BCH!! (215 points, 42 comments)
    2. BITCOIN CASH VS BITCOIN 2018 | Roger Ver on CNBC Fast Money (44 points, 15 comments)
  19. 255 points, 2 submissions: Bitcoinmathers
    1. Bitcoin Cash Upgrade Milestone Complete: 32MB and New Features (255 points, 90 comments)
    2. Bitgo Launches Institutional Grade Custodial Services Suite (0 points, 0 comments)
  20. 223 points, 2 submissions: ForkiusMaximus
    1. Japanese tweeter makes a good point about BTC: "You don't call it an asset if it crumbles away every time you go to use it. You call it a consumable." (141 points, 21 comments)
    2. Jimmy Nguyen: Bitcoin Cash can function for higher level technical programming (82 points, 3 comments)
  21. 218 points, 3 submissions: mccormack555
    1. Trying to see both sides of the scaling debate (193 points, 438 comments)
    2. Has Craig Wright Committed Perjury? New Information in the Kleiman Case (25 points, 56 comments)
    3. Thoughts on this person as a representative of Bitcoin Cash? (0 points, 21 comments)
  22. 216 points, 4 submissions: jimbtc
    1. $50K worth of crypto to anyone who leaks the inner communications of the #CultOfCore (183 points, 29 comments)
    2. Liquidity Propaganda: "The formation of payment hubs happens naturally even in two-party payment channels like the Lightning Network.". LOL. Fuel the LN vs Liquidity fire :D (31 points, 7 comments)
    3. WBD 017 - Interview with Samson Mow (2 points, 19 comments)
    4. If you wanted further proof that Andreas Antonopolous is a BCore Coreonic Cuck then here's a new speech from May 6th (0 points, 8 comments)
  23. 212 points, 1 submission: porlybe
    1. 32 Lanes on TXHighway (212 points, 96 comments)
  24. 211 points, 3 submissions: Akari_bit
    1. "AKARI-PAY Advanced" Released, for Bitcoin Cash! (73 points, 6 comments)
    2. 129% funded! We flew by our first BCH fundraising goal, demonstrating AKARI-PAY! HUGE SUCCESS! (70 points, 7 comments)
    3. Devs.Cash updated with new Dev projects, tools, and bounties for Bitcoin Cash! (68 points, 7 comments)
  25. 210 points, 1 submission: CollinEnstad
    1. Purse.io Introduces 'bcash', an Implementation of the BCH protocol, just like ABC, BU, or Classic (210 points, 125 comments)
  26. 206 points, 20 submissions: marcelchuo3
    1. Bitcoin Cash Community Sees OP_Code Innovation After Upgrade (70 points, 3 comments)
    2. Coingeek Conference 2018: Bitcoin Cash Innovation Shines in Hong Kong (65 points, 4 comments)
    3. Bitfinex Starts Sharing Customer Tax Data with Authorities (16 points, 3 comments)
    4. Colorado Proposal Aims to Allow Cryptocurrency Donations for Campaigns (12 points, 2 comments)
    5. Thailand Commences Cryptocurrency Regulations Today (8 points, 1 comment)
    6. Bitcoin Mining Manufacturer Canaan Files for Hong Kong Stock Exchange IPO (7 points, 0 comments)
    7. Bitcoin in Brief Thursday: OECD Explores Cryptocurrencies, Central Asian Powerhouse Calls for UN Crypto Rules (5 points, 0 comments)
    8. Moldova with New Crypto Exchange and a Token (5 points, 0 comments)
    9. Korean Regulators Widen Investigation of Cryptocurrency Exchanges (4 points, 0 comments)
    10. Arrest Warrants Issued to Employees of South Korean Crypto Exchange (3 points, 0 comments)
  27. 198 points, 1 submission: anberlinz
    1. I used to think BCH was the bad guy, now I'm beginning to change the way I see it... Convince me that BCH is the real Bitcoin (198 points, 294 comments)
  28. 196 points, 1 submission: Chris_Pacia
    1. First tree signature on Bitcoin Cash using new opcodes (196 points, 61 comments)
  29. 191 points, 3 submissions: cryptorebel
    1. Coinbase blog from 2015: "bits is the new default". The reason "bits" stopped being used was because of high fees on segwitcoin. Lets bring back "bits" on the real Bitcoin-BCH! (106 points, 66 comments)
    2. Here is the Bitcoin-BCH countdown clock to the hard fork upgrade with new 32MB block limit capacity, and re-enabled op-codes. Looks like its about 17 hours away. (78 points, 2 comments)
    3. This is Core's idea of open development, you are "super welcome" to work on anything that the gatekeepers say is ok. People tout Core as having so many devs but it doesn't matter much when you have to go through the gatekeepers. (7 points, 14 comments)
  30. 186 points, 2 submissions: coinfeller
    1. Bitcoin Cash France is offering 32 000 bits of BCH for Tipping Tuesday to celebrate the upgrade from 8MB to 32MB (178 points, 101 comments)
    2. How the Bitcoin Cash upgrade from 8MB to 32MB seems like :) (8 points, 10 comments)
  31. 185 points, 3 submissions: money78
    1. Congratulations Bitcoin Cash for the 32MB, WTG! (93 points, 5 comments)
    2. Roger Ver on CNBC's Fast Money again and he says bitcoin cash will double by the end of the year! (68 points, 30 comments)
    3. The Bitcoin Cash upgrade: over 8 million transactions per day, data monitoring, and other possibilities (24 points, 3 comments)
  32. 182 points, 26 submissions: haumeris28
    1. MMA Fighter Mei Yamaguchi Sponsored By Bitcoin Cash Proponent Roger Ver (32 points, 3 comments)
    2. Swiss Government is Studying the Risks and Benefits of State-Backed Cryptocurrency (30 points, 3 comments)
    3. Circle and Bitmain partner for US Dollar backed Token (25 points, 18 comments)
    4. Apple Co-Founder - Ethereum Has the Potential to be the Next Apple (16 points, 13 comments)
    5. Florida County To Begin Accepting Tax Payments in Crypto (14 points, 0 comments)
    6. ‘Blockchain Will Drive the Next Industrial Revolution’, According to a Major Wall Street Firm (11 points, 0 comments)
    7. Bitcoin Cash Undergoes a Hard Fork, Increases Block Size (10 points, 3 comments)
    8. Newly Appointed Goldman Sachs Vice President Leaves for Cryptocurrency (7 points, 5 comments)
    9. OKEx CEO Quits as Exchange Becomes World’s Largest Surpassing Binance (7 points, 2 comments)
    10. Texas Regulators Shut Down Crypto Scam, Falsely Using Jennifer Aniston and Prince Charles for Promotion (6 points, 0 comments)
  33. 174 points, 31 submissions: MarkoVidrih
    1. US Regulators Agree That They Will Not Will Not Suppress Cryptocurrencies (96 points, 10 comments)
    2. Why Stable Coins Are the New Central Bank Money (28 points, 9 comments)
    3. First Facebook, Then Google, Twitter and LinkedIn, Now Microsoft’s Bing Will Ban All Cryptocurrency Ads (10 points, 2 comments)
    4. Circle Raises $110 Mln and Plans to Use Circle USD Coin (USDC) instead of Tether (USDT) (9 points, 1 comment)
    5. 9 Million New Users Are About to Enter in Crypto Market (4 points, 6 comments)
    6. Japan’s Largest Commercial Bank Will Try its Own Cryptocurrency in 2019 (4 points, 0 comments)
    7. The Viability of the ERC-948 Protocol Proposal (4 points, 0 comments)
    8. A letter from Legendary VC Fred Wilson to Buffet: The Value of Bitcoin Lies in the Agreement Itself (3 points, 1 comment)
    9. This is Just The Beginning of Crypto! (3 points, 0 comments)
    10. What? U.S. SEC Just Launches ICO Called HoweyCoin (3 points, 2 comments)
  34. 170 points, 2 submissions: plaguewiind
    1. Twitter restricting accounts that mention Blockstream (104 points, 49 comments)
    2. This is actually fantastic! Jimmy Nguyen on ‘The Future of Bitcoin (Cash)’ at The University of Exeter (66 points, 31 comments)
  35. 168 points, 1 submission: MartinGandhiKennedy
    1. [COMPELLING EVIDENCE] Proof that Luke Jr does not lie (168 points, 41 comments)
  36. 167 points, 1 submission: higher-plane
    1. BCH showerthought: The first one or two killer apps for Bitcoin Cash that drive mass adoption will be the thing that decides the standards/denominations based on what people are using and catches on. Not a small forum poll or incessantly loud Twitter spam. (167 points, 24 comments)
  37. 160 points, 1 submission: SharkLaserrrrr
    1. [PREVIEW] Looks like Lighthouse powered by Bitcoin Cash is coming together nicely thanks to the hard work of an anonymous developer. I wonder how Mike Hearn feels about his project being resurrected. (160 points, 24 comments)
  38. 160 points, 1 submission: playfulexistence
    1. Lightning Network user has trouble with step 18 (160 points, 165 comments)

Top Commenters

  1. bambarasta (898 points, 154 comments)
  2. Kain_niaK (706 points, 177 comments)
  3. Ant-n (691 points, 145 comments)
  4. H0dl (610 points, 116 comments)
  5. Adrian-X (538 points, 93 comments)
  6. KoKansei (536 points, 35 comments)
  7. LovelyDay (456 points, 78 comments)
  8. 324JL (444 points, 109 comments)
  9. LexGrom (428 points, 132 comments)
  10. Erumara (427 points, 44 comments)
  11. lubokkanev (404 points, 119 comments)
  12. LuxuriousThrowAway (397 points, 66 comments)
  13. rdar1999 (387 points, 82 comments)
  14. zcc0nonA (379 points, 100 comments)
  15. MemoryDealers (369 points, 18 comments)
  16. RollieMe (366 points, 29 comments)
  17. Churn (352 points, 32 comments)
  18. jimbtc (349 points, 72 comments)
  19. btcnewsupdates (338 points, 61 comments)
  20. blockthestream (338 points, 25 comments)
  21. SharkLaserrrrr (335 points, 33 comments)
  22. kondratiex (311 points, 80 comments)
  23. trolldetectr (306 points, 58 comments)
  24. ForkiusMaximus (300 points, 47 comments)
  25. jonald_fyookball (300 points, 35 comments)
  26. mccormack555 (294 points, 78 comments)
  27. playfulexistence (292 points, 40 comments)
  28. scotty321 (287 points, 46 comments)
  29. BitcoinXio (269 points, 23 comments)
  30. TiagoTiagoT (263 points, 96 comments)
  31. Bitcoinopoly (260 points, 39 comments)
  32. homopit (249 points, 48 comments)
  33. DoomedKid (249 points, 41 comments)
  34. cryptorebel (246 points, 54 comments)
  35. Deadbeat1000 (243 points, 36 comments)
  36. mrtest001 (239 points, 78 comments)
  37. BeijingBitcoins (235 points, 16 comments)
  38. tippr (227 points, 122 comments)
  39. chainxor (226 points, 24 comments)
  40. emergent_reasons (222 points, 56 comments)
  41. morli (221 points, 1 comment)
  42. patrick99e99 (220 points, 8 comments)
  43. crasheger (214 points, 39 comments)
  44. ---Ed--- (213 points, 81 comments)
  45. radmege (212 points, 35 comments)
  46. anberlinz (212 points, 33 comments)
  47. unstoppable-cash (211 points, 46 comments)
  48. taipalag (210 points, 35 comments)
  49. rowdy_beaver (210 points, 25 comments)
  50. RareJahans (206 points, 45 comments)

Top Submissions

  1. Can’t believe this was available. My new license plate.. by VanquishAudio (582 points, 113 comments)
  2. Breaking News: Winklevoss Brothers Bitcoin Exchange Adds Bitcoin Cash support! by tralxz (510 points, 115 comments)
  3. Purse.io is paying its employees in Bitcoin Cash. by hunk_quark (441 points, 63 comments)
  4. Frances Coppola on Twitter: “Congratulations, Blockstream, you have just reinvented the interbank lending market.” by BitcoinXio (414 points, 139 comments)
  5. Forbes Author Frances Coppola takes blockstream to task. by hunk_quark (359 points, 35 comments)
  6. UPGRADE COMPLETE by bearjewpacabra (349 points, 378 comments)
  7. I am getting flashbacks from when I tried to close my Bank of America account ... by Kain_niaK (348 points, 155 comments)
  8. Congrats! Bitcoin Cash is now capable of a 32MB block size, and new OP_CODES are reactivated! by Devar0 (342 points, 113 comments)
  9. Purse CEO Andrew Lee confirms they are paying employees in BCH and native BCH integration update will be coming soon! by hunk_quark (334 points, 43 comments)
  10. Congrats BCH developers, we appreciate your work here and continued innovation by Libertymark (287 points, 79 comments)

Top Comments

  1. 221 points: morli's comment in Can’t believe this was available. My new license plate..
  2. 181 points: patrick99e99's comment in I used to think BCH was the bad guy, now I'm beginning to change the way I see it... Convince me that BCH is the real Bitcoin
  3. 173 points: RollieMe's comment in Trying to see both sides of the scaling debate
  4. 151 points: blockthestream's comment in Bitcoin Core supporter who scammed his way into consensus without a ticket is busy calling Bitcoin.com and others scammers at the event.
  5. 136 points: seleneum's comment in I am getting flashbacks from when I tried to close my Bank of America account ...
  6. 132 points: Falkvinge's comment in Talking to himself makes it so obvious that they're the same. lol
  7. 127 points: MemoryDealers's comment in Bitcoin Core supporter who scammed his way into consensus without a ticket is busy calling Bitcoin.com and others scammers at the event.
  8. 119 points: BitcoinXio's comment in Frances Coppola on Twitter: “Congratulations, Blockstream, you have just reinvented the interbank lending market.”
  9. 116 points: Erumara's comment in I used to think BCH was the bad guy, now I'm beginning to change the way I see it... Convince me that BCH is the real Bitcoin
  10. 115 points: KoKansei's comment in Purse.io Introduces 'bcash', an Implementation of the BCH protocol, just like ABC, BU, or Classic
Generated with BBoe's Subreddit Stats
submitted by subreddit_stats to subreddit_stats [link] [comments]

Reducing the block rate instead of increasing the maximum block size | Sergio Lerner | May 11 2015

Sergio Lerner on May 11 2015:
In this e-mail I'll do my best to argue that if you accept that
increasing the transactions/second is a good direction to go, then
increasing the maximum block size is not the best way to do it. I argue
that the right direction to go is to decrease the block rate to 1
minute, while keeping the block size limit to 1 Megabyte (or increasing
it from a lower value such as 100 Kbyte and then have a step function).
I'm backing up my claims with many hours of research simulating the
Bitcoin network under different conditions [1]. I'll try to convince
you by responding to each of the arguments I've heard against it.
Arguments against reducing the block interval
  1. It will encourage centralization, because participants of mining
pools will lose more money because of excessive initial block template
latency, which leads to higher stale shares.
When a new block is solved, that information needs to propagate
throughout the Bitcoin network up to the mining pool operator nodes,
then a new block header candidate is created, and this header must be
propagated to all the mining pool users, either by a push or a pull
model. Generally the mining server pushes new work units to the
individual miners. If done the other way around, the server would need
to handle a high load of continuous work requests that would be
difficult to distinguish from a DDoS attack. So if the server pushes
new block header candidates to clients, then the problem boils down to
increasing the bandwidth of the servers to achieve a tenfold increase
in work distribution, or distributing the servers geographically to
achieve lower latency. Propagating blocks does not require additional
CPU resources, so mining pool administrators would need to moderately
increase their investment in server infrastructure to achieve lower
latency and higher bandwidth, but I guess the investment would be low.
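As a rough sanity check of that claim, here is a back-of-the-envelope
sketch in Python; the work-unit size and client count are illustrative
assumptions, not figures from the post:

# Cost of pushing new work units ten times as often, per pool server.
# WORK_UNIT_BYTES and CLIENTS are assumed values for illustration.
WORK_UNIT_BYTES = 200    # assumed: 80-byte header candidate plus protocol overhead
CLIENTS = 10_000         # assumed number of connected pool miners
BLOCK_INTERVAL_S = 60    # 1-minute blocks

burst_bytes = WORK_UNIT_BYTES * CLIENTS                 # one push per client per block
avg_bandwidth_bps = burst_bytes * 8 / BLOCK_INTERVAL_S  # averaged over the interval

print(f"burst per block: {burst_bytes / 1e6:.1f} MB")
print(f"average outbound bandwidth: {avg_bandwidth_bps / 1e6:.2f} Mbit/s")
# -> burst per block: 2.0 MB; average outbound bandwidth: 0.27 Mbit/s

Even with ten pushes per ten minutes instead of one, a mid-sized pool
stays well within commodity server bandwidth, which is consistent with
the low-investment claim.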
  2. It will increase the probability of a block-chain split
The convergence of the network relies on the diminishing probability of
two honest miners creating simultaneous competing block chains. To
sustain a chain competition, competing blocks must be generated almost
simultaneously (within a time window approximately bounded by the
network's average block propagation delay). The probability of a
sustained competition decreases exponentially with the number of
blocks. In fact, the probability of a sustained competition over ten
1-minute blocks is about one million times lower than the probability
of a competition over a single 10-minute block, the same elapsed time.
So even though the competition probability over six 1-minute blocks is
higher than over six 10-minute blocks, this does not imply that
reducing the block interval increases the chance of a split; on the
contrary, for the same elapsed time it reduces it.
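A toy model makes the exponential decay concrete. The Poisson arrival
model and the 5-second propagation delay below are my assumptions for
illustration, not numbers from the post:

import math

d = 5.0  # assumed average block propagation delay, in seconds

def race_probability(block_interval_s: float) -> float:
    """Probability that a competing block is found within the
    propagation window of a freshly solved block (Poisson model)."""
    return 1.0 - math.exp(-d / block_interval_s)

p_10min = race_probability(600)   # race on one 10-minute block
p_1min = race_probability(60)     # race on one 1-minute block
p_sustained = p_1min ** 10        # race sustained over ten 1-minute blocks

print(f"race on one 10-minute block:          {p_10min:.2e}")     # ~8.3e-03
print(f"race sustained over ten 1-min blocks: {p_sustained:.2e}") # ~1.1e-11

Under these assumptions the sustained ten-block race is roughly 10^8
times less likely, in line with (indeed stronger than) the million-fold
claim above.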
  3. It will reduce the security of the network
The security of the network is based on three facts:
A - The miners are incentivized to extend the best chain.
B - The probability of a reversal based on a long block competition
decreases as more confirmation blocks are appended.
C - Renting or buying hardware to perform a 51% attack is costly.
A still holds. B holds for the same number of confirmation blocks, so 6
confirmation blocks on a 10-minute block-chain are approximately
equivalent to 6 confirmation blocks on a 1-minute block-chain. Only C
changes, as renting the hashing power for 6 minutes is ten times less
expensive than renting it for 1 hour. However, there is no shop where
one can rent 51% of the hashing power right now, nor will there
probably ever be if Bitcoin succeeds. Lastly, you can still require a
1-hour confirmation (60 1-minute blocks) for high-valued payments, so
the security decreases only if participants choose to decrease it.
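The trade-off in point C reduces to simple arithmetic. A minimal
sketch, with an arbitrary rental rate as the only assumption:

RENT_COST_PER_HOUR = 1.0  # arbitrary unit: renting 51% hashpower for 1 h

def attack_cost(confirmations: int, block_interval_min: float) -> float:
    """Cost of out-racing `confirmations` blocks, assuming the attacker
    must rent for roughly the honest chain's elapsed time."""
    return confirmations * block_interval_min / 60 * RENT_COST_PER_HOUR

print(attack_cost(6, 10))   # 6 x 10-min confirmations -> 1.0
print(attack_cost(6, 1))    # 6 x 1-min confirmations  -> 0.1 (10x cheaper)
print(attack_cost(60, 1))   # 60 x 1-min confirmations -> 1.0 (equivalent again)

Recipients who care can simply scale up the confirmation count, which
is exactly the point made above.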
  4. Reducing the block propagation time in the average case is good,
but what happens in the worst case?
Most methods proposed to reduce the block propagation delay do so only
in the average case. Any kind of block compression relies on both
parties sharing some previous information. In the worst case it's true
that a miner can create and try to broadcast a block that takes too
much time to verify or too much bandwidth to transmit. This is already
true on the Bitcoin network today. Nevertheless there is no such
incentive for miners, since they would be shooting themselves in the
foot. Peter Todd has argued that the best strategy for miners is
actually to reach 51% of the network, but not more; in other words, to
exclude the slowest 49%. But this strategy of creating bloated blocks
is too risky in practice, and surely doomed to fail, as network
conditions change dynamically. Also it would be perceived as an attack
on the network, and the miner (if it is a public mining pool) would
probably be blacklisted.
  5. Thousands of SPV wallets running on mobile devices would need to
be upgraded (thanks Mike).
That depends on the current upgrade rate for SPV wallets like Bitcoin
Wallet and BreadWallet. Suppose the upgrade rate is 80%/year: if we
develop the source code for the change now and apply the change in Q2
2016, then most of the nodes will already have upgraded by the time the
hardfork takes place. Also, a public notice telling people to upgrade
(on web pages, bitcointalk, SPV wallet warnings, coindesk) one year in
advance will give SPV wallet users plenty of time to upgrade.
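For a feel of how quickly the laggard share shrinks, a quick check
under the post's own 80%/year figure, plus my simplifying assumption of
a constant upgrade hazard:

UPGRADE_RATE = 0.80  # the post's supposition: 80% of SPV wallets upgrade per year

for years in (0.5, 1.0, 1.5, 2.0):
    not_upgraded = (1 - UPGRADE_RATE) ** years
    print(f"after {years} years: {not_upgraded:.1%} still un-upgraded")
# after 1.0 years: 20.0%; after 2.0 years: 4.0%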
  6. If there are 10x more blocks, then there are 10x more block
headers, and that increases the amount of bandwidth SPV wallets need to
catch up with the chain.
A standard smartphone with average cellular downstream speed downloads
2.6 headers per second (1600 kbits/sec) [3], so if synchronization were
done only at night, when the phone is connected to the power line, it
would take 9 minutes to synchronize with the 1440 headers/day. If a
person has to accept a payment and the smartphone is one day
out-of-sync, it takes less time to download all the missing headers
than to wait for a single 10-minute block confirmation. Obviously all
smartphones with 3G have a much higher downstream bandwidth, averaging
1 Mbps, so the whole synchronization takes less time than one 1-minute
block confirmation.
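The raw volume involved is small, as a quick computation shows; the
header size is the real 80-byte Bitcoin header, while the link speeds
are assumptions of mine:

HEADER_BYTES = 80         # actual Bitcoin block header size
BLOCKS_PER_DAY = 24 * 60  # 1440 headers/day at 1-minute blocks

daily_bits = BLOCKS_PER_DAY * HEADER_BYTES * 8  # ~0.92 Mbit per day

for name, mbps in (("slow cellular", 0.05), ("average 3G", 1.0)):
    seconds = daily_bits / (mbps * 1e6)
    print(f"{name}: {seconds:.1f} s to fetch one day of headers")
# slow cellular: 18.4 s; average 3G: 0.9 s

Even a 50 kbit/s link fetches a full day of 1-minute headers in under
twenty seconds.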
According to Cisco, mobile connection speeds increase 20% every year;
in four years they will have doubled, so mobile phones with
lower-than-average data connections will soon be able to catch up.
Also, there are low-hanging-fruit optimizations to the protocol that
have not been implemented: each header is 80 bytes in length, and when
a set of chained headers is transferred, the headers could be
compressed by stripping the 32 bytes of each header that can be derived
from the previous header's hash digest. So a 40% compression is already
possible by slightly modifying the wire protocol.
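This trick is simple enough to sketch concretely. A minimal Python
illustration follows, assuming the standard 80-byte header layout
(version 4 B, prev-hash 32 B, merkle root 32 B, time 4 B, bits 4 B,
nonce 4 B); it is an illustration of the idea, not a proposed wire
format:

import hashlib

def block_hash(header: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin block hashes."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def compress(headers: list[bytes]) -> bytes:
    """Send the first header whole; drop prev_hash (bytes 4..36) from
    every subsequent header, saving 32 of its 80 bytes (40%)."""
    out = headers[0]
    for h in headers[1:]:
        out += h[:4] + h[36:]
    return out

def decompress(blob: bytes) -> list[bytes]:
    """Rebuild each stripped prev_hash by hashing the prior header."""
    headers = [blob[:80]]
    pos = 80
    while pos < len(blob):
        chunk = blob[pos:pos + 48]
        prev = block_hash(headers[-1])
        headers.append(chunk[:4] + prev + chunk[4:])
        pos += 48
    return headers

# Round trip on a toy 2-header chain with dummy field values:
h0 = bytes(80)
h1 = (1).to_bytes(4, "little") + block_hash(h0) + bytes(44)
assert decompress(compress([h0, h1])) == [h0, h1]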
  7. There has been insufficient testing and/or insufficient research
into the technical/economic implications of reducing the block rate.
This is partially true. In the GHOST paper this has been analyzed, and
the problem was shown to be solvable for block intervals of just a few
seconds. There are several proof-of-work cryptocurrencies in existence
that have block intervals of one minute or less and they work just
fine. First there was Bitcoin with a 10-minute interval, then LiteCoin
with a 2.5-minute interval, then DogeCoin with 1 minute, and then
QuarkCoin with just 30 seconds. Every new cryptocurrency lowers it a
little bit. Some time ago I decided to research the block rate to
understand how the block interval impacts the stability and capability
of the cryptocurrency network, and I came up with the idea of the
DECOR+ protocol [4] (which requires changes in the consensus code). In
my research I also showed how the stale rate can easily be reduced with
changes only in the networking code, not in the consensus code. These
networking optimizations (O(1) propagation using headers-first or
IBLTs) can be added later.
Modifying Bitcoin to accommodate the change to lower the block rate
requires at least:
... have upgraded.
... version 2 as being multiplied by 10.
All changes comprise no more than 15 lines of code, which is much less
than the number of lines modified by Gavin's 20 MB patch.
In conclusion, I haven't yet heard a good argument against lowering
the block rate.
Best regards,
Sergio.
[0] https://medium.com/@octskyward/the-capacity-cliff-586d1bf7715e
[1] https://bitslog.wordpress.com/2014/02/17/5-sec-block-interval/
[2] http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
[3] http://www.cisco.com/c/en/us/solutions/collateral/service-providevisual-networking-index-vni/white_paper_c11-520862.html
[4] https://bitslog.wordpress.com/2014/05/02/deco
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008081.html
submitted by bitcoin-devlist-bot to bitcoin_devlist [link] [comments]
