What is Skywire? Where does it fit in with Skycoin?
Skycoin is a blockchain application platform. We have multiple coins on the platform (Metallicoin, mdl.life, solarbankers.com, etc.) and we let people launch their own blockchain applications, including coins. Skywire is one of the first applications we are launching on the Skycoin platform, and one of our flagship applications; it has been in development for several years. There are two parts to Skywire: the software (the Skywire node) and the hardware.

Skywire is basically a decentralized ISP on the blockchain. It is like Tor, but you are paid to run it: you forward packets for your neighbors and receive coins, and you pay coins to other people for forwarding your packets. Also, while Tor is slow, Skywire was designed to be faster than the current internet, not slower. Skywire is a test application for monetizing excess bandwidth. Eventually the software-defined networking technology behind Skywire will allow us to build physical networks (actual mesh nets) that can begin to replace centralized ISPs. The current Skywire prototype still runs over the existing internet, but later we will start building out our own hardware.

Skywire is a solution for protecting people's privacy and also a solution to net neutrality. If Skycoin can decentralize the ISPs with blockchain, then we won't have to beg the FCC to protect our rights. Skywire is just a prototype of a larger system: eventually we will allow people to sell bandwidth, computational resources and storage.

On the hardware side, the Skywire Miner is like a personal cloud for blockchain applications. It has eight computers in it; you plug it in and run your blockchain applications on it. You can even earn coins by renting out capacity to other users on the network.
How would your everyday, average Joe user access the Skywire network? Let's say from their phone…
We designed Skywire and Skycoin to be as usable as possible. We think you should not have to be a software developer to use blockchain applications. Skywire is designed to be "zeroconf", with zero configuration: you just plug in your node and it works. It's plug and play. Eventually you will be able to buy a Skywire Miner and delegate control of the hardware to a "pool", which will configure it for you, do all the work and optimize the settings; the pool will take a small fee for the service and the owner of the hardware will receive the rest of the coins their miners are earning. You will just plug in the Skyminer and start earning coins. Most users will not know their traffic is being carried over Skywire, just as they do not know whether they are using TCP or UDP. They will connect their computer to the network over wifi or an ethernet cable and it will work exactly like the internet does now.
Are you completely anonymous on Skywire, or do you need to add a VPN and go through Tor for extra protection?
Skywire is designed to protect users' privacy much better than the existing internet. Each node only knows the previous hop and the next hop for any packet. The contents of the packet are encrypted (like HTTPS), so no one can spy on the data. Since Skywire is designed to be faster than the existing internet, you give up a little privacy for the speed: Tor makes packets harder to trace by reshuffling them and slowing them down, while Skywire is designed for pure speed and performance.
Will Skywire users be able to access traditional internet resources like Google and Facebook over Skywire?
Yes. Most users will not even know they are using Skywire at all. It will be completely invisible to them. Skywire has two modes of operation. One mode looks like the normal internet to the user and the other mode is for special applications designed to run completely inside of the Skywire network. Skywire native apps will have increased privacy, speed and performance, but all existing internet apps will still work on the new network.
How difficult will it be for a traditional e-service to port their products and services to Skywire / Skycoin? Are there plans in place to facilitate those transitions as companies discover the value of joining the free distributed internet?
We are going to make it very easy. Existing companies run their whole internal networks on MPLS, and Skywire is almost identical to MPLS, so in most cases they won't have to make any changes.
What is the routing protocol? How are the routes found?
Skywire is source routed. This means that you choose the route your data takes. You can choose routes that offer higher privacy, more bandwidth (for video downloads) or lower latency (for gaming). Skywire puts control of the data back in the hands of the user.
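As a rough illustration, policy-based route selection could look like the sketch below. The `Route` fields and policy names are hypothetical, not actual Skywire APIs; this only shows the idea of the user picking a route per application.

```python
# Hypothetical sketch of user-defined, policy-based route selection.
# Route fields and policy names are illustrative, not Skywire API names.
from dataclasses import dataclass

@dataclass
class Route:
    hops: list             # ordered public keys of the forwarding nodes
    latency_ms: float      # estimated end-to-end latency
    bandwidth_mbps: float  # estimated available bandwidth

def pick_route(routes, policy):
    """Select a route according to a user-chosen policy."""
    if policy == "low_latency":      # e.g. gaming
        return min(routes, key=lambda r: r.latency_ms)
    if policy == "high_bandwidth":   # e.g. video downloads
        return max(routes, key=lambda r: r.bandwidth_mbps)
    if policy == "high_privacy":     # more hops = harder to trace
        return max(routes, key=lambda r: len(r.hops))
    raise ValueError("unknown policy")

routes = [
    Route(hops=["A", "B"], latency_ms=20, bandwidth_mbps=50),
    Route(hops=["A", "C", "D", "E"], latency_ms=90, bandwidth_mbps=200),
]
print(pick_route(routes, "low_latency").hops)     # ['A', 'B']
print(pick_route(routes, "high_bandwidth").hops)  # ['A', 'C', 'D', 'E']
```

The same set of candidate routes yields different choices depending on the application, which is the point of source routing: the policy lives with the user, not the network.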
I also understand that the protocols underlying Skywire are, or will be, quite different from the existing internet protocols. Considering the years of research behind the current internet and its various routing strategies, rebuilding everything and making it work does not seem an easy task. Where can information about the routing strategies used in Skywire be found?
The routing strategies are user defined. There is no single routing strategy that is optimal for every user or application. Instead we allow people to choose their routes and policies based upon the application, time of day, available bandwidth, reliability and other factors. This is actually the way the original internet worked. However, it was scrapped because of the RAM limitations of early computers, which only had 4 KB of memory. So the internet was built upon stateless routing protocols because of the limitations of the computers available at the time, not because those networking protocols were the best or highest performance. Today even a cell phone has 4 GB of RAM, a million times the memory of a computer from the 1980s, so there is no reason to accept these limitations anymore. Our implementation is simpler and faster because we are stripping away the layers of junk that have accumulated. The internet was built up piecemeal, without any coherence, coordination or planning; it is a mishmash of ad-hoc protocols duct-taped together over decades, without any real design. Skywire is a re-envisioning of the internet as it would be built today, knowing what we know now. This means simplifying the protocols and improving the performance.
How will routing work if someone from Europe wants to access a video from a node in Australia (for example)? How do the nodes know the next hop if they can't read the origin or destination of any packet?
If you have a route with N hops, then you contact each of the nodes on the route (through a messaging service) and set the route table on each node. Then when you drop a packet into the route, it gets forwarded automatically. You could have 60 or 120 hops between Australia and Europe and it's fine. Each individual node only knows the previous hop and the next hop in the chain. That is all the node needs to know.
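The setup-then-forward scheme described above can be sketched as follows. All names are illustrative and this is not actual Skywire code; it only demonstrates that each node's table holds nothing but a route ID mapped to its previous and next hop.

```python
# Illustrative sketch of source-routed forwarding (not actual Skywire code).
# The sender picks a route ID and installs, on each node along the route,
# only that node's previous and next hop. Nodes then forward by route ID
# without ever seeing the packet's origin or final destination.

def set_route(nodes, route, route_id):
    """Install (prev_hop, next_hop) entries along the route."""
    for i, name in enumerate(route):
        prev_hop = route[i - 1] if i > 0 else None               # None = sender side
        next_hop = route[i + 1] if i < len(route) - 1 else None  # None = deliver locally
        nodes[name][route_id] = (prev_hop, next_hop)

def forward(nodes, route_id, entry_node):
    """Walk a packet through the route using only the per-node tables."""
    path, current = [], entry_node
    while current is not None:
        path.append(current)
        _prev, nxt = nodes[current][route_id]
        current = nxt
    return path

nodes = {n: {} for n in ["EU-1", "relay-7", "relay-42", "AU-9"]}
set_route(nodes, ["EU-1", "relay-7", "relay-42", "AU-9"], route_id=1)
print(forward(nodes, 1, "EU-1"))  # ['EU-1', 'relay-7', 'relay-42', 'AU-9']
```

Note that `nodes["relay-7"]` contains only `("EU-1", "relay-42")` for this route: a relay knows its neighbors on the route and nothing else, which is the privacy property described above.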
Could you estimate a timeline for when Skywire will operate independently from the current ISP infrastructure?
I think Skycoin is a very ambitious project and some parts could take ten or twenty years. Even if we started with a network of a few thousand nodes and grew the network over 1% per day, it would still take a decade or two to conquer the Earth. We are going to start with small-scale prototypes (neighborhoods), then try cities. I think the first demonstration networks will be working this year.
How will bandwidth be priced in terms of coin hours and who determines this rate?
You could have forty PhDs each do a thesis on this. The short answer is that an auction model has to be used (similar to Google's AdWords auction model) and the auction has to be designed so that bandwidth prices reach a stable equilibrium. There are parts of Skycoin that are completely open source and public, like the blockchain, the consensus algorithm and Skywire. There are secrets, like the auction model and pricing, that are designed to protect Skycoin from being forked and to prevent competitors from copying our work. We estimate that if a competitor started today with 2 million dollars a year in R&D, it would take them a minimum of eight years to develop a working bandwidth pricing model. And from experience with auction models for advertising networks, 80% of competitors will fail to develop a working model at all. A working, fair, decentralized bandwidth pricing model competitive with what we have would take even longer. There are very few people (fewer than four) on Earth who have the experience in mathematics, economics, game theory and cryptographic protocols to design the required auction and pricing models. One of the secrets that allows Google to dominate the internet advertising industry is its auction model for ad pricing. That is what allows Google to pay content producers the most money for their advertising inventory while charging advertising buyers the least. Google's auction models for pricing AdSense inventory are even more secretive and important than Google's search algorithm. This is one of the most important and most closely guarded parts of Google's business. Even companies like Facebook, with billion-dollar war chests, have been unable to close the algorithm gap in this area. Expertise in these algorithms and their auction and pricing models is one of the reasons Google has been able to extract advertising premiums over Facebook.
Even if a competitor raised a billion dollars and hired all the PhDs in the field and had ten years to do research, I doubt they would be able to develop anything close to what we have now. The history of bandwidth markets is very interesting: Enron tried to run a trading desk for bandwidth and bandwidth futures, and it completely failed. The mathematical stability and predictability of bandwidth pricing under adversarial conditions is one of the major problems. For instance, one of our "competitors" suggests that people will be paid coins when someone accesses their content. So why not just put up a website and have 2000 bots visit it to collect free coins? How are they going to stop that? Or if they are pricing bandwidth: if the price is fixed and too low, then people will not build capacity, bandwidth will be insufficient and the network will be slow. If the price is variable and adjusts with demand, what stops someone from buying up the capacity of a link ("cornering the market") to drive the price up 50x on links they control and extort money out of the other people on the network with a fake bandwidth shortage? The pricing algorithm has to be stable under adversarial conditions. It is a very difficult problem, harder even than consensus algorithm research. Even if a competitor had unlimited funding and unlimited time, it is unlikely they would find a superior solution to what we have, and that alone nearly guarantees that we are going to win this market. It gets even more difficult if you need price stability and you admit any type of bandwidth futures that allow speculation on future prices. This is the kind of problem, like Bitcoin's consensus algorithm, that can only be solved by an act of genius. We have a lot of experience in this area. It is hyper-specialized and very difficult, and it is one of the areas that will give Skycoin a strong, sustainable advantage.
Will there be a DNS for Skywire to register .sky domains?
Of course. We will definitely add some kind of DNS and naming system eventually. Remembering and typing public keys is too difficult; we want to make it as easy as possible. We want people to be able to register aliases (like screen names) so that people can send coins to aliases instead of having to type in addresses every time. This will let people send 5 Skycoin to "@bobcat" instead of to "23TeSPPJVZ9HvXh6iYiKAaLNQroKg8yCdja". This will be a revolution in usability.
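A toy sketch of such an alias system is below. The `AliasRegistry` class is hypothetical (a real system would presumably record aliases on the blockchain); the alias and address come from the answer above.

```python
# Hypothetical alias registry sketch. A real naming system would be
# decentralized and on-chain; this only shows the alias -> address mapping.
class AliasRegistry:
    def __init__(self):
        self._aliases = {}

    def register(self, alias, address):
        """Claim an alias; first come, first served."""
        if alias in self._aliases:
            raise ValueError(f"{alias} is already taken")
        self._aliases[alias] = address

    def resolve(self, alias):
        """Look up the address behind an alias."""
        return self._aliases[alias]

registry = AliasRegistry()
registry.register("@bobcat", "23TeSPPJVZ9HvXh6iYiKAaLNQroKg8yCdja")
# A wallet could now accept "send 5 SKY to @bobcat" and resolve it:
print(registry.resolve("@bobcat"))  # 23TeSPPJVZ9HvXh6iYiKAaLNQroKg8yCdja
```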
When operating a Skyminer, will people in my surrounding area see it as a Wifi option on their devices?
You can configure it to expose a wifi access point. It depends on what you are trying to do.
While I plan on running a DIY miner regardless of the payout, will one of the first 6000 DIY miners built to the same spec as the official miner receive a worthwhile payout in Skycoin? What is the requirement for a DIY miner to get whitelisted (and earn Skycoin) on the Skywire testnet?
The reason we have white-listing on the testnet, is to stop too many nodes from joining the network at once. The network can only support so many nodes until we upgrade certain infrastructure (like the messaging/inter-process communication standard). Eventually, all DIY miners will be whitelisted, but there will probably be a queue.
The Skycoin team is developing its own antennas instead of buying or using technology already developed. Why is such an effort necessary?
You can, of course, buy any commercial antenna or wifi system and use it for Skywire. We are developing our own custom antennas to push performance limits and experiment with advanced technology like FPGAs (Field Programmable Gate Arrays) and SDR (Software Defined Radio). Existing wifi has huge latency (15 milliseconds per hop). We need to make several modifications to get that down to 0.5 milliseconds per hop. We have several custom PCB boards in development. We have a few secret hardware projects that will be announced when they are ready. For instance, the Skywire Miner was in development for two years before we publicly announced it. Some of our next hardware projects are focused on payments at the point of sale and improving usability, not just the meshnet.
So back in January Steve was asked a question in the Skywire group: "Steve, I am not a tech savage, so how can I understand better the safety running a miner if people on the network do DeepWeb stuff? So i will receive and redirect data packets with crazy things and also there is around 128 GB of storage on my miner. How can i have peace of mind of that?" He replied with "If you don’t run an exit node to the open internet it won’t matter you can run relay nodes if you’re worried about it, or proxy specific content." This seems to go counter to what you mentioned regarding end-to-end encryption in Skywire. Will some people only be relay nodes while others are exit nodes as well?
I think the question is wrong. You only store content for public keys that you explicitly subscribe to. This means that if you do not like particular content or do not want it on your hardware, you can just blacklist those public keys or not subscribe to them. Data never goes onto your machine unless you requested it. If you are holding data for a third party, such as forwarding packets, it is always encrypted, so it will look like random noise. There will never be anything in the data that creates legal liability. It will look the same as the output of a random number generator.
If using the Skyminer, how much bandwidth will be necessary to run it at its best? And what about the router? Is it true it has only 100 Mbit/s output? Is a 1 Gbit/s connection necessary to reach top rates?
Hold on!!!! Let us get the software and testnet running first, lol. We will know once we see what works on the testnet.
What will the price be for future Skynodes (formerly called Skyminers)?
We are working on ways of reducing the cost, such as buying our own factory, doing custom PCB boards and using different materials. The cheapest Skywire Miner will be about $30 for a single-node miner. We will also have a very cheap personal Skywire "hardware VPN" node. The miners we are shipping now are for powering the network backbone; they have 8 computers and cost about $800 each. We sold the miners for 1 BTC each so buyers could support development, but gave them a Skycoin bonus worth about 1 BTC. That money then went to fund the development of the newer hardware.
Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@home, Folding@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.
TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level. The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited). No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage. Proven, mature sharding architectures like the ones powering Google Search, SETI@home, Folding@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.

Longer Summary: People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you. Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, SETI@home, Folding@home, or PrimeGrid. But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin. Those other projects do their data storage and processing across a distributed network.
But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else. Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture. This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure. The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees. These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, SETI@home, Folding@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.

Details: Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers. The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain). Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/

Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest and safest approach towards massive on-chain scaling.

A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding": A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system. Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards. (Maybe you can already see where this is going...) Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on. Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.) Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
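The last-character shard assignment from the thought experiment above can be sketched in a few lines. This is illustrative only; real proposals such as BUIP024 use a more sophisticated decision criterion, as noted above.

```python
# Sketch of the "instant sharding" thought experiment: assign each address
# to one of 58 shards based on its final base58 character. Illustrative
# only - it ignores multi-input transactions and cross-shard cases.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def shard_of(address):
    """Map an address to a shard index 0..57 via its last character."""
    return BASE58.index(address[-1])

def same_shard(send_from, send_to):
    """The proposed convention: both addresses must land in one shard."""
    return shard_of(send_from) == shard_of(send_to)

assert len(BASE58) == 58  # the full base58 alphabet (no 0, O, I, l)
print(shard_of("23TeSPPJVZ9HvXh6iYiKAaLNQroKg8yCdja"))
print(same_shard("1BoatSLRHtKNngkdXEeobR76b53LETtpyT",
                 "1CounterpartyXXXXXXXXXXXXXXXUWLpVr"))
```

Under the convention, a wallet would simply refuse to build a transaction where `same_shard` is false, and a miner on shard k would mine only transactions whose addresses end in `BASE58[k]`.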
(Also, the fact that a simplified address-based sharding mechanism can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.) Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) would be easy to understand and do. Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees. Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.

The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems. Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC. Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe. We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections. Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results. It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain. Some people might object that those systems are different from Bitcoin. But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems. The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct. Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
an appropriate "decompose" operation,
an appropriate "recompose" operation,
the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors. This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency. The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
(DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
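The essential property above can be illustrated with a minimal sketch in Python, using a toy "massive search" problem (finding all matches for a predicate in a data set). All function names here are illustrative, chosen to mirror the pseudo-code, not taken from any real system:

```python
# A minimal sketch of the (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE) property,
# using a toy search problem: find all items in a data set matching a predicate.

def decompose(data, n_shards):
    """Split the search space into n_shards disjoint sub-spaces."""
    return [data[i::n_shards] for i in range(n_shards)]

def sub_solve(shard, predicate):
    """Solve the search problem on a single shard."""
    return [x for x in shard if predicate(x)]

def recompose(sub_solutions):
    """Merge the per-shard sub-solutions into the overall solution."""
    return sorted(x for sub in sub_solutions for x in sub)

def solve(data, predicate):
    """The original single-machine search, for comparison."""
    return sorted(x for x in data if predicate(x))

data = list(range(1000))
is_match = lambda x: x % 97 == 0

# The essential correctness property: decomposing, sub-solving in parallel,
# and recomposing yields exactly the same answer as solving on one machine.
sharded = recompose(sub_solve(s, is_match) for s in decompose(data, 8))
assert sharded == solve(data, is_match)
```

In a real distributed system each shard would live on a separate machine, but the correctness argument is exactly this equality - which is what makes it easy to verify in half a page.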
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication and coordination) can be somewhat challenging, but it's certainly doable. In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, and PrimeGrid (based on Berkeley's BOINC grid computing architecture). That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (Merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream. ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work. Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Gregory (Greg) Meredith, presenting some operators for scaling Ethereum - in just a half page of code: https://youtu.be/uzahKc_ukfM?t=1101 Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it. The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing). The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth. This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code. Meanwhile, sharding approaches based on DECOMPOSE and RECOMPOSE operations are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline). Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have more nodes than the Bitcoin network. The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance). SETI@home and Folding@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands of these BOINC-based projects, which are all based on sharding the data set. Folding@home:
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation. In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
Using distributed computing, SETI@home sends millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources. Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, the data is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze. The results are merged into a database using SETI@home computers in Berkeley. The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused. Active users: 121,780 (January 2015)
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform. Active users 8,382 (March 2016)
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
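The student-sorting example just described can be sketched in a few lines of Python. The function names here are illustrative stand-ins for the Map() and Reduce() phases, not any real MapReduce framework's API:

```python
# A sketch of the MapReduce example above: the map phase sorts students into
# queues by first name; the reduce phase counts each queue, yielding name
# frequencies. In a real framework, queues would be distributed across machines.
from collections import defaultdict

def map_phase(students):
    """Filter/sort step: group students into one queue per first name."""
    queues = defaultdict(list)
    for name in students:
        queues[name].append(1)   # emit a (name, 1) pair into that name's queue
    return queues

def reduce_phase(queues):
    """Summary step: count the entries in each queue."""
    return {name: sum(ones) for name, ones in queues.items()}

students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]
frequencies = reduce_phase(map_phase(students))
# frequencies == {"Alice": 3, "Bob": 2, "Carol": 1}
```

The point is that each queue can be counted independently of the others - which is exactly the property that lets the reduce step run in parallel across machines.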
How can we go about developing sharding approaches for Bitcoin? We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines. The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution. BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions. And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements. So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin? First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed. In the case of Bitcoin, the problem involves:
sequentializing (serializing) APPEND operations to a blockchain data structure
in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"? Yes we can! Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s). So, let's imagine how a possible future sharding-based architecture of Bitcoin might look. We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine. Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them. This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin. Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to store only a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines. This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees. In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
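The "instant sharding" thought experiment (routing by the final character of an address) can be sketched as follows. This is a hypothetical toy model - the `ShardedLedger` class, its method names, and its data structures are simplified stand-ins invented for illustration, not Bitcoin's real structures or BUIP024's actual design:

```python
# Hypothetical sketch of address sharding: each shard stores the spend history
# only for addresses routed to it, so a double-spend check searches one shard
# instead of the whole chain. Simplified stand-in structures, not Bitcoin's.

def shard_of(address, n_shards=16):
    """Route an address to a shard by its final character (deterministic)."""
    return ord(address[-1]) % n_shards

class ShardedLedger:
    def __init__(self, n_shards=16):
        self.n_shards = n_shards
        # Each shard maps address -> set of already-spent output ids.
        # In a real system, each shard would live on a different machine.
        self.shards = [dict() for _ in range(n_shards)]

    def try_spend(self, address, output_id):
        """Append a spend iff it is not a double-spend within its shard."""
        shard = self.shards[shard_of(address, self.n_shards)]
        spent = shard.setdefault(address, set())
        if output_id in spent:
            return False          # double-spend detected within this shard
        spent.add(output_id)      # record the spend in the shard's history
        return True

ledger = ShardedLedger()
assert ledger.try_spend("1BoatSLRHtKNngkdXEeobR76b53LETtpyT", "utxo:0") is True
assert ledger.try_spend("1BoatSLRHtKNngkdXEeobR76b53LETtpyT", "utxo:0") is False
```

Because every spend from a given address lands in the same shard, the double-spend search is confined to that shard - the DECOMPOSE step - while cross-shard transactions (the part BUIP024 addresses) would require the coordination mechanism discussed above.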
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.

Conclusion

Bitcoin isn't the only project in the world which is permissionless and distributed. Other projects (the BOINC-based, permissionless, decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's permissioned, centralized, MapReduce-based search engine) have already achieved virtually unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, far more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based approaches to massive on-chain scaling - perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or perhaps because their owners, such as AXA and PwC, don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth. Meanwhile, emerging proposals from independent developers suggest that on-chain scaling for Bitcoin can be based on proven sharding architectures such as MapReduce and BOINC. So we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.