The number of nodes per subnet on the Internet Computer is dynamic and continues to grow as more nodes come online. Refer to the Internet Computer Dashboard for current figures.
At Genesis, the NNS subnet launched with 28 nodes, and application subnets launched with 7 nodes each. These are initial numbers, chosen so that more subnets can be tested as nodes come online.
The NNS governance system (which is governed by the votes of neuron holders) decides how large a given subnet on the Internet Computer will be. Subnets are not limited to 7 or 28 nodes; these are just the initial configurations of the Internet Computer. The DFINITY Foundation has successfully tested larger arrangements.
The continuous goal of the Internet Computer Protocol is to maximize the independence of node ownership as well as geographic diversity.
One of the Internet Computer Protocol's key features is resumability. This means it is relatively trivial for a node to leave or join the network without affecting the liveness of a subnet or taking a long time. This matters for the success of the Internet Computer, since some blockchain networks see their node counts shrink as full participation becomes harder and harder. Resumability is also important for performance: a node can resume quickly without downloading the entire state. For more information, read about catch-up packages as part of Chain Key Technology.
The IC's consensus algorithm uses a voting mechanism that does not waste power computing hashes of artificial difficulty; instead, that power is devoted to the real computation of user programs that do useful work.
We expect to publish real-world power-usage numbers soon, but the IC network is much more energy efficient than Proof-of-Work-based blockchains.
The IC has overhead due to replication and the consensus that coordinates it. This overhead is not dissimilar to that of any distributed system involving replication. On a subnet with a reasonable workload, however, the majority of resources (CPU, memory) are used by actual canister code execution.
They are not. We do not rely on historical blocks for verification purposes. Each replica only stores enough blocks to keep the network healthy (e.g., to help other replicas catch up), and deletes old ones when they are no longer needed.
This is also a consequence of the finalization algorithm: once an input block is finalized, new states can be computed deterministically, and replicas only need to keep the latest canister state. Older blocks and older states are not that useful.
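The idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual replica implementation: the names `PrunedChain`, `apply_counter`, and the toy state transition are invented for the example. Once blocks are finalized and deterministically applied, everything at or below the finalized height can be dropped.

```python
# Hypothetical sketch: a replica keeps only the latest state and the
# not-yet-finalized blocks; finalized blocks are pruned after being
# applied deterministically. Not the actual IC replica code.

class PrunedChain:
    def __init__(self):
        self.blocks = {}        # height -> block payload
        self.latest_state = {}  # canister state after the last finalized block
        self.finalized_height = -1

    def add_block(self, height, payload):
        self.blocks[height] = payload

    def finalize(self, height, apply_fn):
        # Deterministically apply every newly finalized block to the state,
        # then drop blocks at or below the finalized height.
        for h in range(self.finalized_height + 1, height + 1):
            self.latest_state = apply_fn(self.latest_state, self.blocks[h])
        self.finalized_height = height
        self.blocks = {h: b for h, b in self.blocks.items() if h > height}


def apply_counter(state, payload):
    # Toy deterministic transition: add the payload to a counter.
    new_state = dict(state)
    new_state["counter"] = new_state.get("counter", 0) + payload
    return new_state


chain = PrunedChain()
for h, p in enumerate([1, 2, 3, 4]):
    chain.add_block(h, p)
chain.finalize(2, apply_counter)  # heights 0..2 applied, then pruned
```

After finalizing height 2, only the unfinalized block at height 3 remains, together with the latest state; the pruned blocks are never needed again because the state they produced is already known.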
The exact latency of a subnet depends on a number of factors, including the number of replicas, the network topology, and the real-world latency between them. Block time is not exactly equivalent to latency, i.e., the time between sending an ingress message and receiving a reply. It takes time for a replica to receive your message, which it will only include in the next block. Then it takes time to finalize and execute that block. A client (e.g., a browser) must also poll to see whether the computation result is ready, which adds further time. There is still room for continued optimization, but as far as the consensus protocol is concerned, it will be difficult to do significantly better.
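The client-side polling step can be sketched as below. This is a simplified illustration, not the real agent library: `check_status` is a hypothetical stand-in for the status query an IC agent would make, stubbed here so the example is self-contained.

```python
import time

# Hypothetical sketch of a client polling for the result of an ingress
# message. Each poll round adds to the end-to-end latency the user sees,
# on top of block time, finalization, and execution.

def poll_for_result(check_status, interval_s=0.0, max_polls=50):
    """Poll until the subnet reports a reply; raise if the budget runs out."""
    for _ in range(max_polls):
        status, result = check_status()
        if status == "replied":
            return result
        time.sleep(interval_s)  # each wait adds to perceived latency
    raise TimeoutError("no reply within the polling budget")


def make_stub(ready_after):
    # Stub for check_status: the reply becomes available after
    # `ready_after` polls, simulating consensus plus execution delay.
    calls = {"n": 0}

    def check_status():
        calls["n"] += 1
        if calls["n"] >= ready_after:
            return "replied", {"reply": 42}
        return "processing", None

    return check_status
```

The point of the sketch is that perceived latency is the sum of several stages, and polling interval is one of the few knobs entirely on the client side.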
We have designed an extensive upgrade protocol for the Internet Computer that is controlled by the NNS. Honest node operators (i.e., data centers) can be sure that their nodes will follow this protocol if they have done a correct installation. The upgrade protocol can also cover OS upgrades in addition to upgrading the replica software. Issues are resolved through NNS voting, and once approved, upgrades are automatically rolled out.
As for hard forks: while nothing prevents individuals from starting a new Internet Computer by themselves, they would be unable to obtain the key material that drives the NNS network on the Internet Computer, so they would have to choose a different public key. Furthermore, it would be next to impossible for them to hard fork with an identical state of the Internet Computer, because there is no way to obtain such data, even for node operators. Data privacy is taken very seriously.
The Internet Computer can scale capacity without any bounds simply by adding more node machines to the network and creating more subnets. There is no limit to how high the TPS can go.
Correct: there is no explicit global consensus, only consensus within each subnet. This is what allows the network to scale out. Subnets can always be added to increase the capacity of the Internet Computer.
The block rate goes down a bit with higher replication factors, because more replicas need to send messages for a block to be agreed upon, and the Distributed Key Generation (DKG) work is more computationally expensive. This shouldn't be very significant yet, but once there are so many replicas that they can't all talk to each other directly and need to forward messages, message latency will go up, adding to the round time (roughly linearly in the number of communication "hops" within the subnet).
This is similar to how the block rate decreases.
Cross-subnet calls cannot be atomic, because the subnets are "islands of consensus." Cross-canister calls within a subnet theoretically could be, but this would make the parallel execution of different canisters more difficult, so they are not atomic either.
The fact that the subnets are islands of consensus is the main innovation. With chain keys, subnets can securely communicate with each other and form a single Internet Computer, but because there is no need for a "global" consensus, the network can effectively scale out without bound by adding subnets.
The random beacon is useful in small subnets to randomly rank the block makers. This means that even if some replicas are malicious, they cannot always be the top-ranked block maker, so we'll get honestly generated blocks most of the time. Regarding replication factor: subnets of different replication factors can always be created, so you can choose the replication factor you want and run your canisters only on such subnets.
On our roadmap, we have plans to employ trusted execution environments (e.g., SGX/SEV) to get both attestation guarantees (that the replicas have not been tampered with, e.g., by the node operator) and privacy guarantees (namely, that the state of the replicas/canisters is accessed only through their public interfaces, so nothing is leaked besides the results of the computation).
Our consensus implementation also reaches agreement on (an approximation of) the current time, which is passed on to the deterministic processing layer. If a canister asks for the current time, it is answered deterministically using the time included in the agreed-upon block.
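A minimal sketch of why this matters, with invented names (`BlockContext`, `canister_now`) that do not correspond to the real replica API: deterministic execution must read time from the agreed-upon block, never from the local wall clock, so that every replica computes the same answer.

```python
# Hypothetical sketch: the "current time" a canister sees is the time
# agreed on by consensus for the block being executed, so it is
# identical on every replica regardless of their local clocks.

class BlockContext:
    def __init__(self, block_time_ns):
        self.block_time_ns = block_time_ns  # time agreed on via consensus

def canister_now(ctx):
    # What a "current time" system call returns during execution of this
    # block: a deterministic value, never the replica's wall clock.
    return ctx.block_time_ns

# Two replicas executing the same block observe the same time.
ctx = BlockContext(block_time_ns=1_620_000_000_000_000_000)
replica_a = canister_now(ctx)
replica_b = canister_now(ctx)
```

If each replica instead read its own wall clock, their states would diverge the first time a canister branched on the time.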
A replica has to join a subnet first before it can become a block maker or a notary. Once a replica successfully joins, the selection is just a random shuffle of all participants using the random beacon. So each round, a new random beacon is generated and then all replicas can calculate their ranks.
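The "random shuffle using the random beacon" can be sketched as follows. This is an illustrative simplification: the real beacon is produced with threshold cryptography, whereas here a plain byte string stands in for it, and `rank_replicas` is an invented name.

```python
import hashlib
import random

# Hypothetical sketch of per-round rank assignment: seed a deterministic
# shuffle with the round's random beacon so that every replica computes
# the same ranking without further communication.

def rank_replicas(replica_ids, beacon):
    seed = hashlib.sha256(beacon).digest()
    rng = random.Random(seed)       # deterministic PRNG seeded by the beacon
    order = sorted(replica_ids)     # start from a canonical order
    rng.shuffle(order)
    return {rid: rank for rank, rid in enumerate(order)}

# Every replica running this with the same beacon computes the same ranks.
ranks = rank_replicas(["r1", "r2", "r3", "r4"], beacon=b"round-7-beacon")
```

Because the beacon changes every round and is unpredictable in advance, a malicious replica cannot arrange to be rank 0 repeatedly.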
The rank-0 block maker gets priority: other block makers wait a brief moment before proposing, honest replicas gossip the rank-0 block immediately, and higher-ranked blocks propagate only after some delay.
Exactly. Membership can only be changed at fixed intervals because we require a round of decentralized key generation for new members (as well as existing members) to generate keys to use in a future interval. This interval is adjustable for each subnet through the NNS.
A new replica first has to submit its identity to the NNS to join a subnet, and the NNS has to vote and approve. This data is stored in the registry canister; all replicas monitor the registry and thus learn of replicas being added to or removed from subnets, including their public keys, etc.
For inputs, they are blocks. For outputs, they are certifications. Both inputs and outputs have to go through consensus, otherwise we risk divergence.
Yes, it is proven secure only with n >= 3f + 1 replicas, where f is the number of faulty/malicious replicas.
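The arithmetic behind the bound is easy to sketch (the function names are invented for illustration): with n replicas, the largest tolerable f satisfies n >= 3f + 1, and agreement needs a quorum of n - f >= 2f + 1 replicas.

```python
# Sketch of the classic BFT bound n >= 3f + 1: how many faulty replicas
# a subnet of size n can tolerate, and the quorum size for agreement.

def max_faulty(n):
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

def quorum(n):
    """Replicas whose agreement suffices: n - f, which is >= 2f + 1."""
    return n - max_faulty(n)

# Examples using the Genesis subnet sizes mentioned above:
# a 28-node subnet tolerates f = 9 faults with a quorum of 19;
# a 7-node subnet tolerates f = 2 faults with a quorum of 5.
```

This is why a higher replication factor buys more fault tolerance at the cost of the larger quorums (and slightly lower block rate) discussed earlier.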