A Proposal For Ethereum 2.0 | Vitalik Buterin

Okay, great. So I was originally here to give a talk specifically about what was called maximally-verifying light clients and sharding, and I will end up touching on a lot of that in this talk in certain ways, but I also wanted to make it a bit broader: to make it not just about scalability and not just about sharding, but also a kind of general outline of a proposal for what I think is a sensible path for Ethereum protocol development in general going forward over maybe the next two to four years or so.

I'll start off by taking a look at what's been going on in Ethereum so far. In Ethereum we've first of all had a lot of successes. First of all, the Ethereum blockchain and the Ethereum protocol work; you may have noted the total lack of a crippling DDoS attack on the network this year. Many applications: we've gone to somewhere over a thousand applications at this point, and I'm pretty sure people aren't even really counting anymore.
And it shows in all the charts. This is the Ethereum transaction chart, and we just recently broke above 500,000 transactions per day for the first time, which, in case you can't divide, is something like six transactions per second; it's almost exactly seven transactions per second at the peak, actually. So the amount of activity on the Ethereum blockchain is orders of magnitude higher than it was one or two years ago, and this includes not just ether transfers: it includes ERC-20s, it includes EtherDelta, it includes various stuff for different dapps. It includes just a whole bunch of different smart contracts of various types. This is
another diagram I'm fairly proud of: this is the global Ethereum network, with more than 20,000 nodes worldwide, with hubs existing all over North America (including East Coast and West Coast hubs), all over Europe, a bunch in Russia, a smaller but still very significant and growing amount scattered across the various hubs in Asia, and increasingly it's starting to show up in places like South America and Africa as well. According to the data that got scraped from one site or another a couple of days ago, we seem to have one node in North Korea, though I'm not sure if it's actually in North Korea.
So in general, a lot of progress. And Byzantium as well: the Byzantium hard fork has basically gone through, and you can now take advantage of cryptographic primitives that are available in accelerated form on top of the Ethereum blockchain in order to build privacy-preserving applications, and people already are. You just saw ZoKrates in the last presentation; just yesterday I saw on Reddit someone announce a protocol that was doing a reputation market based on zk-SNARKs; I know of one project that's trying to do things with ring signatures, actually multiple projects; and I also know at least a couple of projects that are taking advantage of RSA verification to do things like better communication between Ethereum and existing DNS, and various other things. So the privacy part of the three major challenges to Ethereum's success that I've outlined (privacy, security, and scalability) is actually quite well on the way to being solved. There is still a lot of work to be done: someone actually has to build the ring signature stuff, someone actually has to build on ZoKrates, people actually have to use it, people have to actually make using zero-knowledge proofs practical. But we've made progress. So what are the challenges?
Scalability is probably challenge number one. This is a quick excerpt from the sharding FAQ, where I talk about this challenge called the scalability trilemma, and I basically state that it seems likely that blockchain systems can only have at most two of these three properties at the same time: number one, decentralization, which I define as basically being able to run without trusting supernodes; scalability, so being able to process many more transactions than a regular laptop can; and security: basically, it's okay to be vulnerable to 51% attacks, but you definitely cannot be vulnerable to 0.51% attacks. There is a very, very big and growing graveyard of systems that claim to solve the scalability trilemma but actually really don't.

Now, I actually think the scalability trilemma is solvable, but it basically requires significantly more complex technologies than the kinds of technologies that we have available. Part of this is something called the data availability problem, which I'm not going to talk about today, because it really does require a one-hour presentation to go through, but basically it gets at a problem that a lot of people don't even think about: not only do you need to verify that the blockchain is valid, you also need to verify that everyone can access the blockchain, and even zk-SNARKs and STARKs cannot solve that part of the puzzle. There are mathematical proofs based on information theory that say that if you cannot solve that part of the puzzle, your protocol basically cannot be secure. The relevant paper here is basically one that says that batch updates for cryptographic accumulators are impossible; you'll have to dig around for that one. So it is a very significant and hard challenge.
So basically, right now every node processes every transaction; the capacity of the blockchain is limited to the capacity of one node, and it's actually even worse, because we need to have safety margins for anti-DDoS purposes. The limiting factor in Ethereum scalability, the likely worst point of DDoS vulnerability, seems likely to be disk reads, and Ethereum VM execution is also currently absolutely not parallelizable. These are just known facts, and they severely limit the extent to which Ethereum in its present form can scale. Now, there are more incremental solutions: for example, you can use something like EIP 648 to greatly improve parallelizability. But if you want to really solve the fundamental problem of every node processing every transaction, the likely solution to a lot of this is basically sharding.

The way that you think about sharding is: you split the blockchain state, the set of information that everyone needs to keep track of, into n universes, and we call each one of these universes a shard. Each universe has its own account space, its own accounts, its own contracts; you can have transactions within each shard, and you only allow asynchronous communication between shards. So if you want to send a coin from shard A to shard B, you actually have to have two transactions: the first transaction on shard A and the second transaction on shard B. Contracts on shard A cannot synchronously call contracts on shard B; we cannot call a contract on shard B, get a response, and do something with that response in one transaction. The benefit of this is that each client only processes a small portion of all the activity on the network.
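To make the asynchronous model above concrete, here is a minimal sketch of a two-transaction cross-shard transfer; the `Shard` class, the receipt format, and the function names are illustrative assumptions, not the actual protocol:

```python
# Toy model of asynchronous cross-shard transfers: two separate transactions,
# with no synchronous call between shards. All names here are hypothetical.

class Shard:
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self.balances = {}   # address -> balance, this shard's own universe
        self.outbox = []     # receipts for transfers leaving this shard

def debit_on_source(src, sender, amount, dest_shard_id, receiver):
    """Transaction 1, executed on shard A: deduct the coins and emit a receipt."""
    assert src.balances.get(sender, 0) >= amount
    src.balances[sender] -= amount
    receipt = {"to_shard": dest_shard_id, "receiver": receiver, "amount": amount}
    src.outbox.append(receipt)
    return receipt

def credit_on_dest(dst, receipt):
    """Transaction 2, executed later on shard B: credit the coins from the receipt.
    Note there is no response flowing back to shard A within one transaction."""
    assert receipt["to_shard"] == dst.shard_id
    dst.balances[receipt["receiver"]] = (
        dst.balances.get(receipt["receiver"], 0) + receipt["amount"]
    )

shard_a, shard_b = Shard(0), Shard(1)
shard_a.balances["alice"] = 100
r = debit_on_source(shard_a, "alice", 30, 1, "bob")
credit_on_dest(shard_b, r)
```

The point of the sketch is the split itself: the debit and the credit are independent transactions, which is exactly what makes per-shard processing possible.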
So the reason why we call it quadratic sharding is that if you imagine the processing capacity of one computer to be n, then you can imagine n shards, each with n worth of activity. Each node would have to process the activity on just the one shard it's interested in, plus the activity of the top level of the entire system of shards, so the amount of activity per node would basically be n at the top and n at the bottom, a total of 2n, but the capacity of the entire system would be n². Theoretically you can make the sharding exponential, but quadratic is simpler, there might be good reasons to keep it quadratic, and it may well be a reasonable approach to say: we only do sharding up to quadratic, and if you want to go up to exponential then you basically have to stack things like state channels and Plasma on top. Another
challenge that we've had is this challenge of governance and protocol evolution. We have this trade-off where on the one hand we want progress, but on the other hand there is demand for protocol stability, and hard forks making deep changes are hard. We finally got Byzantium out on October 16th, and Byzantium is the result of over a year of work and a large amount of testing; even after a year of testing we still required some pretty epic work between the Go team and the Parity team, doing super-advanced forms of fuzz testing that I don't even fully understand, in order to catch a large number of bugs that had appeared pretty close to launch. So hard forks making deep changes take a long time to code, take a long time to agree on, take a long time to test, and there is a high risk of consensus bugs, especially since the Ethereum network has basically seven different clients that all have to implement the protocol and agree on it. But even still, even with all these challenges, we've done Byzantium, and we've done it successfully.
But the problem is: on the one hand, hard forks making deep changes are hard; on the other hand, to get to what we want to see in Ethereum 2.0, to get to this kind of highly sharded, highly scalable network, as well as all of the EVM upgrades we want to do, all of the parallelizability upgrades we want to do, EVM 1.5, eWASM, potentially more precompiles, to get to this ideal end state, very deep changes are exactly what we need. So what do we do, how do we handle the trade-off? The intuition: one blockchain, two systems. And I'll make this clear with the diagram in the next slide.

So here is what a sharding system, as we're already starting to build it, could possibly look like. You have the main chain, and in the main chain everything is basically as it was before: you have your blocks, you have your state root, you have your transaction root, you have your state, you have your accounts, your contracts, your storage. Now what we're going to do is, in the main chain, we're going to add something called a validator manager contract. What does the validator manager contract do? It basically runs a proof-of-stake system that maintains the consensus for a kind of two-layer sharding system that exists kind of on top of, or even inside, the main chain in a certain sense. So the
validator manager contract would keep track of validators, would allow anyone to join and leave the system as a validator, and would basically assign the right to create blocks in each of these n shards that the validator manager contract keeps track of. The VMC would also keep track of block headers for each of these shards. But what the VMC would not do is verify the full blocks of the shards, and the VMC would not contain a copy of all the new consensus rules. Instead, actually creating valid blocks, actually enforcing that the blocks in the shards are valid, actually enforcing that the data in the shards is entirely available and follows the desired consensus rules: that would be the responsibility of the proof of stake that exists inside the validator manager contract.

The validator manager contract would initially be a very low-risk thing to participate in; there would actually not even be a risk of total slashing in the same way that there is in Casper. The goal would be to basically just get a fairly significant portion of ether staked and participating in this, and basically, if you are part of the validator manager contract, then you just get randomly assigned the right to create blocks in each and every one of these shards at periodic intervals. I should probably use more correct terminology: we call the blocks inside of shards collations. So if you have a transaction, then transactions get grouped into collations, and the collation headers then get put into the validator manager contract; they get put into transactions that get included into blocks.
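As a rough sketch (not from the talk), the VMC's role could look something like this toy Python model; the class name, the deposit size, and the seeded randomness standing in for on-chain sampling are all assumptions, since the interface is not specified here:

```python
import random

# Illustrative-only model of a validator manager contract (VMC). It records
# collation headers and assigns creation rights, but never verifies full
# collation bodies; all names and parameters here are hypothetical.

class ValidatorManagerContract:
    def __init__(self, num_shards):
        self.num_shards = num_shards
        self.validators = []  # anyone can join and leave
        self.collation_headers = {i: [] for i in range(num_shards)}

    def deposit(self, validator):
        self.validators.append(validator)

    def withdraw(self, validator):
        self.validators.remove(validator)

    def get_eligible_proposer(self, shard_id, period):
        """Randomly assign collation-creation rights per shard per period."""
        # Deterministic seed as a stand-in for on-chain randomness.
        rng = random.Random(shard_id * 10**6 + period)
        return rng.choice(self.validators)

    def add_header(self, shard_id, proposer, period, header):
        """The VMC keeps headers only; full blocks stay out on the shards."""
        assert proposer == self.get_eligible_proposer(shard_id, period)
        self.collation_headers[shard_id].append(header)

vmc = ValidatorManagerContract(num_shards=4)
for v in ["v1", "v2", "v3"]:
    vmc.deposit(v)
proposer = vmc.get_eligible_proposer(shard_id=2, period=0)
vmc.add_header(2, proposer, 0, header={"shard": 2, "period": 0})
```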
So we have this kind of two-layer structure, where at the top layer you would have the blockchain, and at the bottom layer you would have all of these different shards, and each shard would be like its own universe. Connecting the two universes you have this validator manager contract, which enforces the proof of stake, basically maintains a built-in internal light client for each of the shards, and processes and facilitates things like moving ether from the main chain to the shards, and possibly from one shard to another. So you can think of the differences between the two systems, between the main-shard world and the new-shard world, somewhat like this.
Scalability: the main shard is O(c), where c is the letter I normally use to refer to the computational capacity of one node, so the main shard has a scalability of just O(c), which is exactly what it does now, and the new shards would have a capacity of O(c²). The reason for this is that you have O(c) shards, and each of the O(c) shards itself has O(c) capacity. By the way, if you're not familiar with Big O notation, just don't think about the O's: main chain scalability of c, new shard scalability of c², because there are c shards and each shard has c transactions per block.
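The 2c-versus-c² arithmetic above can be checked directly with a concrete (arbitrary) value of c:

```python
# Worked version of the O(c) vs O(c^2) argument, with a concrete c.
c = 100                     # transactions-per-block capacity of one node

main_chain_capacity = c     # today: every node processes everything

num_shards = c              # quadratic sharding: c shards...
per_shard_capacity = c      # ...each with c transactions per block

system_capacity = num_shards * per_shard_capacity  # c^2 in total
per_node_load = per_shard_capacity + num_shards    # one shard plus the
                                                   # top-level headers, ~2c
```

With c = 100, each node does about 200 units of work while the system as a whole processes 10,000, which is the whole point of the quadratic construction.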
The consensus algorithm: on the main shard, the consensus algorithm is currently proof of work. In the very short to medium term we hope to transition it into hybrid proof of stake in the form of Casper the Friendly Finality Gadget, a paper for which was released on arXiv something like five days ago, and which is, I would say, nearing the final stages of a full specification; eventually we would transition the main chain consensus into full proof of stake. The consensus itself is designed to be fairly conservative, and so possible to implement in this kind of fairly slow and careful way. For example, the first part of the switch to hybrid proof of stake doesn't actually require a hard fork; technically it requires this weird kind of soft fork, and it's not even really a soft fork, it's just an optional fork-choice rule that's used inside of clients. The only thing you would need a hard fork for is when we want to basically stop providing rewards to proof-of-work miners, or at least reduce the rewards to proof-of-work miners and switch the rewards to validators. So the switch to proof of stake, I'm fairly confident, can be done in this kind of gradual, fairly conservative way.
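One plausible reading of that optional fork-choice rule, following Casper FFG's preference for the chain containing the highest justified checkpoint, can be sketched like this; the field names and tie-breaking by total difficulty are illustrative assumptions:

```python
# Sketch of an FFG-style fork choice: among competing chain heads, prefer the
# one whose latest justified checkpoint is highest, then fall back to total
# difficulty. Purely a client-side preference, so no hard fork is needed.

def ffg_fork_choice(heads):
    """heads: list of dicts with 'justified_epoch' and 'total_difficulty'."""
    return max(heads, key=lambda h: (h["justified_epoch"], h["total_difficulty"]))

heads = [
    {"name": "A", "justified_epoch": 7, "total_difficulty": 900},
    {"name": "B", "justified_epoch": 8, "total_difficulty": 850},
]
best = ffg_fork_choice(heads)  # B wins: finality outranks raw difficulty
```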
The main chain has full security, basically the same security guarantees we can expect to have now. And for the governance of the main chain, in order to satisfy the demand of people that really want stronger immutability and less risk on the main chain, we can have norms of governance that emphasize conservatism and strong immutability more than today. Now, what
happens in the new shards? Scalability: O(c²). Consensus: proof of stake, where in order to vote you have to deposit ether into the validator manager contract, and the validator manager contract randomly assigns you the right to create blocks on all these shards. Security: a fifty percent honest majority of the on-chain proof-of-stake holders, for now; basically, what we want to do with Ethereum 2.0 is that once sharding solidifies enough, the shards would eventually be tightly coupled, and then you would have roughly the same level of security. And the governance for the new shards is basically just going to be on-chain ether voting. For these new shards, because you have this proof of stake, we might as well have this proof of stake be used to vote on what the protocol changes are within this sharded universe: the validators themselves already are going to be the ones that enforce the consensus rules, so why not just have them be the ones that decide on the rules, since they kind of are the ones that decide on the rules already. So we might as well accept this and actually take advantage of it, because this kind of on-chain voting governance gives us the ability to emphasize evolution in the new shards initially; then, when we have this tight coupling and when every client becomes basically a fully validating client for all the shards, the governance would become much more conservative. So this is the
dichotomy that I'm suggesting, where we can have these two universes at the same time. The universe on the left, the main shard, just basically continues working as Ethereum does today: we do not need to negotiate EVM 1.5 or negotiate eWASM between seven different Ethereum implementations, and we can focus on relatively milder things, things like state size control, improving security, possibly improving parallelizability a bit, and a heavy focus on the proof-of-stake switch. And we have this other new universe, or really these n new universes, in which the implementation of all of these things that we've already been working on over the last one or two years can basically be rolled out onto the mainnet much, much faster. So here's
what an implementation roadmap might look like. Sharding roadmap, step one: you would implement this sharding universe as a proof-of-stake sidechain, where in each shard you would have collations, and the collation headers would be verified and processed by this on-main-chain validator manager contract. Block creation rights are assigned by a simple proof of stake in the VMC: clients would just get randomly assigned the right to create blocks, and randomly shuffled between shards. In the first stage you could have one-way ETH convertibility, and in the second stage you could make the ETH convertibility two-way.
Now, this is what a node for the Ethereum sharded network might look like. You already have the existing Ethereum nodes that talk to the existing Ethereum network; everything on the left does not need to be changed one single bit. What you have on the right is a sharding-chain node. Sharding-chain nodes would be responsible for basically broadcasting full collations: collation headers would go into the validator manager contract, but full collations would only be broadcast in the sharding chain, between all of the nodes that are participating in the sharding system. The sharding-chain node would be written in Python initially, and it could be designed in such a way that it can talk to any Ethereum node by RPC. If we want fast evolution, we could have basically one or two clients for the sharding-chain node initially, and then over time ramp it up to the full seven as more teams become ready. So basically you would have this kind of partial split between what happens on the Ethereum side and what happens in these sharded networks, where you would really have a sharding network for basically every single shard. After this, once you have
two-way convertibility, you would move the collation headers from the VMC into, let's say, being maintained as Ethereum uncles, and then step four would be tight coupling. What tight coupling basically means is: if you have a block in the main chain that contains a collation header which is invalid, then the entire main-chain block is invalid. So invalidity on the shard side would be able to cause invalidity on the main-chain side, and this would basically be when the main chain and the shards would both switch into this tighter security mode. Stage four is something that theoretically could happen later; ideally it should happen when things on the sharding side have actually reasonably stabilized.
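Tight coupling, as described, amounts to a validity predicate like the following toy sketch; the structures, and the `valid` flag standing in for full shard verification, are assumptions for illustration:

```python
# Toy model of tight coupling: a main-chain block that includes an invalid
# collation header is itself invalid, so shard invalidity propagates upward.

def collation_valid(header, shard_state):
    # Stand-in for real verification: checking the collation body against the
    # shard's consensus rules and checking its data is available.
    return header.get("valid", False)

def main_block_valid(block, shard_states):
    """Under tight coupling, every referenced collation must check out."""
    return all(
        collation_valid(h, shard_states.get(h["shard"]))
        for h in block["collation_headers"]
    )

good = {"collation_headers": [{"shard": 0, "valid": True},
                              {"shard": 1, "valid": True}]}
bad = {"collation_headers": [{"shard": 0, "valid": True},
                             {"shard": 1, "valid": False}]}
```

Before tight coupling, `main_block_valid` would simply not consult the shard side at all; flipping this rule on is what moves the system into the tighter security mode.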
So remember: the shards that I'm suggesting are creating new address space; they are not affecting existing address space. This gives us a unique opportunity to make many very important, efficiency-improving but backwards-incompatible changes to the Ethereum protocol. So what kind of changes do we want to make? We can just list through a few. Number one, changing Merkle trees from hexary trees to binary trees: this is a no-brainer; it makes the Merkle proofs four times shorter. Account and state tree redesigns: you might want to get rid of contract storage trees, you might want to have multi-level nested trees, you might want to have a different account system; these are things that we can brainstorm, things that we can think about and optimize. EVM upgrades: basically, can we make the EVM efficient enough that precompiles are no longer necessary? It seems very likely that we can.
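The "four times shorter" claim about binary versus hexary trees can be sanity-checked: a proof in a tree with branching factor b supplies b − 1 sibling hashes per level, so wider trees are shallower but pay far more per level.

```python
# Compare Merkle proof sizes for hexary vs binary trees, 32-byte hashes.
HASH = 32

def proof_bytes(num_leaves, branching):
    # Depth = number of levels needed to cover num_leaves leaves.
    depth = 0
    while branching ** depth < num_leaves:
        depth += 1
    # Each level contributes (branching - 1) sibling hashes to the proof.
    return depth * (branching - 1) * HASH

leaves = 2 ** 32                   # ~4 billion accounts, illustrative
hexary = proof_bytes(leaves, 16)   # depth 8,  15 siblings/level -> 3840 bytes
binary = proof_bytes(leaves, 2)    # depth 32, 1 sibling/level   -> 1024 bytes
ratio = hexary / binary            # ~3.75, i.e. roughly four times shorter
```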
So in the Ethereum community we have had these two paths for optimizing the EVM: one of them is Greg Colvin's approach of EVM 1.5, which is more moderate upgrades and additions to the existing EVM, and the second one is this kind of wholesale replacement with eWASM. Basically, the idea would be: let's work together, let's come up with whatever combination of these two models seems most reasonable, and let's just immediately apply it as the only EVM available in the shards. This way you'd have much more efficiency without having to worry about as many backwards-compatibility issues.
On parallelizability: there are fairly simple Ethereum upgrades that we can do here. EIP 648 is one example, and there's also a sharding EIP which has a stricter version of EIP 648; these basically let you execute Ethereum transactions in parallel. If you can do that, then we basically have, if we want it, unlimited access to the big-block scaling route: if we have parallelizability, then if we want to, we could always make the blocks bigger, and what would happen is that the resource requirements of running a node would just be a bit higher, but if you have enough cores you would still be able to process a block fairly quickly. So parallelizability is something that, if we tried to add it to the existing Ethereum, would require some fairly deep changes, but if we can make this kind of new system where we have parallelizability right from the start, then it might be easier.
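A minimal sketch of the access-list idea behind EIP 648-style parallelism: transactions that declare disjoint sets of touched addresses can run concurrently, while overlapping ones must be serialized. The greedy batching below is purely illustrative, not the EIP's actual scheduling:

```python
# Group transactions into batches whose declared access lists are pairwise
# disjoint; each batch could then be executed in parallel across cores.

def parallel_batches(txs):
    batches = []  # list of (batch, union of touched addresses)
    for tx in txs:
        for batch, touched in batches:
            if not (tx["access_list"] & touched):
                batch.append(tx)          # no conflict: join this batch
                touched |= tx["access_list"]
                break
        else:
            batches.append(([tx], set(tx["access_list"])))  # new batch
    return [batch for batch, _ in batches]

txs = [
    {"id": 1, "access_list": {"A", "B"}},
    {"id": 2, "access_list": {"C"}},       # disjoint from tx 1: same batch
    {"id": 3, "access_list": {"B", "D"}},  # touches B, so it must wait
]
batches = parallel_batches(txs)
```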
Stateless clients: these are again a topic that I've written about a bit, but a topic that you might end up hearing about more and more over the next few years. So the basic idea behind a stateless client is this. As you might have seen from the diagram, stateless clients have something to do with a Merkle tree (everyone bow down to Ralph Merkle again). Consensus nodes do not hold the entire state: full nodes, miners, anyone, do not need to hold the entire state; they only need to hold the root hash of the state. People who want to send transactions would attach Merkle branches, so witnesses, basically proving the correctness of the specific portions of the state they want to access. So if you have a state root, and if I want to send a transaction that sends ten coins from address A to address B, I would provide two Merkle branches that prove the existing balances of address A and address B. Then the transactions and the blocks would get passed around with the witness, and that's all the information that a node needs in order to execute the block, execute every transaction in the block, and figure out what the new state root is going to be, what the new Merkle branches are, and what the portions of the state are that got modified by that transaction.
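The witness mechanism just described can be sketched for a toy four-leaf binary tree: a stateless verifier recomputes the root from a leaf plus its sibling hashes, without holding any state itself. The leaf encoding is made up for illustration:

```python
import hashlib

# Minimal Merkle witness check: fold a branch of sibling hashes back up to
# the root and compare against the root hash the stateless node already holds.

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def root_from_witness(leaf: bytes, index: int, siblings: list) -> bytes:
    node = h(leaf)
    for sib in siblings:
        # The low bit of the index says whether we are the right or left child.
        node = h(sib + node) if index & 1 else h(node + sib)
        index >>= 1
    return node

# Build a tiny 4-leaf tree so we can check a witness against its root.
leaves = [b"a:50", b"b:20", b"c:0", b"d:7"]
level = [h(l) for l in leaves]
root = h(h(level[0] + level[1]) + h(level[2] + level[3]))

# Witness for leaf index 1 (b's balance): one sibling hash per level.
witness = [level[0], h(level[2] + level[3])]
```

Recomputing the root from `(b"b:20", 1, witness)` reproduces `root`, while any tampered leaf value fails to, which is exactly the property that lets consensus nodes drop the state.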
So the idea here is that we can substitute each individual instance of a disk read, basically each individual instance of reading a storage key or an account from disk, which takes possibly something like a millisecond, with about one kilobyte of bandwidth, assuming we optimize the Merkle scheme and assuming a billion accounts; if we go up to a trillion accounts, then we'll only need about 1,300 bytes of bandwidth. Now, the benefits of this are, first of all, that it makes it much easier to reshuffle validators when sharding: the idea is that if you're in a stateless-client architecture and you suddenly get reassigned to a new shard, you do not have to download the state of the shard. In a stateless-client paradigm, fast syncing is literally pretty much instant, and this is true regardless of what the state size is. Stateless clients allow us to care much less about the state size: even if the state were, say, one petabyte, nobody would really need to store that one petabyte. And they massively increase parallelizability, kind of as a side effect. So this is one other kind of secondary scalability path, and it might even just make sense to do it in the sharded universe from day one.
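Those bandwidth figures check out for a binary Merkle tree with 32-byte hashes, where a witness costs roughly one sibling hash per level of depth:

```python
# Back-of-the-envelope check of the witness-size figures: a binary Merkle
# branch costs about 32 bytes (one sibling hash) per level of tree depth.
HASH = 32

def witness_bytes(num_accounts):
    depth = 0
    while 2 ** depth < num_accounts:
        depth += 1
    return depth * HASH

billion = witness_bytes(10**9)    # depth 30 -> 960 bytes, about a kilobyte
trillion = witness_bytes(10**12)  # depth 40 -> 1280 bytes, about 1,300 bytes
```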
So regardless of each individual proposal, the general idea is that we could have ongoing development of the Ethereum protocol happen in two layers, where the first layer basically involves relatively small changes, efficiency improvements, and getting Casper rolled out, and the second layer is the one where rapid development and experimentation happens. It's the one where we could get to thousands of transactions per second actually fairly quickly, on a significantly accelerated schedule if we want to. So if there are applications that do not need super-duper high security, they can just go on layer two fairly quickly. This gives us the benefits of basically both approaches at the same time: a safe and conservative layer one, all this rapid development in layer two, and the ability to use layer two in order to do all of these protocol improvements that developers have been dreaming about the whole time, without having to go through this kind of political process of getting all seven clients to agree on it, and write a million state tests for it, and fuzz it for six months, in the short term. In the longer term,
even as the sharding system solidifies, of course the sharding system will end up focusing more and more on conservatism and security, and eventually the two will be merged in some nice and clean way; but look, this is something that, if we want, can wait three or four years.

So the good news is that the initial work on sharding has been proceeding very quickly, to the point where we are basically just inches away from having a working Casper proof-of-concept testnet in Python. We also have initial work on sharding that's being done in this repo: we have some developers that are working on a sharding-capable client, that are starting to work on some of these tests, and that are starting to experiment with some of these optimizations. And the hope would be that, if there are other client developers that want to participate in this as well, this is something that should happen totally openly, and this is a process that should eventually go beyond just happening inside of one client. Basically, in the short term the sharding stuff would continue in a kind of proof-of-concept and test mode, and the nice thing is that, because this is not being done as a hard fork, because this is being rolled in gradually, as this kind of loosely coupled overlay at first and tightly coupled over time, there is no sharp testnet-to-mainnet cutoff: the extent to which the shards are a testnet and the extent to which the shards are a mainnet actually flows fluidly from one to the other over time. So basically, this way we can get development of both scalability and a large number of other needed improvements, both quickly and safely with respect to the existing Ethereum protocol. Thank you.
