
BEAT: asynchronous BFT made practical


BEAT: asynchronous BFT made practical Duan et al., CCS’18

Reaching agreement (consensus) is hard enough, doing it in the presence of active adversaries who can tamper with or destroy your communications is much harder still. That’s the world of Byzantine fault tolerance (BFT). We’ve looked at Practical BFT (PBFT) and HoneyBadger on previous editions of The Morning Paper. Today’s paper, BEAT, builds on top of HoneyBadger to offer BFT with even better latency and throughput.

Asynchronous BFT protocols are arguably the most appropriate solutions for building high-assurance and intrusion-tolerant permissioned blockchains in wide-area network (WAN) environments, as these asynchronous protocols are inherently more robust against timing and denial-of-service (DoS) attacks that can be mounted over an unprotected network such as the Internet.

The best performing asynchronous BFT protocol, HoneyBadger, still lags behind the partially synchronous PBFT protocol in terms of throughput and latency. BEAT is actually a family of five different asynchronous BFT protocols that start from the HoneyBadger baseline and make improvements targeted at different application scenarios.

Unlike HoneyBadgerBFT, which was designed to optimize throughput only, BEAT aims to be flexible and versatile, providing protocol instances optimized for latency, throughput, bandwidth, or scalability (in terms of the number of servers).

The BEAT protocols divide into two groups: those supporting full (general) state-machine replication (SMR), as required e.g. for smart contract use cases (BEAT0, BEAT1, BEAT2); and those that support BFT storage (append-only ledger) use cases only (BEAT3, BEAT4).

The following table summarises the BEAT family and the key distinguishing features of each member.



There’s a lot of ground to cover here, but I’ll do my best to give you an overview. Alongside the BEAT protocols themselves, the paper also includes two new building blocks: the generalized fingerprinted cross-checksum and an asynchronous verifiable information dispersal (AVID) algorithm.

The HoneyBadger baseline

HoneyBadger implements ACS (the asynchronous common subset), meaning that it provides these guarantees:

  • Validity: if a correct server delivers a set V, then |V| \geq n - f and V contains the inputs of at least n - 2f correct servers.
  • Agreement: if a correct server delivers a set V, then all correct servers deliver V.
  • Totality: if n - f correct servers submit an input, then all correct servers deliver an output.

HoneyBadger uses reliable broadcast (RBC) and asynchronous Byzantine binary agreement (ABA) protocols to achieve its aims. Threshold signatures are used to provide common coins for ABA, and threshold encryption is used to avoid censorship and achieve liveness.

In a threshold scheme the partial outputs (e.g. decryption shares) of at least t participants need to be combined in order to recover (decrypt) the intended value.
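
To make the combining step concrete, here is a minimal sketch of t-of-n recovery using Shamir secret sharing over a prime field. BEAT's actual primitives (TDH2 threshold encryption and threshold coin-flipping) operate over elliptic-curve groups and combine decryption shares rather than raw secrets, so treat this purely as an illustration of the threshold property:

```python
# Any t of the n shares reconstruct the secret; fewer reveal nothing.
import random

P = 2**127 - 1  # a Mersenne prime; real schemes use a group order

def share(secret, t, n):
    """Split `secret` into n shares using a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

shares = share(42, t=3, n=5)
assert combine(shares[:3]) == 42  # any 3 of the 5 shares suffice
```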

BEAT0: improved security and performance

BEAT0, our baseline protocol, incorporates a more secure and efficient threshold encryption, a direct instantiation of threshold coin-flipping (instead of using threshold signatures), and more flexible and efficient erasure-coding support.

BEAT0’s threshold encryption uses the TDH2 scheme by Shoup and Gennaro, providing 128-bit security under elliptic curve cryptography. This gives stronger security and better performance than the scheme used in HoneyBadger.

In place of the zfec erasure coding library used by HoneyBadger, which supports only Reed-Solomon codes and at most 128 servers, BEAT uses the Jerasure library giving access to more efficient erasure coding schemes and lifting the replica restriction.
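
To see what an (m, n) erasure code buys you, here is a toy example with m = 2 data fragments and one XOR parity fragment (so n = 3): any two fragments reconstruct the block. Jerasure is a C library offering real Reed-Solomon and Cauchy codes; this sketch illustrates only the recovery property, not its API:

```python
# Toy (m, n) = (2, 3) erasure code: lose any one fragment, still recover.
def encode(block: bytes) -> list:
    half = (len(block) + 1) // 2
    d1 = block[:half]
    d2 = block[half:].ljust(half, b'\0')           # pad to equal length
    parity = bytes(a ^ b for a, b in zip(d1, d2))  # XOR parity fragment
    return [d1, d2, parity]

def decode(fragments: list, length: int) -> bytes:
    d1, d2, parity = fragments
    if d1 is None:                                 # rebuild a lost fragment
        d1 = bytes(a ^ b for a, b in zip(d2, parity))
    if d2 is None:
        d2 = bytes(a ^ b for a, b in zip(d1, parity))
    return (d1 + d2)[:length]

block = b"hello world!"
frags = encode(block)
frags[1] = None                            # lose any single fragment...
assert decode(frags, len(block)) == block  # ...and still recover the block
```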

BEAT1: lower latency

Via a careful study of latency for each HoneyBadgerBFT subprotocol, we find that (1) most of the latency comes from threshold encryption and threshold signatures, and (2) somewhat surprisingly, when the load is small and there is low contention, erasure-coded reliable broadcast (AVID broadcast) causes significant latency.

BEAT1 swaps out the AVID broadcast protocol of BEAT0 for a replication-based reliable broadcast protocol, Bracha’s broadcast. Under small loads BEAT1 has lower latency. With small batch sizes BEAT1’s throughput is higher than HoneyBadger / BEAT0, but with larger batch sizes throughput is down by 20-30%.

BEAT2: causal ordering

BEAT2 builds on BEAT1 and also opportunistically moves the use of threshold encryption to the client side.

In BEAT2, when the ciphertexts are delivered, it is too late for the adversary to censor transactions. Thus, the adversary does not know what transactions to delay, and can only delay transactions from specific clients. BEAT2 can be combined with anonymous communication networks to achieve full liveness. BEAT2 additionally achieves causal order, which prevents the adversary from inserting derived transactions before the original, causally prior transactions.

BEAT3: higher throughput for storage use cases

BEAT3 is the first member of the BEAT family targeted for BFT-storage use cases (as opposed to general SMR).

Recall that the safety and liveness properties of BFT storage remain the same as those of general SMR, with the only exception that the state may not be replicated at each server (but instead may be erasure-coded). BEAT3 can be used for blockchain applications that need append-only ledgers, and specific blockchains where the consensus protocol serves as an ordering service, such as Hyperledger Fabric.

Whereas so far we’ve been using a reliable broadcast protocol (AVID), BEAT3 replaces this with a bandwidth-efficient information dispersal scheme called AVID-FP. To disperse a block M, AVID requires bandwidth O(n|M|), whereas AVID-FP can do it in O(|M|). To order transactions of size B, the communication complexity of BEAT0 is O(nB), of BEAT1 and BEAT2 is O(n^2 B), and of BEAT3 is O(B).

AVID-FP is a bandwidth-efficient AVID (asynchronous verifiable information dispersal) protocol using fingerprinted cross-checksum. In AVID-FP, given a block B to be dispersed, the dealer applies an (m,n) erasure coding scheme, where m \geq f + 1 and n = m + 2f… then it generates the corresponding fingerprinted cross-checksum for B with respect to the erasure coding scheme.

Each server verifies the correctness of its fragment with respect to the fingerprinted cross-checksum, “and then, roughly speaking, leverages the (much smaller) fingerprinted cross-checksum in place of the fragment in the original AVID protocol.”

An (n,m) fingerprinted cross-checksum contains a cross-checksum array of n values, and a fingerprint array of m values. The ith entry in the checksum array contains the hash of the ith coded fragment. See section 4 in the paper for details of the fingerprint array usage.
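
The cross-checksum half is easy to sketch; the fingerprint half (which lets a server validate its fragment without seeing the others) is the novel part and is left to the paper. A minimal illustration, with invented function names:

```python
# ith entry of the cross-checksum is the hash of the ith coded fragment.
import hashlib

def cross_checksum(fragments: list) -> list:
    return [hashlib.sha256(f).digest() for f in fragments]

def verify_fragment(i: int, fragment: bytes, checksum: list) -> bool:
    """A server checks a fragment it received against the checksum array."""
    return hashlib.sha256(fragment).digest() == checksum[i]
```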

BEAT4: partial reads

BEAT4 further reduces read bandwidth using a novel erasure-coded reliable broadcast protocol called AVID-FP-Pyramid. This supports use cases where clients only need to read a fraction of a data block. AVID-FP-Pyramid is based on pyramid codes, which trade space for access efficiency in erasure-coded storage systems (about 10% extra space requirement for a 50% drop in access overhead). Pyramid codes can be efficiently built from any (n, m) systematic and MDS (maximum distance separable) code. See section 4 in the paper for brief details, or Huang et al. for an in-depth treatment. BEAT4 uses a 2-level pyramid scheme which can tolerate one failure in each level, and is able to reduce read bandwidth by 50%. Full details are in section 9 of the paper.

Evaluation

The evaluation is conducted on EC2 with up to 92 nodes from ten different regions on five different continents, using a variety of network sizes and batch sizes. In the figures that follow, f is the number of faults tolerated: BEAT0, BEAT1, BEAT2, and BEAT3 require 3f + 1 nodes, while BEAT4 requires 3f + 2 nodes.

When f=1, BEAT0, BEAT1, BEAT2, and BEAT3 are around 2x faster than HoneyBadger, and when f becomes larger, they are even faster than HoneyBadger. When f = 1, BEAT4 is about as fast as HoneyBadger… As f increases, HoneyBadger is much slower than BEAT4.

For throughput, BEAT0 slightly outperforms HoneyBadger. BEAT1 and BEAT2 achieve higher throughput than HoneyBadger with small batch sizes, but have 20-30% lower throughput at larger batch sizes. BEAT3 and BEAT4 outperform all the other protocols consistently.

If this write-up has captured your interest, I highly encourage you to go on and read the full paper, which contains significantly more detail than I was able to convey here.




"The new, post-nationalization ARM-13 architecture will have a separate quantum currency coprocessor..."

“The new, post-nationalization ARM-13 architecture will have a separate quantum currency coprocessor for handling interconversion and factoring of British currency units…”

- Charles Stross

Go Die Me


We are so inured to evil that this hardly raises an eyebrow — but this shows the extent of the failure of our society. That we have become accustomed to it does not excuse it, but makes it yet more monstrous.


Why Intentional Communities Fail


While it may seem as though communal or collective ownership of the means of production is the ideal scenario, it appears that it only works under a certain set of conditions and circumstances.

What brought this realization was reading numerous accounts of how attempts to found voluntary subcultures, from intentional communities to hippie “back-to-the-land” movements, failed. The particulars of these failures are all unique, but the basic outlines tend to follow the same general pattern.

The problem is that such communities are fundamentally artificial constructs. They are nothing like the original primordial societies that they attempted to replicate, which, as we saw before, were based around two essential factors: kinship structures and religious worship. The problem is these voluntary subcultures lack the essential ingredients that held these traditional societies together organically.

It’s why we don’t really have very many examples to point to of alternatives to the mainstream society succeeding in the long-term. I’m sure if we looked hard enough, we might find some, but they are so exceptional as to be almost not worth mentioning. Most attempts to secede from the broader society fail. Some fail quickly, and some fail slowly, but they all fail in the end.

Another observation is that nearly all of the long-term successful attempts have been based around some sort of religious affiliation; be it religious movement or cult. This ensures the requisite social cohesion.

Now, this is a bitter pill to swallow. I’m as critical of organized religion as the next person (unless the next person happens to be Sam Harris). I don’t want to have to admit that religion—with all its superstition, irrationality, hierarchy, hypocrisy, sanctimony, magical thinking, moralizing and repression—is a necessary prerequisite for living without the State. It seems like just trading one sort of oppression for another.

But, I must grudgingly admit that it does seem to be the magic ingredient that has kept voluntary subcultures alive and functional long term. The primary example is, of course, the Amish. While the Amish do not claim descent from a single common ancestor, they are united by the religion that they follow, and their basic social structure is based around their beliefs. The basic unit is the conjugal, monogamous household. They speak a common language (a German dialect), and certainly share much of their DNA due to inbreeding. Another example might be Hasidic communities. One example from the past is the Amana Colonies of Iowa. They were an alternative commune founded by German Pietists:

Amana Colonies (Wikipedia).

Concerning the Amish, here’s Patrick Deneen from that same interview as before illustrating how they represent an alternative model of society to the Liberalism he’s criticizing:

[27: 40] From the macro view, the debate really tends to be over which system of depersonalized relationship [State or Market] are we going to prefer and put into the primary position. The major desiderata of the Liberal order is to transform every form of what might have been a form of personal obligation entailing certain duties and responsibilities that you would owe to a specific person, and to transform those into depersonalized relationships. Because when you have a personal obligation to someone, you don’t feel free. You feel like you’re obligated to do something for someone. When it’s depersonalized, I’m not obligated to that particular person.

I’ll give you a really quick example of what I mean by this. I often use the Amish as an example of an opposite system. In certain Amish communities, it’s not permitted to take on insurance. To procure insurance is seen as a sort of aggression against the community. And the reason is that you withdraw yourself from the shared responsibility. When some catastrophe or some bad event happens to a person in your community, you are personally obligated as a member of that community to help those people. So if someone’s house burns down, you are personally obligated to help rebuild that house. Or, if a parent dies, the community is obligated to raise that family and to help financially with that family.

What we do in modern Liberal society is to create an insurance market that we can all contribute to, and if something happens we can make a claim on, but that claim is not on any *particular* individual, it’s rather a depersonalized mechanism that liberates us from any particular obligation to any other person.

You could say, this is the ground condition of our liberty–it’s to minimize those personal obligations. And so notice how these debates take place in our society. When we were debating the health care policy in recent years, the debate is about whether it should be provided through more private means or more public means. Should it be more of a state-based system or a market-based system? But notice that’s a debate about means, and not really ends.

[…]

[32:30] “[Liberalism] ultimately acts as a kind of solvent against almost every form of relationship that we can think of. You can talk about it in terms of community, you can talk about it in terms of family, you can talk about it in terms of religion, you can talk about it in terms of association, even today you can talk about it increasingly in terms of nation.”

“One of the interesting debates we’re having today that really comes out of this is, is there any coherent idea of what a nation is? Is there any reason why there should be borders or boundaries where they’re drawn? Or is that just ultimately arbitrary? And if a nation is really just an instantiation of the liberal philosophy, then in the end there’s probably no reason to think that borders and boundaries actually have any coherent reason to exist, because it’s really an idea that can’t be bounded. And so one of the debates that’s roiling us today is: is there something more to a nation other than simply the Liberal idea that seems to define it?”

1811 – Why Liberalism Failed w/ Patrick J Deneen (YouTube)

“It is not permitted to take on insurance…” Now we see the power of, for example, things like guilds, which provided those essential services to their initiates. And the guild system might be extrapolated to other sodalities, from the village community in India, to the tribes of the Native Americans to the oikoi households of the ancient Near East and Greece. All these were the basic functional units of society before the advent of the Liberal state and the globalized Market system.

In the failing Western Roman Empire, for example, monastic brotherhoods flourished across Western Europe. These fraternal orders acted as both colonizers and proselytizers for Christianity thriving among hostile tribal peoples while gradually converting them to the new religion—one based not upon ancestors or consanguinity, but belief. These “intentional communities” were often the hotbeds of productive activity in the post-Roman world. They kept reading and literacy alive during the Dark Ages and feudal times. Many medieval innovations in craftsmanship, fabrication, alternative energy (watermills), and even banking (e.g. Knights Templar) were originally developed in monastic brotherhoods before being extrapolated to the wider society. Clocks were invented by monks to keep track of their prayer schedule. Monasteries are still known for the quality of their beer to this day. But key to their survival was always religion.

This is why intentional communities and back-to-the-land movements almost always fail. They are attempting to recreate an older, primordial, more organic societal order without the requisite social glue that held them together. Counterculture movements are usually full of people obsessed with individualism—“just be yourself, man!” was at the heart of the hippie ethos. And before the original Jesus Freaks, hippies were often reacting aggressively against the organized religions they were brought up in, which were perceived as “oppressive” and “intolerant” (as indeed they were). Instead, the counterculture sought “liberation,” often through hedonism–sex and drugs and the like. But the overwhelming evidence is that such selfish attitudes made implementing functional, viable alternative communities effectively impossible. One can read about the failure of any number of these hippie communes or alternative communities in order to come to this sad conclusion.

The irony is, these countercultural attitudes were only made possible by living in the modern Liberal state that they were rejecting! A profound irony indeed…

Instead, the alternative communities that ultimately succeeded long-term were suffused with religion and antagonistic to strident individualism. I don’t like that fact, but that’s what history shows. That is, they were all functionally illiberal.

All this got me thinking about a book Dmitry Orlov published several years ago called Communities that Abide. Orlov went off to the ethnographic and sociological literature to find out just what was the “secret sauce” of the communities that–like the Big Lebowski’s Dude–abided. What struck me is that the one thing they had in common was that they were subcultures existing within modern nation states and at the same time co-existing with them. What held these subcultures together in the absence of either the State or the Market, and despite the often open hostility of the nation-states under which they lived towards them?

Well, as with the example of the Amish given by Patrick Deneen above, what they had in common was that they were all conceived in imitation of the family; they spoke a common dialect and shared similar values and behavioral ethics, and were often (although not always) intensely religious. Here are the major communities Orlov analyzed in his book (I’ve listed their group names along with the nominal nation-states they reside in):

The Hutterites (Amish) — United States and Canada
The Roma (Gypsies) — Eastern and Western Europe
The Russian Mafia — Russia and the former Soviet Union
The Pashtuns — Afghanistan and Pakistan.
The Israeli Kibbutzim — Israel
The Mormons — Primarily the western United States
The Dukhobors — Western Canada

What do all these communities have in common? Their basic social structures are essentially identical to those of all people on earth pre-state! They share the same characteristics as the kinds of societies described by people like Maine and Morgan, such as the Indian Village or the Iroquois. This effectively describes their legal systems; their social structure; their economic system. They are all illiberal according to Patrick Deneen’s description above. They are suffused with social obligations. As anthropologists have determined, this was the composition of basically the entire human race before the coming of the modern Liberal nation-state. Henry Maine writes in Lectures on the Early History of Institutions [1874]:

Cæsar’s failure to note the natural divisions of the Celtic tribesmen, the families and septs or subtribes, is to me particularly instructive. The theory of human equality is of Roman origin; the comminution of human society, and the unchecked competition among its members, which have gone so far in the Western Europe of our days, had their most efficient causes in the mechanism of the Roman State. Hence Cæsar’s omissions seem to be those most natural in a Roman general who was also a great administrator and trained lawyer; and they are undoubtedly those to which an English ruler of India is most liable at this moment. It is often said that it takes two or three years before a Governor-General learns that the vast Indian population is an aggregate of natural groups and not the mixed multitude he left at home; and some rulers of India have been accused of never having mastered the lesson at all.

Of course, when we talk about collapse, we are really talking about nation-states, which are basically legal constructs and shared fictions. People themselves don’t just disappear. I don’t know of any tribal communities that have “collapsed.” And when states do collapse, what’s left are these more primordial forms of human social affiliation and solidarity to fall back on. So while empires are fragile and ephemeral things that come and go; expand and contract, the underlying fabric of society remains (or I should say, remained) more-or-less intact before industrialism (i.e. they abided). In fact, this might be a good way of understanding ancient history. Ancient empires were merely a “layer” of power above a substrate of “traditional” society, which was organized around the family, household, lineage, and clan, and headed by patriarchs, similar to many of Orlov’s subcultures like the Roma (Gypsy) people. According to Orlov, despite their surface differences, the underlying societies more-or-less shared the same characteristics:

  • Autonomous, refusing to coalesce into larger groups. (an anthropologist would say segmentary–ch)
  • Separatist, shunning the outsiders (and those of their own number who misbehave), and interacting with the outside world as a group rather than as individuals.
  • Anarchic in their patterns of self-governance—neither patriarchal nor matriarchal—with certain individuals granted positions of responsibility, but not positions of authority.
  • Having an oral rather than a written code of conduct. (Maine writes of the early Celts: “[Caesar] says that the Druids presided over schools of learning, to which the Celtic youth flocked eagerly for instruction, remaining in them sometimes (so he was informed) for twenty years at a time. He states that the pupils in these schools learned an enormous quantity of verses, which were never committed to writing; and he gives his opinion that the object was not merely to prevent sacred knowledge from being popularised, but to strengthen the memory. Besides describing to us the religious doctrine of the Druids, he informs us that they were extremely fond of disputing about the nature of the material world, the movements of the stars, and the dimensions of the earth and of the universe. At their head there was by his account a chief Druid, whose place at his death was filled by election, and the succession occasionally gave rise to violent contests of arms”.-ch)
  • Communist in their patterns of production and consumption, with little use for money or markets.
  • Based on a strong central ideology (or faith) which they refuse to analyze, question or debate.
  • Having lots of children, bringing them up as their replacements, and retiring as young as possible.
  • Refusing to “work jobs,” and doing little work beyond what they consider necessary.
  • Consciously rejecting much of the culture and quite a lot of the technology of surrounding society. (less relevant to ancient cultures-ch)
  • Speaking their own languages or dialects, which they jealously preserve and safeguard against outside influences.
  • Adhering to a certain protocol for interacting with outsiders, perhaps hiding in plain sight, perhaps through a certain “in your face” disguise that hides who they are behind a more conventional image.
  • Pacifist rather than warlike, refusing to carry weapons or take part in military actions of any sort, and fleeing from danger rather than confronting it.
  • Nomadic rather than settled, with minimal attachment to any one piece of land beyond its immediate usefulness to them, and willing to relocate as a group in times of danger, hardship or persecution.
  • Quite happy and generally content with their lot in life, being resigned to accepting whatever life gives, and relatively unafraid of death, neither fighting it nor seeking it.

Communities that Abide – Part 2 (Club Orlov)

Again, this is describing pretty much every society on earth prior to 1500! I think when we look at history, we tend to read about the exploits of conquerors, emperors, kings, generals, and rulers. We read the annals of empire building—famous battles, capital cities, court intrigue, trading patterns and the like. That’s what was written down, after all. But underneath it all, this was the fabric of daily life, from the advent of sedentism and agriculture until the time of the Industrial Revolution.

Meanwhile, in the rest of the world, older social traditions prevailed outside of the cosmopolitan port cities connected by their expanding webs of trade. Even in modern economies like the United States and China—the two biggest economies on earth—we see this rural/urban divide between traditional family structures in the countryside and the anonymous business activities in urban centers mediated by laws and contracts. Really, Liberalism at its heart has been a great experiment in undoing these connections to family, language and place. In its place, State and Market marched together hand-in-hand as conquerors rather than adversaries.

So the nation-state is little else than the ruling class and its scaffold of institutions and interlocking power webs, which comes and goes (i.e. collapses), but the older, underlying social fabric remains more-or-less intact, or at least has done in the past. A big problem, as I see it, is that in Western Liberalized democracies (especially the United States), there is no underlying society. We are all atomized strangers now, and self-serving media and corporate interests have every incentive to fan the flames of division and discord to the greatest extent possible (as do the Russians), unlike an earlier generation of politicians who were obsessed with the idea of encouraging unity. If and when (more likely when) the central government in WEIRD states like the U.S. falls apart, there is no underlying “village community,” “kinship structure,” “folk tradition,” or common religion (or whatever else that unites us) to fall back upon. And sorry, Libertarians, but in those circumstances, feudalism run by warlords is the most likely outcome based on historical precedent, not the flourishing of “free markets” or voluntary transactions of small independent producers mediated by lumps of intrinsically valuable gold nuggets.

Henry Sumner Maine wrote about the prototypical village communities he encountered throughout India in Ancient Law:

[T]here is a strong à priori improbability of our obtaining any clue to the early history of property, if we confine our notice to the proprietary rights of individuals. It is more than likely that joint-ownership, and not separate ownership, is the really archaic institution, and that the forms of property which will afford us instruction will be those which are associated with the rights of families and of groups of kindred. The Roman jurisprudence will not here assist in enlightening us, for it is exactly the Roman jurisprudence which…has bequeathed to the moderns the impression that individual ownership is the normal state of proprietary right, and that ownership in common by groups of men is only the exception to a general rule…

It happens that, among the [Hindus], we do find a form of ownership…respecting the original condition of property. The Village Community of India is at once an organised patriarchal society and an assemblage of co-proprietors. The personal relations to each other of the men who compose it are indistinguishably confounded with their proprietary rights, and…attempts of English functionaries to separate the two may be…some of the most formidable miscarriages of Anglo-Indian administration.

The Village Community is known to be of immense antiquity…Conquests and revolutions seem to have swept over it without disturbing or displacing it, and the most beneficent systems of government in India have always been those which have recognised it as the basis of administration.

[I]n India …As soon as a son is born, he acquires a vested interest in his father’s substance, and on attaining years of discretion he is…permitted…to call for a partition of the family estate. As a fact, however, a division rarely takes place even at the death of the father, and the property constantly remains undivided for several generations, though every member of every generation has a legal right to an undivided share in it. The domain thus held in common is…managed by the eldest agnate, by the eldest representative of the eldest line of the stock.

Such an assemblage of joint proprietors, a body of kindred holding a domain in common, is the simplest form of an Indian Village Community, but the Community is more than a brotherhood of relatives and more than an association of partners. It is an organised society, and besides providing for the management of the common fund, it seldom fails to provide…for internal government, for police, for the administration of justice, and for the apportionment of taxes and public duties.

…Although, in the North of India…the Community was founded by a single assemblage of blood-relations… men of alien extraction have always, from time to time, been engrafted on it, and a mere purchaser of a share may generally, under certain conditions, be admitted to the brotherhood. In the South of the Peninsula there are often Communities which appear to have sprung not from one but from two or more families; and there are some whose composition is known to be entirely artificial; indeed, the occasional aggregation of men of different castes in the same society is fatal to the hypothesis of a common descent. Yet in all these brotherhoods either the tradition is preserved, or the assumption made, of an original common parentage. Mountstuart Elphinstone…observes of them:

“The popular notion is that the Village landholders are all descended from one or more individuals who settled the village; and that the only exceptions are formed by persons who have derived their rights by purchase or otherwise from members of the original stock. The supposition is confirmed by the fact that, to this day, there are only single families of landholders in small villages and not many in large ones; but each has branched out into so many members that it is not uncommon for the whole agricultural labour to be done by the landholders, without the aid either of tenants or of labourers. The rights of the landholders are theirs collectively and, though they almost always have a more or less perfect partition of them, they never have an entire separation. A landholder, for instance, can sell or mortgage his rights; but he must first have the consent of the Village, and the purchaser steps exactly into his place and takes up all his obligations. If a family becomes extinct, its share returns to the common stock.”

The tokens of an extreme antiquity are discoverable in almost every single feature of the Indian Village Communities. We have so many independent reasons for suspecting that the infancy of law is distinguished by the prevalence of co-ownership by the intermixture of personal with proprietary rights, and by the confusion of public with private duties, that we should be justified in deducing many important conclusions from our observation of these proprietary brotherhoods, even if no similarly compounded societies could be detected in any other part of the world. It happens, however…that [there are] a similar set of phenomena in those parts of Europe which have been most slightly affected by the feudal transformation of property, and which in many important particulars have as close an affinity with the Eastern as with the Western world…

http://www.gutenberg.org/files/22910/22910-h/22910-h.htm (153)

Echoing H.S. Maine’s description of the Indian village community above, Dmitry Orlov writes:

There are two organizing principles that self-sufficient communities can rely on in order to succeed: communist organization of production and communist organization of consumption. Both of these produce much better results for the same amount of effort, and neither is generally available to the larger society, which has to rely on the far more wasteful market-based or central planning-based mechanisms, both of which incur vast amounts of unproductive overhead—bankers, traders and regulators in the case of market-based approaches, and government bureaucrats and administrators in the case of centrally planned approaches. History has shown that market-based approaches are marginally more efficient than centrally planned ones, but neither one comes anywhere near the effectiveness of communist approaches practiced on the small scale of a commune.

It stands to reason that communist production methods would outperform capitalist ones. On the one hand, you have a group of people driven to work together out of a sense of solidarity and mutual obligation, cooperating of their own free will, free to switch tasks to keep life from becoming monotonous, free to do what they believe would work best, using work as a way to earn respect and improve their social standing, knowing full well that their fellows will take care of them and their families should they ever become unable to work.

On the other hand, you have commoditized human beings pigeon-holed by a standardized skill set and a job description, playing the odds in an arbitrary and precarious job market, blindly following orders for fear of ending up unemployed, relying on work to keep them and their immediate family from homelessness and starvation, and discarded once “burned out” on the set of tasks for which they are considered “qualified.” The result of all this is that 70% of the workers in the US say that they hate their job, putting a gigantic drag on the capitalist economy…

Those who chafe at the use of the word “communist” should feel reassured that no military or political “communist menace” is ever likely to reassert itself: state communism is as dead as a burned piece of wood. The one remaining, ongoing attempt at unreformed state communism is North Korea, and it is the exception that proves the rule. On the other hand, regardless of your opinions, you too are a communist.

First, you are human, and for over 99% of their existence as a species humans have lived in small tribes organized on communist principles, with no individual land ownership, no wage labor, no government, and no private property beyond a few personal effects. If it weren’t for communism, you wouldn’t be here.

Second, if you have a family, it is likely to be run on communist principles: it is unlikely that you invoice your children for the candy they eat, or negotiate with your spouse over who gets to feed them. The communist organizing principle “From each according to abilities, to each according to needs” is what seems to prevail in most families, and the case where it doesn’t we tend to regard as degenerate. From this it seems safe to assume that if you are human and draw oxygen, then you must be, in some sense, a communist.

Communities That Abide – Part 3 (Club Orlov)

Echoing Orlov, Nassim Taleb writes: “Today’s Roma people (aka Gypsies) have tons of strict rules of behavior toward Gypsies, and others toward the unclean non-Gypsies called payos. And, as the anthropologist David Graeber has observed, even the investment bank Goldman Sachs, known for its aggressive cupidity, acts like a communist community from within, thanks to the partnership system of governance.” Much of the difference, he explains, comes down to a question of scale:

Things don’t “scale” and generalize, which is why I have trouble with intellectuals talking about abstract notions. A country is not a large city, a city is not a large family, and, sorry, the world is not a large village. There are scale transformations…

So we exercise our ethical rules, but there is a limit from scaling beyond which the rules cease to apply. It is unfortunate, but the general kills the particular. The question we will reexamine later, after deeper discussion of complexity theory, is whether it is possible to be both ethical and universalist. In theory, yes, but, sadly, not in practice. For whenever the “we” becomes too large a club, things degrade, and each one starts fighting for his own interest. The abstract is way too abstract for us.

This is the main reason I advocate political systems that start with the municipality, and work their way up (ironically, as in Switzerland), rather than the reverse, which has failed with larger states. Being somewhat tribal is not a bad thing–and we have to work in a fractal way in the organized harmonious relations between tribes, rather than merge all tribes in one large soup. In that sense, American federalism is the ideal system.

This scale transformation from the particular to the general is behind my skepticism about unfettered globalization and large centralized multiethnic states. The physicist and complexity researcher Yaneer Bar-Yam showed quite convincingly that “better fences make better neighbors”–something both “policymakers” and local governments fail to get about the Near East. Scaling matters, I will keep repeating until I get hoarse…

Nassim Nicholas Taleb, Skin in the Game, pp. 58-59

Next time, we’ll take a closer look at some important insights into this idea provided by Taleb in his new book.


Noria: dynamic, partially-stateful data-flow for high-performance web applications


Noria: dynamic, partially-stateful data-flow for high-performance web applications Gjengset, Schwarzkopf et al., OSDI’18

I have way more margin notes for this paper than I typically do, and that’s a reflection of my struggle to figure out what kind of thing we’re dealing with here. Noria doesn’t want to fit neatly into any existing box!

We’ve seen streaming data-flow engines that maintain state and offer SQL interfaces and even transactions (e.g. Apache Flink, and data Artisans’ Streaming Ledger for Flink). The primary model here is data-flow, and SQL is bolted on as an interface to the state. The title of this paper sets me off thinking along those lines, but from the end user perspective, Noria looks and feels more like a database. The SQL interface is primary, not ancillary, and it maintains relational data in base tables (using RocksDB as the storage engine). Noria makes intelligent use of data-flow beneath the SQL interface (i.e., dataflow is not exposed as an end-user programming model) in order to maintain a set of (semi-)materialized views. Noria itself figures out the most efficient data-flows to maintain those views, and how to update the data-flow graphs in the face of schema / query set changes.

The primary use case Noria is designed for is read-heavy web applications with high performance (low latency) requirements. Such applications today normally include some kind of caching layer (e.g., memcached, Redis) to accelerate read performance and lighten database load. A lot of application developer effort can go into maintaining these caches and also denormalised and computed state in the database.

In general, developers must choose between convenient, but slow, ‘natural’ relational queries (e.g., with inline aggregations), and increased performance at the cost of application and deployment complexity (e.g. due to caching).

Noria simplifies application development by keeping data in base tables (roughly, the core persistent data) and maintaining derived views (roughly, the data an application might choose to cache). Any computed information derived from the base tables is kept out of those tables. Programmers don’t need to worry about explicit cache management/invalidation, computing and storing derived values, and keeping those consistent. Noria does all this for them.

At its core, Noria runs a continuous, but dynamically changing, dataflow computation that combines the persistent store, the cache, and elements of application logic. Each write to Noria streams through a joint data-flow graph for the current queries and incrementally updates the cached, eventually-consistent internal state and query results.

(Which is also reminiscent of CQRS, but again, the pattern here is used inside the datastore).

It’s not enough for Noria to maintain just some recent window of state, it needs to store all the persistent state. So state explosion is a potential problem. That’s where the ‘partially-stateful data-flow’ part from the paper title comes in, as Noria has a mechanism for retaining only a subset of records in memory and re-computing any missing values from the upstream operators (and ultimately, base tables) on demand.

The current prototype has some limitations, but it’s also showing a whole lot of promise:

When serving the Lobsters web application on a single Amazon EC2 VM, our prototype outperforms the default MySQL-based backend by 5x while simultaneously simplifying the application. For a representative query, our prototype outperforms the widely-used MySQL/memcached stack and the materialized views of a commercial database by 2-10x. It also scales the query to millions of writes and tens of millions of reads per second on a cluster of EC2 VMs, outperforming a state-of-the-art data-flow system, differential dataflow.

The end-user perspective

A Noria program looks like SQL DDL and includes definitions of base tables, internal views used as shorthands in other expressions, and external views which the application can later query. Data is retrieved via parameterised SQL queries. Data in base tables can be updated with SQL insertions, updates, and deletes. Noria applies these changes to the appropriate base tables and updates dependent views.

Noria also implements the MySQL binary protocol, so existing applications using prepared statements against a MySQL database can work directly on top of Noria with no changes required.
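
Because the adapter speaks the same wire protocol, a stock MySQL client exercises both paths. Here’s a hedged sketch: the host, credentials, table name, and view name are all illustrative assumptions, not anything from the paper:

```python
# Talking to Noria through its MySQL adapter with an ordinary client.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306,
                       user="lobsters", db="lobsters")
with conn.cursor() as cur:
    # A write lands in a base table; Noria streams it through the
    # data-flow as an update to all dependent views.
    cur.execute("INSERT INTO votes (story_id, user_id) VALUES (%s, %s)",
                (1, 42))
    conn.commit()
    # A read hits an incrementally maintained external view.
    cur.execute("SELECT story_id, vcount FROM story_vc WHERE story_id = %s",
                (1,))
    print(cur.fetchone())
```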

The consistency model is eventual with a guarantee that if writes quiesce, all external views eventually hold results that are the same as if the queries had been executed directly against the base table data. “Many web applications fit this model: they accept the eventual consistency imposed by caches that make common-case reads fast.”

One very nice feature of Noria is that it accepts the fact that application queries and schema evolve over time. Noria plans the changes needed to the data-flow graph to support the changes and transitions the application with no downtime.

Porting a MySQL-based application to Noria typically proceeds in three steps:

  1. Import existing data into Noria from a database dump, and point the application at the Noria MySQL adapter. You should see performance improvements for read queries, especially those that are frequently used.
  2. Create views for computations that the MySQL application currently manually materialises.
  3. Incrementally rewrite the application to rely on these natural views, updating the write path so that the application itself no longer manually updates views and caches.

During the third phase application performance should steadily improve while the code simplifies at the same time.

Data-flows in Noria

Noria creates a directed acyclic data-flow graph of relational operators with base tables at the roots and external views at the leaves.

When an application write arrives, Noria applies it to a durable base table and injects it into the data-flow as an update. Operators process the update and emit derived updates to their children; eventually updates reach and modify the external views. Updates are deltas that can add to, modify, and remove from downstream state.
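
To make the delta model concrete, here is a toy count aggregation in the spirit of Noria’s operators (the interface is invented for illustration; Noria itself is written in Rust). It consumes signed row deltas and emits the deltas needed to keep a downstream view current:

```python
# An incremental count operator: each input delta produces the deltas
# that retract the old aggregate row and insert the new one.
from collections import defaultdict

class CountOperator:
    def __init__(self):
        self.counts = defaultdict(int)   # group key -> current count

    def on_update(self, key, sign):
        """sign is +1 for an inserted source row, -1 for a deleted one."""
        old = self.counts[key]
        self.counts[key] += sign
        deltas = []
        if old:                          # retract the previous aggregate row
            deltas.append((key, old, -1))
        if old + sign:                   # insert the new aggregate row
            deltas.append((key, old + sign, +1))
        return deltas                    # forwarded to downstream operators

votes = CountOperator()
votes.on_update("story-1", +1)  # -> [("story-1", 1, +1)]
votes.on_update("story-1", +1)  # -> [("story-1", 1, -1), ("story-1", 2, +1)]
```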

Joins are implemented using upqueries: requests for matching records from stateful ancestors.

Consistency

To provide its eventual consistency guarantees Noria requires that:

  • operators are deterministic functions over their own state and the inputs from their ancestors;
  • there are no races between updates and upqueries;
  • updates on the same data-flow path are not reordered; and
  • races between related updates that arrive independently at multi-ancestor operators via different data-flow paths are resolved. Noria addresses this by requiring such operators to be commutative. “The standard relational operators Noria supports have this property.”

With respect to ordering, each operator totally orders all updates and upquery requests it receives for an entry, and the downstream dataflow ensures that all updates and upquery responses from the entry are processed by all consumers in that order.

Upqueries require special care since upquery responses don’t commute with each other or with previous updates. Noria handles this by ensuring that no updates are in flight between the upstream stateful operator and the join when a join upquery occurs: each join upquery is scoped to an operator chain processed by a single thread. (Updates on other chains can be processed in parallel).

State

The partially-stateful data-flow model lets operators maintain only a subset of their state. This concept of partial materialization is well-known for materialized views in databases, but novel to data-flow systems. Partial state reduces memory use, allows eviction of rarely-used state, and relieves operators from maintaining state that is never read… Noria makes state partial whenever it can service upqueries using efficient index lookups. If Noria would have to scan the full state of an upstream operator to satisfy upqueries, Noria disables partial state for that operator.

Partial-state operators start out empty and are gradually and lazily populated by upqueries.

Like a cache, entries can be evicted under memory pressure. Eviction notices flow along the update data-flow path, indicating that some state entries will no longer be updated. If it is later required to read from evicted state Noria recomputes it via recursive upqueries (all the way to the base tables if necessary).
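
The contract is simple enough to sketch with invented names: reads fill state lazily, and eviction just deletes entries, because anything evicted can be recomputed on demand:

```python
# A sketch of a partially-stateful view: entries exist only for keys
# that have been read, and evicted keys are recomputed via an upquery.
class PartialView:
    def __init__(self, upquery):
        self.state = {}          # key -> rows, only for keys ever read
        self.upquery = upquery   # recomputes a key from upstream operators
                                 # (ultimately from base tables)

    def read(self, key):
        if key not in self.state:          # never filled, or evicted
            self.state[key] = self.upquery(key)
        return self.state[key]

    def evict(self, key):
        self.state.pop(key, None)          # a later read re-upqueries
```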

For correct handling of joins, once upstream state has been filled in via recursive upqueries, a special join upquery executes within a single operator chain and excludes concurrent updates.

Data-flow transitions

Changes to a Noria program over time (e.g. the set of SQL query expressions) are handled by adapting the data-flow dynamically.

Noria first plans the transition, reusing operators and state of existing expressions where possible. It then incrementally applies these changes to the data-flow, taking care to maintain its correctness invariants. Once both steps complete, the application can use new tables and queries.

See section 5 in the paper for full details.

Evaluation

Noria is 45kloc of Rust and supports both single server and clustered usage. The prototype is evaluated using backend workloads generated from the production Lobsters web application. It is compared against vanilla MySQL (MariaDB), a MySQL/memcached stack, the materialized views of a commercial database, and an idealized cache-only deployment (the latter not offering any persistence, but giving an estimate of the performance when all reads are served from memory).

Here’s how Noria compares to MariaDB on Lobsters, where “Noria achieves both good performance and natural, robust queries.”

Noria’s space overhead is around 3x the base table size for Lobsters.

The rest of the comparisons are done with single server setups and a subset of Lobsters. For read-heavy workloads Noria outperforms all other systems except for the pure memcached at 100-200K requests/sec. With a mixed read-write workload Noria again beats everything except for the (unrealistic) pure memcached solution.

See section 8.2 for an interesting comparison of Noria with DBToaster as well.

Compared to a Differential Dataflow implementation based on Naiad and a 95% read Lobsters subset workload, Noria scales competitively and starts to show advantage from 4 machines onwards.

To achieve truly large scale Noria can shard large base tables and operators with large state across machines. “Efficient resharding and partitioning the data-flow to minimize network transfers are important future work for Noria…”

So let’s return to the question we started with, what kind of thing is Noria? In the authors’ own words:

Noria is a web application backend that delivers high performance while allowing for simplified application logic. Partially-stateful data-flow is essential to achieving this goal: it allows fast reads, restricts Noria’s memory footprint to state that is actually used, and enables live changes to the data-flow.

Noria is available at https://pdos.csail.mit.edu/noria.




RobinHood: tail latency aware caching – dynamic reallocation from cache-rich to cache-poor

2 Shares

RobinHood: tail latency aware caching – dynamic reallocation from cache-rich to cache-poor Berger et al., OSDI’18

It’s time to rethink everything you thought you knew about caching! My mental model goes something like this: we have a set of items that probably follow a power-law of popularity.

We have a certain finite cache capacity, and we use it to cache the most frequently requested items, speeding up request processing.

Now, there’s a long tail of less frequently requested items, and if we request one of these that’s not in the cache the request is going to take longer (higher latency). But it makes no sense whatsoever to try and improve the latency for these requests by ‘shifting our cache to the right.’

Hence the received wisdom that unless the full working set fits entirely in the cache, then a caching layer doesn’t address tail latency.

So far we’ve been talking about one uniform cache. But in a typical web application one incoming request might fan out to many back-end service requests processed in parallel. The OneRF page rendering framework at Microsoft (which serves msn.com, microsoft.com and xbox.com among others) relies on more than 20 backend systems for example.


The cache is shared across these back-end requests, either with a static allocation per back-end that has been empirically tuned, or perhaps with dynamic allocation so that more popular back-ends get a bigger share of the cache.

The thing about this common pattern is that we need to wait for all of these back-end requests to complete before returning to the user. So improving the average latency of these requests doesn’t help us one little bit.

Since each request must wait for all of its queries to complete, the overall request latency is defined to be the latency of the request’s slowest query. Even if almost all backends have low tail latencies, the tail latency of the maximum of several queries could be high.

(See ‘The Tail at Scale’).

With this kind of fan-out, the user can easily see P99 latency or greater, as the quick calculation below shows.
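
Assuming the backends’ tail events are independent, the amplification is easy to quantify:

```python
# With k parallel backend queries, the request is slow whenever ANY of
# them lands in its own slowest 1%.
k, p = 20, 0.01
print(1 - (1 - p) ** k)   # ~= 0.18: roughly 18% of requests see a P99 query
```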

Techniques to mitigate tail latencies include making redundant requests, clever use of scheduling, auto-scaling and capacity provisioning, and approximate computing. RobinHood takes a different (complementary) approach: use the cache to improve tail latency!

RobinHood doesn’t necessarily allocate caching resources to the most popular back-ends; instead, it allocates caching resources to the backends (currently) responsible for the highest tail latency.

…RobinHood dynamically allocates cache space to those backends responsible for high request tail latency (cache-poor) backends, while stealing space from backends that do not affect the request tail latency (cache-rich backends). In doing so, Robin Hood makes compromises that may seem counter-intuitive (e.g., significantly increasing the tail latencies of certain backends).

If you’re still not yet a believer that caching can help with tail latencies, the evaluation results should do the trick. RobinHood is evaluated with production traces from a 50-server cluster with 20 different backend systems. It’s able to address tail latency even when working sets are much larger than the cache size.

In the presence of load spikes, RobinHood meets a 150ms P99 goal 99.7% of the time, whereas the next best policy meets this goal only 70% of the time.

Look at that beautiful blue line!

When RobinHood allocates extra cache space to a backend experiencing high tail latency, the hit ratio for that backend typically improves. We get a double benefit:

  • Since backend query latency is highly variable in practice, decreasing the number of queries to a backend will decrease the number of high-latency queries observed, improving the P99 request latency.
  • The backend system will see fewer requests. As we’ve studied before on The Morning Paper, small reductions in resource congestion can have an outsized impact on backend latency once a system has started degrading.

Caching challenges

Why can’t we just figure out which backends contribute the most to tail latency and just statically assign more cache space to them? Because the latencies of different backends tend to vary wildly over time: they are complex distributed systems in their own right. The backends are often shared across several customers too (either within the company, or perhaps you’re calling an external service). So the changing demands from other consumers can impact the latency you see.

Most existing cache systems implicitly assume that latency is balanced. They focus on optimizing cache-centric metrics (e.g., hit ratio), which can be a poor representation of overall performance if latency is imbalanced.

Query latency is not correlated with query popularity, but instead reflects a more holistic state of the backend system at some point in time.

An analysis of OneRF traces over a 24 hour period shows that the seventh most queried backend receives only about 0.06x as many queries as the most queried backend, but has 3x the query latency. Yet shared caching systems inherently favour backends with higher query rates (they have more shots at getting something in the cache).

The RobinHood caching system

RobinHood operates in 5 second time windows, repeatedly taxing every backend by reclaiming 1% of its cache space and redistributing the wealth to cache-poor backends. Within each window RobinHood tracks the latency of each request, and chooses a small interval (P98.5 to P99.5) around P99 to focus on, since the goal is to minimise the P99 latency. For each request that falls within this interval, RobinHood tracks the ID of the backend corresponding to the slowest query in the request. At the end of the window RobinHood calculates the request blocking count (RBC) of each backend – the number of times it was responsible for the slowest query.

Backends with a high RBC are frequently the bottleneck in slow requests. RobinHood thus considers a backend’s RBC as a measure of how cache-poor it is, and distributes the pooled tax to each backend in proportion to its RBC.

RBC neatly encapsulates the dual considerations of how likely a backend is to have high latency, and how many times that backend is queried during request processing.
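
Here is a sketch of one allocation round under stated assumptions: the 1% tax, the P98.5–P99.5 interval, and RBC-proportional redistribution are from the paper, while the data structures and function shape are invented for illustration:

```python
# One RobinHood window: tax every backend 1% of its cache allocation,
# then redistribute the pool in proportion to request blocking counts.
from collections import Counter

def end_of_window(requests, allocations, tax=0.01):
    """requests: list of (request_latency, slowest_backend_id) pairs.
    allocations: dict of backend_id -> current cache bytes."""
    lats = sorted(l for l, _ in requests)
    lo = lats[int(0.985 * (len(lats) - 1))]   # P98.5
    hi = lats[int(0.995 * (len(lats) - 1))]   # P99.5
    # RBC: how often each backend was the slowest query of a request
    # whose latency falls in the interval around P99.
    rbc = Counter(b for l, b in requests if lo <= l <= hi)
    pool = tax * sum(allocations.values())
    new = {b: alloc * (1 - tax) for b, alloc in allocations.items()}
    total = sum(rbc.values()) or 1
    for b, count in rbc.items():              # cache-poor backends gain
        new[b] += pool * count / total
    return new
```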

Since some backends are slow to make use of additional cache space (e.g., if their hit ratios are already high), RobinHood monitors the gap between the allocated and used cache capacity for each backend, and temporarily ignores the RBC of any backend with more than a 30% gap.

When load balancing across a set of servers RobinHood makes allocation decisions locally on each server. To avoid divergence of cache allocations over time, RobinHood controllers exchange RBC data. With a time window of 5 seconds, RobinHood caches converge to the average allocation within about 30 minutes.

The RobinHood implementation uses off-the-shelf memcached instances to form the caching layer in each application server. A lightweight cache controller at each node implements the RobinHood algorithm and issues resize commands to the local cache partitions. A centralised RBC server is used for exchange of RBC information. RBC components store only soft state (aggregated RBC for the last one million requests, in a ring buffer), so can quickly recover after a crash or restart.

Key evaluation results

The RobinHood evaluation is based on detailed statistics of production traffic in the OneRF system for several days in 2018. The dataset describes queries to more than 40 distinct backend systems. RobinHood is compared against the existing OneRF policy, the policy from Facebook’s TAO, and three research systems Cliffhanger, FAIR, and LAMA. Here are the key results:

  • RobinHood brings SLO violations down to 0.3%, compared to 30% SLO violations under the next best policy.
  • For quickly increasing backend load imbalances, RobinHood maintains SLO violations below 1.5%, compared to 38% SLO violations under the next best policy.
  • Under simultaneous latency spikes, RobinHood maintains less than 5% SLO violations, while other policies do significantly worse.
  • Compared to the maximum allocation for each backend under RobinHood, even a perfectly clairvoyant static allocation would need 73% more cache space.
  • RobinHood introduces negligible overhead on network, CPU, and memory usage.

Our evaluation shows that RobinHood can reduce SLO violations from 30% to 0.3% for highly variable workloads such as OneRF. RobinHood is also lightweight, scalable, and can be deployed on top of an off-the-shelf software stack… RobinHood shows that, contrary to popular belief, a properly designed caching layer can be used to reduce higher percentiles of request latency.


