
Absent-mindedness as dominance behaviour.


Predicting Properties of Molecules with Machine Learning



Recently there have been many exciting applications of machine learning (ML) to chemistry, particularly in chemical search problems, from drug discovery and battery design to finding better OLEDs and catalysts. Historically, chemists have used numerical approximations to Schrödinger’s equation, such as Density Functional Theory (DFT), in these sorts of chemical searches. However, the computational cost of these approximations limits the size of the search. In the hope of enabling larger searches, several research groups have created ML models to predict chemical properties using training data generated by DFT (e.g. Rupp et al. and Behler and Parrinello). Expanding upon this previous work, we have been applying various modern ML methods to the QM9 benchmark, a public collection of molecules paired with DFT-computed electronic, thermodynamic, and vibrational properties.

We have recently posted two papers describing our research in this area that grew out of a collaboration between the Google Brain team, the Google Accelerated Science team, DeepMind, and the University of Basel. The first paper includes a new featurization of molecules and a systematic assessment of a multitude of machine learning methods on the QM9 benchmark. After trying many existing approaches on this benchmark, we worked on improving the most promising deep neural network models.

The resulting second paper, “Neural Message Passing for Quantum Chemistry,” describes a model family called Message Passing Neural Networks (MPNNs), which are defined abstractly enough to include many previous neural net models that are invariant to graph symmetries. We developed novel variations within the MPNN family which significantly outperform all baseline methods on the QM9 benchmark, with improvements of nearly a factor of four on some targets.

One reason molecular data is so interesting from a machine learning standpoint is that one natural representation of a molecule is as a graph with atoms as nodes and bonds as edges. Models that can leverage inherent symmetries in data will tend to generalize better — part of the success of convolutional neural networks on images is due to their ability to incorporate our prior knowledge about the invariances of image data (e.g. a picture of a dog shifted to the left is still a picture of a dog). Invariance to graph symmetries is a particularly desirable property for machine learning models that operate on graph data, and there has been a lot of interesting research in this area as well (e.g. Li et al., Duvenaud et al., Kearnes et al., Defferrard et al.). However, despite this progress, much work remains. We would like to find the best versions of these models for chemistry (and other) applications and map out the connections between different models proposed in the literature.
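
To make the message-passing idea concrete, here is a minimal sketch (plain NumPy, not the models from either paper) of a few rounds of neural message passing over a molecular graph. The sum aggregation, the tanh update, and the omission of bond (edge) features are simplifying assumptions for illustration only.

    import numpy as np

    def message_passing_step(h, adjacency, W_msg, W_update):
        """One illustrative message-passing step over a molecular graph.
        h: (num_atoms, d) hidden state per atom; adjacency: (num_atoms, num_atoms) bond matrix."""
        messages = adjacency @ (h @ W_msg)                # each atom sums messages from its bonded neighbours
        combined = np.concatenate([h, messages], axis=1)  # old state alongside the aggregated message
        return np.tanh(combined @ W_update)               # updated per-atom states

    def readout(h):
        """Permutation-invariant graph-level readout (sum over atoms)."""
        return h.sum(axis=0)

    # Tiny example: a three-atom chain (atom 0 bonded to 1, atom 1 bonded to 2).
    rng = np.random.default_rng(0)
    d = 8
    h = rng.normal(size=(3, d))
    adjacency = np.array([[0., 1., 0.],
                          [1., 0., 1.],
                          [0., 1., 0.]])
    W_msg, W_update = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(2 * d, d)) * 0.1

    for _ in range(3):                  # T rounds of message passing
        h = message_passing_step(h, adjacency, W_msg, W_update)
    graph_embedding = readout(h)        # would feed a property-prediction head

Because both the neighbour aggregation and the final readout are sums, relabelling the atoms leaves the prediction unchanged, which is exactly the kind of graph-symmetry invariance discussed above.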

Our MPNNs set a new state of the art for predicting all 13 chemical properties in QM9. On this particular set of molecules, our model can predict 11 of these properties accurately enough to be potentially useful to chemists, and up to 300,000 times faster than simulating them with DFT. However, much work remains to be done before MPNNs can be of real practical use to chemists. In particular, MPNNs must be applied to a significantly more diverse set of molecules (e.g. larger, or with a more varied set of heavy atoms) than exist in QM9. Of course, even with a realistic training set, generalization to very different molecules could still be poor. Overcoming both of these challenges will involve making progress on questions, such as generalization, that are at the heart of machine learning research.

Predicting the properties of molecules is a practically important problem that both benefits from advanced machine learning techniques and presents interesting fundamental research challenges for learning algorithms. Eventually, such predictions could aid the design of new medicines and materials that benefit humanity. At Google, we feel that it’s important to disseminate our research and to help train new researchers in machine learning. As such, we are delighted that the first and second authors of our MPNN paper are Google Brain residents.

Updating Google Maps with Deep Learning and Street View



Every day, Google Maps provides useful directions, real-time traffic information and information on businesses to millions of people. In order to provide the best experience for our users, this information has to constantly mirror an ever-changing world. While Street View cars collect millions of images daily, it is impossible to manually analyze the more than 80 billion high-resolution images collected to date in order to find new or updated information for Google Maps. One of the goals of Google's Ground Truth team is to enable the automatic extraction of information from our geo-located imagery to improve Google Maps.

In “Attention-based Extraction of Structured Information from Street View Imagery”, we describe our approach to accurately read street names out of very challenging Street View images in many countries, automatically, using a deep neural network. Our algorithm achieves 84.2% accuracy on the challenging French Street Name Signs (FSNS) dataset, significantly outperforming the previous state-of-the-art systems. Importantly, our system is easily extensible to extract other types of information out of Street View images as well, and now helps us automatically extract business names from store fronts. We are excited to announce that this model is now publicly available!
Example of street name from the FSNS dataset correctly transcribed by our system. Up to four views of the same sign are provided.
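
As a rough illustration of how an attention-based reader of this kind works (a sketch under simplifying assumptions, such as a single view, greedy decoding, and toy weight shapes, rather than the published architecture), the decoder repeatedly attends over CNN features of the sign and emits one character at a time:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend(features, state, W_att):
        """Weight each spatial CNN feature by its relevance to the decoder state."""
        scores = features @ (W_att @ state)     # (num_locations,)
        return softmax(scores) @ features       # attention-weighted context vector

    def greedy_decode(features, params, charset, max_len=40):
        """Emit one character per step, attending over the image features each time."""
        W_att, W_state, W_ctx, W_out = params
        state = np.zeros(W_state.shape[0])
        text = []
        for _ in range(max_len):
            context = attend(features, state, W_att)
            state = np.tanh(W_state @ state + W_ctx @ context)
            char = charset[int(np.argmax(W_out @ state))]
            if char == "<eos>":                 # the model decides where the name ends
                break
            text.append(char)
        return "".join(text)

    # Toy instantiation with random weights; a real model is trained end-to-end on FSNS.
    rng = np.random.default_rng(0)
    L, F, D = 16 * 16, 128, 64                  # spatial locations, feature dim, decoder state dim
    charset = list("abcdefghijklmnopqrstuvwxyz -") + ["<eos>"]
    params = (rng.normal(size=(F, D)) * 0.1, rng.normal(size=(D, D)) * 0.1,
              rng.normal(size=(D, F)) * 0.1, rng.normal(size=(len(charset), D)) * 0.1)
    features = rng.normal(size=(L, F))          # stand-in for CNN features of a sign crop
    print(greedy_decode(features, params, charset))   # gibberish here, street names once trained

Because the attention weights are recomputed at every step, a model of this shape can learn to skip extraneous text and, given features from several views of the same sign, to combine evidence across them, which is what the FSNS task requires.
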
Text recognition in a natural environment is a challenging computer vision and machine learning problem. While traditional Optical Character Recognition (OCR) systems mainly focus on extracting text from scanned documents, text acquired from natural scenes is more challenging due to visual artifacts, such as distortion, occlusions, directional blur, cluttered background or different viewpoints. Our efforts to solve this research challenge first began in 2008, when we used neural networks to blur faces and license plates in Street View images to protect the privacy of our users. From this initial research, we realized that with enough labeled data, we could additionally use machine learning not only to protect the privacy of our users, but also to automatically improve Google Maps with relevant up-to-date information.

In 2014, Google’s Ground Truth team published a state-of-the-art method for reading street numbers on the Street View House Numbers (SVHN) dataset, implemented by then summer intern (now Googler) Ian Goodfellow. This work was not only of academic interest but was critical in making Google Maps more accurate. Today, over one-third of addresses globally have had their location improved thanks to this system. In some countries, such as Brazil, this algorithm has improved more than 90% of the addresses in Google Maps today, greatly improving the usability of our maps.

The next logical step was to extend these techniques to street names. To solve this problem, we created and released French Street Name Signs (FSNS), a large training dataset of more than 1 million street names. The FSNS dataset was a multi-year effort designed to allow anyone to improve their OCR models on a challenging and real use case. It is much larger and more challenging than SVHN in that accurate recognition of street signs may require combining information from many different images.
These are examples of challenging signs that are properly transcribed by our system by selecting or combining understanding across images. The second example is extremely challenging by itself, but the model learned a language model prior that enables it to remove ambiguity and correctly read the street name. Note that in the FSNS dataset, random noise is used in the case where less than four independent views are available of the same physical sign.
With this training set, Google intern Zbigniew Wojna spent the summer of 2016 developing a deep learning model architecture to automatically label new Street View imagery. One of the interesting strengths of our new model is that it can normalize the text to be consistent with our naming conventions, as well as ignore extraneous text, directly from the data itself.
Example of text normalization learned from data in Brazil. Here it changes “AV.” into “Avenida” and “Pres.” into “Presidente” which is what we desire.
In this example, the model is not confused by the fact that there are two street names; it properly normalizes “Av” into “Avenue” and correctly ignores the number “1600”.
While this model is accurate, it still showed a sequence error rate of 15.8%. However, after analyzing failure cases, we found that 48% of them were due to ground truth errors, highlighting the fact that this model is on par with the label quality (a full analysis of our error rate can be found in our paper).

This new system, combined with the one extracting street numbers, allows us to create new addresses directly from imagery, where we previously didn’t know the name of the street, or the location of the addresses. Now, whenever a Street View car drives on a newly built road, our system can analyze the tens of thousands of images that would be captured, extract the street names and numbers, and properly create and locate the new addresses, automatically, on Google Maps.

But automatically creating addresses for Google Maps is not enough -- additionally we want to be able to provide navigation to businesses by name. In 2015, we published “Large Scale Business Discovery from Street View Imagery”, which proposed an approach to accurately detect business store-front signs in Street View images. However, once a store front is detected, one still needs to accurately extract its name for it to be useful -- the model must figure out which text is the business name, and which text is not relevant. We call this extracting “structured text” information out of imagery. It is not just text, it is text with semantic meaning attached to it.

Using different training data, the same model architecture that we used to read street names can also be used to accurately extract business names from business facades. In this particular case, we are able to extract only the business name, which enables us to verify whether we already know about this business in Google Maps, allowing us to have more accurate and up-to-date business listings.
The system correctly predicts the business name ‘Zelina Pneus’, despite not receiving any data about the true location of the name in the image. The model is not confused by the tire brands that the sign indicates are available at the store.
Applying these large models across our more than 80 billion Street View images requires a lot of computing power. This is why the Ground Truth team was the first user of Google's TPUs, which were publicly announced earlier this year, to drastically reduce the computational cost of inference in our pipeline.

People rely on the accuracy of Google Maps to assist them. While keeping Google Maps up-to-date with the ever-changing landscape of cities, roads and businesses presents a technical challenge that is far from solved, it is the goal of the Ground Truth team to drive cutting-edge innovation in machine learning to create a better experience for over one billion Google Maps users.

jepler
354 days ago
They're justifiably proud of this, but on the other hand google told me verbally last night to drive down "West Oh Saint" (actually West "O" Street) in my home town. wat.
Earth, Sol system, Western spiral arm
duerig
354 days ago
Where I live, all the house numbers are both a number and a direction like '1234 North'. And most of the street names are some number (usually multiple of a hundred) and a different direction like '1200 West'. So the full address is '1234 North 1200 West'. Google's navigation can't handle this. So confusingly, it refers to every street by two directions. It tells you to 'Turn left on North Twelve Hundred West' which doesn't exist. None of the signs say 'North 1200 West'. Nobody who lives here would refer to it that way. And you have to pause and disambiguate which direction matters and which is added spuriously by Google. I think weird stuff like this will only become more common. The world is more diverse and weird than the models and the oddities of each locale will be shoehorned in and never quite fit right.

10 questions and answers about America’s “Big Government”


The ongoing debate over the Trump administration’s plan to freeze federal hiring has thus far involved arguments and “alternative facts” from those on both sides of the question. This obscures certain hard truths about America’s “Big Government” and its real federal bureaucracy.  What follows is an (I hope brief and user-friendly but duly detailed) attempt to mediate that debate and spotlight certain deeply inconvenient truths about the character and quality of present-day American government and “we the people” to whom it is accountable.


  1. What is “Big Government?”

As commonly used in America, “Big Government” refers to three features of the national or federal government headquartered in Washington, D.C.:

  • How much it spends
  • How much it does, and
  • How many people it employs
  2. How much has federal government spending grown?

Since 1960, annual federal spending (adjusted for inflation) has increased about fivefold: it doubled between 1960 and 1975, and doubled again between 1975 and 2005.

  3. Has Washington been doing more or just spending more?

Doing lots more!

[Three figures: post-1960 trends in federal spending, the federal civilian workforce, and the number of pages in the Federal Register.]

Seven new federal Cabinet agencies have been established since 1960—from Housing and Urban Development in 1965, to Homeland Security in 2002.

Dozens of new sub-Cabinet agencies were also established, like the Environmental Protection Agency in 1970 and the Federal Emergency Management Agency in 1979.

Batteries of new federal laws, regulations, and programs were enacted on issues that were virtually absent from the pre-1960 Federal policy agenda—crime, drug abuse, campaign finance, sexual orientation, gun control, school quality, occupational safety, the environment, health care insurance, and others.

Take a look at the three figures above.  A crude if suggestive measure of this post-1960 growth in what Washington does is the Federal Register, which catalogues all federal rules and regulations.

As federal spending increased five-fold, the number of pages in the Federal Register increased about six-fold to more than 80,000 small-print pages.

So, spending lots more, check.

Doing lots more, check.

  4. What about growth in the federal government workforce and in the ranks of federal bureaucrats?

Well, as suggested by the middle figure above, it would appear that there has been… none at all!

We have had roughly the same number of federal workers, not counting uniformed military personnel and postal workers, for the past 57 years.

When John F. Kennedy was elected in 1960, we had about 1.8 million full-time federal bureaucrats—the same number as when George W. Bush was elected president in 2000.

When Ronald Reagan was reelected in 1984, there were about 2.2 million bureaucrats—nearly 200,000 more than when Barack Obama was elected in 2008.

  5. So, how did the post-1960 United States have a five-fold increase in national government spending, establish seven new cabinet agencies, effect a steady expansion in programs and regulations, and yet experience zero growth in the workforce responsible for stewarding trillions of tax dollars and translating 80,000-plus pages of words into action?

By employing three species of administrative proxies:

  • State and local governments
  • For-profit businesses
  • Nonprofit organizations

De facto Feds: Since 1960, while the federal workforce hovered around two million full-time bureaucrats, the total number of state and local government employees tripled to more than 18 million workers.


This sub-national government workforce expansion was fueled by the feds.  Adjusted for inflation, between the early 1960s and the early 2010s, federal grants-in-aid for the states increased more than 10-fold.


For instance, take the Environmental Protection Agency (EPA), with its fewer than 20,000 employees spread across 10 administrative regions.  More than 90 percent of EPA programs are administered A-to-Z by state government agencies that employ thousands of environmental protection workers.

Or consider the federal center directly responsible for overseeing Medicare and Medicaid administration—the Centers for Medicare and Medicaid Services (CMS). It has fewer than 5,000 employees handling two mega-programs that together account for about a quarter of the federal budget. The federal government pays for at least half of the states’ administrative costs for Medicaid.

By the same token, local police departments received increased federal funding and expanded their payrolls via successive post-1968 federal crime bills and the federal homeland security initiatives that began in 2002.

Measurements and estimates vary, but by conservative estimates, there are about three million state and local government workers funded via federal grants and contracts.

Armies of paid contractors: For-profit contracting businesses are used by every federal department, bureau, and agency. There are many thorny data issues that make it difficult to obtain an exact count, but the best available estimates indicate that the total number of federal contract employees increased from about five million in 1990 to about 7.5 million in 2013.


For instance, the military-industrial complex that President Eisenhower warned Americans about in 1961 is today the massive Defense Department-private contractor complex.  Over the last nine years or so, the Department of Defense has had the full-time equivalent of about 700,000 to 800,000 federal civilian workers, plus the full-time equivalent of between 620,000 and 770,000 for-profit contract employees—nearly one full-time contract employee for every DoD civilian bureaucrat.

During the first Gulf War in 1991, American soldiers outnumbered private contractors in the region by about 60-to-1; but, by 2006, there were nearly as many private contractors as soldiers in Iraq—about 100,000 contract employees, not counting subcontractor employees, versus 140,000 troops.

Government-supported nonprofit workers: Employment in the tax-exempt or independent sector more than doubled between 1977 and 2012 to more than 11 million people.  Just the subset of nonprofit organizations that files with the IRS has more than $2 trillion a year in revenues.

Roughly a third of those nonprofit revenues flows from government grants plus fees for services and goods from government sources.  Each year, tens of billions of federal “pass-through” dollars flow from Washington through state capitals and into the coffers of local government and nonprofit organizations—nonprofit hospitals, universities, religious charities, and others.

So, beyond the two million civilian federal bureaucrats, how many people now make a living administering federal government policies and programs?

Nobody can say for sure, but let’s quickly do some informed guesstimating and federal workforce arithmetic (tallied up again just after the list):

  • With one-third of its revenues flowing from government, if only one-fifth of the 11 million nonprofit sector employees owe their jobs to federal or intergovernmental grant, contract, or fee funding, that’s 2.2 million workers.
  • As noted, the best for-profit contractor estimate is 7.5 million.
  • And the conservative sub-national government employee estimate is three million.
  • That’s 12.7 million in all, but let’s scale down to call it 12 million.
  • 12 million plus our good-old two million actual federal bureaucrats equals 14 million.
  • And how many were there back in 1960? The feds had some administrative proxies even then, maybe as many as two million, plus two million actual federal bureaucrats.
  • So, let’s call it 14 million in all today versus four million back when Ike was saying farewell.
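
The same tally, written out as a few lines of Python so the arithmetic is explicit (the inputs are exactly the estimates listed above; nothing new is assumed):

    # Back-of-the-envelope tally of the "real" federal workforce, using only the
    # rough estimates quoted in the list above.
    nonprofit_grant_funded   = 11_000_000 / 5     # one-fifth of nonprofit employment
    for_profit_contractors   = 7_500_000
    state_local_grant_funded = 3_000_000
    federal_civilian         = 2_000_000

    proxies_today = nonprofit_grant_funded + for_profit_contractors + state_local_grant_funded
    print(proxies_today / 1e6)                    # 12.7 million, scaled down in the text to ~12 million
    total_today = 12_000_000 + federal_civilian   # ~14 million people administering federal programs
    total_1960  = 2_000_000 + 2_000_000           # ~4 million back when Ike was saying farewell
    print(total_today / total_1960)               # 3.5, the "at least 3.5-fold" growth discussed below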

So, the real federal bureaucracy, defined as the total number of people (federal civilian workers, de facto feds in state or local government agencies, for-profit contractor employees, and nonprofit workers) paid to administer federal policies and programs, probably increased at least 3.5-fold during the same five-and-a-half decades that real federal spending increased five-fold and the number of pages in the Federal Register increased six-fold.

  6. What, then, is the one “must-know fact” about “Big Government” in America today?

It is that “Big Government” in America today is both debt-financed and proxy-administered.

The first half of that essential fact is well known, much discussed, and much debated.  For all but five post-1960 years, the federal government has run deficits, and the national debt is now bordering on $20 trillion.  But the latter half of that essential fact—rampant proxy administration—is little known, poorly understood, and, except in certain moments of crisis, ignored.

  7. But why is “Big Government” both debt-financed and proxy-administered?

There are three separate but related reasons: public opinion, lobbying by the proxies, and congressional electoral politics.

  • Public opinion: For all the polls proclaiming mass public mistrust of government and for all the bad-mouthing of Washington, most Americans want the very government benefits and programs that the post-1960 federal government has enacted. The only thing that a near-majority of Americans wants to cut is foreign aid, which is not even static in the federal budget adding machine.
  • Lobbying by the proxies: State and local governments and their governors’ associations, mayors’ associations, state legislatures, corrections commissioners, and more; big and small business lobbies; and, yes, nonprofit sector lawyer-lobbyists—all three federal proxies exert nonstop pressure in favor of federal policies that pay them to administer federal business, with as few strings attached as possible, and with lots of paperwork but little real accountability for performance and results.
  • Congressional electoral politics: To oversimplify, but only a little: when it comes to “Big Government,” Americans are philosophically conservative but behaviorally liberal. They want big government benefits and programs, but they do not want to pay big government taxes, and they prefer not to receive their goods and services directly from the hand of big government bureaucracies. Since the late 1960s, congressional incumbents in both parties have been reelected at a rate of about 90 percent because they give the public exactly what it wants.  Republicans are, for this purpose, the party of “tax less.”  Democrats are the party of “benefit more.”  Together, they have won every post-1960 election.

The late, great American politics scholar, our Brookings Institution colleague Professor Martha Derthick, said it all:

“Congress has habitually chosen the medium of grants not so much because it loves the states more as because it loves the federal bureaucracy less. Congress loves action – it thrives on policy proclamation and goal setting – but it hates bureaucracy and taxes, which are the instruments of action. Overwhelmingly, it has resolved this dilemma by turning over the bulk of administration to the state governments or any organizational instrumentality it can lay its hands on whose employees are not counted on the federal payroll.”

— Prof. Martha Derthick, Keeping the Compound Republic (Brookings, 2001)

Congress, the keystone of the Washington establishment, has spent half a century promising us that, so to speak, we can all go to heaven without needing to die first.  The American way of “Big Government” has produced massive deficits, both financial and administrative.

  8. How does the real federal bureaucracy—the bureaucracy-by-proxy—perform?

Not well!

In his recent book “Escaping Jurassic Government” (Brookings, 2016), the public administration scholar of scholars, Prof. Donald F. Kettl, analyzed the federal programs on the U.S. Government Accountability Office (GAO) list of programs that have suffered the worst cost overruns, management meltdowns, or other acute or chronic failures.  Some have been on the list for more than two decades straight.

[Table: the GAO high-risk list of federal programs.]

Kettl found that 28 of the 32 programs on that GAO high-risk list were among the very federal programs with the highest proxy-administration quotients—all the ones in bold in the boxes just above.

To be more graphic, consider the tale of the two FEMAs. The left-side photo below represents the FEMA that, just before Hurricane Katrina, had had its full-time federal staff slashed to under 3,000 people while being loaded up with more than a dozen new official chores relating to homeland security.

[Photos: the pre-Katrina FEMA (left) and the rebuilt FEMA that responded to Hurricane Sandy (right).]

The right-side photos, however, represent the FEMA that, in the wake of post-Katrina congressional investigations, got its staffing not only restored but doubled, just in time for the agency’s imperfect but far superior response to Hurricane Sandy.

[Agency logos: the IRS, the EPA, and the SSA.]

Now, gaze at the logos above.  They represent just three of many severely short-staffed federal agencies that are losing veteran staff members.  If they do not get more full-time federal bureaucrats before too long, they are likely to implode administratively—and they have arguably already begun to do so:

  • The short-staffed IRS fails to collect $400 billion in taxes per year.
  • The skeleton-staffed EPA has thousands of toxic waste sites that have been on the clean-up list forever, gives provisional approval to scores of pesticides that it lacks the staff capacity to fully examine before allowing them to go to market, and at one point even had to contract out a congressionally mandated report on what it should not contract out.
  • The SSA is losing a third of its veteran workforce at a moment when its beneficiary population is booming and its disability claims are exploding. Nearly 180,000 people visit SSA offices and another several hundred thousand call SSA offices each day, and within the next decade, the agency will disburse nearly $1.8 trillion per year!

And before your next flight, take a look at this January 2016 report.  It states that about one-third of the nation’s 13,800 air traffic controllers will turn over by 2021, and that the FAA has an air traffic control system that is so badly antiquated technologically that nobody truly knows how to begin to modernize and patch it.

  9. Does Washington actually spend more on defense contractors alone than it does on the entire federal civilian workforce?

Yes.  It spends about $250 billion in wages and benefits for “bureaucrats” versus $350 billion or so for defense contractors.  And it spends more than twice as much on Medicare beneficiaries as it does on the entire federal civilian workforce.


We the people

So, let’s not kid ourselves, or let politicians in either party or at either end of Pennsylvania Avenue kid us:

  • America has over-leveraged, not limited, government. Our debt-financed and proxy-administered system has been growing for a half-century under both parties.
  • Freezing our federal workforce, which is the same size today as it was in 1960, will have no significant impact on federal spending.
  • To “drain the swamp” in Washington, D.C. would mean draining state and local governments, private contractors, nonprofit grantees, and middle-class entitlement beneficiaries like most Medicare beneficiaries.
  • Debt-financed, proxy-administered, poor-performing, American-style “Big Government” now represents about 40 percent of the nation’s GDP.

In Federalist Paper No. 63, James Madison warned about the duty of elected leaders (U.S. senators, in particular) to guard “the people against their own temporary errors and delusions.”

But “we the people” are a half-century into the errors and delusions behind our debt-financed, proxy-administered “Big Government” and the real federal bureaucracy.

In Federalist Paper No. 68, Alexander Hamilton, who remains the most finance savvy leader in American history, and who was by no means allergic to stronger national government, lectured that “the true test of good government is its aptitude and tendency to produce a good administration.”

For all the partisan and ideological fights, and across all the usual demographic and regional lines, Americans and their leaders are today ever more strongly united, not badly divided—united, that is, in failing Hamilton’s good government test.

Note: This post was corrected on February 28. The previous version stated that $400 million in taxes are not collected, but the correct number is $400 billion.

John J. DiIulio, Jr., a Nonresident Senior Fellow at the Brookings Institution and professor at the University of Pennsylvania, is the author of Bring Back the Bureaucrats: Why More Federal Workers Will Lead to Better (and Smaller!) Government (Templeton Press, 2014).

acdha
420 days ago
These discussions always highlight just how little the average person understands about how the government actually works — a combination of epic failure and malicious sabotage on the part of the media, politicians, and various pundits.
Washington, DC
superiphi
419 days ago
so the one thing that's big about the federal government is military and security contractors?
Idle, Bradford, United Kingdom
acdha
419 days ago
Not just military: all sorts of jobs have been contracted out, even though GAO studies have consistently shown that contracting routine work costs more. Politicians love to attack the size of the federal workforce but rarely pick entire functions to cut since it's politically easier to pretend that staff are overpaid, knowing that most people have never seen the data, or inefficient, failing to address that where true that's often required by law.

Let’s Do More of What Works


Separated bike lanes are extremely effective at attracting people to the two-wheeled transportation alternative.  But less attention is paid to the larger component of Vancouver’s continent-dominating cycling infrastructure: neighbourhood streets changed into cycling routes. Traffic calming, lower speed limits, bike buttons at arterials — all contribute to making it easy to get around the city by bike.

Mike Hagar looks at this cheap and effective infrastructure in the Globe and Mail, replete with extensive quotes from Gordon Price.

Urban-planning and transportation experts have long feted Vancouver’s extensive system of bike-friendly side streets as a cheap and uncontroversial way for bike-resistant North American cities to create the infrastructure that gets people out of their cars and onto two wheels.

“It’s very simple,” says Gordon Price, a six-term former city councillor and former director of Simon Fraser University’s City Program. “All you have to do is put in traffic signals where these side streets cross another arterial.” . . .

. . .  Price was a councillor from 1986 to 2002, after which he says his Non-Partisan Association party committed to fomenting a “bikelash” among Vancouver’s more conservative residents to oppose any expansion to the city’s cycling infrastructure. This movement began to reach a fever pitch in the run-up to council reallocating a car lane of the Burrard Street Bridge in 2009 to create a separated path for cyclists riding in and out of downtown.

“It’s territorial, it is tribal – it doesn’t matter what the data says,” Mr. Price says of the resistance toward such separated bike lanes. “People just feel like ‘you’re taking space; the congestion’s bad already; you’re deliberately making my life worse. For who? A bunch of jerks who aren’t obeying the law. Why don’t you licence them and make them pay their way? Anyway, we don’t have room and blah blah blah.’ And guess what happens [after a new bike lane is built]? Nothing.”

Nothing, that is, except a steady rise in mode-share for the two-wheeled alternative. Today, roughly 10% of trips to and from work are made by bicycle, and that number seems likely to continue its rise.







Just say NO to Paxos overhead: replacing consensus with network ordering


Just say NO to Paxos overhead: replacing consensus with network ordering – Li et al., OSDI 2016

Everyone knows that consensus systems such as Paxos, Viewstamped Replication, and Raft impose high overhead and have limited throughput and scalability. Li et al. carefully examine the assumptions on which those systems are based, and find that within a data center context (where we can rely on certain network properties), we can say NO to the overhead. NO in this case stands for ‘Network Ordering’.

This paper demonstrates that replication in the data center need not impose such a cost by introducing a new replication protocol with performance within 2% of an unreplicated system.

If you have a completely asynchronous and unordered network, then you need the full complexity of Paxos. Go to the other extreme and provide totally ordered atomic broadcast at the network level, and replica consistency is trivial. Providing totally ordered atomic broadcast, though, requires pretty much the exact same coordination overhead, just moved to a different layer.

So at first glance it seems we just have to pay the coordination price one way or another. And you’d think that would be true for any division of responsibility we can come up with. But Li et al. find an asymmetry to exploit – there are certain properties that are stronger than completely asynchronous and unordered, but weaker than totally ordered atomic, which can be implemented very efficiently within the network.

Our key insight is that the communication layer should provide a new ordered unreliable multicast (OUM) primitive – where all receivers are guaranteed to process multicast messages in the same order, but messages may be lost. This model is weak enough to be implemented efficiently, yet strong enough to dramatically reduce the costs of a replication protocol.

That really only leaves us with three questions to answer: (i) can we implement OUM efficiently in a modern data center, (ii) does OUM meaningfully simplify a replication protocol, and (iii) does the resulting system perform well in practice.

We’ll take each of those questions in turn over the next three sections. The TL;DR version is: Yes, yes, and yes.

By relying on the OUM primitive, NOPaxos avoids all coordination except in rare cases, eliminating nearly all the performance overhead of traditional replication protocols. It provides throughput within 2% and latency within 16µs of an unreplicated system, demonstrating that there need not be a tradeoff between enforcing strong consistency and providing maximum performance.

Can OUM be implemented efficiently?

Ordered unreliable multicast skips the difficult problem of guaranteeing reliable delivery in the face of a wide range of failures, but does provide an in-order guarantee for the messages it does deliver.

  • There is no bound on the latency of message delivery
  • There is no guarantee that any message will ever be delivered to any recipient
  • If two messages m and m’ are multicast to a set of processes R, then all processes in R that receive m and m’ receive them in the same order.
  • If some message m is multicast to some set of processes, R, then either (1) every process in R receives m or a notification that there was a dropped message before receiving the next multicast, or (2) no process in R receives m or a dropped message notification for m.

The asynchrony and unreliability properties are standard in network design. Ordered multicast is not: existing multicast mechanisms do not exhibit this property.

An OUM group is a set of receivers identified by an IP address. For each OUM group there are one or more sessions. The stream of messages sent to a particular group is divided into consecutive OUM sessions, and during a session all OUM guarantees apply. (Sessions sound similar to the more familiar notion of an epoch in consensus algorithms.) Sessions are generally long-lived, but failures may cause them to end, in which case we can use a more expensive protocol to switch to a new session.

One easy to understand design for OUM would simply be to route all traffic for a group through a single sequencer node that adds a sequence number to every packet before forwarding it. That of course would be a crazy bottleneck and single point of failure… or would it??

Adding the sequencer itself is straightforward using the capabilities of modern SDNs. But how do we achieve high throughput, low latency, and fault-tolerance?

Each OUM group is given a distinct address in the data center network that senders use to address messages to the group. All traffic for this group is routed through the group’s sequencer. If we use a switch itself as the sequencer, and that switch happens to be one through which nearly all of the traffic passes anyway (i.e., a common ancestor of all destination nodes in the tree hierarchy to avoid increasing path lengths) then there is almost zero added overhead.

In 88% of cases, network serialization added no additional latency for the message to be received by a quorum of 3 receivers; the 99th percentile was less than 5µs of added latency.

Because the sequencer increments its counter by one on each packet, the client libOUM library can easily detect drops and return drop-notifications when it sees gaps in the sequence numbering.
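
A toy sketch of both halves of that mechanism (the class and method names here are ours for illustration, not libOUM's actual API): the sequencer stamps each packet with a (session, sequence) pair, and the receiver library delivers packets in order, emitting a drop-notification for every gap it observes.

    class Sequencer:
        """Toy stand-in for the in-network sequencer: stamps every multicast
        packet with (session_number, sequence_number) before forwarding it."""
        def __init__(self, session):
            self.session = session
            self.counter = 0

        def stamp(self, payload):
            self.counter += 1
            return (self.session, self.counter, payload)

    class Receiver:
        """Toy stand-in for a replica's OUM library: delivers packets in order
        and reports a drop-notification for every gap in the sequence numbers."""
        def __init__(self, session):
            self.session = session
            self.next_seq = 1

        def on_packet(self, packet):
            session, seq, payload = packet
            if session != self.session:
                return [("NEW-SESSION", session)]       # handled by the session-change protocol
            events = []
            while self.next_seq < seq:                  # one notification per missing message
                events.append(("DROP-NOTIFICATION", self.next_seq))
                self.next_seq += 1
            if seq == self.next_seq:
                events.append(("DELIVER", seq, payload))
                self.next_seq += 1
            return events                               # old or duplicate packets are simply ignored

    sequencer, receiver = Sequencer(session=1), Receiver(session=1)
    p1, p2, p3 = sequencer.stamp("a"), sequencer.stamp("b"), sequencer.stamp("c")
    print(receiver.on_packet(p1))   # [('DELIVER', 1, 'a')]
    print(receiver.on_packet(p3))   # p2 lost: [('DROP-NOTIFICATION', 2), ('DELIVER', 3, 'c')]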

Using a switch as a sequencer is made possible by the increasing ability of data center switches to perform flexible, per-packet computations.

A whole new class of switch architectures provides the needed programmability, exposed through high-level languages like P4, and offers orders-of-magnitude lower latency and greater reliability than using an end-host for the same functionality. Such programmable switches will be commercially available within the next year, but they’re not here yet. In the meantime, it’s possible to implement the scheme as a middlebox using existing OpenFlow switches and a network processor. An implementation using a Cavium Octeon II CN68XX network processor imposed latency of 8µs in the median case, and 16µs at the 99th percentile.

Such switches and network processors are very unlikely to become the bottleneck – even a much slower end-host sequencer using RDMA can process close to 100M requests per second – many more than any single OUM group can process.

If a sequencer fails, the controller selects a different switch and reconfigures the network to use it.

During the reconfiguration period, multicast messages may not be delivered. However, failures of root switches happen infrequently, and rerouting can be completed within a few milliseconds, so this should not significantly affect system availability.

Prefixing the sequence numbers with session numbers ensures that the rare failures of sequencer switches can always be detected. Changing sessions is done using a Paxos-replicated controller group.

Does OUM meaningfully simplify replication?

NOPaxos, or Network-Ordered Paxos, is a new replication protocol which leverages the Ordered Unreliable Multicast sessions provided by the network layer.

Here’s the intuition: a traditional state machine replication system must provide two guarantees:

  • Ordering: if some replica processes request a before b, no replica processes b before a.
  • Reliable delivery: every request submitted by a client is either processed by all replicas or none.

In our case, the first of these two requirements is handled entirely by the OUM layer.

NOPaxos is built on top of the guarantees of the OUM network primitive. During a single OUM session, requests broadcast to the replicas are totally ordered but can be dropped. As a result, the replicas have only to agree on which requests to execute and which to permanently ignore, a simpler task than agreeing on the order of requests. Conceptually, this is equivalent to running multiple rounds of binary consensus. However, NOPaxos must explicitly run this consensus only when drop-notifications are received. To switch OUM sessions (in the case of sequencer failure), the replicas must agree on the contents of their shared log before they start listening to the new session.
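
Here is a highly simplified sketch of the normal-case replica logic that this division of labour allows. It is our reading of the idea, with invented names, and it omits the view numbers, quorum checks, and the gap-agreement, view-change, and synchronisation sub-protocols that the real protocol needs (see just below):

    NO_OP = object()   # marker for a slot the replicas agree to skip

    class NOPaxosReplica:
        def __init__(self, is_leader, apply_fn):
            self.is_leader = is_leader
            self.apply = apply_fn       # executes a request against the state machine
            self.log = []               # requests in the order imposed by the OUM layer

        def on_request(self, request):
            """Normal case: the network already ordered the request, so just append it.
            Only the leader executes and returns a result; followers log speculatively
            and acknowledge, and the client waits for a quorum that includes the leader."""
            self.log.append(request)
            if self.is_leader:
                return ("REPLY", len(self.log), self.apply(request))
            return ("ACK", len(self.log))

        def on_drop_notification(self, seq):
            """A sequenced message was lost. The replicas must then agree either to
            recover its contents from someone who received it, or to commit a NO_OP
            in that slot; the paper's gap-agreement protocol does that coordination.
            Here we simply record the NO_OP."""
            self.log.append(NO_OP)
            return ("GAP", seq)

The point of the sketch is what is missing: there is no per-request coordination round, because the OUM layer has already fixed the order, so the replicas only need to coordinate when a drop-notification (or a session change) arrives.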

Details of the four sub-protocols that comprise NOPaxos (normal operations, gap agreement, view change, and periodic synchronisation) are given in section 5.2 of the paper.

Read the fine print, and you’ll discover that only the session leader actually executes requests (single master), and replicas log all requests but do not always know which ones have actually been executed / accepted at any given point in time (and therefore can’t be used reliably as read slaves either). Therefore we’re really looking at replication for availability and reliability, but not for scalability.

During any view, only the leader executes operations and provides results. Thus, all successful client REQUESTs are committed on a stable log at the leader, which contains only persistent client REQUESTs. In contrast, non-leader replicas might have speculative operations throughout their logs. If the leader crashes, the view change protocol ensures that the new leader first recreates the stable log of successful operations. However, it must then execute all operations before it can process new ones. While this protocol is correct, it is clearly inefficient.
Therefore, as an optimization, NOPaxos periodically executes a synchronization protocol in the background. This protocol ensures that all other replicas learn which operations have successfully completed and which the leader has replaced with NO-OPs. That is, synchronization ensures that all replicas’ logs are stable up to their syncpoint and that they can safely execute all REQUESTs up to this point in the background

Does the resulting system perform well in practice?

NOPaxos itself achieves the theoretical minimum latency and maximum throughput: it can execute operations in one round trip from client to replicas, and does not require replicas to coordinate on requests.

The evaluation compared NOPaxos to Paxos, Fast Paxos, Paxos with batching, and Speculative Paxos, as well as against an unreplicated system providing no fault tolerance.

Fig. 5 below shows how the systems compare on latency and throughput:

And here’s an evaluation of a distributed in-memory key-value store running on top of all of these algorithms:

NOPaxos outperforms all other variants on this metric: it attains more than 4 times the performance of Paxos, and outperforms the best prior protocol, Speculative Paxos, by 45%. Its throughput is also within 4% of an unreplicated system.







zwol
482 days ago
I wonder if this primitive might also be useful in CPU-to-memory bus design.
Mountain View, CA