Considerations on Rent Control


(On November 13, I was invited to testify before the Jersey City city council on rent control. Below is an edited version of my testimony.)

My name is J. W. Mason. I have a Ph.D. in Economics from the University of Massachusetts at Amherst, I am an assistant professor of economics at John Jay College of the City University of New York, and I am a Fellow at the Roosevelt Institute.

My goal today is to present some general observations on rent regulation from the perspective of an economist.

Among economists, rent regulation seems to be in a similar situation to the minimum wage 20 years ago. At that time, most economists took it for granted that raising the minimum wage would reduce employment. Textbooks said that it was simple supply and demand — if you raise the price of something, people will buy less of it. But as more state and local governments raised minimum wages, it turned out to be very hard to find any negative effect on employment. This was confirmed by more and more careful empirical studies. Today, it is clear that minimum wages do not reduce employment. And as economists have worked to understand why not, this has improved our theories of the labor market.

Rent regulation may be going through a similar evolution today. You may still see textbooks saying that as a price control, rent regulation will reduce the supply of housing. But as the share of Americans renting their homes has increased, more and more jurisdictions are considering or implementing rent regulation. This has brought new attention from economists, and as with the minimum wage, we are finding that the simple supply-and-demand story doesn’t capture what happens in the real world.

As of 2019, there are approximately 200 cities in the US with some type of rent regulation. Most of them are in three states — New York, New Jersey, and California. Other areas where rent control was once widespread, such as Massachusetts, have seen it eliminated by state law.

A number of recent studies have looked at the effects of rent regulations on housing supply, focusing on changes in rent regulations in New Jersey and California and the elimination of rent control in Massachusetts. Contrary to the predictions of the simple supply-and-demand model, none of these studies have found evidence that introducing or strengthening rent regulations reduces new housing construction, or that eliminating rent regulation increases construction. Most of these studies do, however, find that rent control is effective at holding down rents.

A 2007 study by David Sims and a 2014 study by Autor, Palmer, and Pathak both look at the effects of the end of rent control in Massachusetts, after the passage of Question 9 by Massachusetts ballot referendum in 1994. Sims found that the end of rent control had little effect on the construction of new housing. He did, however, find evidence that rent control decreased the number of available rental units, by encouraging condo conversions. In other words, rent control seemed to affect the quantity of rental housing, but not the total quantity of the housing stock. Unsurprisingly, Sims also found significant increases in rent charged after decontrol, suggesting that rent control was effective in limiting rent increases. Finally, he found that rent-controlled units had much longer tenure times, supporting the idea that rent control promotes neighborhood stability. Autor and coauthors reached similar conclusions. They also found that eliminating rent control raised rents in homes in the same area that were never subject to the controls, reinforcing the idea that rent control contributes to neighborhood stability.

A 2007 study by Gilderbloom and Ye of more recent rent control laws here in New Jersey finds evidence that rent controls actually increase the supply of rental housing, by incentivizing landlords to subdivide larger rental units.

A 2015 study by Ambrosius, Gilderbloom, and coauthors also looks at changes in New Jersey rent regulations. As with the previous study, they find that rent control in New Jersey has not produced any detectable reduction in new housing supply. However, they also find that many of these laws, because of their relatively generous provisions, in particular vacancy decontrol, only limit rent increases on a relatively small number of housing units.

The most recent major study of rent control, by Diamond, McQuade, and Qian in 2018, uses detailed data on the San Francisco housing market to look at the effect of the mid-1990s change in rent control rules there. They suggest that while the law did effectively limit rent increases, and had no effect on new housing construction, it did have a negative effect on the supply of rental housing by encouraging condo conversions.

The main conclusions from this literature are, first, that rent regulation is effective in limiting rent increases, although how effective it is depends on the specifics of the law. Vacancy decontrol in particular may significantly weaken rent control. Second, there is no evidence that rent regulations reduce the overall supply of housing. They may, however, reduce the supply of rental housing if it is easy for landlords to convert apartments to condominiums or other non-rental uses. This suggests that limitations on these kinds of conversions may be worth exploring. Third, in addition to their effect on the overall level of rents, rent regulations also play an important role in promoting neighborhood stability and protecting long-term tenants.

Let me now turn to the question of why the textbook story is wrong. There are several features of housing markets and of rent control that help explain why the simple supply-and-demand model is inapplicable.

First, these arguments misunderstand the goal of rent regulation. In part, it is to preserve the supply of affordable housing. But it also recognizes the legitimate interest of long-term tenants in remaining in their homes. A rented house or apartment is still a family’s home, which they have a reasonable expectation of remaining in on terms similar to those they have enjoyed in the past. Just as we have a legal principle that people cannot be arbitrarily deprived of their property, and just as many local governments put limits on how rapidly property taxes can increase, a goal of rent control is to give people similar protection from being forced out of their homes by rent increases. 

Second, and related to this, there is a social interest in income diversity and stable neighborhoods. In the absence of rent control or other measures to control housing costs, an area that sees rising productivity or improved amenities may see a sharp rise in rents and become affordable only for higher-income households. Besides the questions of equity this raises, there are economic costs here, as it becomes difficult for people holding lower paid jobs to live within commuting distance; an area that becomes more homogenous may also lose the social and cultural dynamism that caused the improvement in the first place. Similarly, the evidence seems clear that in the absence of rent regulation, turnover among tenants will be higher, leading to less stable communities and discouraging investment by renters in their neighborhoods. The absence of rent regulation may also create political obstacles to efforts to increase housing supply, attract new employers, or otherwise improve urban areas, since current residents correctly perceive that the result of any improvement may be higher rents and displacement. Rent regulation removes these conflicts between the social interest in thriving, high-wage cities and the interests of current residents. This makes it an important component of any broader urban development program.

Third, rent regulations in general affect only increases in rents. When a new property comes on the market, landlords can charge whatever the market will bear. And when they make major improvements, again, most existing rent regulations, including the current Jersey City law, allow them to recapture those costs via higher rents. So what rent control is limiting are the rent increases that are not the result of anything the landlord has done — the rent increases that result from the increased desirability of a particular area, or of a broader regional shortage of housing relative to demand. There is no reason that limiting these windfall gains should affect the supply of housing.

Fourth, in many high-cost areas, housing supply is relatively fixed. The reason that existing homes in many large cities cost multiple times more than the costs of construction is that the ability to add new housing in these areas is very limited, by some mix of regulatory barriers like zoning, and physical or economic barriers. In economists’ terms, the supply of housing in these areas is inelastic – it doesn’t respond very much to changes in price. This fact is widely recognized, but its implications for rent regulation are not. In a setting where the supply of new housing is already limited by other factors – whether land-use policy or the capacity of existing infrastructure or sheer physical limits on construction – rent regulation will have little or no additional effect on housing supply. Instead, it will simply reduce the monopoly profits enjoyed by owners of existing housing.

Fifth, housing is very long-lived. According to the Bureau of Economic Analysis, the average age of a tenant-occupied residential structure in the US is 42 years. In much of the northeast and in older cities, the average age will be greater. The fact that housing lasts this long has important implications. No one constructing new housing is thinking about returns that far out. Most business investment is expected to repay its costs in less than 10 years. Housing construction may have a longer payback period — as we know, much construction is financed with 30-year mortgages. But the rents 40 or more years in the future are simply not a factor in the construction of new housing. This means that there is a great deal of space to regulate the rents on existing housing without affecting the decision to build or not build.

The bottom line is that rents in the everyday sense are often also economic rents. When economists use the term rent, they mean a payment that someone receives from some economic activity because of an exclusive right over it, as opposed to contributing some productive resource. When a landlord gets an income because they are lucky enough to own land in an area where demand is growing and new supply is limited, or an income from an older building that has already fully paid back its construction costs, these are rents in the economic sense. They come from a kind of monopoly, not from contributing real resources to production of housing. And one thing that almost all economists agree on is that removing economic rents does not have costs in terms of reduced output or efficiency. 

Finally, I would like to offer a few design principles for rent regulation, based on my read of the literature.

First, rent control needs to be combined with other measures to create more affordable housing. The main goals of rent regulation are to protect renters’ legitimate interest in remaining in their homes; to advance the social interest in stable, mixed-income neighborhoods; and to curb the market power of landlords. Other measures, including subsidies and incentives, reforms to land-use rules, and public investment in social housing, are needed to increase the supply of affordable housing. These two approaches should be seen as complements.

Second, there are good reasons that most existing rent control focuses on rent increases rather than the absolute level of rents. Rent control structured this way allows new housing to claim the market rent, giving the developer a chance to recover the costs of construction. Rent increases many years after the building is finished are more likely to reflect changes in the value of the location, rather than the costs of production. From the point of view of allowing existing tenants to remain in their homes, it also makes sense to focus on increases, rather than the absolute level of rents.

Third, since rent regulation is aimed at the monopoly rents claimed by landlords, it should allow for reasonable rent increases to reflect increased costs of maintaining a building. At the same time, there is a danger that landlords will engage in unneeded improvements if this allows them to raise rents more than they would otherwise be allowed to. A natural way to balance this is to adjust the allowable rent increase each year based on some measure of average costs or a broader price index, as in the current Jersey City law.

Fourth, for rent control to be effective, tenants also need to be protected from the threat of eviction or other pressure from landlords. To give renters genuine security in their homes, they need an automatic right to renew their lease, unless the landlord can demonstrate nonpayment of rent or other good cause.

Fifth, rent control is more likely to have perverse effects when the controls are incomplete. When rent regulations do reduce the supply of affordable rental housing, this is typically because they have loopholes allowing landlords to escape the regulations. In particular, vacancy decontrol or allowing larger rent increases on vacancy significantly reduces the impact of rent control and may encourage landlords to push out existing tenants. There is also some evidence that landlords seek to avoid rent regulation by converting rental units into units for sale. To avoid these kinds of unintended consequences, rent regulations should be as comprehensive as possible, and options to remove units from the regulated market need to be closed off wherever possible.

Thank you.


Virginia’s big buy-in on rail could transform regional mobility


“We cannot pave our way out of congestion.” With that declaration, Virginia Governor Ralph Northam announced a historic $3.7 billion rail deal with CSX on Thursday that will allow the Commonwealth to vastly expand Amtrak and Virginia Railway Express (VRE) service over the next decade.

In the deal, Virginia acquired 225 miles of track and purchased the right of way to a further 350 miles of CSX-owned railroad. Northam also announced funding for the construction of 37 miles of new track to remove rail capacity chokepoints between Richmond and Washington, DC.

The list of projects that will be unlocked thanks to this deal reads like a rail enthusiast’s wish list: VRE service will expand by 75% and add in weekend operations, direct train connections between DC and Richmond will become a nearly hourly affair, VRE and MARC trains will have access to each other’s networks for the first time, high-speed rail to Raleigh is now possible, and planning for a mooted “Commonwealth Corridor”—linking the Blue Ridge Mountains and Hampton Roads by rail—can begin.

Here's what the proposed passenger rail network could look like under Northam's deal. Image by Virginia Department of Rail and Public Transportation.

The deal will address the Long Bridge chokepoint

Currently, all DC-bound trains from Virginia have to pass over the Long Bridge, a 110-year-old rail crossing privately owned by CSX which spans the Potomac River between DC and Arlington. If the Long Bridge were to fail, the next closest north-south rail connection passes through Harpers Ferry, West Virginia. The critical nature of this connection across the Potomac means the Long Bridge is already at 98% capacity and thus unable to accommodate any additions to passenger rail service.

With no other northward rail crossings, expansions of Amtrak and VRE service have been on hold. Northam’s announcement will double the Long Bridge’s capacity and create separate dedicated freight and passenger tracks by 2030.

The deal will also provide more freight rail capacity going northward, further fueling the rapid growth of the Port of Virginia in Norfolk, the deepest harbor on the Eastern Seaboard. This unprecedented deal means all rail connections (passenger, commuter, and freight) between the Commonwealth and the District are getting the green light for long-term expansion.

Even more critical to the future of rail in Virginia, the state will now own the Long Bridge in full. Under CSX’s exclusive ownership, state officials were essentially forced to bribe the company with rail improvements in return for each additional Amtrak or VRE train the Department of Rail and Public Transportation (DRPT) wanted to run. In a September interview, DRPT head Jennifer Mitchell underscored that “the Long Bridge is the connection between the entire Northeast and Southeast rail corridors. This is a project with national implications and impact.”

Long Bridge, a key chokepoint for rail in the region, will expand its capacity under the new deal. Image by Rex Block used with permission.

The $3.7 billion price tag may sound steep, but Virginia’s alternate plan to widen I-95 by just one lane would have cost taxpayers an estimated $12 billion. Those familiar with highway expansion projects will already be aware that such estimates tend to double, triple, and even quadruple once construction is underway.

Under Northam’s deal, Amtrak will also chip into the Commonwealth’s huge rail investment. By investing in interstate connectivity, the governor’s administration believes this deal will unlock two billion dollars worth of economic growth in Virginia.

Rail advocates are thrilled

At Thursday’s announcement in Crystal City, the governor hailed the deal with CSX as “a once in a generation opportunity to make the rail system work better for everyone in Virginia and the whole East Coast.” Many industry and business leaders agree. One of the deal’s key backers is the Greater Washington Partnership (GWP), a civic alliance that began rallying support for rail investment with its “Blueprint for Regional Mobility.”

In a statement immediately following Northam’s announcement, the co-chairs of the GWP’s Regional Mobility Initiative called the deal “one of the biggest achievements for passenger rail in the United States since Amtrak was created almost 50 years ago.” Joe McAndrew, the GWP’s Director of Transportation Policy, similarly feted the deal.

“Within a decade, Capital Region residents will be able to take a train on the hour, every hour to Richmond and beyond. This is an unprecedented step that will bind Maryland, DC, and Virginia together for many years to come,” McAndrew said in an interview right after the announcement.

Given the level of excitement on display yesterday, it’s hard to believe that Virginia began paying for rail service in the state only 10 years ago. Mary Hughes Hynes, the Northern Virginia District representative on the Commonwealth Transportation Board, believes the deal struck between Northam’s administration, CSX, and Amtrak will be a game-changer.

When reached for an interview yesterday evening, Hynes could hardly contain her excitement: “This will bring congestion relief to NoVA and open up freight and passenger rail options all along the Eastern Seaboard. Innovative transportation solutions continue to guide Virginia’s future!”


The Age of Instagram Face

Jia Tolentino on the increasing use of plastic surgery to resemble FaceTune-filtered photos in real life

Local-first software: you own your data, in spite of the cloud


Local-first software: you own your data, in spite of the cloud Kleppmann et al., Onward! ’19

Watch out! If you start reading this paper you could be lost for hours following all the interesting links and ideas, and end up even more dissatisfied than you already are with the state of software today. You might also be inspired to help work towards a better future. I’m all in :).

The rock or the hard place?

On the one hand we have ‘cloud apps’ which make it easy to access our work from multiple devices and to collaborate online with others (e.g. Google Docs, Trello, …). On the other hand we have good old-fashioned native apps that you install on your operating system (a dying breed? See e.g. Brendan Burns’ recent tweet). Somewhere in the middle, but not quite perfect, are online (browser-based) apps with offline support.

The primary issue with cloud apps (the SaaS model) is ownership of the data.

Unfortunately, cloud apps are problematic in this regard. Although they let you access your data anywhere, all data access must go via the server, and you can only do the things that the server will let you do. In a sense, you don’t have full ownership of that data— the cloud provider does.

Services do get shut down[1], or pricing may change to your disadvantage, or the features evolve in a way you don’t like and there’s no way to keep using an older version.

With a traditional OS app[2] you have much more control over the data (the files on your file system at least, which if you’re lucky might even be in an open format). But you have other problems, such as easy access across all of your devices, and the ability to collaborate with others.

Local-first software ideals

The authors coin the phrase “local-first software” to describe software that retains the ownership properties of old-fashioned applications, with the sharing and collaboration properties of cloud applications.

In local-first applications… we treat the copy of the data on your local device — your laptop, tablet, or phone — as the primary copy. Servers still exist, but they hold secondary copies of your data in order to assist with access from multiple devices. As we shall see, this change in perspective has profound implications…

Great local-first software should have seven key properties.

  1. It should be fast. We don’t want to make round-trips to a server to interact with the application. Operations can be handled by reading and writing to the local file system, with data synchronisation happening in the background.
  2. It should work across multiple devices. Local-first apps keep their data in local storage on each device, but the data is also synchronised across all the devices on which a user works.
  3. It should work without a network. This follows from reading and writing to the local file system, with data synchronisation happening in the background when a connection is available. That connection could be peer-to-peer across devices, and doesn’t have to be over the Internet.
  4. It should support collaboration. “In local-first apps, our ideal is to support real-time collaboration that is on par with the best cloud apps today, or better. Achieving this goal is one of the biggest challenges in realizing local-first software, but we believe it is possible.”
  5. It should support data access for all time. On one level you get this if you retain a copy of the original application (and an environment capable of executing it). Even better is if the local app uses open, long-lasting file formats. See e.g. the Library of Congress recommended archival formats.
  6. It should be secure and private by default. “Local-first apps can use end-to-end encryption so that any servers that store a copy of your files only hold encrypted data they cannot read.”
  7. It should give the user full ownership and control of their data. “…we mean ownership in the sense of user agency, autonomy, and control over data. You should be able to copy and modify data in any way, write down any thought, and no company should restrict what you are allowed to do.”
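To make properties 1–3 concrete, here is a deliberately toy Python sketch of the local-first write path: writes land on the on-device primary copy immediately, and a queue of pending changes is drained to a server replica whenever a connection happens to be available. All the names here are invented for illustration; a real local-first app would use a proper storage layer and sync protocol rather than dicts and lists.

```python
class LocalFirstStore:
    """Toy sketch of a local-first write path (names invented for
    illustration). The on-device dict is the primary copy; a list of
    pending changes stands in for background synchronisation."""

    def __init__(self):
        self.local = {}        # primary copy: lives on the device
        self.pending = []      # changes not yet pushed to any server

    def write(self, key, value):
        # Fast and offline-friendly: no server round-trip on the write path.
        self.local[key] = value
        self.pending.append((key, value))

    def read(self, key):
        return self.local.get(key)

    def sync(self, server_replica):
        # When a connection is available, drain pending changes so the
        # server's *secondary* copy catches up with the local one.
        while self.pending:
            key, value = self.pending.pop(0)
            server_replica[key] = value


store = LocalFirstStore()
store.write("title", "Trip notes")            # works with no network at all
assert store.read("title") == "Trip notes"    # read served locally

server = {}                                   # stand-in for a remote replica
store.sync(server)                            # simulated background sync
assert server == {"title": "Trip notes"}
```

The point of the sketch is only the ordering: the local write completes first, and the server is an afterthought that catches up later.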

How close can we get today?

Section 3 in the paper shows how a variety of different apps/technologies stack up against the local-first ideals.

The combination of Git and GitHub gets closest, but nothing meets the bar across the board.

… we speculate that web apps will never be able to provide all the local-first properties we are looking for, due to the fundamental thin-client nature of the platform. By choosing to build a web app, you are choosing the path of data belonging to you and your company, not to your users.

Mobile apps that use local storage combined with a backend service such as Firebase and its Cloud Firestore take us closer to the local-first ideal, depending on the way the local data is treated by the application. CouchDB also gets an honourable mention in this part of the paper, only being let down by the difficulty of getting application-level conflict resolution right.

CRDTs to the rescue?

We have found some technologies that appear to be promising foundations for local-first ideals. Most notably the family of distributed systems algorithms called Conflict-free Replicated Data Types (CRDTs)… the special thing about them is that they are multi-user from the ground up… CRDTs have some similarity to version control systems like Git, except that they operate on richer data types than text files.

While most industrial usage of CRDTs has been in server-centric computing, the Ink & Switch research lab have been exploring how to build collaborative local-first client applications built on top of CRDTs. One of the fruits of this work is an open-source JavaScript CRDT implementation called Automerge which brings CRDT-style merge operations to JSON documents. Used in conjunction with the dat:// networking stack the result is Hypermerge.
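To see what makes CRDT merging special, here is a toy state-based CRDT in Python — a grow-only counter, far simpler than the JSON documents Automerge handles (and not Automerge’s API), but it shows the key property: merging replica states in any order, any number of times, converges to the same result.

```python
class GCounter:
    """Toy state-based CRDT: a grow-only counter. Each replica only
    increments its own slot, and merge takes the per-replica maximum,
    which makes merging commutative, associative, and idempotent."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}   # replica_id -> count contributed by that replica

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Take the element-wise max: applying the same merge twice, or
        # merging in a different order, cannot change the outcome.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())


a, b = GCounter("laptop"), GCounter("phone")
a.increment(3)            # edits made offline on the laptop
b.increment(2)            # concurrent edits on the phone
a.merge(b); b.merge(a)    # sync in either order...
assert a.value() == b.value() == 5   # ...and the replicas converge
```

Richer CRDTs (lists, maps, text) need more bookkeeping, but they rest on the same idea: a merge function that is safe to apply in any order.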

Just as packet switching was an enabling technology for the Internet and the web, or as capacitive touchscreens were an enabling technology for smart phones, so we think CRDTs may be the foundation for collaborative software that gives users full ownership of their data.

The brave new world

The authors built three (fairly advanced) prototypes using this CRDT stack: a Trello clone called Trellis, a collaborative drawing program, and a ‘mixed-media workspace’ called PushPin (Evernote meets Pinterest…).

If you have 2 minutes and 10 seconds available, it’s well worth watching this short video showing Trellis in action. It really brings the vision to life.

In section 4.2.4 of the paper the authors share a number of their learnings from building these systems:

  • CRDT technology works – the Automerge library did a great job and was easy to use.
  • The user experience with offline work is splendid.
  • CRDTs combine well with reactive programming to give a good developer experience. “The result of [this combination] was that all of our prototypes realized real-time collaboration and full offline capability with little effort from the application developer.”
  • In practice, conflicts are not as significant a problem as we feared. Conflicts are mitigated on two levels: first, Automerge tracks changes at a fine-grained level, and second, “users have an intuitive sense of human collaboration and avoid creating conflicts with their collaborators.”
  • Visualising document history is important (see the Trellis video!).
  • URLs are a good mechanism for sharing.
  • Cloud servers still have their place for discovery, backup, and burst compute.

Some challenges:

  • It can be hard to reason about how data moves between peers.
  • CRDTs accumulate a large change history, which creates performance problems. (This is an issue with state-based CRDTs, as opposed to operation-based CRDTs).

Performance and memory/disk usage quickly became a problem because CRDTs store all history, including character-by-character text edits. These pile up, but can’t be easily truncated because it’s impossible to know when someone might reconnect to your shared document after six months away and need to merge changes from that point forward.

It feels like some kind of log compaction with a history watermark (e.g., after n months you might not be able to merge in old changes any more and will have to do a full resync to the latest state) could help here?

  • P2P technologies aren’t production ready yet (but “feel like magic” when they do work).
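The log-compaction-with-watermark idea speculated about above can be sketched in a few lines. This is a hypothetical change log, not Automerge’s actual data model: changes older than the watermark are folded into a snapshot, and a peer whose last sync predates the watermark would have to fall back to a full resync rather than an incremental merge.

```python
def compact(history, watermark):
    """Fold changes at or before `watermark` into a snapshot, keeping
    only newer changes mergeable. The record shape ({"time", "key",
    "value"}) is invented purely for illustration."""
    snapshot = {}
    kept = []
    for change in history:                 # history is ordered oldest-first
        if change["time"] <= watermark:
            snapshot[change["key"]] = change["value"]   # collapse old edits
        else:
            kept.append(change)            # recent edits stay in the log
    return snapshot, kept


history = [
    {"time": 1, "key": "a", "value": "x"},
    {"time": 2, "key": "a", "value": "y"},   # supersedes the first edit
    {"time": 5, "key": "b", "value": "z"},
]
snapshot, kept = compact(history, watermark=3)
assert snapshot == {"a": "y"}                            # old edits collapsed
assert kept == [{"time": 5, "key": "b", "value": "z"}]   # still mergeable
```

The trade-off is exactly the one the text raises: compaction caps storage, but a peer returning after six months away loses the ability to merge incrementally.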

What can you do today?

You can take incremental steps towards a local-first future by following these guidelines:

  • Use aggressive caching to improve responsiveness
  • Use syncing infrastructure to enable multi-device access
  • Embrace offline web application features (Progressive Web Apps)
  • Consider Operational Transformation as the more mature alternative to CRDTs for collaborative editing
  • Support data export to standard formats
  • Make it clear what data is stored on device and what is transmitted to the server
  • Enable users to back-up, duplicate, and delete some or all of their documents (outside of your application?)

I’ll leave you with a quote from section 4.3.4:

If you are an entrepreneur interested in building developer infrastructure, all of the above suggests an interesting market opportunity: “Firebase for CRDTs.”

  1. This link to ‘Our Incredible Journey’ handily provides a good example — it will take you first to a page announcing that Tumblr has been acquired by Automattic, on which you can agree to the new terms of service should you wish. ↩
  2. Not the new breed of OS apps that are really just wrapped browsers over an online service ↩


Coal Knew, Too


“Exxon knew.” Thanks to the work of activists and journalists, those two words have rocked the politics of climate change in recent years, as investigations revealed the extent to which giants like Exxon Mobil and Shell were aware of the danger of rising greenhouse gas emissions even as they undermined the work of scientists.

But the coal industry knew, too — as early as 1966, a newly unearthed journal shows.

In August, Chris Cherry, a professor in the Department of Civil and Environmental Engineering at the University of Tennessee, Knoxville, salvaged a large volume from a stack of vintage journals that a fellow faculty member was about to toss out. He was drawn to a 1966 copy of the industry publication Mining Congress Journal; his father-in-law had been in the industry and he thought it might be an interesting memento.

Cherry flipped it open to a passage from James R. Garvey, who was the president of Bituminous Coal Research Inc., a now-defunct coal mining and processing research organization. 

“There is evidence that the amount of carbon dioxide in the earth’s atmosphere is increasing rapidly as a result of the combustion of fossil fuels,” wrote Garvey. “If the future rate of increase continues as it is at the present, it has been predicted that, because the CO2 envelope reduces radiation, the temperature of the earth’s atmosphere will increase and that vast changes in the climates of the earth will result.” 

“Such changes in temperature will cause melting of the polar icecaps, which, in turn, would result in the inundation of many coastal cities, including New York and London,” he continued.

“It pretty well described a version of what we know today as climate change,” said Cherry. “Increases in average air temperatures, melting of polar ice caps, rising of sea levels. It’s all in there.” 

In a discussion piece immediately following Garvey’s article, Peabody Coal combustion engineer James R. Jones noted that the coal industry was merely “buying time” before more air pollution regulations came into effect. “We are in favor of cleaning up our air,” he wrote. “Everyone can point to examples in his own community where something should be done. Our aim is to have control that does not precede the technical knowledge for compliance.” 

Climate change is not Cherry’s area of study, but he was struck by how the tone of the articles differed from the way many fossil fuel companies talk about climate change today. Rather than engage in denial, the articles offered a fairly straightforward acknowledgment of the emerging science. (This reporter is also a writer for UT’s Tickle College of Engineering, where Cherry teaches.)

As Cherry did some of his own digging, he soon realized his discovery could be the first evidence that the coal industry was aware of the impending climate crisis more than half a century ago — a finding that could open mining companies to the type of litigation that the oil industry is now facing. 

Decades Of Denial

While Peabody Energy, the largest private-sector coal company in the world and the largest producer of coal in the U.S., now acknowledges climate change on its website, it has been directly and indirectly involved in obfuscating climate science for decades. It funded dozens of trade, lobbying and front groups that peddled climate misinformation, as The Guardian reported in 2016.

As recently as 2015, Peabody Energy argued that carbon dioxide was a “benign gas essential for all life.” 


“While the benefits of carbon dioxide are proven, the alleged risks of climate change are contrary to observed data, are based on admitted speculation, and lack adequate scientific basis,” the company wrote in a letter that year to the White House Council on Environmental Quality.

At the heart of big coal’s denial campaign was Fred Palmer, who served as Peabody’s senior vice president of government relations from 2001 to 2015. In 1997, Palmer founded the Greening Earth Society, a now-defunct industry front group that argued that burning fossil fuels was good for the planet. The group was based in the same office as the Western Fuels Association, a consortium of coal suppliers and coal-fired utilities that Palmer also ran. 

“Every time you turn your car on and you burn fossil fuels and you put CO2 into the air, you’re doing the work of the Lord,” Palmer told a Danish documentary team in 1997. “That’s the ecological system we live in.” 

Asked for comment, a Peabody spokesperson told HuffPost: “Peabody recognizes that climate change is occurring and that human activity, including the use of fossil fuels, contributes to greenhouse gas emissions. We also recognize that coal is essential to affordable, reliable energy and will continue to play a significant role in the global energy mix for the foreseeable future. Peabody views technology as vital to advancing global climate change solutions, and the company supports advanced coal technologies to drive continuous improvement toward the ultimate goal of near-zero emissions from coal.”

Palmer, who did not respond to HuffPost’s request for comment, continues to carry the torch. He now works as an energy policy adviser to The Heartland Institute, a Chicago-based think tank whose climate denial is so severe that even Exxon Mobil abandoned funding it and its climate denial efforts a decade ago. In 2011, leaked memos showed that the institute paid contrarian scientists like Craig Idso, founder of the Center for the Study of Carbon Dioxide and Global Change, $11,600 a month to promote carbon dioxide as beneficial to the environment.

The group sits at the heart of a broader right-wing misinformation network funded in large part by hedge fund billionaire Robert Mercer and his daughter, Rebekah, both Republican mega-donors who backed President Donald Trump and financed projects such as Breitbart News and Cambridge Analytica, the data firm considered key to Trump’s 2016 win. Palmer’s daughter, Downey Magallanes, was a top policy adviser at Trump’s Interior Department before joining oil giant BP in September 2018. 

All of this was taking place well after climate change had become a commonly understood idea in the scientific community. A 1965 report from President Lyndon Johnson’s Science Advisory Committee was the first from the White House to address climate change (and is likely what precipitated the Mining Congress Journal article). “The climate changes that may be produced by the increased CO2 content could be deleterious from the point of view of human beings,” it warned. In 1988, NASA scientist James Hansen testified to Congress about what was then known as the “greenhouse effect.” And in 1992, the United Nations established the Framework Convention on Climate Change, an international treaty to begin addressing the problem.

But as this consensus emerged, so too did a wave of industry-funded climate denial via vast, shadowy networks of front groups, public relations campaigns and scientists for hire.

Pulling Back The Curtain

In 2015, journalists at InsideClimate News, the Los Angeles Times and Columbia University exposed internal Exxon Mobil documents showing that the company’s scientists had a deep understanding of climate change even as Exxon worked publicly to downplay that science. 

Twenty state attorneys general launched an “Exxon Knew” campaign, which eventually led to communities across the country filing at least 14 legal challenges against Exxon and other fossil fuel companies. One lawsuit, from the New York state attorney general’s office, went to trial on Oct. 22 and focuses on how the company accounted for the costs of potential future regulations on climate change. The Massachusetts attorney general filed another suit on Oct. 24, this time claiming the company had engaged in deceptive advertising and misled investors about the systemic financial risks to its business posed by fossil fuel-driven climate change. Earlier this month, two of Hawaii’s biggest municipalities sued Exxon and other big oil companies to recoup the costs of adapting to rising seas and more violent storms. 

Evidence of what fossil fuel companies knew about climate change and when is critical to the legal strategy of those seeking damages for carbon dioxide emissions. If fossil fuel companies were aware of their products’ harmful effects on the planet, they could be held liable for damages.


Legal liability boils down to four factors, said David Bookbinder, chief counsel for the Niskanen Center, which is representing counties in Colorado that have filed suit: one, whether the defendants knew that their products would cause climate change; two, what they told or did not tell the public about the consequences of using their products; three, the extent of injuries caused by climate change; and four, whether the defendants’ actions have led to a portion of those injuries. What the plaintiffs in these suits can prove remains to be seen.

What we do know is that coal, when burned, has by far the biggest climate footprint of any fossil fuel, producing more carbon dioxide per unit than oil or gas. In the U.S. alone, coal produced 65% of the power sector’s planet-warming emissions. The 1966 article in the Mining Congress Journal certainly raises questions about what the coal industry knew at the time.

Robert Brulle, a professor emeritus of sociology and environmental science at Drexel University, authored a recent paper that suggests the coal industry must have known quite a bit, given how prominently it positioned itself in the climate denial movement. 

Brulle researched 12 major groups and coalitions that argued against mandatory regulation of carbon dioxide from 1989 to 2015 — which he calls the “climate change countermovement.” That countermovement included some 2,000 businesses, political and social groups, and other organizations, but Brulle found that 179 core organizations belonged to multiple coalitions. Coal companies and predominantly coal-burning utilities were the most prevalent. He describes oil and gas companies as “more of a marginal player” by comparison.

“The coal mining industry — the utilities that were burning it for electricity, along with the railroads who were hauling it — and manufacturing industries like steel were the first corporate forces to become climate deniers and try to block action on climate policy,” said Kert Davies, founder and director of the Climate Investigations Center. “They fought the hardest because they had the biggest existential threat.”

Where Do We Go From Here?

In the aftermath of the 1973 oil embargo, Exxon and other oil giants leased large parcels of land for coal mining with the goal of manufacturing synthetic fuels and lowering U.S. dependence on the Middle East.

Some previously released documents show that Exxon’s scientists began advising that the world phase out coal as a fuel as early as 1979. In one scenario, the Exxon scientists concluded that non-fossil fuels would need to be substituted for coal beginning in the 1990s to keep carbon dioxide levels below atmospheric concentrations of 440 parts per million. In 1999, Exxon merged with Mobil, and by 2002, Exxon Mobil had dumped its coal assets. 

Meanwhile, the coal industry tried to reinvent itself with the concept of “clean coal.” This as-yet-undelivered promise that carbon capture and other technological advances could lower coal’s environmental impact has been around for decades but resurged in the early 2000s as regulations seemed imminent. 

The biggest proponent of this idea was the American Coalition for Clean Coal Electricity, a coal front group that spent $35 million on public relations campaigns in 2008 alone, seeking to influence the election. A year later, ACCCE was caught sending Congress fraudulent letters opposing federal climate legislation and pretending to be from veterans, women’s and civil rights groups. The incident led many members to leave the organization, but Peabody remains a member to this day.

“Its whole mission was to stop climate regulations but pretend that they were in favor of clean coal, which, of course, doesn’t exist,” said Davies.

Peabody Energy filed for bankruptcy protection in 2016, the same year carbon dioxide levels hit 400 parts per million. Eight other coal companies have filed for bankruptcy this year. Even as the Trump administration has promised a coal resurgence and rolled back Obama-era regulations, the industry’s profitability continues to slide. If the slogan “Coal Knew” ever does take off, it’s unclear who’ll be left to sue.


Illustration by Rebecca Zisser, with photos from Getty.


Perspective | What does female authority sound like? Marie Yovanovitch and Fiona Hill just showed us.

“If you listen to Yovanovitch without looking, her voice sounds just like Elizabeth Warren,” one armchair analyst attempted as a comparison on Twitter, which was — no. The Canadian-born former ambassador sounded nothing like the twangy Okie running for president.

But the armchair analyst was working with limited options. We’re in the early stages of building a listening library of powerful female voices. We still can’t ask, as Sen. Amy Klobuchar (D-Minn.) pointed out in Wednesday’s presidential debate, “Who is your favorite woman president?” During the height of Hillary Clinton’s 2016 campaign, the candidate was so besieged with charges of “shrillness” that the Atlantic magazine interviewed experts to figure out what made her voice so allegedly irritating. They found that her so-called shrill voice was actually “average in pitch and loudness for her age and gender.”

The issue wasn’t how she sounded. It was how she sounded to us, a listening public without the aural reference library to assess female authority, trustworthiness and power.
