Walking Places Is Part of the Culture Wars Now

Approximately two million years after our ancestors first learned to move about the planet with an upright gait, whether walking places is good or bad has become yet another dividing line in the American culture wars.

According to a recent Pew Research Center poll that asked whether people prefer to live in places where "schools, stores, and restaurants are within walking distance" or where they are "several miles away," the biggest divide in opinion is not young versus old, urban versus rural, or education level. It is political preference.

Just 22 percent of conservatives want to live in walkable neighborhoods, while 77 percent prefer driving everywhere. A slightly higher share of Republicans and Republican leaners as a whole, 26 percent, want walkable neighborhoods. Meanwhile, 44 percent of moderate Democrats and 57 percent of liberals want walkable neighborhoods, resulting in a roughly 50/50 split among Democrats as a whole.

That 35-point gap between liberals who want to live in walkable neighborhoods and conservatives who do is larger than the gap between those with postgraduate degrees and those with a high school diploma or less (14 points) or between 18-to-29-year-olds and 50-to-64-year-olds (12 points). The poll also shows a 22-point gap between Asian respondents who want walkable neighborhoods (58 percent) and white respondents (36 percent), although the poll was only conducted in English.

Image: Pew Research Center

But one of the most striking findings is that the gap in walkable-neighborhood preference across the political extremes is even wider than the gap between urban and rural respondents: 50 percent of urban residents polled want walkable neighborhoods, while 25 percent of rural residents do. In other words, whether you actually live in an urban or rural area is a weaker predictor of wanting a walkable neighborhood than the political beliefs you hold, regardless of where you live.

But there is still a lot of disagreement on the issue, even among people who consider themselves part of the same ideological cohort.

To put the above a slightly different way, 42 percent of liberals prefer to live in places where they have to drive everywhere. That is a very high number for the group in the survey one would expect to be most concerned about climate change, to which driving is a huge contributor. And while electric cars may significantly reduce emissions from driving in the long run, they need to be accompanied by an equally significant reduction in how often and how far we drive. The most obvious and attainable way to do that is to live in places where we can sometimes walk to the places we need to go.

It is wrong to equate the above with the idea that everyone has to live in cities. Rural towns have thrived, and continue to thrive, with houses and businesses clustered around a main street or a town center built around a transportation link to a major city. This is how much of the country looked, especially but not only in the Northeast and Midwest, prior to World War II. And it is how much of, say, Europe and East Asia still looks. In fact, the U.S. is one of the few places where massive suburban sprawl with mandatory single-family zoning that legally bans businesses from opening near people is the rule, the norm, and the general expectation. It is also, ironically, one of the most dramatic examples in modern U.S. history of government mandates interfering with the rights of private property holders, something the conservative movement was once ideologically opposed to.

There are all kinds of other implications in these poll results. Cars are expensive to buy and maintain, and living patterns that continue to rely on them are yet another financial burden on people who may not be able to afford them. And because building homes is expensive, because there is a general housing shortage in this country, and because real estate is often more of an investment than a place to live, real estate companies tend to build the largest homes they can sell at the highest price, which means houses keep getting bigger and bigger. Meanwhile, large, detached homes use more energy than smaller or attached ones.

If the climate crisis concerns you, this is all bad news. In the two million years since our ancestors learned to walk, we have evolved to understand and manipulate our planet in ways our predecessors quite literally could not even conceive of. And yet, in some very important ways, we are going backwards.



Faster sorted array unions by reducing branches

When designing an index, a database or a search engine, you frequently need to compute the union of two sorted sets. When I am not using fancy low-level instructions, I have most commonly computed the union of two sorted sets using the following approach:

    v1 = first value in input 1
    v2 = first value in input 2
    while(....) {
        if(v1 < v2) {
            output v1
            advance v1
        } else if (v1 > v2) {
            output v2
            advance v2
        } else {
           output v1 == v2
           advance v1 and v2
        }
    }

I wrote this code while trying to minimize the number of load instructions: each input value is loaded exactly once, which is optimal. It is not that load instructions are themselves expensive, but they introduce some latency. It is not clear whether having fewer loads should help, but there is a chance that having more loads could hurt the speed if they cannot be scheduled optimally.

One defect with this algorithm is that it requires many branches. Each mispredicted branch comes with a severe penalty on modern superscalar processors with deep pipelines. By the nature of the problem, it is difficult to avoid the mispredictions since the data might be random.

Branches are not necessarily bad. When we try to load data at an unknown address, speculating might be the right strategy: when we get it right, we have our data without any latency! Suppose that I am merging values from [0,1000] with values from [2000,3000]: then the branches are perfectly predictable and they will serve us well. But with too many mispredictions, we might be on the losing end. You will get a lot of mispredictions if you try this algorithm on random data.

Inspired by Andrey Pechkurov, I decided to revisit the problem. Can we use fewer branches?

Mispredicted branches in the above routine will tend to occur when we conditionally jump to a new address in the program. We can try to entice the compiler to favour ‘conditional move’ instructions. Such instructions change the value of a register based on some condition. They avoid the jump and they reduce the penalties due to mispredictions. Given sorted arrays, with no duplicated element, we consider the following code:

while ((pos1 < size1) & (pos2 < size2)) {
    v1 = input1[pos1];
    v2 = input2[pos2];
    output_buffer[pos++] = (v1 <= v2) ? v1 : v2;
    pos1 = (v1 <= v2) ? pos1 + 1 : pos1;
    pos2 = (v1 >= v2) ? pos2 + 1 : pos2;
}

You can verify from the assembly output that compilers are good at using conditional-move instructions for this sort of code. In particular, LLVM (clang) does what I would expect. There are still branches, but they are only related to the ‘while’ loop and they are not going to cause a significant number of mispredictions.
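For context, here is a minimal self-contained sketch of the branchless union, including the tail-copying that the loop above leaves out. The function name, the uint32_t element type, and the assumption that output_buffer has room for size1 + size2 values are mine, not taken from the original code:

#include <cstddef>
#include <cstdint>

// Branchless union of two sorted arrays with no duplicated element.
// Returns the number of values written to output_buffer.
size_t union_branchless(const uint32_t *input1, size_t size1,
                        const uint32_t *input2, size_t size2,
                        uint32_t *output_buffer) {
    size_t pos1 = 0, pos2 = 0, pos = 0;
    while ((pos1 < size1) & (pos2 < size2)) {
        uint32_t v1 = input1[pos1];
        uint32_t v2 = input2[pos2];
        // The compiler can lower these ternaries to conditional moves.
        output_buffer[pos++] = (v1 <= v2) ? v1 : v2;
        pos1 = (v1 <= v2) ? pos1 + 1 : pos1;
        pos2 = (v1 >= v2) ? pos2 + 1 : pos2;
    }
    // One input is exhausted; copy whatever remains of the other.
    while (pos1 < size1) { output_buffer[pos++] = input1[pos1++]; }
    while (pos2 < size2) { output_buffer[pos++] = input2[pos2++]; }
    return pos;
}

A call then looks like size_t n = union_branchless(a, a_size, b, b_size, out), with out sized for the worst case of a_size + b_size elements.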

Of course, the processor still needs to load the right data. The address only becomes available in a definitive form just as you need to load the value. Yet we need several cycles to complete the load. It is likely to be a bottleneck, even more so in the absence of branches that can be speculated.

Our second algorithm has fewer branches, but it has more loads. Twice as many loads in fact! Modern processors can sustain more than one load per cycle, so it should not be a bottleneck if it can be scheduled well.

Testing this code in the abstract is a bit tricky. Ideally, you’d want code that stresses all code paths. In practice, if you just use random data, you will often find that the intersection between the sets is small. Thus the branches are more predictable than they could be. Still, it is maybe good enough for a first benchmarking attempt.

I wrote a benchmark and ran it on a recent Apple processor as well as on an AMD Rome (Zen 2) Linux box. I report the average number of nanoseconds per produced element, so smaller values are better. With LLVM, there is a sizeable benefit (over 10%) on both the Apple (ARM) processor and the Zen 2 processor. However, GCC fails to produce efficient code in the branchless mode. Thus if you plan to use the branchless version, you definitely should try compiling with LLVM. If you are a clever programmer, you might find a way to get GCC to produce code like LLVM does: if you do, please share.

system               conventional union (ns)   ‘branchless’ union (ns)
Apple M1, LLVM 12    2.5                       2.0
AMD Zen 2, GCC 10    3.4                       3.7
AMD Zen 2, LLVM 11   3.4                       3.0

I expect that this code retires relatively few instructions per cycle. It means that you can probably add extra functionality for free, such as bounds checking, because you have cycles to spare. You should be careful not to introduce extra work that gets in the way of the critical path, however.

As usual, your results will vary depending on your compiler and processor. Importantly, I do not claim that the branchless version will always be faster, or even that it is preferable in the real world. For real-world usage, we would like to test on actual data. My C++ code is available: you can check how it works out on your system. You should be able to modify my code to run on your data.

You should expect such a branchless approach to work well when you have lots of mispredicted branches to begin with. If your data is so regular that a union is effectively trivial, or nearly so, then a conventional approach (with branches) will work better. In my benchmark, I merge ‘random’ data, hence the good results for the branchless approach under the LLVM compiler.

Further reading: For high speed, one would like to use SIMD instructions. If it is interesting to you, please see section 4.3 (Vectorized Unions Between Arrays) in Roaring Bitmaps: Implementation of an Optimized Software Library, Software: Practice and Experience 48 (4), 2018. Unfortunately, SIMD instructions are not always readily available.

jlvanderzwan (62 days ago):
> with no duplicated element

In case anyone else read over the "set" part in the beginning. Doesn't this limit the use a little?

ttencate (62 days ago):
It's easy to adapt the code to retain duplicates. I don't think it matters for the main point.

jepler (76 days ago):
It occurred to me to wonder whether the remaining ?: should be eliminated by using indexing by boolean, `T v[2] = {input1[pos1], input2[pos2]}; output_buffer[pos++] = v[v[1] <= v[0]];` but this does not improve things.

The state of next-generation geothermal energy

What would we do with abundant energy? I dream of virtually unlimited, clean, dirt-cheap energy, but lately, we have been going in the wrong direction. As J. Storrs Hall notes, in 1978 and 1979, American per capita primary energy consumption peaked at 12 kW. In 2019, we used 10.2 kW of primary energy (and in 2020, we used 9.4 kW, a figure skewed by the pandemic economy). We are doing more with less, squeezing out more value per joule than ever before. But why settle for energy efficiency alone? With many more joules, we could create much more value and live richer lives.

A benefit of climate change is that lots of smart people are rethinking energy, but I fear they aren’t going far enough. If we want not just to replace current energy consumption with low-carbon sources, but also to, say, increase global energy output by an order of magnitude, we need to look beyond wind and solar. Nuclear fission would be an excellent option if it were not so mired in regulatory obstacles. Fusion could do it, but it still needs a lot of work. Next-generation geothermal could have the right mix of policy support, technology readiness, and resource size to make a big contribution to abundant clean energy in the near future.

Let’s talk about resource size first. Stanford’s Global Climate and Energy Project estimates crustal thermal energy reserves at 15 million zettajoules. Coal + oil + gas + methane hydrates amount to 630 zettajoules. That means there is roughly 23,800 times as much geothermal energy in Earth’s crust as there is chemical energy in fossil fuels everywhere on the planet. Combining the planet’s reserves of uranium, seawater uranium, lithium, thorium, and fossil fuels yields 365,030 zettajoules. There is 41 times as much crustal thermal energy as there is energy in all those sources combined. (The total heat content of the planet, including the mantle and the core, is about three orders of magnitude higher still.)
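For readers who want to check the arithmetic, the two ratios follow directly from the figures above:

$$\frac{15{,}000{,}000\ \mathrm{ZJ}}{630\ \mathrm{ZJ}} \approx 23{,}800, \qquad \frac{15{,}000{,}000\ \mathrm{ZJ}}{365{,}030\ \mathrm{ZJ}} \approx 41.$$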

Although today’s geothermal energy is only harvested from spots where geothermal steam has made itself available at the surface, with some creative subsurface engineering it could be produced everywhere on the planet. Like nuclear energy, geothermal runs 24/7, so it helps solve the intermittency problem posed by wind and solar. Unlike nuclear energy, it is not highly regulated, which means it could be cheap in practice as well as in theory.

At a high level, the four main next-generation geothermal concepts I will discuss do the same thing. They (1) locate and access heat, (2) transfer subsurface heat to a working fluid and bring it to the surface, and (3) exploit the heat energy at the surface through direct use or conversion to electricity. It is the second step, transferring subsurface heat to a working fluid, that is non-obvious.

What is the right working fluid? What is the best way to physically transfer the heat? Given drilling costs, what is the right target rock temperature for heat transfer? These questions are still unresolved. Different answers will give you a different technical approach. Let’s talk about the four different concepts people are working on right now, including their strengths and weaknesses, before turning to the bottlenecks in the industry.

Concept #1: Enhanced geothermal systems

Like today’s conventional geothermal (“hydrothermal”) systems, enhanced geothermal systems (EGS) feature one or more injection wells where water goes into the ground, and one or more production wells where steam comes out of the ground. Hydrothermal systems today not only need heat resources close to the surface, they require the right kind of geology in the near subsurface. The rock between the injection and production wells needs to be permeable so that the water can flow through it and acquire heat energy. The rock above that layer needs to be impermeable, so that steam doesn’t escape to the surface except through the production wells.

EGS starts with the premise of using drilling technology to access deeper heat resources. This makes it viable in more places than hydrothermal, which relies on visual evidence of heat at the surface for project siting. If you see a volcano or a geyser or a fumarole, that might be a good location for a conventional hydrothermal project. But there are only a limited number of such sites, and if we want to expand the geographic availability of geothermal we have to use deeper wells to access heat sources that are further below ground.

Once we have our deeper wells, we need a way for water to flow between them. Fortunately, since 2005, petroleum engineers have gotten good at making underground fracture networks. By using modified versions of the fracking techniques perfected in the shale fields, geothermal engineers can create paths of tiny cracks through which water can flow between the two wells. This fracture network has a lot of surface area, which means it is relatively good at imparting heat energy to the water.

EGS has some advantages over the other next-generation geothermal concepts. From a technical perspective, it is not a big leap from existing hydrothermal practice, so the technology risk is low. In addition, the high surface area of the hot underground fracture network is good for creating steam.

Yet today’s EGS also has a disadvantage relative to the other approaches. Because the system has an open reservoir exposed to the subsurface, most EGS projects plan to use water as a working fluid. Water does not become supercritical until it reaches 374ºC (and 22 MPa). Using today’s drilling technology, EGS projects usually will not reach these temperatures, because it costs too much to drill to the required depths. Fluids in their supercritical states have higher enthalpy than in their subcritical states, so depth limitations mean EGS can’t bring as much heat energy to the surface as it could if it had access to a supercritical fluid.

Even so, EGS is promising. This year, Fervo raised a $28M Series B to pursue this approach. It also signed a deal with Google to power some of its data centers, part of the search giant’s plan to move to 100% zero-carbon energy by 2030.

Concept #2: Closed-loop geothermal systems

Imagine that, like EGS, you had an injection and a production well, but instead of relying on a network of fractures in the open subsurface to connect them, you simply connected the two wells with a pipe. The working fluid would flow down the injection well, horizontally through a lateral segment of pipe, and then up through the production well. Because such a system is closed to the subsurface, it is called a closed-loop system.

Relative to EGS, closed-loop systems have both advantages and disadvantages. A key advantage is that the working fluid can easily be something other than water. Isobutane has a critical temperature of 134.6ºC, and CO2’s is only 31.0ºC. Even with today’s drilling technology, we can reach these temperatures almost everywhere on the planet. Closed-loop systems offer the higher enthalpy associated with supercritical fluids at depths we can reach today. In addition, closed-loop systems work no matter the underlying geology, removing a risk that EGS projects face.

The big disadvantage of closed-loop systems is that pipes have much lower surface areas than fracture networks. Since heat is imparted to the working fluid by surface contact, this limits the rate at which the system can acquire energy. One solution is to use not just one horizontal segment but many, in a radiator-style design, with segments numerous and long enough to ensure adequate heat transfer.

The problem remains, however, that these radiator-style segments are expensive to drill with today’s technology. It is possible that with experience and better drilling techniques the cost could be reduced to make this approach viable. Closed-loop startup Eavor is pursuing this approach, starting with a project in Germany taking advantage of that country’s generous geothermal subsidies.

Concept #3: Heat roots

What if you could combine the advantages of closed loops—like the ability to use a supercritical working fluid—with a way to capture the heat from a much larger surface area than that of a simple pipe? That’s the goal of Sage Geosystems’s Heat Roots concept.

Sage starts with a single vertical shaft. From the base of the shaft, they frack downwards to create a fracture pattern that gives the impression of a root system for a tree. They fill this “root” system with a convective and conductive fluid. Then, using a pipe-in-pipe system, they circulate a separate working fluid from the surface to the base of the shaft and back. At the base of the shaft, a heat exchanger takes the energy concentrated by the heat root system and imparts it to the working fluid.

This “heat roots” approach enables a lot of the benefits of closed-loop systems, like the ability to use supercritical fluids, without the main drawback of needing long horizontal pipe segments. The roots draw in and concentrate heat from greater depths than the primary shaft. In other words, closed-loop’s problem of limited surface area is solved by doing additional subsurface engineering outside of the closed loop.

A disadvantage of a monobore, pipe-in-pipe design is the limited flow rate of working fluid. In the oil and gas industry, the widest standard well diameter is 9⅝ inches. It would be non-trivial to go wider than that—you would need special drilling equipment and new casing systems. The power output of the entire system is directly proportional to the flow rate, so the monobore heat roots design is constrained in this way.
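As a rough sanity check on why flow rate is the binding constraint (my gloss, not Sage's engineering model), the thermal power delivered to the surface by a single-phase working fluid scales as

$$P \approx \dot{m}\, c_p\, (T_{\mathrm{out}} - T_{\mathrm{in}}),$$

where $\dot{m}$ is the mass flow rate, $c_p$ the fluid's specific heat, and $T_{\mathrm{out}} - T_{\mathrm{in}}$ the temperature gain across the loop. Halve the achievable flow rate and you halve the power, no matter how hot the rock is.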

This may or may not be a problem. If the cost of constructing each individual well is low enough, then the solution would be to stamp out hundreds of thousands of these wells. What matters is the cost per watt and that the design is reproducible. It may be possible to make these or similar wells work almost anywhere by simply drilling deeply enough, although that is not yet proven.

Sage raised a Series A earlier this year and is currently working on a demonstration well in Texas. “Once we get through a successful pilot these next few months,” says Sage CTO Lance Cook, “we are off to the races.” In addition to its heat roots design, it is also studying a few other configurations.

Concept #4: Supercritical EGS

What if we had much better drilling technology? Put aside the fancy stuff, like horizontal segments—what if we could simply drill straight down into the earth much deeper and faster and cheaper than we can today?

This one capability would unlock a huge increase in geothermal power density. With depth come higher temperatures. If we could cheaply and reliably access temperatures around 500ºC, we could make water go supercritical. This would unleash a step-change in enthalpy, without the closed loops otherwise needed for supercritical fluids. By doing EGS (concept #1) in these hotter conditions, we could get the biggest benefit of EGS—a high surface area for transferring heat—along with one of the biggest benefits of closed-loop systems—the use of a supercritical working fluid. In addition to higher enthalpy, supercritical steam will produce higher electrical output by virtue of a higher delta-T in the generator cycle. The output of the cycle is directly proportional to the temperature differential between the steam and ambient conditions.
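To put rough numbers on that (an idealized illustration of mine, not a figure from any of these companies), the Carnot limit ties the maximum conversion efficiency to the hot and cold temperatures in kelvin:

$$\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}.$$

Going from roughly 200ºC steam (473 K) to 500ºC steam (773 K) against a 25ºC (298 K) ambient raises the ideal efficiency from about 37% to about 61%, before any real-world losses.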

The benefits of producing supercritical steam at the surface go beyond these physics-based arguments. A huge potential advantage would be the ability to retrofit existing coal plants. With many coal plants shutting down in the next several years, a lot of valuable generator equipment could be lying around idle. These generators take supercritical steam as an input and use it to produce electricity. The generators don’t care whether the steam comes from a boiler fired with coal or from 15 km underground. Piping steam from a geothermal production well straight into a coal plant turbine would allow the power plant to produce the same amount of electricity as it did under coal, except with no fuel costs and no carbon emissions.

Even if free generating equipment isn’t just lying around, supercritical geothermal steam could significantly increase the output and decrease the cost of geothermal electricity. The question is whether we can achieve the necessary cost reductions in ultra-deep drilling. Rotary drill bits struggle against hard basement rock. They break and then have to be retrieved to the surface, where they are repaired and sent back downhole. This process is time-consuming and expensive. Non-rotary drilling technologies like water hammers, lasers, plasma cutters, and mm-wave directed energy have all been proposed as ways to let us drill deeper faster. By optimizing for hot, dense, hard basement rock, we could drill much deeper than we can today.

The big downside of supercritical EGS is that these advanced drilling technologies haven’t been proven yet. The big advantage is what it could enable: high-density geothermal energy anywhere on the planet. Literally every location on the planet can produce supercritical steam if you drill deep enough into the basement rock—you may have to drill 20 km to reach 500ºC temperatures in some spots, but it’s there.

Quaise is an example of a company pursuing this supercritical EGS approach. The gyrotrons used in fusion experiments produce enough energy to vaporize granite. Quaise is commercializing mm-wave directed energy technology out of MIT’s Plasma Science and Fusion Center.

Policy is suboptimal but not a deal-breaker

Unlike nuclear fission, which is regulated to near-oblivion, geothermal faces relatively few policy obstacles. I will highlight two areas where policy could easily be improved, but even if these problems are not fixed, they will likely only slow, not stop, maturation of the next-generation geothermal industry.

The first issue involves permitting. While our goal for this technology should be to enable geothermal anywhere on the planet, the natural starting point for working down the learning curve is in areas where high temperatures are closest to the surface. If you look at a map of temperature at depth in the United States, you will notice that the best spots for geothermal drilling overlap considerably with land owned by Uncle Sam.

Drilling on federal lands involves federal permitting—which involves environmental review. Environmental review, mandated by the National Environmental Policy Act any time a federal agency takes a major action that could affect the environment, can take years.

Conveniently, the oil and gas industry got themselves an exclusion from these requirements. The effects of drilling an oil and gas well on federal lands are rebuttably presumed to be insignificant, as long as certain limitations apply—for example, the surface disturbance of the well is less than 5 acres. Oil and gas wells are very similar to geothermal wells, so it makes sense that they would have very similar environmental impacts. As I have written for CGO, simply extending oil and gas’s categorical exclusion to geothermal energy is an absolute no-brainer.

This permitting issue shows that the nearly non-existent geothermal lobby is (surprise!) less effective than the oil and gas lobby. It may also be less effective than the wind and solar lobbies. Geothermal execs have complained that tax subsidies for geothermal are lower than for wind and solar. I am no tax expert, but if I am reading Section 48 of the tax code correctly, there is a 30% tax credit for utility-scale solar and only a 10% credit for a geothermal plant—that’s a big disparity. (There is also a 30% tax credit for investing in a facility to produce geothermal equipment and a 10-year 1.5¢-per-kWh subsidy for geothermal plants that break ground in 2021. [Update: It’s actually a 2.5¢/kWh subsidy because there is mandatory inflation adjustment and the basis is 1992. Hat tip: SW]).

Neither permitting barriers nor inadequate subsidization are likely to hold back geothermal forever. There are ways, however inconvenient, around the permitting obstacles, like operating on private lands. An unfavorable subsidy environment relative to solar might mean a slower start as financiers dip their toes into geothermal waters more gradually, or it might mean that projects move to Germany, where geothermal feed-in tariffs are quite generous. Even if they aren’t dealbreakers, we ought to fix these policy mistakes so that we can reap the benefits of abundant geothermal energy sooner rather than later.

Technologies that could accelerate deployment

Although some of the geothermal concepts I discussed above will work using today’s technology, there remains R&D to be done to unlock the others, and there are advances to be made that would help all players.

The first area where technical development is needed is in resource characterization—the ability to predict where the heat is in the subsurface and what geology surrounds it. Better predictions reduce project risk and reduce up-front exploration costs. Imagine you are drilling a geothermal well and it is not as hot as you expected it to be. Do you keep drilling and go deeper? Do you give up and drill somewhere else? Either way, it’s expensive. With more accurate predictions, we can keep these cost surprises under better control.

Machine learning is one possible way to crack resource characterization. The National Renewable Energy Laboratory has laid some good groundwork on machine learning and geothermal resources, and a startup called Zanskar is using what appears to be a similar approach. In addition to ML, bigger and more granular data sets as well as new sensor packages that could shed more light on subsurface conditions would be helpful.

Next: we need to harden rotary drill bits and other downhole equipment for geothermal conditions. Geothermal drilling involves higher temperature, pressure, vibration, and shock than oil and gas drilling. Since oil and gas represents the lion’s share of the drilling business, today’s bits aren’t optimized for geothermal conditions. A modern bottom hole assembly includes a drill bit and also equipment for electricity generation, energy storage, communication and telemetry, and monitoring and sensing. It’s a lot of electronics.

Fortunately, NASA and others in the space industry are already working on suitable high-temperature electronics. To land a rover on a planet like Venus or Mercury, or to send a probe into the atmosphere of a gas giant like Jupiter, we need motors, sensors, processors, and memory that will not fail soon after they encounter high heat and pressure. Venus’s average surface condition is 475ºC and 90 Earth atmospheres—if it works on Venus, it will work in all but the most demanding geothermal applications.

Third: we need to mature non-rotary drilling technologies. While polycrystalline diamond compact drill bits are now enabling next-generation geothermal applications for the first time, non-rotary concepts could allow us to cost-effectively go deeper through even harder rock. Non-rotary drilling concepts include water hammers, plasma bits, lasers, mm-wave, and even a highly speculative tungsten quasi-“rods from God” idea from Danny Hillis.

Fourth: technologies to support the use of supercritical fluids. Turbines need to be specially designed for supercritical fluids. While turbines already exist for supercritical water, new designs are necessary for lower-temperature fluids like supercritical CO2. In addition, supercritical fluids tend to be more corrosive than their subcritical counterparts, as well as under higher pressure, and so new coatings and casings may be needed to contain them in the subsurface.

There are other possible improvements, but if we can solve several of the above issues, my expectation is that we would generate a robust and self-sustaining industry that can self-fund the further development needed to make next-generation geothermal energy an absolute game-changer.

What’s next?

In an industry ruled by learning curves, what matters most is gaining experience in the field. We need all the companies working on innovative geothermal concepts to drill their demo wells and learn from them, so that they can move on to full-size wells and learn from those, so that they can operate at scale and learn from doing that, so that they can drive down costs (eventually) to almost nothing.

The rest of us should help them.

I have argued that the policy barriers, especially relative to fission, are not dealbreakers. But I continue to work to find policy solutions, because even non-dealbreaker problems can slow down progress. Policymakers who read this and want to learn more are welcome to reach out to me.

Adam Marblestone and Sam Rodriques have proposed Focused Research Organizations to tackle technological development problems not suited for either a startup, an academic team, or a national lab. Often, these problems arise when there is a high degree of coordinated system-building required and when the solutions are not immediately or directly monetizable. Some of the technology problems I described above, like producing a comprehensive dataset of subsurface conditions, developing temperature-hardened drilling equipment, or building systems to support supercritical fluids, may fit that bill. A geothermal-focused FRO supported by $50–100 million over the next 10 years could significantly accelerate progress.

If you want to learn more about progress in geothermal, I highly recommend registering for the upcoming PIVOT2021 conference, being held virtually July 19–23. It’s a comprehensive overview of the entire industry, and totally free. Yours truly is moderating the panel on regulatory and permitting challenges.

If we play our cards right, human civilization could soon have access to a virtually inexhaustible supply of cheap and clean energy. Shouldn’t we pull out all the stops to get there?


Earth is trapping twice as much heat as it did in 2005.

graydon (83 days ago):
"quite alarming in a sense"

IN A SENSE

Knowledge Graphs

Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d’Amato, Gerard de Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, Antoine Zimmermann

In this article, we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data. After some opening remarks, we motivate and contrast various graph-based data models, as well as languages used to query and validate knowledge graphs. We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques. We conclude with high-level future research directions for knowledge graphs.