Formalising Graphs with Coinduction, by
Donnacha Oisín Kidney and Nicolas Wu (POPL 2025).
“We argue in the paper that graphs are naturally coinductive, rather
than inductive, and that many of the problems with graphs in
functional languages go away once you give up on induction and
pattern-matching, and embrace the coinductive way of doing things.”
One can choose to focus on the car crash, or the lessons learned from the car crash.
Let’s do a little of both.
The proposition of the Living Computer Museum was initially simple, and rather amusing in a Slashdot-baity sort of way: You could apply to get an account on a real, actual ancient Mainframe hooked up to the Internet, which meant you could literally connect into real, actual ancient hardware. I assure you that to a segment of the population, this is an irresistible proposition. It’s also, ultimately, one that even the most ardent fans of “how it was” will drift away from, because mainframes are their own wacky old world, like using a taffy-pull hook, and appeal on a day-to-day basis to a relative handful of die-hards.
But in 2011, the Living Computer Museum announced itself with the kind of slick webpage that promised computer history buffs a new wonderland.
There’s a lot to take in with this verbiage, but let’s keep going, for now.
In 2012, the Living Computer Museum opened its actual, physical doors to the public in Seattle. A year later, I visited. It was rather nice. I got a grand and lovely tour by some very polite and friendly people, some of whom I’d known for years. The rooms, well appointed, well-lit, and in the case of the machine rooms, done to a sparkling arrangement, clamored for my attention and approval.
There were computer museums for years, decades before the Living Computer Museum, scattered across the United States and the world. I’ve been to many of them – sometimes as an honored guest and backstage VIP, and sometimes just because I hastily read my “find me weirdo geek stuff” Google Maps results and negotiated public transit to walk in the door and stroll around like a strutting mayor to see what’s what. In that pantheon, I would put LCM in the realm of “very well appointed, especially the mainframes” but not in the realm of “nobody, ever, has ever tried anything like this”. Just off the top of my head, the Computer History Museum in Mountain View has brought not just mainframes and historical computers back from oblivion, but hosted events in which their maintainers and figures have gotten a chance to get their stories on record.
Still, one couldn’t argue with it – the LCM was a solid new outpost in Telling Computer History, and it was on my shortlist of potential homes for materials I might donate in the future. I had a lovely conversation with a member of management about how they maintained functionality and also what their contingencies were for any sort of “endgame” that might befall the endeavor. He assured me they had storage aplenty, in the extremely unlikely event they had to face a closure. Ultimately I didn’t donate my materials there, simply because I found nearer geographic or mission-aligning homes.
My next, and it turns out last, major visit to the Living Computer Museum (outside of a few drive-bys) came in 2018, when I spoke at an event being held there. This gifted me a chance to see how things had changed in five years, and changed they had.
Here, then, was something approaching a dream. We had the display cases of technology and the clean carpets and cathedral ceilings of available space, filled not just with computers on desks but entire computer-related environments: classroom, arcade, basement, that showed youngsters what sort of context these computers had lived in. I met Cynde Moya, who had been described to me years earlier as a hard-nosed disciplined caretaker of the materials, and who turned out to be everything they said except hard-nosed – a truly dear overseer of things physical within these walls.
This was April 2018. I lived in the world I live in all the time: full of hope, equally full of expectation of doom. But hope was winning.
In October of 2018, Paul Allen was dead.
I suppose everyone has opinions on billionaires, their position in the world, what they represent, and maybe almost everyone has, spoken or unspoken, an awareness that the level of money billions represents disengages you, whether you think so or not, from humanity.
I mean, sure, they cry, laugh, stub their toe, wonder if that’s rain coming, get surprised when the killer’s revealed in a well-done noir thriller. But there’s this cloud of wealth that is ever-present and depending on how it is maintained, or manifested, it slowly bleaches out the edges of living until you realize there is a shocking serenity in their countenance, a noise-cancelling blur that surrounds them, because there are always people whose job is to be aware of them and whose job is to ensure the serenity is maintained.
And there’s the sheer spending power of a billion. I’m sure I could toss out a hundred funny examples of the sheer numerical force of a billion dollars. For example, you could have someone buy a 2025 Subaru Outback, itself the rough amount on the lower end (nationally) of a Domino’s Pizza General Manager’s yearly salary before taxes, and push it into a lake. Taking Christmas and the pusher’s birthday off, to run through a billion dollars of Subaru Outbacks would take you 61 solid years.
But that’s not even very accurate. During that 61 years of pushing Subaru Outbacks into a lake (in Neutral, of course, I’m not a monster), the remaining amount of the money not being spent to purchase lake-bound Outbacks would actually be MORE. THAN. A. BILLION. You would actually be richer than you started, if you invested it in even the most brain-dead obvious multi-percent-a-year funds, or left it in any bank. That’s because time and space warp around money.
So do people.
In 2018, after bouts of cancer in 1982 and 2009, Allen got it again, and this third (public) fight with it ended him, at age 65. His net worth at the time of his demise is the kind of number that publications which make it their business will guess at, and the guess sat around $20 billion. Everyone has kind of accepted that, but it’s not really important if it’s $50, $15 or $5 billion. It’s a lot of money.
With his billions, over the decades, Allen owned or partly owned three sports teams, and at least 10 companies ranging across airplanes, scientific research, media, and space flight. He owned massive real estate, a movie theater, threw buckets of cash at schools, the arts, and ultimately, a few museums.
To manage it all, he had a company whose purpose was to manage the money, because he wasn’t going to do it himself. The company’s name is Vulcan, Inc. Even after divestment and shutdown after Allen’s death, it has 700 employees. It is one of the largest trusts in the world. It is also now called Vale Group, but I’m going to keep calling it Vulcan.
Vulcan, a company whose job is to manage money and where that goes, is still quite intensely active as an entity, even with Paul Allen’s death. His sister, Jody Allen, chairs the organization she co-founded with her brother in 1986. This company is absolutely gargantuan in its scope and range, and remember, it is a company that just manages a ton of money – that’s the company’s entire purpose.
I wish to take a short moment to not demonize Jody Allen. Being the remaining sibling in charge of this company, functionally working through her brother’s holdings, many of which she clearly had no interest or extant position in, and doing so for six years and counting, can’t be anything other than the strangest mixture of pain and endless complication. The $20 billion she’s now in charge of might soften the blow, but likely not by much.
And I apologize for the whiplash here. I’m not overly interested in going down the paths of describing what Jody should be doing, or what her responsibilities are. I am not privy to whatever deep law and lore and functions buried within Vulcan’s iron heart she and her army of people are dealing with, that six years later they are still slowly divesting organizations. I know she has personally called for cullings but I am not informed about the deepest depths of how the aspects of these billions function.
I want to refocus to the fact that Vulcan is divesting itself of the Living Computer Museum.
I want to refocus to the fact that the Living Computer Museum was never a museum.
Look, don’t jump down my throat about this. I’m as shocked as you are.
I didn’t get some inkling from the phone call I had a decade before about potentially donating to Living Computer Museum. I don’t have some spidey-sense about failure or darkness – I just see it everywhere in everything and I treat every day like it’s the one before they find a lump.
If, before it closed, you had made me stand in the center of the LCM and answer whether it was a museum, I’d have happily held up my top hat and shouted “Why yes! One of the finest in all the realms!”
And for all I knew, it was a museum. There are no laws on what a thing can call itself regarding being a museum, exhibition, tour, or display. Sure, it’s against the law to take money for museum admission and then lead people to an empty lot. But if something has the vestments and affectation of a museum, and you see the big sign saying Museum out front and you go in and there are displays and staff and events and meetings, you would certainly think it was a museum.
Turns out it wasn’t.
The auction house Christie’s announced it was doing an auction of some of the best portions of Paul Allen’s estate, first with art, and now it would be selling off technology. To fans and studiers of the Living Computer Museum, these items seemed familiar – some of them were on display in the museum.
I found out that some of these to-be-auctioned items would be on display near where I live, so I hopped in my car, parked grandiosely illegally in front of the building, and let myself in.
On display were all sorts of computer history of the “early major days” variety, ranging from a Xerox Star to an Engelbart mouse and even a Pac-Man machine. All had descriptions, all were held out where you could get very near them assuming the guard wasn’t watching too closely, and all of it with a starry-eyed disco-ball aesthetic indicating you were in a space lounge.
So yes, items from the museum were now going to be sold to the highest bidder in a few months. These were some of them, and likely there would be more.
And then, among the relics, I found Stephen Jones.
Stephen Jones and I go a long way back. I interviewed him for the BBS Documentary in 2003, so that’s 20 years I’ve known the guy. At our interview session, he showed me his PXL-2000 and his DeLorean, both of which I can assure you were high-end geek possessions of the time. And I found his conversation style charming, and his passion for technical subjects perfect for my film.
Stephen’s primary non-profit endeavor, the SDF Public Access UNIX system, was in full swing back then, and is still around now. “Do cool computer things, but don’t try to turn everything into a financial instrument” is not the motto, but it should be. It represents, to me, a lovely ideal of a truly living computer community, incorporating new technology as it proves itself to be something neat, or worth bringing in. Laptop Amish, if you’re stretching around for an analogy, which you shouldn’t be. It’s a computer club. And he’s a big part of it.
He was also my initial “in” for the dish on the now-not-so-Living Computer Museum. We’d come a long way, and here we were, in Christie’s in New York City, standing among offered-for-auction pieces of a gone billionaire’s museum, and maybe we needed to talk in detail, so we did. And I grilled him.
Let’s go back to that initial 2011 announcement page on the Living Computers Museum website, before it was officially an open building.
In the light of its piecemeal disassembly, a lot of this writing hits different. Here’s what I’ve picked up, from Stephen and others, and which I assume for some readers will be new information.
First, this was always intended to be an actual, non-profit, independent/independent-subsidiary museum. It never became that. It is not that. At best, the Living Computer Museum was a billionaire’s (sorry, multi-billionaire’s) collection of computers and technology.
Think of it another way: Jay Leno has a very famous garage that has many wonderful cars and vehicles in it. It is currently, as of this writing, run by a very competent manager named Bernard Juchli, who has been keeping the collection in line for over 20 years. That is to say, Leno has a massive collection of cars, competent staff to take care of it, facilities to show them (he has had a number of documentaries and television shows about the garage) and he could probably even open it to the public for an admission price. But you would be hard pressed to call it a “museum” in any sense beyond “it is a display people see videos about”.
As per the request in that 2011 webpage grab, people donated many computers and pieces of history to go into the museum. They expected, no doubt, that they and their children and their children’s children could stop by displays and point and talk about the family connection to these items.
That was a mirage, a misunderstanding.
This was Paul’s collection of computers, aided by friends and fans. It was always that. The Living Computer Museum, it turns out, cost millions, over ten million a year, to operate as it was set up, and it never came close to making that amount of income from door sales and t-shirts. It never came close to even figuring out how it would.
It’s perhaps not surprising this misunderstanding could happen, and with what felt like all the time in the world, a mere five years and change of being open to the public might have seemed like a mere revving up for the grand plan of what would come next. But nothing came next. It was a collection you could walk through in a really, really nice display case.
And now it’s time to sell the collection.
Before I end up on a huge rant about what it means to be a museum, let me at least give you some relative good news.
The current rough plan is that the big-ticket items, like this Apple I, will be sold at auction. They’ll make beaucoup bucks, go into private collections or maybe even other institutions, and that’s that for the headliners. But that leaves a lot of other stuff.
SDF has a rough plan to raise funds and digitize software, documentation and other related items that are not part of these big-ticket sales (which will be relatively few) so that unique bits of history that are not headline-grabbing trinkets will, potentially, have a chance at being shared with the public.
And I’ve stepped into it. I’ve offered to take software and documentation and store it at the Internet Archive, and of course to work with SDF on ways to digitize/rip/scan materials and, also, host them at the Internet Archive.
It’s all preliminary, but my card is in the hat. Things are not going to be discarded, if I have anything to do with it. A lot can change, and we’ve already seen expectation vs. outcome with this whole experience, but rest assured, I’ve at least tried to provide one last safety net for the kind of history that easily finds itself gone because nobody steps forward. Internet Archive has stepped forward. Stay tuned to see how that works out.
But you know, what did we learn here? What did we actually come away with in the realm of lessons or teachings about what issues this whole mishegaas has churned up?
I have found, perhaps not to my surprise, that a few people in my orbit are displeased I have punched the dead billionaire, ostensibly about all the good the dead billionaire did. But I wish to point out the dead billionaire did give away all that money and yet still had 20 billion dollars at the end, money that went off and did not do as many nice things as the money given away. So much contributed, tax-free, and yet there’s even more. Perhaps this is not the best place to point out that every billionaire is, in many ways, a failure of society, but maybe we can put a pin in it when we have our first trillionaire, assuming that criticizing a trillionaire is not made illegal at that soonish juncture.
I did have some people say, in not so many words, that of course the Living Computer Museum would collapse once its Master of the House was no longer in charge, and I would take this time to point out that Gordon Bell, co-founder of the Computer History Museum, passed on in May of 2024 and yet the Computer History Museum lives and breathes in his absence. It wasn’t a hard problem.
It’s easy to call me ungrateful, but my ungratefulness comes from the narrative people weave near the power of money, like those myths formed around gods and devils, to explain why things are how they are. I appreciate life and its little whorls and eddies that capture us by surprise and we sink or swim. I do not appreciate acting like someone spending a lot of money on something that slightly amuses them earns them the greatest respect. Especially when, as I do now, I walk among a literal army of people working to clean up the mess left behind, after their march to oblivion.
OTTAWA – It was with a mix of embarrassment and concern that economists announced yesterday that almost the entire North American GDP, long thought to be a mature economy, is actually just 26,000 Slack messages in a trench coat.
“Upon initial analysis, the North American GDP appeared like it was grown, however when given a gentle shove and seeing how easily it toppled over, we realized what we thought was a highly advanced technical economy was in fact just tens of thousands of Slack messages stuffed inside a large garment,” explained a Senior Economist at the Brookings Institution.
According to economists the trench coat also included thousands of emails, project management software tickets, Google Docs comment threads, and other forms of useless make-work intended to create the illusion of a robust economy.
“It’s a simple premise really: when taken on their own, it’s easy to see these are not mature markers of economic measurement; however when a number of them sit on each others’ shoulders in large enough quantity and wrap themselves in a large wrapper like a trench coat or a tech company, they are able to pass themselves off as a fully grown industrial economy and sneak into the OECD, G8, and other adult economic circles they’re usually not allowed into.”
While some economists expressed that this is cause for concern, others were more optimistic.
“These Slack messages might seem small now but they really show a ton of potential,” explains Erika Redding, an analyst for a venture capital fund. “Maybe one day they will grow up and make something of themselves and you know… create something,” added Redding.
“This period of economic growth is often just a phase,” explains Economic Psychologist Gregory Langford. “Economies need time to grow and realize their dreams of turning Quantum AI powered micro-targeted digital synthetic influencer advertising into infinite money will probably never happen, and eventually settle down and do something more realistic, like making steel beams, and become real contributing members of the economy.”
“What’s important is that we never lose confidence in them, or else they could have a total meltdown.”
According to economists, the 26,000 Slack messages, which had long successfully passed themselves off as economic growth, were caught when trying to pay with a parent company’s credit card which was already overdrawn by 6 trillion dollars.
I’ve been writing Go pretty casually for years – the backends for all of my
playgrounds (nginx, dns, memory, more DNS) are written in Go, but many of those projects are just a few hundred lines and I don’t come back to those codebases much.
I thought I more or less understood the basics of the language, but this week
I’ve been writing a lot more Go than usual while working on some upgrades to
Mess with DNS, and ran into a bug that revealed I
was missing a very basic concept!
Then I posted about this on Mastodon and someone linked me to this very cool
site (and book) called 100 Go Mistakes and How To Avoid Them by Teiva Harsanyi. It just came out in 2022 so it’s relatively new.
I decided to read through the site to see what else I was missing, and found
a couple of other misconceptions I had about Go. I’ll talk about some of the
mistakes that jumped out to me the most, but really the whole
100 Go Mistakes site is great and I’d recommend reading it.
Here’s the initial mistake that started me on this journey:
mistake 1: not understanding that structs are copied on assignment
I fixed the bug by changing it to something like this (play.go.dev link), which returns a
reference to the item in the array we’re looking for instead of a copy.
func findThing(things []Thing, name string) *Thing {
	for i := range things {
		if things[i].Name == name {
			return &things[i]
		}
	}
	return nil
}
why didn’t I realize this?
When I learned that I was mistaken about how assignment worked in Go I was
really taken aback, like – it’s such a basic fact about how the language works!
If I was wrong about that then what ELSE am I wrong about in Go????
My best guess for what happened is:
I’ve heard for my whole life that when you define a function,
you need to think about whether its arguments are passed by reference or
by value
So I’d thought about this in Go, and I knew that if you pass a struct as a
value to a function, it gets copied – if you want to pass a reference then
you have to pass a pointer
But somehow it never occurred to me that you need to think about the same
thing for assignments, perhaps because in most of the other languages I
use (Python, JS, Java) I think everything is a reference anyway. Except for
in Rust, where you do have values that you make copies of but I think most of the time I had to run .clone() explicitly.
(though apparently structs will be automatically copied on assignment if the struct implements the Copy trait)
Also obviously I just don’t write that much Go so I guess it’s never come
up.
mistake 2: not understanding that slices share a backing array
When you subset a slice with x[2:3], the original slice and the sub-slice
share the same backing array, so if you append to the new slice, it can
unintentionally change the old slice:
x := []int{1, 2, 3, 4, 5}
y := x[2:3]
y = append(y, 555)
fmt.Println(x) // prints [1 2 3 555 5] – the append overwrote x[3]
I don’t think this has ever actually happened to me, but it’s alarming and I’m
very happy to know about it.
Apparently you can avoid this problem by changing y := x[2:3] to y :=
x[2:3:3], which restricts the new slice’s capacity so that appending to it
will re-allocate a new slice. Here’s some code on play.go.dev that does that.
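That play.go.dev code isn’t reproduced here, but a minimal runnable sketch of the full slice expression fix looks like this:

```go
package main

import "fmt"

func main() {
	x := []int{1, 2, 3, 4, 5}

	// The full slice expression x[2:3:3] gives the new slice len 1
	// and cap 1, so the append below can't reuse x's backing array
	// and must allocate a fresh one.
	y := x[2:3:3]
	y = append(y, 555)

	fmt.Println(x) // [1 2 3 4 5], x is untouched this time
	fmt.Println(y) // [3 555]
}
```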
mistake 3: not understanding the different types of method receivers (#42)
This one isn’t a “mistake” exactly, but it’s been a source of confusion for me
and it’s pretty simple so I’m glad to have it cleared up.
In Go you can declare methods in 2 different ways:
func (t Thing) Function() (a “value receiver”)
func (t *Thing) Function() (a “pointer receiver”)
My understanding now is that basically:
If you want the method to mutate the struct t, you need a pointer receiver.
If you want to make sure the method doesn’t mutate the struct t, use a value receiver.
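As a quick sketch of that difference (my own example, not one from the book):

```go
package main

import "fmt"

type Counter struct{ n int }

// Pointer receiver: the method operates on the caller's struct,
// so the mutation sticks.
func (c *Counter) Incr() { c.n++ }

// Value receiver: the method operates on a copy, so the caller's
// struct is unchanged after the call.
func (c Counter) IncrValue() { c.n++ }

func main() {
	var c Counter
	c.Incr()         // mutates c itself
	c.IncrValue()    // increments a copy; c is untouched
	fmt.Println(c.n) // prints 1
}
```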
Explanation #42 has a
bunch of other interesting details though. There’s definitely still something
I’m missing about value vs pointer receivers (I got a compile error related to
them a couple of times in the last week that I still don’t understand), but
hopefully I’ll run into that error again soon and I can figure it out.
there are lots of notes about how to use contexts, channels, goroutines,
mutexes, sync.WaitGroup, etc. I’m sure I have something to learn about all of
those but today is not the day I’m going to learn them.
Also there are some things in there that have tripped me up in the past.
I really appreciated this “100 common mistakes” format – it made it really
easy for me to skim through the mistakes and very quickly mentally classify
them into:
yep, I know that
not interested in that one right now
WOW WAIT I DID NOT KNOW THAT, THAT IS VERY USEFUL!!!!
It looks like “100 Common Mistakes” is a series of books from Manning and they
also have “100 Java Mistakes” and an upcoming “100 SQL Server Mistakes”.
Also I enjoyed what I’ve read of Effective Python by Brett Slatkin, which has a similar “here are a bunch of
short Python style tips” structure where you can quickly skim it and take
what’s useful to you. There’s also Effective C++, Effective Java, and probably
more.
there is literally no way you can design a PL with mutation semantics that users won't regularly misinterpret
(this is not to criticize Julia here, it's just a real bummer of PL design, every point in the design space has a remarkable number of conceptual footguns! for every user who thinks things _do_ remain aliased, there's an equal number who think they _don't_, and the behavior varies by language so there's tons of programs in every language with bugs due to believing the wrong one)
arXiv:2408.03492v1 Announce Type: new
Abstract: In this paper we demonstrate how logic programming systems and Automated first-order logic Theorem Provers (ATPs) can improve the accuracy of Large Language Models (LLMs) for logical reasoning tasks where the baseline performance is given by direct LLM solutions. We first evaluate LLM reasoning on steamroller problems using the PRONTOQA benchmark. We show how accuracy can be improved with a neuro-symbolic architecture where the LLM acts solely as a front-end for translating a given problem into a formal logic language and an automated reasoning engine is called for solving it. However, this approach critically hinges on the correctness of the LLM translation. To assess this translation correctness, we secondly define a framework of syntactic and semantic error categories. We implemented the framework and used it to identify errors that LLMs make in the benchmark domain. Based on these findings, we thirdly extended our method with capabilities for automatically correcting syntactic and semantic errors. For semantic error correction we integrate first-order logic ATPs, which is our main and novel contribution. We demonstrate that this approach reduces semantic errors significantly and further increases the accuracy of LLM logical reasoning.
arXiv:2408.03350v1 Announce Type: new
Abstract: We introduce miniCTX, which tests a model's ability to prove formal mathematical theorems that depend on new definitions, lemmas, or other contextual information that was not observed during training. miniCTX contains theorems sourced from real Lean projects and textbooks, each associated with a context that can span tens of thousands of tokens. Models are tasked with proving a theorem given access to code from the theorem's repository, which contains context that is helpful or needed for the proof. As a baseline for miniCTX, we introduce file-tuning, a simple recipe that trains a model to generate a proof step conditioned on the preceding file contents. File-tuning substantially outperforms the traditional neural theorem proving approach that fine-tunes on states alone. Additionally, our file-tuned model improves performance on the standard miniF2F benchmark, achieving a pass rate of 33.61%, which is a new state-of-the-art for 1.3B parameter models. Alongside miniCTX, we offer ntp-toolkit for automatically extracting and annotating theorem proving data, making it easy to add new projects into miniCTX to ensure that contexts are not seen during training. miniCTX offers a challenging and realistic perspective on evaluating neural theorem provers.