761 Comments
David Piepgrass:

"What The Western Mind Doesn't Get About Putin's War"

I love this bit:

"The nuclear risk could be low, it could be higher, but to say that it's for effective purposes zero simply makes no sense. I tell you what, it's an expression of a prejudice that, in fact, expresses so many things that we get wrong, and this is a really important prejudice, probably the most important prejudice we've mentioned on this channel, and it's this: humans tend to think that the future will be like the recent past.

Humans tend to think that the future will be like the recent past. It won't. It won't.

The future will be like the past, and that's very different to the recent past."

...

"The previous stuff we discussed — you know, Putin's plan for the continuation of this war, for a second invasion of Ukraine, potentially — that's to say we're not used to thinking of somebody with a crazed world view that's partly prudential, partly sort of soaked in mysterious fake spiritual civilizational ideas ... engaging in brutal territorial war, and we need to understand that's not an aberration, that's history. History is full of these things. Putin is *normal* historically. He seems abnormal to us because we are confusing the recent history with human history."

https://www.youtube.com/watch?v=DozlFOCb4nQ

David Piepgrass:

And this:

> Putin wants to reformat the international order, and he can't do that because nobody will let him.

> ... but moving beyond this point we've got a really thorny dilemma, and that's that if Putin does really, really badly and experiences military defeat in the Donbass battle, he will not stop the escalation. If anything he will say this just proves that I am fighting against NATO, that's why I'm losing in this Ukraine war, because it's not a war with Ukraine and I knew I was fighting against NATO... and it's time to escalate. The problem we've got is that the opposite scenario where Putin stops doing so disastrously and has a modicum of success in the battle of the Donbass... that also won't stop him from escalating... because this is not just about Ukraine. Even full regime change in Ukraine wouldn't have been enough, and Putin is getting nothing like it, so it will be difficult for Putin to stop even if he manages to achieve a bit more...[1]

> this means that the proposals you're hearing from many quarters to make more concessions to Putin so that he at least stops and goes back into his box, that's not a viable proposal... the worse Putin does the more he'll escalate, and the better Putin does the more he'll escalate, and it's very important therefore to acknowledge that we don't have any kind of key that we can turn to put this problem back into its box.

> What's the correct approach? Well, the correct approach is about evaluating nuclear risk while supplying Ukraine militarily as hard and as fast as possible and doing everything, globally, with as big a global alliance as possible, to isolate the Putin regime. One of the central aims here is to bring about that the Russian people are more likely to see a future without Putin (which at the moment they don't, the Russian people see no future without Putin) and they've also got to see clearly that there is no future with Putin. This is very, very far from the picture we've got today, but we've got to push the situation in that direction. And the way we do that is probably not by making concessions, and not by sort of standing back and allowing Ukraine to give its best go with minimal support, it probably means maximally supporting Ukraine short of direct military engagement with Russian forces, and it means maximizing our alliance against the Putin regime globally.

https://www.youtube.com/watch?v=kGwqEYKXz4Q

I think something important is missing here though. I don't know what. But sanctions and a defeat in Ukraine don't seem like they will inspire Russia toward new leadership. To the contrary, while sanctions somewhat disempower the Putin regime, both in Ukraine and domestically, I think sanctions will make ordinary Russians feel even more disempowered, and Putin's propaganda will continue having a lot of success selling the idea that this is all the fault of NATO and Western Nazis. How can the Russian people be given a vision of a future without Putin, within their new high-censorship environment?

[1] I don't think Vlad appreciated how poorly the Russian military continued to fare in Ukraine, not only because this is not his area of expertise, but also because this fact wasn't very clear until after the video was published. I think he's correct that Putin is willing to escalate, albeit held back by the risk of regime collapse if he attempts general mobilization, but Putin simply doesn't have the resources to escalate a conventional war right now. At most, I suppose Putin might do a lot more mass murder in occupied territories, and consider a very limited use of WMDs. But his conventional forces are weak compared to NATO-assisted Ukrainians. I was quite worried Russia might complete an encirclement in Donbass, but their efforts were underwhelming over the last 6 weeks, and on May 8-9 they suffered a major, preventable defeat while trying to cross the Siverskyi Donets River, which demonstrates that management of Russian operations has not improved much since their defeat at Kyiv. Also, luckily, Western governments have largely chosen to take the path Vlad is suggesting.

David Piepgrass:

Also this:

> I would like to make a statement in my capacity as a moral philosopher about human beings who are killed during war, and what I want to say is that there's a prejudice we fall into, and it's this: when the number of people killed in war gets above a certain number, we tend to care less about how many it is who have been killed. And there's something wrong with that. And what's wrong with it is that numbers matter ethically.

> What I mean by this isn't that there is some philosophical theory that tells us that killing more people is worse than killing few people, that utilitarian philosophies argue for this, no. I'm not saying that.

> What I'm saying is that killing 10 people is a crime 10 times worse than killing one person. What I'm saying is that ethically we have to insist, and it's politically constructive for us to insist, on it being the case that when the Russians kill a thousand Ukrainians, that's a crime ten times worse than the crime, the appalling crime, of killing a hundred Ukrainians.

> Numbers ethically matter.

> This is because every human life is real, and every human life matters equally.

> This hasn't been something that human beings used to think in the past. But one of the features of the modern world, that's to say the western world, over the last three centuries, but also much of the rest of the world, one of the central features — maybe the central feature — of the way we ethically relate to the world over the last two or three hundred years is that we think that each person is of equal moral worth. And that this is as fundamental and ethical a feeling as anything that we have got, and that's why actually modern slavery has seemed so abhorrent to us, because we already knew better. We felt like we knew better.

> During war we often lose perspective when we move from five thousand to ten thousand to twenty thousand to fifty thousand dead, but each number matters.

> Harry and Susan and Ricky and Samantha are all people whose lives are real, and whether two of them are killed or four of them are killed makes a world of difference. What's the difference? Well, the difference is that out of the four, *two live*.

> That's a world of difference and that's why numbers matter. Numbers during war matter, and the more people you kill, the ethically worse your predicament is. Numbers matter such that a thousand dead is always twice worse than 500 dead, 10 times worse than a hundred dead, a hundred times worse than ten dead, a thousand times worse than one dead. That's something we've got to insist on, because we get anesthetized and start feeling that when numbers cross a certain level it's all equally as bad. And I think it's super healthy to say no, it's not.

https://www.youtube.com/watch?v=7eHetWKCHic

From my perspective he's kind of turned the typical conversation about utilitarianism around here, and correctly so: it's not that we value human life because we believe in utilitarianism, no; rather we believe in utilitarianism because we value human life. The causality arrow flows *from* caring *to* utilitarianism.

And then, you know, there's this lovely juxtaposition of that with videos like "Why small problems matter (even during 🇺🇦 war)" (https://www.youtube.com/watch?v=ncycNILFA4E) because, you know, the little things matter too. The video isn't saying anything profound but still, it's lovely.

NLeseul:

I'm a bit late to this post, but I do play a UI engineer for video games in real life, so that second point is pretty pertinent to me.

Something that's been on my mind recently is the fact that there just aren't any standardized tools or processes for building UI that all developers in the UI space are expected to be familiar with and can build upon. Any time you start work on a new application, you're dealing with some sprawling tech stack of random UI framework layers plus a huge mess of ad hoc code, and you have to spend a lot of time learning the quirks of that particular combination of tools. Any experience you might have previously gained is probably only loosely relevant, because anything you've worked on previously was probably built using a completely different methodology. So you'll probably spend your first couple of years trying to make changes that are as small as possible, so you don't break anything in the application's fragile framework, until you learn its specific quirks well enough to be more confident in making riskier changes. And by then... well, you've probably moved on to the next project anyway.

By contrast, if you're working on 3d graphics, there are only a couple of frameworks that you'll ever really need to be familiar with. If you know how to use OpenGL/D3D and their associated shader languages, or if you know how to build and optimize models in 3DS Max/Maya/Blender, you'll already know most of what you need to know to start working on any new project. No game developer is going to be dumb enough to build their own in-house polygon rasterization library, let alone their own mesh editing/character animation tool. And because those basic frameworks are so ubiquitous, computer manufacturers have been able to build specialized hardware (3D graphics accelerators) that implements their APIs and data formats in an extremely optimized and streamlined way.

And I think the basic reason for this is that UI frameworks look deceptively easy to build. Rasterizing an arbitrary 3D mesh into pixels on a screen is a pretty hard problem requiring quite a bit of deep mathematical knowledge, but almost any developer can come up with an idea for a better way to put flat colored rectangles on the screen. So, given the choice, they'll probably happily start writing their own framework, rather than trying to work around or improve one that already exists. So the UI space is littered with these half-finished frameworks, none of which ever really reach the point of maturity, and none of which work particularly well outside of the environment and language where the original developer preferred to work. Every app developer just picks one that vaguely fits their particular problem space, and then builds their own extra framework layers on top of it to work around its limitations.

And this pattern holds even outside the world of random nerds building open-source tools in their basements. Even the big players in the software world can't manage to come up with a sensible standard for UI, even within their own ecosystem. Apple, Microsoft, and Google are constantly announcing new UI frameworks, and even whole new languages for building UI in those frameworks, every few months.

And it's not like there's even anything all that unique about any particular UI framework. The basic problems of UI have been pretty much the same since the days of OS/2 and the first Macintosh, and most UI frameworks ultimately converge on pretty much the same set of features. But they're all mutually incompatible, and the knowledge we gain from working with them is just as incompatible, so all our energy goes into trying to figure out how to accomplish the same simple thing in a hundred different frameworks instead of figuring out how to do that thing better and faster in just one or two frameworks.

Grum:

Another thing to consider might be that most UIs use vector graphics, whereas games are rendering triangles. UIs are all about straight, crisp, clean lines - graphics cards don't actually do this very well. In fact CPU rendering can come out looking slightly sharper in certain UI frameworks.

See this detailed explanation https://softwareengineering.stackexchange.com/questions/191472/why-have-hardware-accelerated-vector-graphics-not-taken-off

I do agree that poor coding and bad frameworks are probably responsible for the majority of the unresponsive UIs out there.

As an aside, Qt is a good, well-established cross-platform UI framework with hardware acceleration support. Though yes, it's a bit bulky, requires extra tooling, huge libraries, a MOC compile step, and even its own declarative language QML - which is yet another UI language to learn. I've found it's a lot faster than most alternatives if your graphics drivers are playing nice, and you learn how to use the scene graph to debug what's being redrawn.

David Piepgrass:

As a Microsoft developer I've been annoyed for over a decade about the awful UI framework they came up with based on "XAML".

It was an idiotic idea. Compared to WinForms it was four steps forward and four steps back. I have no complaints about the steps forward. It's more stylable/skinnable, it has abstractions more powerful than those in WinForms, you can do more with layout, it even has hardware acceleration. But why the f*** did they take four steps back? WinForms was easy to use (for the tasks it was optimized for anyway) and discoverable (not hard to figure out how to do most things), it had a friendly drag-and-drop designer, it had statically-typed code to construct the UI, it was fast. They threw all of that away! Why! Now it's XAML, a brand new language the developer is forced to learn, a dynamically typed language so it can fail with runtime type errors, and the debugger can't step through it because it's not normal code. The designer was taken away. Important APIs are difficult to discover (without StackOverflow we'd be screwed). It was clearly a money pit built by a large number of developers (not a smarter design like you might get from a smaller team). And it is extremely inefficient; as I recall, I tried turning off virtualization and found that a simple ListBox of 10,000 4-character items consumed 80 MB of RAM and took 4 seconds (!) to process a single keystroke. A traditional WinForms ListBox probably consumes nearly 1000x less RAM and needs about 1000x less CPU time.

And this is just one example. I feel like most of the things MS has built in the last 10-15 years have been inexcusably shitty, with some obvious exceptions, e.g. Visual Studio Code and TypeScript. Notably, the cash cow Azure is not an exception. Why do they make their stuff shitty? No idea. Wish they had put me in charge of .NET 10 years ago, I'd have shown 'em how it's done. So avoid Azure. Also AWS is terrible. DigitalOcean is good.

Why doesn't this happen in games? I think the answer is twofold:

(1) the game industry is very competitive, and games are forced by environmental constraints to use all available resources efficiently (and as NL mentions, they've succeeded spectacularly). Note that the non-games industry is less competitive (e.g. you see a lot of network-effect advantage, like how Google gobbled up this huge ad market and Facebook got to be The Main social network - it's not uncompetitive, but it is certainly less competitive). And they don't need efficient code, so they don't bother.

(2) the games industry still writes a lot of shitty code, maybe even more than outside the game industry. They have quality code where it's really needed, like rendering a billion triangles, but when it comes to making quests, they can just hire an army of below-average programmers who love games and have them quickly spit out bug-ridden quests on glitch-ridden maps. A key thing about games is that this kind of code is tossed out into the world and then no one has to ever deal with it again. People pay some money, play the game for awhile, then move on with their lives. As long as it's not so buggy that the player can't load their last save point and get past the glitchy part, you know, it can be hideously bad code and still be fine.

Gunflint:

Some interesting wordplay in the Friday XWord.

Clue: Summons before congress?

Answer: OBBGLPNYY

Ilverin Curunethir:

RE software speed

The primary reason is that the software you use your eyes to look at is designed and written for the human timescale (in contrast to high-frequency trading algorithms, etc). Back when computers were slower, they did fewer things, because the designers+programmers couldn't cram any more stuff into a second. Now that computers are faster, more stuff (features) is crammed in. There's less pressure to cut features for performance because the performance is already tolerable.

For example, Windows 10 both searches the internet and your computer whenever you use Windows search (part of Cortana; also, as of the '18 spring update, you can no longer turn this off).

Also, everything has tracking now because it can be afforded (even including the single-player video games you play). Where and when you click, when you scroll, how long you kept the tab open.

It's not that in the past websites didn't want to track you, it's that they couldn't afford to force you to do it because it would be unacceptable performance and you would leave.

In addition, there's tracking not only from one source but from multiple sources (although extra trackers usually do less tracking, like only IP address/clickthrough instead of other things like scroll tracking). Why not let both Google and Facebook track the users on latimes.com? They'll both pay.

Post-script: Do you think gwern reads these comments? I bet not, because there's no sorting by quality (there's no upvotes/downvotes). The more skilled people are, the higher their opportunity cost of reading every comment, and if there's no filtering so they can spend a little of their valuable time reading one or a few comments, then they might skip the comments entirely. Due to this, the community quality here on this substack may have no hope of being as good as, say, lesswrong (I mentioned gwern specifically because I think he's a friend of the blog).

Rolaran:

Apologies if this is considered spammy, but does anyone happen to remember seeing a picture that was the Gramsci "the old world is dying" quote put into the "Gru explains the plan" meme? I'm certain I saw it on a post here but I can't remember which one.

Cam Peters:

I wrote the review for David Deutsch's The Beginning of Infinity. Any feedback from people who read that review would be appreciated. Feel free to drop a comment.

Also, would it be possible to see the ratings for the non-finalists? I'd be interested to see how it ranked.

beleester:

It's a day that ends in "y", so a clever-seeming cryptocurrency idea has abruptly lost all its value. Matt Levine has a good writeup of what went wrong:

https://www.bloomberg.com/opinion/articles/2022-05-11/terra-flops

TL;DR: An algorithmic stablecoin works by giving people an incentive to do arbitrage - if the coin's price moves away from the peg, then people can buy or sell tokens and dollars to move it back to the peg and make a profit. Sounds reasonable - so long as the tokens are worth more than zero dollars, then there should be some number you can exchange to make a dollar's worth of stablecoin. But if the prices drop suddenly, then people might notice that the tokens they're buying are actually worthless, stop buying them entirely, and then nobody can do arbitrage and the whole scheme unravels.
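A toy sketch of that loop, with every number made up (hypothetical stablecoin price, hypothetical sister-token price), just to show where the incentive comes from and where it breaks:

```c
/* Toy sketch of the arbitrage described above (all numbers made up).
   If the stablecoin trades below its $1 peg, buy it cheap and redeem it
   with the protocol for $1.00 worth of the volatile sister token. */
#include <stdio.h>

int main(void)
{
    double peg             = 1.00;
    double stablecoin_px   = 0.97;   /* hypothetical market price, below the peg */
    double sister_token_px = 80.00;  /* hypothetical price of the volatile token */

    double cost          = stablecoin_px;          /* buy 1 stablecoin           */
    double tokens_minted = peg / sister_token_px;  /* redeem it for $1 of tokens */
    double proceeds      = tokens_minted * sister_token_px;  /* ...if you can sell */

    printf("profit per coin: $%.2f\n", proceeds - cost);  /* $0.03 */

    /* The failure mode: each redemption mints more sister tokens; if their
       price collapses toward zero, that $1.00 of "proceeds" can't actually
       be realized, the arbitrage stops, and nothing holds up the peg. */
    return 0;
}
```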

Gunflint:

TIL about Curtis Yarvin. Thanks Mr Douthat. Now I understand ACX a bit better.

Sorry about the loss of your wife Curtis. It’s a terrible loss. There are no words for something like that.

David Piepgrass:

I don't recall seeing Curtis Yarvin / Moldbug at ACX....

Gunflint:

Ross Douthat referenced him in an op-ed piece. A link to a Tablet article. I'm new in town, trying to get a handle on the culture, and have read discussions about his thinking here. I hear the Matrix red pill thing a lot here too. The Tablet article credits him for bringing that idea into the discussion.

Kei:

What is the right way to look at inflation?

I regularly see people use the YoY price increase as the standard metric for inflation, but it seems like this misses a good deal. If we are looking for a point estimate of how prices are changing right now, it seems something like a MoM price increase, maybe adjusted for seasonality, would be more reasonable. If we are looking at generally how high prices are, YoY might make more sense, although if we have very high price increases for a year, and then prices are static for a year, we'll have 0% inflation for that static year even though prices are very high in an absolute sense.
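Concretely, the two numbers I mean, with made-up CPI index levels (a sketch, not real data):

```c
/* Year-over-year inflation vs. the latest month-over-month change annualized,
   computed from three hypothetical CPI index levels. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double cpi_year_ago   = 280.0;  /* index level 12 months ago (made up) */
    double cpi_last_month = 300.0;  /* index level 1 month ago (made up)   */
    double cpi_now        = 301.5;  /* index level today (made up)         */

    double yoy            = cpi_now / cpi_year_ago - 1.0;             /* ~7.7% */
    double mom_annualized = pow(cpi_now / cpi_last_month, 12) - 1.0;  /* ~6.2% */

    printf("YoY: %.1f%%   MoM annualized: %.1f%%\n",
           100.0 * yoy, 100.0 * mom_annualized);
    return 0;
}
```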

Are these concerns reasonable? Are there better metrics of inflation people should look at?

Bullseye:

> if we have very high price increases for a year, and then prices are static for a year, we'll have 0% inflation for that static year even though prices are very high in an absolute sense.

If prices and wages don't change, there's no inflation. If some measure says otherwise, it's measuring something other than inflation. Prices can't be high or low in an absolute sense, only high or low compared to something else.

Aurelia Song:

My friend Mike Darwin is starting the Biopreservation Institute (a nonprofit research organization) and we need a COO-type person to help get everything started!

Mike used to be the president of Alcor and he's been involved in some of the most impressive research in cryonics over the last 30 years, such as reviving dogs after 16 minutes of warm ischemia.

The Biopreservation Institute's goal is to put cryonics on a modern, evidence based, scientific footing. We want to define rigorous standards for preservation and quality control mechanisms to make sure each preservation meets those standards. We already have extensive funding for the first year of the project with follow-up funding conditioned on further good results.

If that sounds like a good mission to you and you want to get started on the ground floor, then email me at r@nectome.com and we can discuss the next steps. (I'm on the board of BPI which is why I'm helping with recruiting). The Biopreservation Institute is located in Southern California and it's definitely going to be an in-person job. Happy to respond to comments in this thread as well.

Aurelia Song:

Main reasons for the "latency problem":

1) the video game is using a GPU to do these computations, which is a massively parallel device that can to some degree compute each pixel in the image and each physical element in the simulation independently of the others. The 2D app is likely using the CPU to draw to the screen. This is (brutally approximated) like using a thousandth of the GPU's ability to draw to the screen.

2) A lot of the delay in a program STARTING is the program having to load information from the disk into RAM to compute the next step in an extremely long set of instructions to get the program fully running. The video game also takes several seconds to get started as it pulls information from the disk / internet and loads it into faster memory like RAM / the L2 cache, etc. The analogy here is that if I already have the book open in my hands, turned to page 82, with my finger on the second paragraph, and you ask me to read the second paragraph, then I can respond in seconds, but if you ask me to read the second paragraph on page 82 of a book that's in the library it might literally take me 45 minutes to respond as I physically go to the library, look up the book, and find the relevant information. However AFTER I've done that if you ask me to read the next sentence after that I'm back to just taking a few seconds to respond. The situation with computers copying information from a spinning disk hard drive into registers that live on the CPU / texture processors in the GPU is, I think, even worse than the book analogy.

However, we COULD make computers that could boot up instantly and run most programs instantly as well. It would involve some combination of pre-loading everything and carefully optimizing every step of the process. It doesn't happen in practice because it's very expensive.
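If you want to see the "book already open" effect for yourself, here's a rough sketch (assumes a POSIX system; "big_asset.dat" is a hypothetical large file, and the actual numbers depend entirely on your disk and what the OS has already cached):

```c
/* Read the same file twice and time each pass with a wall-clock timer.
   The second pass is usually served from the OS page cache (RAM) rather
   than the disk, so it's typically much faster. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static double time_read(const char *path)
{
    double t0 = now_sec();
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); exit(1); }
    char buf[1 << 16];
    while (fread(buf, 1, sizeof buf, f) > 0) { /* just pull the bytes in */ }
    fclose(f);
    return now_sec() - t0;
}

int main(void)
{
    const char *path = "big_asset.dat";                /* hypothetical large file */
    printf("first read:  %.3f s\n", time_read(path));  /* disk, if not cached */
    printf("second read: %.3f s\n", time_read(path));  /* likely page cache   */
    return 0;
}
```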

Wency:

What puzzles me is when a program like Excel is lagging badly yet Task Manager shows every type of resource usage is low. I’m not a tech expert, but I guess this has something to do with inefficient use of multiple cores -- one core is taxed to the limit but the others are idle.

Jonathan:

The library analogy is excellent! Would definitely use this to explain things to friends.

Also, most people forget that even on a fast SSD, game boot times still take 20s+ initially and a few seconds per stage, so comparing boot time for a 2D app against draw time for a 3D game does seem like quite an unfair comparison.

Andrew Cutler:

Self promotion: I started writing about NLP and personality structure, which I figure interests this crowd. The first post is about how the Big Five are word vectors: https://vectors.substack.com/p/the-big-five-are-word-vectors

Pepe:

Did anyone sign up for the forecasting tournament mentioned in the previous open thread? If so, did you get a confirmation email after submitting the signup form?

Sandro:

Grip strength as a diagnostic tool for depression [1], any thoughts?

A lot of strength output is neurological, so maybe depression suppresses neural drive. Neural drive is suppressed automatically to prevent injury: e.g., as you accumulate fatigue while exercising, or when you're sore from exercise, your nervous system's output becomes attenuated.

Depression is also connected to pain [2], which presumably is also connected to these circuits that suppress neural drive to avoid injury.

If exercise works to alleviate some depression (there is some contention here), perhaps this is all connected via a common mechanism.

[1] https://pubmed.ncbi.nlm.nih.gov/34354204/

[2] https://www.health.harvard.edu/healthbeat/the-pain-anxiety-depression-connection

Sandro:

There is also work suggesting that physical strength might account for sex differences in rates of depression:

https://www.psypost.org/2022/05/physical-strength-may-partially-account-for-the-sex-difference-in-depression-study-suggests-63076

Andrew Cutler:

Depression and strength are so broad they must have many shared mechanisms. Here's a list of 40 theories on Twitter: https://twitter.com/dawso007/status/1523323209126486016

Some must apply to strength as well.

Ferien:

When my father had a heart attack, I took an Android tablet to call an ambulance... and got messages like "%brandname% weather isn't responding", "google play services isn't responding", "system UI isn't responding"

I think none of the engineers designing that model used it, and this may be part of why this happens

Theo:

And no one wants to actually call 911 to test it out on a real device. You can test regular phone calls, but 911 is pretty magical.

anonymous:

I worked on Android telephony (the part that makes calls) for 5 years and you absolutely can test calling 911, and that’s what we did. Regular people can even schedule test calls.

JDDT:

Writing efficient code is simple; but there is no market for selling "simple" to people.

A specific example is helpful. Let's talk about loading from or saving stuff to a file. We've been doing this for a very long time and it's very simple. You write a version header, then you go over your data writing everything out. Your file loading code just verifies the version header is as expected, and then is the mirror image of the saving code. Now that computers are fast enough to actually fuzz test this, it's easier than ever to get this right. On top of that you can work to optimise the layout of the data; how it is loaded, and so on. It is also easy to cope with changes to the data format -- just change the version in the header and choose whether you have an error "you need a different version to load this"; or you support multiple versions and add code to migrate the data.

This has been an optimally solved problem forever.
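A minimal sketch of the whole pattern in plain C, with a made-up record type and no error handling beyond the version check:

```c
/* Roll-your-own serialization: a magic number and version header, a save
   function that walks the data, and a load function that mirrors it. */
#include <stdio.h>
#include <stdint.h>

#define SAVE_MAGIC   0x53564153u  /* arbitrary file identifier */
#define SAVE_VERSION 2u

typedef struct { int32_t x, y; float health; } Player;  /* made-up record */

int save_player(const char *path, const Player *p)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    uint32_t header[2] = { SAVE_MAGIC, SAVE_VERSION };
    fwrite(header, sizeof header, 1, f);
    fwrite(&p->x, sizeof p->x, 1, f);
    fwrite(&p->y, sizeof p->y, 1, f);
    fwrite(&p->health, sizeof p->health, 1, f);
    fclose(f);
    return 0;
}

int load_player(const char *path, Player *p)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    uint32_t header[2];
    if (fread(header, sizeof header, 1, f) != 1 ||
        header[0] != SAVE_MAGIC || header[1] != SAVE_VERSION) {
        fclose(f);
        return -2;  /* "you need a different version to load this" */
    }
    fread(&p->x, sizeof p->x, 1, f);
    fread(&p->y, sizeof p->y, 1, f);
    fread(&p->health, sizeof p->health, 1, f);
    fclose(f);
    return 0;
}
```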

And yet the experience of many programmers today is that this is actually really difficult. Firstly, there's no way to sell this solution. So, if you go searching around for how to do this, you'll find a lot of people trying to sell their solution -- some magic reflection-driven nonsense that claims to be able to solve this difficult problem for you -- and it's all gaslighting -- there is no difficult problem; just a lot of marketing to convince you that you need what they're offering, and what they're offering is great. So you end up using Newtonsoft.Json -- which is a really solid piece of work -- it's great -- no disrespect. But compared to rolling your own (as I described above) it's a disaster: it's very slow (compared to rolling your own) because it's doing something dynamic rather than static, and when you (inevitably) have to customise what it is doing, you go down a horrible rabbit hole, doing things much more complicated than if you had rolled your own -- and generally very inefficient.

There aren't many old people around in the industry to teach newcomers to ignore misinformation and do things as simply as possible. The way you progress in the top tier of tech is by jumping from company to company, never having to deal with the long tail of your terrible decisions or really learning from them. Motivated newcomers will go read what people are up to at coding conferences, learn about all these awesome cool modern languages, and go write something up in Python, having been told it is simple and straightforward.

Many programmers NEVER write apps directly to the OS API; for example, using CreateWindow in C to make a window. They should. Until you've done this, it's hard to understand how fast computers actually are: watching YouTube teaches you to make decisions that are horrifyingly slow and complicated and then use very advanced performance techniques to get something running half as fast (using twice as much battery) as if you'd just written it simply in C in the first place. There is no reason for any app on modern hardware to not show a window INSTANTLY except for the drag factor of whatever platform they're pulling in. I remember this video https://www.youtube.com/watch?v=GC-0tCy4P1U
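For anyone who hasn't tried the CreateWindow exercise, it really is about this much code (a sketch assuming a plain ANSI Win32 build, with no error handling):

```c
/* Minimal Win32 window straight against the OS API: register a window class,
   create the window, pump messages. */
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
{
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = "PlainWindow";
    RegisterClass(&wc);

    HWND hwnd = CreateWindow("PlainWindow", "Hello", WS_OVERLAPPEDWINDOW,
                             CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                             NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, show);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```

The window shows up instantly.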

The "people don't want to pay" line is nonsense. I have spent weeks trying to untangle stuff written at this high level. In terms of writing it, debugging it, maintaining it, and trying to make it faster -- this stuff COSTS a huge amount. Writing simple code simply and ignoring all innovations in computer languages of the past 30 years, is both a huge cost saving, and a huge performance win. Where this "don't want to pay" attitude comes from, is that the amount of work required to make modern-software performant costs a lot -- because we're already in the pit and trying to dig ourselves out.

I highly recommend Jonathan Blow's old talk on this https://www.youtube.com/watch?v=ZSRHeXYDLko

and also anything by Molly Rocket. For example, this video "How fast should an unoptimized terminal run?" https://www.youtube.com/watch?v=hxM8QmyZXtg -- where he compares Microsoft's initial release of Windows Terminal (written in the super modern way) with something he wrote maximally naively that runs A LOT faster (because it's written simply).

It's highly frustrating.

Viliam:

We can do more now than we could in the past. As a consequence, we are expected to do more, just because we can. Twenty years ago, the customer would be happy to have an application that does X, no matter how. Today, the customer wants to have an application that does X, runs in a web browser, connects to a database server on a different machine, can be deployed in the cloud, authenticates users, does log rotation, produces reports on different metrics, integrates with M$ SharePoint, uses continuous deployment... in other words, only a small part of the application is actually about X, the most of it is building infrastructure.

Hey, all of these things are nice, but it is 10x the work, and the customer is not going to pay 10x the money. More importantly, the employer is not going to pay 10x the money, regardless of how much the customer pays.

Managers have learned how to squeeze more (short-term) productivity from the developers. Twenty years ago, it was like "let those nerds do their magic at their own speed and hope that one day they will deliver the mystical product beyond our understanding". Today you have JIRA tickets, daily "agile" meetings, other meetings, more meetings, constant distractions, and often working on two or three projects in parallel. Keeps you busy. Also prevents you from focusing on any specific aspect, learning it deeply, and then doing it right.

Agile development is the opposite of deep focus. If you have two weeks to implement a "user story" that involves database, web pages, business logic, and three meetings to synchronize with other stakeholders, how much of that time will you spend figuring out how to do something more efficiently? You are jumping from one topic to another like crazy. Then comes another "user story", again with database, web pages, business logic, and whatever. Then again. So ultimately, during the year, you will spend total three months doing the database, three months doing web pages, three months doing business logic, and three months doing everything else, which in theory is a lot of time, enough to get everything right. But those months are divided into hundred tiny intervals after which you need to switch to a different topic. Which is why you don't do the things right.

I assume the concept of agile development works best if you have some superstar coders who already know everything -- maybe the guys who invented it actually were like that. Or maybe they understood that when they need to learn something new and experiment with it, then you stop doing sprints until you actually learn it well. Dunno. All I know is that when I need to keep running at maximum speed, and there is something new that I don't really understand, I will be lucky to make it work at all, but things like efficiency will get sacrificed. Teaching developers how to do things, and how to do them right? Hahaha, that's what your free time is for... I mean the part of your free time you are not already spending building your github portfolio to show at the next job interview.

At the core, it's goodharting. Companies optimize for keeping people busy, because that is their proxy for productivity. Busy people don't have time to learn how to do things properly.

JDDT:

I'll add that the serialization example is nice because it shows how recent language innovations destroy something quite simple, like our serialization system:

So say we're using Object-Oriented Programming -- so a lot of our data's implementation is tied up behind opaque pointers. That's then a right nightmare to deserialize because you have to reconstruct these objects when in OOP you aren't meant to know what they were in the first place.

Or functional programming -- so a lot of our data is behind functions or hidden in lazy evaluations.

There are similar issues with every modern language innovation -- they make everything slower, and more complicated and costly to develop, but they gaslight you into thinking they're helping you and stop you leaving this abusive relationship.

heiner:

This isn't far from being true.

Rust and Go are counterexamples to the last paragraph though.

Muireall Prase:

Since neither review of Unsettled (What Climate Science Tells Us, What It Doesn’t, and Why It Matters) is a finalist, I'll post my comment on Chapter 5 (Hyping the Heat) now: https://muireall.space/heat/ — I think Koonin's rhetoric is unjustified, at least here.

Cam Peters:

How do you know which ones are finalists? Did Scott post the list?

Mark Atwood:

Slow software is 95% "JavaShit/React/React Native/Electron/MS Office Apps - are utter crap, and almost nobody remembers how to write fast apps, and certainly nobody wants to pay for them". I say this as a professional software guy who spends too much of his time reading the foundational code of these termite piles. This is slowly changing; I've played with GUI and TUI apps written in Flutter and in GoLang and in Rust that do a good job of reminding you of how good modern computers are, when you don't pile the architecture of sewage on them.

Chris:

As someone who works with EHR, I very much feel the pain and frustration of waiting for our comps to load the info that we need. I understand the comment quoted in the post/email on a personal level.

David Schaengold:

self-promotion: I am working on a new social network designed around having better conversations. It's a bit like emailing in public. Conversations are one-to-one, and the first message in a conversation remains private until a reply comes through. Works entirely through email, too, if you want it to. It's called Radiopaper.

Given that Scott hosts the best comment section on the internet, I thought there might be some interest here: radiopaper.com/explore

RobRoy:

So... If I were to email my friend to talk about the Ukraine War, Radiopaper basically just lets us publish it and make it available for public consumption?

David Schaengold:

Yep! It also lets you open up those messages to third-party comments, which, like regular messages, will not appear on the website until you approve them or reply to them.

And we're hoping to add some of the classic social features like following, notifications, etc., in the near term, to make the site a place people want to hang out, too.

drosophilist:

Hi Scott,

Thank you again for organizing the book review contest!

I would really love to get some feedback on my (non-finalist) review. Would it be possible for non-finalists to see our scores? This was my first time writing a book review and I'm really curious about how people thought I did/how I could improve for next time.

PS. I reviewed "The Knowledge" by Lewis Dartnell, so if any of you have read it, I would love to see your comments!

Cam Peters:

Second this. Finding out the scores for all the entries would be valuable feedback for the non-finalists.

Mickey Mondegreen:

I read your review of "The Knowledge," and thought it was good - certainly worth reading. There weren't many jokes (so I didn't give it as high a rating as the finalists) but it was an interesting title and your writing is solid.

Matt:

From https://www.gwern.net/Modus

"Probably the most famous current example in science is the still-controversial Bell’s theorem, where several premises lead to an experimentally-falsified conclusion, therefore, by modus tollens, one of the premises is wrong—but which? There is no general agreement on which to reject, leading to:

Superdeterminism: rejection of the assumption of statistical independence between choice of measurement & measurement (ie. the universe conspires so the experimenter always just happens to pick the ‘right’ thing to measure and gets the right measurement)

De Broglie–Bohm theory: rejection of assumption of local variables, in favor of universe-wide variables (ie. the universe conspires to link particles, no matter how distant, to make the measurement come out right)

Transactional interpretation: rejection of the speed of light as a limit, allowing FTL/​superluminal communication (ie. the universe conspires to let two linked particles communicate instantaneously to make the measurement come out right)

Many-Worlds interpretation: rejection of there being a single measurement in favor of every possible measurement (ie. the universe takes every possible path, ensuring it comes out right)"
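(For reference, the experimentally falsified conclusion is usually tested in its CHSH form: with two detector settings per side and measured correlations E, any local hidden-variable model obeys the bound below, while quantum mechanics predicts values up to 2√2 for suitably chosen settings, and experiments see the violation.)

```latex
|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2
\qquad \text{(local hidden variables)}, \qquad
S_{\mathrm{QM}}^{\max} = 2\sqrt{2} \approx 2.83
```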

Superdeterminism seems silly to me, or maybe I just don't understand it, but the other possibilities seem reasonable.

De Broglie–Bohm seems mostly/partly equivalent to Transactional. Wouldn't a global variable effectively be the same thing as transmitting information faster than light?

Transactional (my preference) needs superluminal transmission but it could get this if there are backwards-in-time particles. Is that the pilot wave thing? Also would the uncertainty principle allow for this? For example, if a particle doesn't have a definite location in space, could it also be said that it doesn't have a definite location in time? If a particle with mass is traveling very close to the speed of light, could it quantum tunnel through the lightspeed barrier to become superluminal while at no time actually travelling at the speed of light (which is the thing that's actually forbidden)?

Many-Worlds is probably the favorite at this point. But to me it seems to have pretty serious problems. Like, if every time there is a quantum branching both branches become realized, then there should be an equal chance of ending up in either one, but there usually isn't. So you end up needing to introduce a new variable, say reality juice, to try and account for this and then try to explain why this "reality juice" corresponds to the chance that you'll end up in any particular reality.

Anyway thoughts?

Carl Pham:

The obvious flaw in Bell's Inequality is that the derivation depends on using *classical* probabilities. You then compare it to the parallel result from quantum mechanics, which uses probability *amplitudes* -- and of course you can get disagreement. You're basically comparing a probability calculation you do with interfering waves to a probability calculation you do with a cloud of ideal gas particles. Naturally they'll be different -- the entire world of diffraction and interference is absent from the latter.

So Bell's Theorem rules out (1) local variable theories that (2) also use classical particle dynamics and have no wave nature. I've never seen why, given how firmly embedded wave-particle duality is in our physics, on a very sound empirical basis, we would leap to the suspicion that (1) is the problem.

Matt:

Wow! That's pretty damning. I'd actually be interested in seeing a refutation of this if there is one and there are any takers. I am not an expert, but Bell's Inequality is so well established and this seems like far too obvious a flaw to have been missed.

Carl Pham:

I spoke too sloppily. There's no flaw *in Bell's inequality itself*, the derivation is simple and pretty foolproof. What I meant is the "obvious flaw in thinking it says anything interesting about the measurement problem." Bell's Inequality relies on classical probability calculus -- whereas if there is one thing we can be 100% sure about, it's that our universe is based around the probability calculus of waves -- adding probability *amplitudes* instead of probabilities. Our universe is made entirely of fields, so far as we know, and any "particles" and classical probability physics are emergent phenomena on which one cannot count -- they are not fundamental.

The Ancient Geek:

It's not egregious to translate quantum mechanical measure into classical probability. All experimental QM works that way. In fact, one could argue that complex-valued measures aren't real, since they aren't observed.

Matt:

Can Bell's Inequality be rederived using only quantum probability calculus?

Melvin:

The "reality juice" isn't really a new variable, it's just the wavefunction. Everything in Many Worlds is just the good old wavefunction propagating via the good old Schroedinger equation. The only "new" thing required is some assumptions about what the subjective experience of the observer/s will be.

My main problem with Many Worlds is that it all makes a lot of sense in extremely simple toy cases like "observer measures an electron's spin and gets entangled with it, so now the system is in the state 1/sqrt(2)*|spin up>|observer sees spin up> + 1/sqrt(2)*|spin down>|observer sees spin down>"... but once we start taking it beyond incredibly simple toy cases it gets a lot more difficult.

Matt:

"The only "new" thing required is some assumptions about what the subjective experience of the observer/s will be."

Right. That's the "reality fluid". Is there some way to get the subjective experience to match the observed reality without it?

Melvin:

All interpretations of quantum mechanics contain something along the lines of "the probability is the square of the wavefunction because it is", so this problem isn't specific to MWI.

Also, "a way to get subjective experience to match observed reality" is outside the scope of any sort of physics right now. We don't know how and why conscious experience arises from physical reality, that's the big problem of consciousness.

Matt:

"All interpretations of quantum mechanics contain something along the lines of "the probability is the square of the wavefunction because it is", so this problem isn't specific to MWI."

Well yeah but once you accept that it's pretty straightforward to map that onto subjective experience unless you choose MWI.

In MWI every outcome that is not forbidden is required and actually happens in some branch of the wave function. In other interpretations not everything that can happen actually does happen. Like if outcome A happens 20 percent of the time and outcome B happens 80 percent of the time, then there's no real mystery why we subjectively perceive outcome A 20 percent of the time and outcome B 80 percent of the time.

But in MWI both outcomes happen 100 percent of the time and a version of the observer is in both branches so why would we still expect subjectively to see those probabilities? It seems like MWI (at least under my naïve implementation) would mask off the original probabilities, replacing them with 1/n where n is the number of possible worlds you could inhabit in the next instant.

smocc:

The relevant physics question is "given that I am currently experiencing the subset of reality where A is true, what is the probability that in the future I will experience B." That is, the symmetry between all possible states is broken by the fact that we are always talking about conditional probabilities.

The standard quantum answer to my question above is the Born Rule. The probability that you will end up experiencing B given that you are experiencing A now is |<B|A>|^2. David Deutsch has a paper that argues that this rule is the optimal one in some Bayesian sense -- if you tried to predict the future using a different rule you'd be wrong more. (I think he smuggles in more than he thinks; for example, where did the inner product come from, and how was it chosen?)
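A toy numerical version of that, with made-up amplitudes, just to show that the weights come from |amplitude|^2 rather than from counting branches:

```latex
|\psi\rangle = \sqrt{0.2}\,|A\rangle + \sqrt{0.8}\,|B\rangle
\quad \Longrightarrow \quad
P(A) = |\langle A|\psi\rangle|^{2} = 0.2, \qquad
P(B) = |\langle B|\psi\rangle|^{2} = 0.8
```

Both branches exist, but naive branch counting (1/2 each) is not what the rule gives.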

Matt:

It is certainly an important physics question but I actually don't see the relevance to the issue I'm describing.

I am aware that 1/n is empirically not the right answer. My argument is that MWI implies that that rule is what we would subjectively observe on the shortest possible timescales (regardless of what the 'real' underlying (or conditional) probabilities actually are), and since we don't that counts as a strike against MWI. But caveat, I am not an expert in this and maybe there's a perfectly cromulent explanation.

Ravi D'Elia:

Can you elaborate on how more complicated cases get more difficult? Computationally I get it, but isn't it basically the same complexity as collapse is pre-collapse?

Melvin:

I have to be careful how I phrase this, as human language isn't great for talking about these sorts of things and it's easy to get confused. But let me have a go.

The main problems, I think, come in when we start thinking about how to map conscious mind states of the observer onto actual quantum states.

In the toy example I discussed, where an observer observes an electron spin and becomes entangled with it, this is simple, the observer is a simple quantum system with two states.

But let's take just one step beyond this, and have the observer observe a continuous variable; let's say the number of seconds before a particular nucleus decays. Now instead of two versions of the observer we have infinitely many. Or do we? Is the version of me that observed the nucleus decay some arbitrarily small instant later _really_ a different version of me?

Then we run into even worse problems when we quit treating observers as particles, and remember that human brains are macroscopic objects, it takes time for information to travel across them, and so neurons on one side of my brain may be in a quantum superposition as far as neurons on the other side of my brain are concerned. How does _that_ messy quantum system turn into a conscious experience?

Kemiu:

Many-Worlds states that all (uncountably infinitely many of) the branches were real all along, and that the amplitudes of the wave function are the "thickness" of the branches (aka "reality juice" - good one!).

Why would there be a problem with a scientific theory claiming that mathematical construct X (wavefunction amplitudes / curving spacetime) corresponds to observation Y (probability of measuring some branch / acceleration towards the ground)? This is how scientific theories work, right?

Edit: maybe this resolves the disconnect: many-worlds does not use any "new" variables — it simply keeps all the existing variables in the accepted mathematical models for quantum mechanics. If you were to "remove" amplitudes from quantum mechanics you wouldn't have quantum mechanics any more, you'd have classical mechanics!

Matt:

"Why would there be a problem with a scientific theory claiming that mathematical construct X (wavefunction amplitudes / curving spacetime) corresponds to observation Y (probability of measuring some branch / acceleration towards the ground)? This is how scientific theories work, right?"

Well there's no problem with that per se, except that probabilities are usually in the map, not the territory. Moving the probability to the territory in the way Many Worlds does seems problematic to me.

Like if you are about to flip a coin you might say the odds are fifty-fifty of it coming up heads. And that's true in the way we usually mean it, but it's not applicable in Many Worlds since both possibilities really actually happen in different branches of the wave function. So each observer ends up in every branch and observes every outcome, and I don't see how that manages not to break the entire concept of probability as it relates to Many Worlds. Hence my presumption of the need for a separate variable like "reality fluid".

Of course an explanation as to how probability can continue to work when observers are copied into every branch of the wavefunction would also suffice.

chip:

There's a probability distribution over what you will observe when making an experiment, which in many worlds is really a probability distribution over which of the descendant copies/variants you end up "being". You get into questions of how different two physical systems have to be to stop being the same person.

I think that once we have a good, rigorous understanding of how a big quantum mess like a brain is conscious, and we can quantify differences between minds, we'll be able to do this sort of calculus at least in principle. Some descendant copy of you observes each possible outcome of the experiment, but they exist in different proportions in the wave function; somehow this maps to the likelihood of "you" "becoming" any of them.

This doesn't really have any inherent quantum weirdness, you could replicate the problem with rapidly forking simulated humans, like in the "AI captures your soul by simulating 100 copies of you it'll torture if you don't let it out" thought experiment.

Matt:

"Some descendant copy of you observes each possible outcome of the experiment, but they exist in different proportions in the wave function; somehow this maps to the likelihood of "you" "becoming" any of them."

Okay so I guess it is an open question then? Also this doesn't seem like it would be an issue for other interpretations of QM. Is that right?

Jack Wilson:

In 1800, the population of the US was ~5 million, smaller than that of Virginia today. In 1865, the US pop was 35 million, roughly the population of Texas today.

Today devolving power to the states amounts to giving most state governments power over more people than existed in the USA at its founding. It seems to me, if you believe “locals should decide local issues”, you should favor devolving real power to much smaller levels than state governments.

A massive culture war issue like abortion getting decided by a state as big as Texas, where about 55% of the population is red tribe while 45% of the population is blue tribe, is a recipe for decades of conflict and hatred. One can argue we’ve already had exactly that over the abortion issue for five decades in this country, but the fight is about to become much more geographically concentrated. Some states are going to sit on the sidelines and laugh, while others are turned into the political equivalent of gladiator arenas.

It seems like people who believe local governments should decide such issues, should define local governments as cities not states, given that major cities are now much bigger in population than large states were 150 years ago.

Most of us could live under laws made by our own culture if an urban government gets to make one law while an ex-urban government makes another. If you don't want to live in a place where abortion is legal, live in the suburbs (you probably do already). Otherwise, you can live in an urban core.

That way it would be easy for most people to live where their values match their laws, and hopefully we can avoid a conflagration.

Deiseach:

And then what? A big blue city in Texas votes to keep abortion legal, while the red areas around it vote to ban abortion?

You don't get to change the rules when your side is losing/about to lose/perceived to be losing. If you (general 'you') weren't recommending this fine-grained set of preferences on behalf of decisions you disagreed with, then you don't get to gerrymander now.

The same people calling for the elimination of the Electoral College because elections (and decisions) should be called on "majority wins" are now discovering the benefits of giving representation to minor voices? Pick a method and stick to it.

Expand full comment
Julian's avatar

>A big blue city in Texas votes to keep abortion legal, while the red areas around it vote to ban abortion

This kind of stuff already does happen, though maybe not on such a contentious issue as abortion. Minimum wage, taxes are big ones. My city is very blue in the middle of a purple county which leans red. My city had mask mandates in place much sooner than the county and kept them longer than the county did as well. I don't think this is a bad outcome.

Expand full comment
spandrel's avatar

I think there are a number of reasons that government decisions should be made at the lowest level practical; primarily, to increase accountability, transparency, and participation. (I'll note this idea - subsidiarity - has been a tenet of the Green Party for decades, for these reasons.) The devil is in the term 'practical' - how do we know what is the 'lowest level practical'?

For instance, it is not practical for every locality to codify their own automobile safety standards - these things are hugely expensive to work out, and in the end no one would make cars if there were hundreds of different standards to meet for any market worth serving. On the other hand, it is highly practical for every locality to work out their own zoning (or not). Many things the federal government does could practically be done by states, and would probably work better if they were - health insurance comes to mind, and transportation (why not let some states invest heavily in trains, others in highways?). The question raised by this post is what is the 'lowest level practical' for abortion?

The concern about letting states decide is that it seems clear that in some states, abortion will soon be codified as tantamount to murder, while in others, it will be a medical procedure. If someone 'commits abortion' in a red state and then flees to a blue state, how long do we think before blue states start refusing to extradite them? What if it's a blue state resident that is suspected of 'committing abortion' while visiting a red state but has returned home? I think the practical effect of letting the states decide will be the kinds of political acrimony that led to the Civil War. At least, it's not going to be pretty.

Expand full comment
Jack Wilson's avatar

I don't see why abortion laws can't become like alcohol prohibition laws. Once a huge national issue, prohibition is now a municipal one. There are still plenty of dry towns and counties in the Bible Belt. Doesn't prevent anyone from getting booze in the next city/county, but at least the locals get to choose to live in a locale that is consistent with their religious beliefs.

If that can happen with alcohol, seems like it could happen with abortion... one day.

Expand full comment
drosophilist's avatar

Hi Jack,

This is an interesting idea, but I think it would fail on many levels, including perverse incentives and adverse selection effects.

Example A: The City-State of Boston has extremely strict environmental and CO2 emission restrictions, because it's populated by liberals who take climate change very seriously. The State of Upper Appalachia has zero environmental regulations. Result: all polluters and heavy CO2 emitters who can afford to do so move out of Boston and into Upper Appalachia; Boston's economy craters, climate change continues unabated.

Example B: The City-State of San Francisco has an extremely generous social safety net funded by high taxes. The State of Ozarks has a super stingy social safety net and very low taxes. Result: Able-bodied adults move to the Ozarks to work, enjoy a low tax rate, and as soon as they lose their job/become sick or disabled/have a child who requires specialized therapy and care, they move to the City-State of San Francisco, which promptly collapses under the weight of subsidizing unemployed/disabled/sick people from all the other Cities and States.

Expand full comment
Moosetopher's avatar

Why do you think these are problems? Both examples demonstrate how unlimited social programs are impossible without slavery. You could just as easily point out how Sri Lanka's current problems are the result of believing in the untrue assumption that "organic" fertilizers work just as well as "chemical ones" (but then somehow decide this proves that chemical fertilizers need to be banned worldwide.)

Expand full comment
Julian's avatar

Example A already happens. Larimer County CO is very blue and has strict regulations against fracking. Neighboring Weld County is very red and has very lax regulations around fracking. The result is a lot of wells are built in Weld and not in Larimer.

Expand full comment
drosophilist's avatar

True, so the proposal to break up the U.S. into a bunch of city-states would take this problem and turn it up to eleven.

Not to mention that the logistical issues of breaking up the U.S. would be a nightmare. (Does every city-state get its own nuclear ICBM?)

Expand full comment
Julian's avatar

In some ways breaking them up would make it easier (for this type of example, ignoring all the other issues). Maybe not easier, but more clear-cut in how to deal with the issue. The US is the only large country with this kind of federal structure, but we have many examples of international agreements on these issues that could be used as examples if the US were more federated than it currently is.

Expand full comment
chip's avatar

Job creation, drosophilist! Think of the billions of man-hours handling the quadratic interrelations of the 2000 new American city-states!

Expand full comment
Acymetric's avatar

>Not to mention that the logistical issues of breaking up the U.S. would be a nightmare.

I feel like someone should start working on this, just in case.

Expand full comment
Acymetric's avatar

You also occasionally see states sue each other over things like pollution drifting over state lines which isn't quite the same thing but seems related.

Expand full comment
Jude's avatar

This makes sense for issues whose moral weight can be reasonably "contained" at a lower level, but I'm not sure you could pull this off for abortion.

Humans exist in several "spheres" of responsibility. Most people consider their highest level of responsibility to be within their family. If your children are fighting, you will intervene, whereas if you saw the neighbor kids fighting, you might volunteer a quick word of advice but you probably wouldn't try to get involved beyond that. You would intervene (or even invoke the next level of responsibility up by calling the cops) if it progressed to real abuse or domestic violence.

If you heard that slavery was being practiced in the United States again, you would probably protest and support the government arresting the perpetrators. At one point in our history, millions of people were willing to take up arms and die over the issue. But if you heard slavery was happening in, for instance, Sudan, you would be upset, but you probably wouldn't support sending troops to Sudan to deal with the issue. Much like the quick word to the neighbor kids, you might sign a petition condemning it and donate money to an organization that tried to put pressure on the Sudanese government, but all that stops short of what you would do if it was happening in the United States. On the other hand, there are genocides and crimes against humanity so terrible (e.g. the Holocaust) that you might be convinced to take responsibility to fight and die to stop them based on nothing other than shared humanity. We are currently struggling to weigh how terrible the situation in Ukraine is against the relative level of responsibility we have to Ukraine.

Given the rhetoric involved, I think any state-based solution to abortion will be extremely fragile. The pro-life movement's assertion is that fetuses are persons and thus abortion is a form of murder. The pro-choice movement's claims about women's bodily autonomy - and the risks of black market abortions - are only a step or two down on the scale of atrocities.

A state-by-state compromise might be workable for a short time if both sides are powerful enough to force a compromise (as with slavery at the beginning of the country's history), but if there is any imbalance of power there will quickly be a conflict with a winner and loser.

I just don't think it's likely that people dwelling together in the same sovereign nation - sharing laws, cultural institutions, financial support, and national defense - are likely to tolerate either institutionalized murder or the abridgement of fundamental rights if given the opportunity.

Expand full comment
Julian's avatar

I agree with your assessment of "spheres" of responsibility, but I am not sure about your assessment of the abortion situation. The alternative to state level legislation/regulation is federal/national level legislation/regulation which is what we have had since Roe. This has proved to be very fragile as well.

Using the spheres logic, people in Texas would care a lot more about abortions happening in Texas than in Maine. So in theory they should be satisfied by knowing no abortions are happening in Texas even if they are happening in Maine.

The vast majority of people in the US think there are instances where abortion should be legal: https://www.pewresearch.org/religion/2022/05/06/americas-abortion-quandary/ and a majority think it should be default legal with few or no exceptions. Only 8% take the extremist view that it should be illegal all the time. The abortion debate (like most moral debates in this country) is dominated by the extremes. A win for either extreme view will be fragile at a federal level.

Expand full comment
JohnPaul's avatar

The cities should push to become states in their own right. Most states can afford to split into 5 or 6 states.

Expand full comment
Erica Rall's avatar

I've been of this opinion myself for some time, that definitely large states and likely many medium states as well should be split into several states each, along a combination of regional and urban/rural lines.

For example, I'd split California into something like eight or ten states, names TBD:

- North coast (Napa through Del Norte counties)

- East Bay (not sure if this should include Sacramento and surrounding counties or if that should be its own state)

- South Bay (SF, San Mateo, Santa Clara, and maybe Santa Cruz counties). I'm tempted to make San Francisco a state all by itself, but that's probably not ideal.

- Central Coast (Santa Cruz or Monterey county through Ventura county)

- Los Angeles County

- Orange County and San Diego

- West Arizona (Kern, San Bernardino, Riverside, and Imperial counties)

- West Nevada (north of Kern and south of Sacramento)

- Jefferson (north of Sacramento)

The logistics of dividing up the state's debt and ownership/control of the UC and CSU systems and the state's water infrastructure would be daunting, on top of the political obstacles to implementing the proposal.

If done on a nationwide basis, it would go a long way towards mitigating malapportionment of the Senate and the Electoral College, and would reduce the scope for gerrymandering mischief (although the latter is better handled by an act of Congress mandating multimember districts with some vaguely proportional election system: Quota Borda, STV, or just plain party-list proportional), in addition to making state populations more manageable and more homogeneous in interests.

Expand full comment
FLWAB's avatar

Long live the great state of Jefferson! May the Double Cross fly from every flagpole!

(My dad is from Siskiyou County, so we heard a bit about Jefferson growing up.)

https://en.wikipedia.org/wiki/Jefferson_%28proposed_Pacific_state%29

Expand full comment
Moosetopher's avatar

Splitting off cities into their own independent political entities would accommodate the urban/rural divide. Allowing the sophisticates to trade with the podunks for their food, power, and natural resources, without being able to force them to be provided, would be an interesting dynamic.

Expand full comment
anish's avatar

On software performance: (one perspective)

There are broadly 2 types of problems in software: embarrassingly parallel [1] vs. serial problems. Games and GPU tasks fall in the former; everyday software usually falls in the latter. Serial problems do not parallelize, and will take just as long on 100 processors as they would on 1. For the last 20 years, individual processors have stopped scaling [2]. So the only way to get performance improvements is to parallelize, but it rewards problems disproportionately. Serial problems stay slow, while embarrassingly parallel problems continue improving in leaps and bounds. This is exactly why impossible-seeming tasks in Deep Learning or crypto computations are suddenly feasible, while sorting a bunch of items still takes just as long as it did in 1990.

P.S:

The lack of rich markdown or text-box resizing in substack comments is unforgivable.

[1] https://en.wikipedia.org/wiki/Embarrassingly_parallel

[2] https://en.wikipedia.org/wiki/Dennard_scaling#Relation_with_Moore's_law_and_computing_performance
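
A minimal sketch of the two categories (illustrative only; the per-item "work" is a stand-in):

```python
from concurrent.futures import ProcessPoolExecutor

def score(x):
    """Embarrassingly parallel: items are independent, so extra cores help."""
    return x * x  # stand-in for real per-item work

def run_parallel(items):
    with ProcessPoolExecutor() as pool:   # scales with core count
        return list(pool.map(score, items))

def run_serial(state, steps):
    """Serial: each step needs the previous result, so 100 cores finish
    no sooner than 1 core does."""
    for _ in range(steps):
        state = (state * 1103515245 + 12345) % 2**31  # toy dependency chain
    return state

if __name__ == "__main__":
    print(run_parallel(range(8)))
    print(run_serial(1, 1_000_000))
```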

Expand full comment
Axioms's avatar

This is a very concise explanation of what roughly 100 other comments have mentioned, with good examples.

It is unclear whether stuff like DOTS from Unity will give large improvements in serial processing (though it seems likely), and then you have the Microsoft DirectStorage thing, although that is really mostly adjacent to multicore, shifting certain work to the GPU more efficiently. But other than that, there are really no consumer/commercially available options for grinding more performance out of a CPU.

Very interested to see whether, 10 years from now, almost everybody will be writing data-first programs for extremely serial tasks.

Expand full comment
Jeremy Goldberg's avatar

Slow software:

I think commercial software tends to be as fast as it needs to be, and no faster.

Optimizing for speed requires developer time, which is expensive.

Expand full comment
Sam's avatar

To expand on this:

Yes a delay of a few hundred milliseconds is horrible relative to what a 3D game is doing. But the question is "does that delay actually matter to the user?"

In the case of a 3D game that time is a huge deal. But in the case of clicking a button on a website you probably don't notice the difference between 15ms and 150ms; you only start noticing as you get above 1 second or so, at which point things are labeled "slow."

As you said, it's expensive to optimize for speed and all software development is done with heavy cost/benefit analysis these days.

Also worth mentioning that the 3D video games are likely using code running on your local machine for all that rendering while the button click on a website is likely making a roundtrip to a server somewhere to exchange data.

Expand full comment
Alexander Buhl's avatar

To your question about slow software:

Computers are now ~4 core, ~8 wide, ~3GHz powerhouses. Javascript, Python, Java, C#, basically all modern languages trade "ease of use" and "maintainability" (citation needed) for a factor of 0.1 in runtime speed, single core, single wide. If you write software in any language that isn't compiled to machine code, you now have a new baseline speed of 1x1x0.3 (core, wide, GHz) compared to 4x8x3. That's why basically all modern software is many orders of magnitude slower than 20 years ago.
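
Spelling out the arithmetic behind that baseline claim (these are the comment's rough numbers, not measurements):

```python
# Rough model: throughput ~ cores x SIMD width x clock, with the ~10x
# interpreter penalty folded into the last factor.
native      = 4 * 8 * 3    # 4 cores, 8-wide SIMD, 3 GHz         -> 96
interpreted = 1 * 1 * 0.3  # 1 core, no SIMD, ~10x slower per op -> 0.3
print(native / interpreted)  # 320x, i.e. roughly 2.5 orders of magnitude
```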

This insight has brought people together already, under the name Handmade Network. Everything there takes 0 seconds to start up and was written by competent people who care about speed and not wasting 100x the time and energy (and electricity!) that, say, Python would.

Javascript is the only option for web for now, which is a reason to change that, not excuse it and continue!

Compilers are bad at making normal code 8 wide and parallel, which we should fix, not excuse it and continue!

Same for operating systems, Graphics APIs, drivers, debuggers, build systems, etc etc.

Further Resources:

- SIMD (what I mean by "wide")

- Handmade.network

- Jon Blow: Preventing the Collapse of Civilization (talks about the above specifically)

- Jeff&Casey: The Evils of Non-native Programming

AMA

Expand full comment
Ferien's avatar

Oh, thanks. I should read something of handmade.network.

... of the languages listed, only Javascript doesn't support multithreading.

C# can be compiled to machine code and C++ can be JIT-compiled, and that doesn't explain the difference in performance.

Expand full comment
Alexander Buhl's avatar

I didn't get your third paragraph. You can't do memory management in C#, right? You just write and pray; of course compiling it doesn't help much.

Yes, some of these support "multithreading", but you're still running a JIT and a bytecode engine and a garbage collector and several pointer indirections to call class methods and closures... I programmed a full oop bytecode language, I know how they work! There's very little you can do once your baseline is that pile of separation from the hardware. Writing wide, threaded code is no easier in any of these while giving up a lot.

Expand full comment
Ferien's avatar

I meant that C++, even when JITed, is still just as fast; JIT vs. native is not that relevant.

Multiple indirections are due to a different object model, not to whether the code is compiled to machine code.

C# has the unsafe keyword and pointers, so you can do memory management; it's just not considered the primary use case.

Expand full comment
Kemiu's avatar

Note: Python can actually be regular fast with "import numba". For the project I used it for, it was pretty easy to achieve straight C (not optimized C) levels of performance. The Python community does explicitly value speed (although behind ease of use), and there is real effort put into making the fast way to do things the default way to do things (e.g. list comprehensions) and writing libraries in C.
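
For the curious, a minimal numba sketch of the pattern being described (the speedup obviously depends on the workload; this is just the standard @njit usage, not the commenter's actual project):

```python
import numpy as np
from numba import njit

@njit  # compiles this function to machine code on first call
def sum_of_squares(a):
    total = 0.0
    for i in range(a.shape[0]):   # an explicit loop like this is slow in
        total += a[i] * a[i]      # plain CPython but fine under numba
    return total

x = np.random.rand(10_000_000)
print(sum_of_squares(x))  # first call pays the compile cost
print(sum_of_squares(x))  # later calls run at roughly C-loop speed
```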

In my experience, the real cause here is that the web is eating native software -- web culture deprioritizes speed, application interfaces jammed into webpages are inherently slow, and every request to a web server introduces a perceptible delay.

Although applications can always be faster with more work, AFAIK if you make a new Apple app today with the "default" Apple technologies (e.g. SwiftUI) your app will feel super fast and appropriately utilize your hardware. I presume that Android and Windows are the same.

Expand full comment
Ferien's avatar

That's correct for e.g. doing operations on matrices and vectors, which 80% of programmers never use. But e.g. walking a DOM tree cannot be improved in such a way (I mean, by writing fast C++/C/asm code and calling it from Python).

Expand full comment
Alexander Buhl's avatar

Regarding numba:

Can it be parallel? Wide?

If it introduces compilation into your interpreted language, while having single-core, single-wide C code as its asymptote... then we're still a factor of 32-320 slower! I like the idea of making it as easy as possible to compile Python to arm64, but we need to face the reality of that 8x4 factor being inaccessible to current interpreted languages. And that's the only part of the 8x4x3 equation that's ever going to get any higher! Cores will stay at 3-4GHz forever because of the speed of light. Literally.

Expand full comment
Ferien's avatar

Taking a quick glance, numba does support multi-core.
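
A hedged sketch of what numba's multi-core support looks like (parallel=True plus prange; whether it actually helps depends on the loop being parallelizable):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def parallel_sum(a):
    total = 0.0
    for i in prange(a.shape[0]):  # prange iterations may be split across cores;
        total += a[i]             # numba turns this accumulation into a reduction
    return total

x = np.ones(10_000_000)
print(parallel_sum(x))  # 10000000.0
```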

Couldn't you have, in theory, smaller but dumber cores with higher frequency?

Expand full comment
Alexander Buhl's avatar

c = 30 billion cm/s

CPU: 3 billion ticks/s

Light that has already been emitted, travelling uninterrupted, can travel 10 cm (about 4 inches) per clock tick.

If you went much further towards 30GHz, even light wouldn't be able to cross the chip during a clock cycle, much less electricity going through billions of logic gates.

Cores will not get faster. We need to go wide and parallel and we need to solve the memory-to-L1 delay. The speed of any individual core doesn't even get you anything when it takes 300 clock cycles to fetch something from memory, during which the cpu mostly sits idle.
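
The arithmetic, written out with the same numbers:

```python
c_cm_per_s  = 3.0e10  # speed of light, ~30 billion cm/s
ticks_per_s = 3.0e9   # a 3 GHz clock
print(c_cm_per_s / ticks_per_s)  # 10.0 cm (~4 inches) of light travel per cycle

# At 30 GHz the budget shrinks to ~1 cm per cycle, comparable to the die itself,
# and real signals through gates are far slower than light in a vacuum.
print(c_cm_per_s / 30e9)  # 1.0
```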

Expand full comment
Ferien's avatar

It's possible to have smaller cores or make them non-flat, reducing the diameter given the same number of transistors. Cooling would be a problem though ;)

Yes, correct about external memory latency.

Expand full comment
George H.'s avatar

"Unsettled" https://www.amazon.com/dp/B08JQKQGD5/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1 Was reviewed for the book contest, but not a finalist. I liked the review(s) and bought the book. It does seem that (some) climate scientists, are trying to sell us something, and not tell us all of the science. (The rest of this is a back of the envelope calculation of the warming.) Anyway I was thinking about the numbers presented in the book. The first is that the sun adds energy at an average rate of ~250 W/m^2 The second number is that humans and our CO2 emissions have added about ~2 W/m^2 to the energy budget. Let's say it's 2.5 W/m^2 so it's a 1% effect. Now the average temperature of the Earth is ~14 C or 287 K (Kelvin). (Let's call it 300 K (easy numbers). If the temperature was proportional to the energy input then 1% rise in energy would be a 1% rise in temperature or ~3 K. But it's not. A black body radiator should follow Stefan-Boltzmann's law

https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law Energy = T^4, or temperature is the 1/4th power of the energy. For small changes this means (using a Taylor series expansion) delta T = delta E * 1/4. Or the amount humans have warmed the planet is about 0.75 C (K). That seems fine to me. And to hold human caused warming below 1.5 C we can double the amount of CO2 that we've already put into the atmosphere. (The rest of the observed warming comes from natural processes that are not under our control.) And the human caused global warming hysteria, should come to an end. Any comments. (Read the book or at least the book review.)
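
The same back-of-the-envelope calculation written out, using the numbers above (which are themselves rough):

```python
solar_in  = 250.0  # W/m^2, average solar input (figure quoted above)
human_add = 2.5    # W/m^2, rounded-up human forcing (figure quoted above)
T         = 300.0  # K, rounded global mean temperature

# Stefan-Boltzmann: E ~ T^4, so for small changes dT/T ~ (1/4) * dE/E
dT = T * 0.25 * (human_add / solar_in)
print(round(dT, 2))  # 0.75 K
```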

Expand full comment
Loris's avatar

So there are various things you could say about that.

But I think perhaps the most significant point is that what you're calculating here is based on historical emissions. Let us assume that this "back-of-the-envelope" calculation isn't missing any important factors, and is close to the true value.

It's still not reassuring, because carbon emissions have been increasing approximately exponentially since the industrial revolution began. I'm sure you can check for yourself, but you could start at https://ourworldindata.org/co2-emissions#cumulative-co2-emissions

(check out the interactive graph of cumulative world emissions).

The doubling time is something like 35 years, or perhaps a bit less.

So as of 2020, total cumulative CO2 emissions were 1.6 trillion tonnes, and (going by that graph) half of that had been emitted since 1989. So if we want to "only" double the amount of anthropogenic CO2 in the atmosphere, there isn't very long to act, particularly since everything is currently moving in the opposite direction.
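
A quick sanity check of that framing (illustrative; the growth rate is back-solved from the figures above, not fitted to the dataset):

```python
import math

# If cumulative emissions grow exponentially, half of everything ever emitted
# was emitted during the most recent doubling time.
doubling_time = 2020 - 1989                # ~31 years, per the graph
growth_rate = math.log(2) / doubling_time
print(doubling_time, round(growth_rate * 100, 1))  # 31 years, ~2.2 %/yr

# Under the same trend, emitting another 1.6 trillion tonnes ("doubling what
# we've already put up") takes roughly one more doubling time.
```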

Given the nature of Moloch, I think it's pretty clear that's a big ask.

Expand full comment
Carl Pham's avatar

Extrapolating exponentials is almost always wrong, because they grow and die so fast their long-term behavior is more typically determined by exogenous limits. That's why people who extrapolated the growth of COVID from its early rise ended up fabulously wrong. It's why if you extrapolate the growth in sales of Tesla out 50 years, you'll find they'll build the weight of the Moon in Model 3s every year or something equally silly. It's why the growth in revenue of a start-up is never sustained.

In this case, one obvious limitation is world population, which is certainly no longer growing exponentially, and if the most advanced countries (e.g. Japan, Europe, the US) are any reliable indication, likely to flatten or even start to fall in the next half century. The other obvious problem with extrapolating from the last two centuries is rapid industrialization: when a country first industrializes, energy use per capita explodes. But it does not *keep* rising so steeply, and indeed -- since energy inputs cost money -- it's more typical for advanced industries to start using *less* energy per unit output, and, given stabilizing population, necessarily per capita as well.

Now, it's always possible that we could all start wanting an exponentially growing number of iPhones in our homes, or products and services that become exponentially more energy intensive to create or supply -- everybody soon needs his own personal 100 MW power source to power his gravity-warping augmented reality field, or interstellar-capable cell phone, something like that -- but these don't seem a priori likely.

Expand full comment
Loris's avatar

I agree that one shouldn't naively extrapolate forever. But if one sees a strong trend over a long term, and there's no indication of any other effect, the most sensible short-term prediction is continuation of that trend.

I don't think population is particularly relevant to this in the short term. Industrialisation, certainly is. But we're not near the end of that. There's more than enough fossil fuel to burn, and plenty of industrialising countries.

You seem to have the idea that I'm arguing that we'll experience infinite growth, forever. But I'm not. If you read what I've written in the context of what it's responding to, you'll see that I'm saying that we can't be complacent about past emissions being within an acceptable range, since those emissions predominantly happened very recently, and we'd have to change what the whole world is doing pretty quickly to keep them within tolerances.

In general, and barring catastrophe, exponential growth of pretty much anything doesn't just immediately stop. Instead it tapers off. The doubling time gradually gets longer and the level eventually reaches a plateau. We can't necessarily afford that. If, as in the back-of-the-envelope calculation in the original post, CO2 emissions to date have led to an average worldwide 0.75 degree C increase, 1.5 degrees is the target, and increases over that are increasingly bad, even just sticking at current levels isn't good enough. And the world as a whole isn't even managing that.

Expand full comment
George H.'s avatar

Yeah 35 years to double (our CO2) seems about right. That gives us plenty of time to build more nuclear power plants. (let's get started. :^)

Expand full comment
Loris's avatar

I'm not averse to that, but I think it might still be a bit of a scramble. I think it's pretty important to recognise that there is an issue with the way we're going, so as to be able to deal with it.

If the problem with "climate scientists" is that they've identified a legitimate and very significant problem which will develop in the future and which needs preventative actions, and are working to bring that to the attention of the world - I have to say I don't think it's a problem at all.

Expand full comment
George H.'s avatar

Have you read the book or the book review?

Expand full comment
Loris's avatar

No, I haven't, I'm responding to what you've said.

If the book talks about people misrepresenting the science, that's fine and I don't have anything against it.

It just seemed to me that the calculation you gave was misleading. Not that it's wrong, just that given other information I've gone over, it's not cause for complacency. Given the speed at which the world can collectively act, there is very much a need for urgency in dealing with e.g. CO2 emissions.

Expand full comment
George H.'s avatar

I'm quoting, then, from the introduction (taken from the book review): <quote>

He lists four discoveries:

Humans exert a growing, but physically small, warming influence on the climate. The deficiencies of climate data challenge our ability to untangle the response to human influences from poorly understood natural changes.

The results from the multitude of climate models disagree with, or even contradict, each other and many kinds of observations. A vague “expert judgment” was sometimes applied to adjust model results and obfuscate shortcomings.

Government and UN press releases and summaries do not accurately reflect the reports themselves. There was a consensus at the meeting on some important issues, but not at all the strong consensus the media promulgates. Distinguished climate experts (including report authors themselves) are embarrassed by some media portrayals of the science. This was somewhat shocking.

And the kicker (emphasis mine):

In short, the science is insufficient to make useful projections about how the climate will change over the coming decades, much less what effect our actions will have on it.

<unquote>

There may be less urgency than you think.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

I’m fairly sure that the rate of change is slowing significantly though.

Expand full comment
Loris's avatar

In what sense?

Pandemic-related measures may have caused a temporary decrease in emissions, and maybe there's not been quite as much growth - but I've not seen anything about a sustained decrease.

In fact there's some reports to the contrary:

https://www.iea.org/news/global-co2-emissions-rebounded-to-their-highest-level-in-history-in-2021

Remember, to stick to a doubling (the target, with just a 1.5 degree C rise), one approach would be to reduce emissions at the rate they increased. If we 'stick' for any length of time, we'd have to go down much faster later.

I would suggest that realistically, that isn't likely to happen.

To be clear - it's not a hard cut-off. I'm not trying to scaremonger.

But it's reasonable to wonder- just how far are we going to overshoot that target?

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Well, that's a bit of cherry-picking since it's from covid to post-covid. Also coal became more predominant - hence the 6%. It's not universal, but the increase in carbon-free technologies is on the cusp of becoming dominant - the U.K. for instance saw a 30% drop from 2000-2020, which means it is past peak carbon. Most of the major polluters are non-Western these days; however, the technology is there for all countries to catch up.

Expand full comment
Loris's avatar

I think you've fixated on something the consequences don't care about. The most recent data available is the highest ever recorded - that's not cherry-picking. It doesn't matter why the world's emissions went up, just that they did.

A few small countries reducing emissions at something approaching the necessary overall rate (2% per year) doesn't offset large countries going up significantly. Basically every country needs to wind back emissions to make a dent.

If the technology is 'there', why is it that /any/ countries are still ramping up? Doesn't it seem suspicious that the countries which don't care about the environment are ignoring it? If the tech were actually better (i.e. cheaper) than what came before, it would replace it across the board!

It's clearly not there /yet/.

Sure we can hope, but treating it as a fait accompli is a mistake. Work needs to be done.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Firstly, I said the “rate of increase” is falling, as a response to your claim of acceleration. And the U.K. is not a “small country” in terms of GDP - it’s 5th in the world. In fact the EU has managed the same feat - a 20% reduction in 20 years despite a large GDP increase in that time. It is true that the US is an outlier amongst developed nations and that the Chinese are taking up some slack, but policy changes in both countries mean they are coming on board too. Net zero is achievable in the time frame. 2021 was disappointing but also an outlier.

Expand full comment
Thor Odinson's avatar

That level of back-of-the-envelope maths is only good for getting to within an order of magnitude, so you shouldn't be surprised to disagree with more complete models by a factor of 2 or 3.

Expand full comment
George H.'s avatar

Huh, did you read the book or the review? Being a back-of-the-envelope calc doesn't mean a factor-of-ten error. In this case you can doubt either the numbers (total solar (and geothermal) energy, man-made contribution, or average temp) or the exponent E~T^4; otherwise it's a small change and a ratio, and please tell me what factors are changing so much with a 1% change in the energy input.

Expand full comment
User's avatar
Comment deleted
May 9, 2022 (edited)
Expand full comment
George H.'s avatar

Humans are a ~1% change in the energy in (stored); there are lots of feedbacks, but which ones change by a lot with only a 1% change in the energy in?

Expand full comment
User's avatar
Comment deleted
May 10, 2022 (edited)
Expand full comment
George H.'s avatar

Yeah, positive feedback leads to things blowing up, or hitting the power supply rail. Positive feedback in the climate that only turns on with this last 1% change seems unlikely (though possible). The climate models are all over the place. (Did you read the book or the review? I liked the second review better.) A spherical earth is a pretty good approximation. :^)

Expand full comment
Carl Pham's avatar

The existence of positive feedback loops in a system that has been stable for a hundred million years is very unlikely. If climate had large responses to small perturbations, it should have swung wildly around chaotically all through Earth's history.

Edited to add: that doesn't mean there can't be positive feedback results on a *short* time scale -- almost certainly there are. But the long-term stability of climate implies there are larger negative feedback loops that will limit the effect of any short-term positive feedback loops.

Expand full comment
Concavenator's avatar

It has. Earth has been drifting in and out of glaciations dozens of times in the last million years alone. In the Eocene epoch, 50 million years ago, there were jungles in Greenland and crocodiles at the north pole. At the end of the Permian, during the largest mass extinction in the fossil record, the global temperature was as much as 10°C above today, and this was a few million years after an ice age. At least thrice in the hundred million years before the Cambrian the Earth was covered in solid ice down to the Equator, and the third time was immediately followed by a period as hot as the late Permian. Over geologic time, the climate is absolutely //not// stable, not in the sense relevant here.

Expand full comment
Carl Pham's avatar

Your definition of "wild swings" is exquisitely too sensitive. If you want to see what a wild swing in climate is like, consider Venus or Mars. A swing of 1-5K out of a typical average of 300K is quite small, and glaciation[1] is a pretty mild change in climate -- which, by the way, almost all species survived just fine. I find it quite unlikely a new Ice Age would significantly threaten human civilization, let alone survival. We're supposed to be capable of living on Mars, right? Living on a glaciated Earth is a far lighter challenge.

I do agree that Snowball Earth *was* a big swing, assuming it happened, but so far as I know nobody suggested it happened less than ~1 billion years ago, or is likely to repeat.

------

[1] The USGS suggests at the last glacial maximum about 25% of the Earth's land area was covered in ice, versus 11% today.

Expand full comment
Nancy Lebovitz's avatar

https://clearerthinkingpodcast.com/episode/103

One hour interview with a deradicalized jihadi recruiter who died recently.

More than a little interesting, since I'd wondered about the mental state of jihadist recruiters, though perhaps I was more curious about handlers.

Jesse Morton had human motivations, though some of them were extreme.

His father abandoned the family, his mother was violently, compulsively, constantly abusive. He asked for help a few times. He didn't get help.

He didn't just end up hating his mother or the people who didn't help. He hated America because he felt betrayed. And because he had PTSD, which can lead to rage of various sorts.

So he got recruited and became a speaker to Americans. He was energetic and effective. (How come I never hear deradicalization stories from people who were ordinary shlubs in radical movements?)

His motivations were past trauma, the usual community and purpose, and pleasure at being good at what he was doing. He was in a community which was good at keeping people from seeing they were committing atrocities.

And he got caught, and was deradicalized, and liked America pretty well. (Past tense because he died recently.) He did covert intelligence work against jihadists until The Washington Post outed him.

He still loves the Koran, but thinks the viciousness came in the hadiths, the later interpretations of the Koran.

Since he couldn't be undercover any more, he went public and did deradicalization generally instead of specializing in jihadists.

He believed that people get deradicalized when they're extended more compassion than they deserve. I think I've seen a few examples of people who deradicalize from spontaneous insight, but no examples of people who deradicalize from being argued with or insulted.

He didn't get into the question of what you do if you don't have the internal resources to be compassionate.

Expand full comment
David Piepgrass's avatar

I wish I knew how to be compassionate to the legions of crazies promoting extreme views on the internet, when the only thing we know about them is the crazy stuff they say.

Actually I tried my best with my hyperMAGA father, e.g. reminding him about how not-crazy he acted when I was a kid. Sent a copy of Scout Mindset, even. It was completely useless, he learned nothing from the book and just commented that the author "overthinks things". It seems I have a long way to go in the art of deradicalization.

Expand full comment
FLWAB's avatar

>(How come I never hear deradicalization stories from people who were ordinary shlubs in radical movements?)

Ordinary shlubs don't interview well. If you were competent and charismatic enough to rise at least a little in an organization then you're probably competent and charismatic enough to get an interesting interview out of.

Expand full comment
cromulent's avatar

I became deradicalized through argument/insight, although it was about religion in general and so leaving islam led to leaving radical islam as a side effect. Back in the days of new atheism being big. But it was 'nice' arguments/debates so compassion or at least lack of hostility is still important yeah. Don't know why you haven't come across ordinary schlubs from radical environments but it does happen. Probably to do with ordinary schlubs not being the types to be able to get famous enough for you to come across them afterwards.

In any case, in a similar vein i'd recommend 'i'm not a monster' podcast series about an american who went to join ISIS and 'the return: life after ISIS' documentary about a british girl who went to join ISIS (shamima begum) as well as the episode on the 'red handed podcast' also about shamima.

Expand full comment
Matan's avatar

Random question here:

Does anyone know of someplace to watch groups play intrigue games?

Games like the board games diplomacy, dune and game of thrones but any recommendations will be much appreciated!

Expand full comment
Axioms's avatar

https://www.youtube.com/watch?v=Js_WINTqtjo

This was an excellent series about people playing the famous Avalon Hill Dune.

Expand full comment
User's avatar
Comment deleted
May 9, 2022 (edited)
Expand full comment
Matan's avatar

Wow thanks!

Not what I was looking for but I'll definitely check it out

Expand full comment
Fika monster's avatar

I got an apple watch SE last week (yes, very middle class i know), and im wondering if people here have tips and tricks

Main reasons i got it:

-Health (exercise more and better)

-Be less sedentary (ideally I want to move every 25 minutes, but the Watch is set to every 50 minutes)

-compensate for my adhd/autism brains poor executive functioning

-Be better at building habits, minimize effort required. maybe use beeminder with it in the future.

The main things it has done for me so far is to illustrate that i sleep too little, make me be a bit less sedentary, Motivate me to have slightly longer workouts, and find my phone much more easily. i have Some notifications on it, pretty minimal, but seeing my calender there helps.

On another note: i want to build good habits, but i get overwhelmed when choosing a habit app: I want something that works with apple watch/iphone and easily puts my exercise and such in it without me having to constantly check and update it.

Expand full comment
Melvin's avatar

Why do you feel the need to apologise for your middle class behaviour? Do you see yourself as a member of another class? Do you wish for others to see you as a member of another class?

Expand full comment
Domo Sapiens's avatar

Have you tried Tangerine? It looks to be pretty exhaustive and well integrated. I'm just starting to try it, so no experience to share.

Other than that, aim high with your calorie goal per day: no use if it's too easy to reach. But since you can't put in "rest days" you either have to not care about the gamification aspect of the completion awards or find a reasonable middle ground. For me 500cal is the absolute minimum as a goal but I will go back to about 1000. I was feeling best when I did that for a year.

Expand full comment
Fika monster's avatar

I'll give tangerine a look

Expand full comment
Andrew Gough's avatar

Documentary on 2020 voter fraud;

https://2000mules.com/

Expand full comment
User's avatar
Comment deleted
May 9, 2022
Expand full comment
Andrew Gough's avatar

Cell phone location data is accurate enough to let me know where my door dash order is.

Expand full comment
Axioms's avatar

There were like 3 Republican political operatives and like one black woman who really thought she was allowed to vote after prison or w/e, and that was all the 2020 voter fraud. Unbelievable that people think voter fraud ever even gets to triple digits, much less the 4 or 5 digits needed to swing federal elections.

Expand full comment
Freedom's avatar

I don't know if it's fraud per se, but is it the case that rejected mail-in ballots went from like 6% to 0.1% because they just stopped rejecting signatures? And also it was documented that people were paid to harvest ballots and bring them to drop boxes which was against the law? Those are two things I have heard but have not heard rebutted.

Expand full comment
Axioms's avatar

Well, the most prominent ballot harvesting operation - and I think the only one that impacted a race (in fact, I think they re-ran it) - was a Republican operation in North Carolina.

On the issue of signatures, it is important to understand that signatures are basically worthless. I personally would be almost unable to use banks or credit cards if they applied the kind of standards to signatures that Republicans want to apply to voting.

There's no strong evidence that signatures are holding back fraud, but there is plenty of evidence that signature matching impacts legitimate voters. The same goes for all the other Republican "anti-fraud" efforts. A lot of these checks are also quite open to interpretation, so you could easily engage in biased decision-making about what seems fishy.

Expand full comment
Melvin's avatar

Isn't this exactly what Republicans have been saying? Signatures are useless for preventing fraud?

And therefore mass mail-voting is hilariously insecure -- even more so than ordinary US elections?

Expand full comment
John Schilling's avatar

That mail-in voting is hilariously insecure does not mean that large-scale fraud is occurring. Almost every house in the United States is hilariously insecure against burglary, yet in many neighborhoods burglary is rare.

Being worried that a malicious actor might try to subvert US elections in the future is one thing, but if you're going to claim that a particular group has *actually done this*, you need much better evidence than "well it's massively insecure so they could have!".

Expand full comment
Skivverus's avatar

I mean, Tammany Hall was a thing. And it's certainly not "unbelievable" that electronic voting machines could have backdoors or vulnerabilities allowing for "fixing" their totals. See, for instance, https://xkcd.com/463/

The catch is that voter fraud only really matters (so long as the Electoral College sticks around, at least) in swing states, and there - one hopes - whichever organization is committing the fraud has an opposition capable of checking for it.

Open question these days whether or not such accusations would be believed, though.

Expand full comment
Axioms's avatar

I mean sure, and as I mentioned in another comment in theory you *could* employ some of the very interpretational anti-fraud stuff like signatures to make big data decisions on who to exclude if you wanted to help one side.

It isn't unbelievable that some kind of fraud could happen; it is unbelievable that people think it does.

Expand full comment
Melvin's avatar

> It isn't unbelievable that some kind of fraud could happen; it is unbelievable that people think it does

I used to work in fraud prevention so I'm puzzled by this sort of thought. If a form of fraud can happen, it most certainly does happen.

I'm really puzzled as to why anyone would think it didn't. The 2020 election season featured people burning down department stores, storming the Capitol, multiple attempted assassinations... are people under the impression that nobody ever thought to steal a ballot paper from someone's mailbox?

Expand full comment
Skivverus's avatar

From the mailbox? No, you wouldn't get scale that way, at least not without being too obvious. The place to look for fraud at scale would be wherever the ballots are delivered, where you could, say, have a bunch of extra pre-printed, pre-"signed" ballots stashed away somewhere on the premises, which you bring out after you've kicked out the opposing party's observers (or whenever you can sneak them in without said observers noticing). You "count" those, instead of whoever's you had originally.

Bonus points: swap them out between the initial count and a recount, then accuse the *other* party of fraud.

Expand full comment
B Civil's avatar

Yeah, 11,000 of them in GA alone. And every plundered mailbox belonged to a Trump voter.

There otta be a law

Expand full comment
Moosetopher's avatar

After all, the only reason GWB was re-elected was hacked Diebold voting machines in OH!

Expand full comment
Axioms's avatar

Man this subject was very big in my region back then. Mostly speculative rather than with die hard believers. But tons of paper ballot advocates.

Expand full comment
Edward Scizorhands's avatar

We did get good reforms out of it, and out of the chad issue from 2000.

Right now in most (not all) of the US we have something pretty close to the best of both worlds, where people use a machine to fill out a ballot, which is counted by a different machine.

There's still work to be done, and it requires constant vigilance, and people need to take security seriously and not only when their opponents win, and mail-in ballots should be discouraged.

Expand full comment
Bullseye's avatar

I'm still in favor of paper ballots. Even if Diebold wasn't doing anything shady, there's always the potential for a future voting machine to have dishonest or incompetent programming.

Expand full comment
Jim's avatar

I once heard a story that David Gelernter made the argument for an online avatar of ourselves, which understands our preferences and pre-screens our actual activities. Suppose you like italian food and overseas travel - you tell your online avatar this, and then rather than you spending hours of your own time online searching for italian/travel things, it trawls the web and does that for you, and only notifies you with new and relevant content, or comes to you with a pre-filled out itinerary for a flight to Rome and potential bookings for a dozen excellent restaurants when there are cheap flights and when you're going to have some leave time you can use. (Otherwise it doesn't tell you.)

I say all of this because a) this is a service that I'd like to use despite the creepiness, and b) I think about this every time I'm sitting there endlessly trawling through some mediocre online shop trying to see if they have shoes my size (bc often they don't, and then I've just wasted my life trawling crappy online stores). But also, internet services aren't designed to *reduce* the amount of time I spent online, they're trying to maximise it. After all, Instagram is loaded with ads and sponsored content, and yet they're trying to keep you engaged as much as possible, and web content companies openly talk about increasing their eyeball time with their investors - are they not incentivised enough to actually sell you something? Is the going rate for advertising too high relative to the kickback from purchasing something? (I can't help noticing that Apple shows me my time spent on my device, but then Apple has already been paid when I bought the thing. Facebook isn't an upfront purchase. )

I feel like someone is missing a trick here, or there's a business model which isn't getting deployed, but maybe there's a good reason for that. Sci fi routinely depicts tech heavy worlds which show people less connected to devices. I want to argue that there's a very high abstract demand for ways to get us offline, though I'll admit that's not necessarily how you'd actually behave when presented with the option in front of you.

Expand full comment
B Civil's avatar

Hire a good butler.

Expand full comment
Lumberheart's avatar

Your first paragraph just sounds like a personal assistant, but AI instead of a real person. I've heard that some higher-end credit cards also offer those services to cardholders (but I can't find where I might have heard that).

As for advertising and time spent on websites, keep in mind that the goal of Instagram and other social media sites isn't to sell you products. Their goal is to sell advertising space to the companies that do - and the longer you're using the service, the more ads you see and the more they get paid. If you click an ad and buy something with that referral link, even better. But sometimes you can't get immediate action, such as with restaurants or movies, and you just want to increase brand awareness.

I wish more websites would do what Tumblr has recently done and implement subscription-based ad-free options (as opposed to Twitter Blue, which explicitly *doesn't* remove ads and only fixes inherent flaws in how people use/view long tweets).

Expand full comment
Bugmaster's avatar

AAA video games have some pretty stiff hardware requirements; realistically, you need a GPU that is at most a couple years old. On the other hand, user interaction in these games is usually pretty limited. Yes, the worlds they render can be quite detailed, but the user can only execute a few logically simple (though admittedly computationally expensive) and predictable actions. On top of that, once the game is released, its functionality is pretty much set in stone (plus or minus a few DLCs). Business software is the exact opposite of that. It has to run on lowest-common-denominator hardware, especially cheap (and extremely low-power) laptops. It has to execute many complex business-logic functions simultaneously. And it has to be easy to extend and maintain, as new functionality comes on the market. On top of that, unlike your game, business software has to be interoperable with other business software, not just with itself.

So, would it be possible to build e.g. a GPU-accelerated word processor that runs like a video game? Definitely, but very few people would buy it.

Expand full comment
Hamish Todd's avatar

The UI thing is not true at all; lots of games are more complex than word processors. A simple example that comes to mind is people routinely building Turing-complete systems in Minecraft. Or consider Blender, which does what games do and has a wildly complex UI but is much better than most of the world's software because it is coded in the style of a video game.

Apps with basic UI and no interoperability like train-booking systems chug all the time.

As to the GPU thing, Scott said "millions of polygons and complex interacting physics". That describes all triple-A games that were made a decade ago, and they run on bog-standard GPUs today.

Folks interested in this degradation might want to look at the thoughts of Jonathan Blow:

https://www.youtube.com/watch?v=oJ4GcZs7y6g

https://www.youtube.com/watch?v=pW-SOdj4Kkk

Expand full comment
Bugmaster's avatar

You say "degradation", but business software was never particularly fast -- at least, not once you got to the point where word processors became practical (I loved the original MultiEdit for DOS, though). It only seems slow today in comparison to video games, which *have* become faster (due to GPU acceleration, mostly). That said, I also disagree with you on UI/UX. Yes, you can achieve complex results in Minecraft; but it's actual UI/UX is vastly simpler than that of e.g. Word. Blender is somewhat of an exception, since it is designed specifically as a 3D graphics program. Unlike Word, it is unusable without a reasonably powerful GPU; like Word, its actual UI/UX interactions (clicking buttons, moving sliders, etc.) are reasonably fast but not stellar -- by contrast with e.g. rotating objects in the 3D scene. Train-booking systems are a completely different beast (if I understand you correctly) -- these are Web applications, whose entire logic runs on the server, and is thus subject to network latency (in addition to all the abovementioned constraints due to running in the browser). Keep in mind that real-time video games become virtually unplayable once latency hits 100ms or more.

Expand full comment
Jack Wilson's avatar

It's a common trope that popular culture has fragmented over the past two decades, but isn't that merely a continuation of a centuries-long trend? A thousand years ago, in Christendom, in the Islamic world, the Hindu and Buddhist world, the stories from the respective religious books were basically the popular culture of the time and place. No?

500 years later, some other books and plays got printed in Europe, and more literature became part of popular culture, including rediscovered culture from ancient Greece and Rome. Then came printed sheet music, operas and symphonies. Over the centuries, more books, pamphlets and newspapers became a part of popular culture.

I suppose it's hard to say that much literary and art culture in the 18th century was "popular culture", since the median member of society at the time didn't have access to it from their pig farm, but what has changed since then that now allows the pig farmer or would-have-been pig farmer to consume artistic culture, be it high or low? Isn't it mostly that we have raised the economic status of the pig farmer so that he may now enjoy the opera if he chooses and not that we have brought the opera down to the level of the pig?

In the 20th century we get movies, radio, sound recordings. Then network TV, cable, the www. Well before streaming channels and Spotify, the popular culture of movie stars and singers had begun to eclipse the popular memory of Biblical stories -- once the key reference point for all pop culture in the West. An entire canon of secular Literature developed in parallel to the religious literature which had once dominated everything.

My question is whether the fragmentation continues for the next thousand years or whether some consolidation of cultural memory returns which replaces small stories with big myths.

Expand full comment
Nick O'Connor's avatar

I don't think it's so much that there has been a movement towards fragmentation over the past thousand years, more that as culture and technology have changed, mass culture has integrated and fragmented as a result of that, over a number of axes (on a global scale, on a national scale, on a local scale, for the political, cultural, or economic elite, for the population as a whole, in relation to a group's level of integration with its cultural past, etc.).

So for example most mass culture used to be oral and hyper-local, with adjacent villages having different though related cultures (for this reason, I don't think it's right to say that their cultures were just the stories from their religious books). The spread of literacy, radio, TV, cheap/quick/safe transport, migration, etc. has ended a lot of this. The imposition of national languages in Europe on mutually unintelligible dialects again led to integration of popular culture on a national level, while the increased importance of these languages fragmented an elite pan-European culture based on the use of Latin. The dominance of Hollywood and American popular culture has integrated global culture over the last hundred years, as has the broader globalisation over the past few hundred; the rise of China and India, as non-Western powers, to global prominence may well fragment global mass culture.

If I had to guess, I'd say that the current trend (different places around the world sharing more mass culture, even as people living in the same place share less mass culture with each other) will continue, which is a very boring answer. Maybe a slightly more interesting one is that I think it really matters that people living in the same place share the same culture, so places that manage to sustain this (even if as part of a mass global culture) will do better than those that don't, and there will be a significant backlash against the subredditisation of culture. A lot of people will confuse this with a backlash against global mass cultures, which won't really happen.

Expand full comment
Stephen Lindsay's avatar

Fragmentation can’t continue indefinitely. At some point there is so little holding the failed society together that it either adopts the culture of some more attractive outside society or re-coalesces around some new unifying force from within (usually developed at the frontiers of the original fragmenting society). The new unifying force has always been a religion historically. Will be interesting to see what form that takes in our future.

Expand full comment
Bullseye's avatar

People couldn't watch movies or read printed books in the middle ages, but they still told stories and watched plays. I think pop culture was probably very fragmented, with every country telling different stories, and every story had countless variations as different people told it. Every Christian country heard stories about Christ, but only England heard about Robin Hood. At least one priest complained that people liked Robin Hood better.

Expand full comment
Nancy Lebovitz's avatar

And they sang, and I assume they played music. For that matter, games are part of popular culture.

Expand full comment
B Civil's avatar

I think music is the big one.

“Folk” music (in the broadest possible sense)

Endless variations in narrative and melody. Moral and behavioral instruction, the works…

Expand full comment
Eremolalos's avatar

Who here knows something about how you succeed as a realtor? I’m trying to help a young guy who’s attempting to do that, and has made very little money in his first year of working. He’s a smart, nice-looking, polite guy in his late 20s, and I’m sure he comes across to his clients as honest, conscientious and intelligent, all of which he is. He has looked into how one gets sales, and has managed to get a bit of internet professional presence going. He pays for various services that give you various kinds of leads. And he works his ass off to help his clients and learn more about how the business works. But he’s getting nowhere.

I myself am impractical and about as uninterested in business as it is possible to be. But my intuition is that there are 2 main reasons why he’s getting nowhere. The first is that the profession of realtor may be dying out, the way travel agents and malls died out. More people find things via online searches, etc., and do not want a realtor in the mix, taking a chunk of money. The other is that he just is not old enough and rich enough. He does not have anyone in his social network who is in a position to buy a house, so he’s not in a position to do “networking” to find buyers. And he drives an econobox — I’m sure his clients can tell that he himself has never bought a house, and is not in a position to do so now.

When he first started out he was doing rentals, where you make small sums, but now he is doing house sales. Apparently it is much better to represent sellers than buyers, but sellers are hard to get. He mostly represents buyers, and spends lots of time showing them place after place.

Is there hope? If so, where the hell is it?

Expand full comment
Andrew Flicker's avatar

Age is going to be a challenge, for sure. A lot of people are hiring a realtor because they want a trusted, experienced agent- a young kid driving an econobox does not give that impression. My mother is a successful fulltime realtor (she used to teach, and quit teaching when she made more money working realty part-time than my dad did in his full-time job). This, despite her getting into it as a middle-aged woman in a rural area!

A few things that I'm aware of that led to her success:

- Just crazy hard work. She has a great work ethic, and put in a ton of hours, went to training, hit the books, networked, etc.

- Willingness to travel. Sure, you'll lose money on gas, but widening her effective range significantly helped her access to clients.

- Participation in other orgs for networking - Rotary, Chamber of Commerce, Realtor orgs, etc. She was rotating in and out of leadership roles in the Chamber of Commerce very quickly because in most small-to-mid size cities they suffer from a dearth of smart, interested people.

- Working with banks to give second-opinion appraisals. These aren't big money, but you get to see a LOT of houses, it's a small amount of steady income, you learn your areas better, and it helps build relationships.

- Working with a mentor. She spent a lot of time with a more experienced broker after her first couple of years, and joined a Remax group led by her. Sure, you lose a ton of % to the brokerage, but it's hard to beat the network, free advertising, mentorship, etc.

- Getting into real estate as soon as you can with your own capital - again, more experience. If you can't make money with your own capital, why trust you to help others? etc.

Expand full comment
Freedom's avatar

The key to success for a realtor is the size of your network. If you want to succeed, you should be spending most of your time networking- going to networking events, going to Rotary/Kiwanis etc. events, that kind of thing. Of course it helps to have a large personal network. Then connect with the people you meet and touch base with them periodically online. The key is to be top-of-mind when they need a realtor, because there are a lot of realtors and it is often difficult to tell the good from the bad. Mostly what they want is someone who is comfortable to work with.

Expand full comment
Erica Rall's avatar

Part of the problem is that there are tons of realtors competing for nowhere near enough clients to keep them employed anywhere near full-time.

From what I've seen as a homeowner who's dabbled in real estate investing, a significant part of how realtors build their client base seems to be through hosting open houses. A ton of the foot traffic at an open house is casual sight-seers who are at most just starting to think about buying or selling a home (often both: if you already own a home, buying a new one usually is paired with selling the old one), and every open house I've been to, the hosting realtor is very, very quick to feel people out for this and pitch themselves as potential agents to anyone who isn't in the market for that particular home but might be looking to buy or sell in the medium-term future.

One pattern I've seen is that sometimes junior realtors host open houses for homes where the actual seller's agent is busy with other active clients on weekends. I'm not sure if this is a routine practice at specific brokerages (I saw this from Keller Williams a couple of times) or if it's just a function of the seller's agent being busy enough that he faces a tradeoff between trolling for new clients and working for existing clients. So if your friend can find a position with a brokerage that does this, or can form professional connections with agents who have relatively full plates in terms of workload and client base, that'd be one way to build a client base.

Another avenue would be to get work by offering discounts on seller's agent commissions and rebates on buyer's agent commissions. There are brokerages that do this routinely (most notably Redfin), or your friend might be able to do this on his own depending on his arrangement with his broker.

Expand full comment
Eremolalos's avatar

Thank you Eric, and the other practical people who have commented.

Expand full comment
Jack Wilson's avatar

The one person I know who did very well in real estate had a great mentor he credits for much of his success. It's a field with no barriers to entry, so one likely needs a real ace in the hole to have half a chance. That's my two cents.

Expand full comment
hnau's avatar

In computing, speed gets optimized if and only if it pays. AAA video games compete heavily on being responsive, enjoyable, and gorgeous. Hardware manufacturers compete on performance benchmarks, especially for compute-intensive stuff like GPUs (which generally get used either to run those games or, even more speed-dependently, to mine crypto). Amazon knows exactly how much revenue it loses if its average load time increases by 10ms, and that number has more zeroes than you might expect.

But your average boring 2D software is pretty latency-insensitive below a certain threshold. Consumers come for the functionality, not the blazing speed. 100-200ms is about the threshold of what a user can perceive; even 1-2s is tolerable if it's not happening constantly. What are consumers going to do, run apples-to-oranges latency comparisons with other tools and post the detailed analysis in product reviews?

As far as the mechanics of making things faster go: caching dominates all other strategies in most contexts. Games, hardware architectures, and FAANGs use it a ton. But it's resource-intensive and notoriously hard to get right. No one's going to bother engineering it unless the payoffs far outweigh the risks.

(Source: I'm actively involved in cache design for a SaaS app where the latency threshold we care enough to investigate sits in the multi-second range... and that's an order-of-magnitude improvement over previous-generation competitors.)
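
A minimal sketch (Python; an editor's illustration, not code from this comment) of the kind of caching being described: memoize an expensive lookup so that repeat requests skip the slow path entirely. The function name and its two-second cost are invented; a real SaaS cache also needs invalidation, TTLs, and size limits, which is where the "notoriously hard to get right" part lives.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=4096)
def expensive_report(customer_id: int) -> dict:
    # Stand-in for a multi-second database/aggregation query.
    time.sleep(2.0)
    return {"customer": customer_id, "total": 42}

expensive_report(17)   # first call misses the cache and pays the full latency
expensive_report(17)   # repeat call is served from the in-process cache
```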

Expand full comment
Ferien's avatar

Doesn't this 100-200 ms figure refer to reaction-time measurements, which include the time for the brain to send a signal to the arm and for the finger to actually make the press? That is not the threshold of what a user can perceive.

Expand full comment
Maynard Handley's avatar

The 3D SW comment is problematic because it confuses two different issues: compute vs IO.

The primary reason SW is slow to open, and slow to respond the first time you perform a new function (whether pressing a menu or anything else), is IO. This becomes ever more of a problem the lower-end your hardware is, from high-performance flash through low-performance flash to old-style hard drives.

And while there are many things that can be done to make IO faster, it's the area of computing that demands the most backward compatibility -- poor decisions made in the mid 1970s can, to some extent, be worked around, but cannot simply be ignored, because while people will tolerate some degree of forced upgrading of SW, and a lot of "new game simply doesn't play on old card", they will tolerate very little of "your files from five years ago can no longer be read".

After this most important issue of IO, there is the basic fact that different problems have different structures. Those skilled in the art of understanding how modern CPUs work (which is, admittedly, a set vastly smaller than those claiming such expertise on the internet) appreciate that, to a substantial extent, the gating factor in the performance of one class of algorithms ("latency code", much of the stuff you run on your CPU) is serialization - step Z cannot happen before step Y which cannot happen before step X and so on. The gating factor in a different set of algorithms is simply how many compute engines you can throw at the problem ("throughput code", most of the stuff you run on a GPU [or NPU, or VPU, or ISP]).

You can't make a baby faster by throwing more women at the problem, but you can create an army of babies faster by starting with an army of women. Is your problem more like "I need to create one baby, and I need to wait nine months for it to happen", or is it more like "I need to create ten thousand babies every day, so I can just create a pipeline of dorms of women, each scheduled to give birth as appropriate every day"?
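
A minimal sketch (Python; an editor's illustration under the comment's framing, not the commenter's own code) of that latency-vs-throughput distinction: the serial chain is gated by its dependency structure, so extra cores don't help, while the independent batch can soak up as many compute engines as you have. `step` is a made-up stand-in for one unit of work.

```python
from concurrent.futures import ProcessPoolExecutor

def step(x: int) -> int:
    return x * x + 1  # stand-in for one unit of work

# "Latency code": step N+1 cannot start before step N finishes.
def serial_chain(x: int, n: int) -> int:
    for _ in range(n):
        x = step(x)
    return x

# "Throughput code": the items are independent, so throw engines at them.
def parallel_batch(items: list[int]) -> list[int]:
    with ProcessPoolExecutor() as pool:
        return list(pool.map(step, items))

if __name__ == "__main__":
    print(serial_chain(3, 10))             # bound by the length of the chain
    print(parallel_batch(list(range(8))))  # bound by how many workers you have
```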

Expand full comment
quiet_NaN's avatar

> poor decisions made in the mid 1970s can, to some extent, be worked around, but cannot simply be ignored, because [...] they will tolerate very little of "your files from five years ago can no longer be read".

What specific poor decisions do you mean here?

For substack comments, I would guess that the main factor for the delay is indeed the network IO. Loading comments on demand instead of all at once with the article turns a very simple system (with the drawback that you need to refresh manually to get new comments) into something more complex.

Expand full comment
Maynard Handley's avatar

I assumed the slowness in question was based on local storage, not network.

Problematic past decisions for local storage include things like

- the 512B sector (upgraded far too late to 4K sectors, with the whole "non-aligned" nonsense).

- pretty much every decision made regarding ATA, starting with addressing

- the entire UNIX (and then C) IO API

- the fact that C and UNIX (and thus two generations now of programmers) do not appreciate the difference between PERSISTENCE and ORDERING, and thus force flushes far more often than necessary to obtain ordering guarantees.

Many of these were eventually fixed (too late, and after much pain), many of them (eg 4K x86 pages, APIs, ordering vs persistence) are still not fixed.
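
A minimal sketch (Python on POSIX; an editor's illustration of the persistence-vs-ordering point, not code from the comment) of paying for durability just to get ordering: to guarantee that a journal record reaches storage before the data that depends on it, the only portable tool is fsync(), which also forces the far costlier persistence guarantee. The file names and record contents are invented.

```python
import os

def journalled_write(journal_path: str, data_path: str,
                     record: bytes, payload: bytes) -> None:
    with open(journal_path, "ab") as j:
        j.write(record)
        j.flush()
        os.fsync(j.fileno())  # forces persistence, even though we only need ordering

    with open(data_path, "ab") as d:  # must not hit disk before the journal record
        d.write(payload)
        d.flush()
        os.fsync(d.fileno())

journalled_write("journal.log", "data.bin", b"intent: append 4 bytes\n", b"DATA")
```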

Expand full comment
Ferien's avatar

x86 has supported 4M pages since the Pentium Pro, which is more than 25 years old.

How would 4K pages affect performance at all? The OS is under no obligation to load small 4K pages sequentially; it can emulate a larger page size by speculatively loading pages.

Expand full comment
Maynard Handley's avatar

Large pages are completely irrelevant to the issue.

Of course sequential sectors on disk are possible (not just in IO, but also in the VM system as a whole, eg COLT, which is apparently implemented in some form on AMD).

But enforcing 4K as the baseline sector size (just like 512B before it) has strong implications for the amount of overhead (in finding your place on a disk, or in breaking pages into smaller units with flash) and for ECC design. Until the market *as a whole* is willing to move on to larger minimum allocation units, that overhead has to be paid by every disk design, regardless of whether the OS using the disk in fact plans to use minimum 128kB (or whatever) chunks.

Expand full comment
Kei's avatar

For any nuclear history buffs out there - when do you think the first nuclear weapon would've been made if the Manhattan project never happened?

From my read of wikipedia, it seems that while the Russians did make a nuclear bomb by 1949, they were helped along by outside intelligence, and only redoubled their efforts after 1945. The article also mentions that some Russian physicists had been skeptical a nuke was possible. Separately, the UK only made a nuclear bomb by 1952, and I think were also helped along by the work on the Manhattan project.

I understand this requires some assumptions around what the US and American physicists do in lieu of the Manhattan project, so feel free to make any reasonable assumptions.

Expand full comment
B Civil's avatar

I think you underestimate the desire at the time to end the war quickly and decisively.

I find it hard to come up with a list of that time

I seriously doubt that a cost benefit analysis would have meant the bomb was too expensive.

Expand full comment
quiet_NaN's avatar

Your analysis has the benefit of hindsight. The Manhattan Project cost some 2G$ (of 1939). From what I understand, the total GDP of the US 1939-1945 was about 977G$ (of 1939), so they spent about 2/1000th of their GDP on the project.

For comparison, that is about 32 times as much (compared to the GDP) as Switzerland spends on CERN. Considering that the bomb was not ready to be used before German capitulation, it is hard not to think of the Manhattan Project as a gamble by the US government. It paid off for the US -- especially wrt the cold war, but nobody in 1939 could know that.

If instead the Einstein–Szilárd letter to President Roosevelt had concluded that influential non-emigrated German physicists were too focused on their "Deutsche Physik" to even consider such an idea, Roosevelt might well have concluded that having physicists work on improving radar would help the war effort more.

I think the Western allies were also the only group of countries where physicists could propose such a moonshot project knowing it might fail:

"Dear Comrade Stalin, after an exhaustive research of the fission cross sections of different isotopes, we have to conclude that nuclear weapons are impossible. As a consolation, we have discovered two new elements which we named Sovietium and Stalinium. Please accept my personal regrets that our initial optimism was misplaced and our sincere thanks for funding our research. Best of luck with the war effort."

By contrast, I could totally see Fermi or Szilárd writing such a letter to Roosevelt if needed -- while it certainly would not have furthered their career, they very likely would have kept their lives and tenure.

Expand full comment
B Civil's avatar

The total cost of the Manhattan project was $23 billion in 2020 currency. I don’t know how that was spread out over the six or seven years of its lifespan, but as most of the money was spent on infrastructure for manufacturing and isolating uranium isotopes, I have to assume it was rather weighted toward the end of its timeline.

It was pretty clear that the Germans were pursuing it so I really can’t imagine leaving it off the table. I reserve my right to be totally wrong.

Contemporaneous estimates of loss of life in an invasion of the Japanese homeland vary. The US military was projected to suffer somewhere between 300,000 and 1 million fatalities, and the loss of Japanese life was estimated at between two and 10 million.

Expand full comment
Thor Odinson's avatar

I'll try: I think the minimal change required for "no Manhattan project" is one where the USG decides it costs too much to invest in during the war; none of the other Allies had the necessary funds themselves. There's probably still a mini Manhattan project working out the theory side of it, but no investment in the centrifuges required to purify the Uranium.

In this world, I think the Cold War would soon enough trigger a full-scale Manhattan project, probably delivering the bomb by 1950 at the latest. The USSR is probably less far behind, but given how cash strapped they are I expect they'd still be slower than the Americans.

The bigger question IMO is whether the Cold War would have gone hot absent nuclear weapons - there were a few tense points early in the Cold War, and maybe Russia would have been more aggressive if America had no nukes to threaten it with. It probably gets more of Manchuria from Japan's surrender, and might manage to take West Berlin. I don't expect that the extra economic gains there would enable it to win the Cold War, but they might prolong it.

Expand full comment
Gulistan's avatar

I submitted a proposal to Future Fund back in March and haven’t heard anything. Do they respond to all proposals? Or should I take the silence as a “no”?

Expand full comment
ColdButtonIssues's avatar

That's late. Everyone I know heard back. Also see https://twitter.com/EAheadlines/status/1512087862149206020

Expand full comment
Gulistan's avatar

Thanks!

Expand full comment
Cosimo Giusti's avatar

Recently I received an emergency medical diagnosis of Cannabinoid Hyperemesis. The ER doc said it's being taught in medical schools, now. The short explanation is that habitual marijuana smokers can build up cannabinoids over several years of use, and experience a toxic reaction. It can result in spells of vomiting and extreme abdominal pain, a loss of weight and muscle cramps. Some who experience the phenomenon take hot showers to relieve the discomfort.

The literature I've found online -- from the Mayo Clinic, the American College of Gastroenterology, Analytical Cannabis, Business Insider, and CNN -- supports the theory, although the tone of much of it sounds like Reefer Madness 2.0. As several of the symptoms described are similar to my own, I've reduced my use by around 70%, and switched to a pipe from joints.

Has anyone else experienced Cannabinoid toxicity or Hyperemesis?

Expand full comment
Acymetric's avatar

>The short explanation is that habitual marijuana smokers can build up cannabinoids over several years of use, and experience a toxic reaction.

I know this was the "short" explanation, but it feels like there must be more to the story given the number of long-time heavy smokers who never experience this.

Less importantly, this is one of those things that I'd never heard of until about a week ago and now I'm seeing it everywhere which makes it both interesting and slightly suspicious.

Expand full comment
Cosimo Giusti's avatar

I just heard of it a week ago, too. Makes me suspicious. Is this just the backlash to legalization? An earlier ER doc insisted a hernia was the source, another blamed an intestinal blockage. I don't do fat, alcohol, tobacco, or oxycodone. Still, a cannabinoid fast may be worth doing.

Expand full comment
User's avatar
Comment deleted
May 10, 2022 (edited)
Comment deleted
Expand full comment
Cosimo Giusti's avatar

I think you're right regarding stopping cannabis altogether. I tried briefly reducing consumption by 70%, but it had no effect. After a week of diarrhea, I had another flare-up. I didn't call for an ambulance, due to the possibility I had brought it on myself, but just rode it out. It lasted 20 hours and cost me 10 lbs. or more. So I packed up 3 ounces of weed and set it in my closet. It's joined my untaken Percocet. Maybe they'll try to disable each other. Thanks.

Expand full comment
Dustin's avatar

Lots of people very confident in what The Problem with software is. My best guess is it's a mish-mash of a ton of different things from technical to social.

One thing I would say is that IME (working with/under/over say a dozen such people), a not-insignificant proportion of video game developers are excellent at developing performant software...that is barely maintainable!

Maybe this is because video games often have short shelf lives after release so they haven't had to develop those skills? Just speculating.

Another speculation...is there a bit of tradeoff between maintainability and performance? I can think of some occasions where this is the case, but I'm not sure if it's a generalizable Law Of Computing.

Expand full comment
pozorvlak's avatar

It's 100% possible to write software that is both slow and unmaintainable, but yes, in general these two requirements are in tension. To a first approximation, you build maintainable software by introducing layers of abstractions and making everything uniform; you build fast software by eliminating abstractions and special-casing performance-sensitive sections.
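
A made-up sketch (Python) of that tension, not taken from any real codebase: the first version is built from small, uniform abstractions and is easy to extend; the second strips the abstraction away and special-cases the hot path over a flat array (numpy is assumed only for the fast path).

```python
from dataclasses import dataclass
from typing import Iterable

import numpy as np

@dataclass
class Sample:
    value: float

def sum_of_squares(samples: Iterable[Sample]) -> float:
    # Maintainable: uniform objects, readable, easy to add fields or checks.
    return sum(s.value ** 2 for s in samples)

def sum_of_squares_fast(values: np.ndarray) -> float:
    # Fast: data flattened into one contiguous array, operation special-cased.
    return float(np.dot(values, values))
```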

Expand full comment
Nah's avatar

re. videogames: I think lots of heavy lifting gets done by engine devs/Nvidia/having 50 bazillion compute units you can call on whenever.

As long as you don't fuck with it too much, anything you build in Unreal will perform excellently on a range of hardware; and I'm told by people who should know that the dudes at Epic will consult with studios without needing their arms twisted.

Expand full comment
Erusian's avatar

I'm debating picking up a new language. I spend a lot of time listening to tapes as I do stuff. I'm getting bored of radio and I've been running low on book recommendations and recorded lectures (though I'll take them if you have them). I figure picking up a language is one of the few skills I can significantly learn by listening.

The question is: which language? Anyone have any recommendations?

Expand full comment
B Civil's avatar

This is not a recommendation; the question just provoked my thinking. If I did it, it would be Yiddish. Just for the sheer crack of it.

Expand full comment
Alex Power's avatar

If you are an adult in the United States and you want a practical skill, you should probably pick Spanish.

Other suggestions:

* if you want to help refugee communities, either Dari or Pashto would be useful.

* If you want a language to learn quickly, Bahasa Indonesia has a lot of English cognates (as Dutch loan words) and a simple grammar.

* If you want a language very different from English, Korean is a good choice.

Expand full comment
Alex Power's avatar

As a negative recommendation, I will repeat one of atgabara's points: both Standard Arabic and Mandarin Chinese are particularly difficult for English-speaking adults to learn. Unless you have a personal interest in the culture, I would recommend against those languages.

Expand full comment
User's avatar
Comment deleted
May 9, 2022
Comment deleted
Expand full comment
Forge the Sky's avatar

This is partly a problem with English speakers having quite a bit of difficulty with the tonality of Mandarin - if you don't get the tones very substantially right, Mandarin has so few distinct sounds that, even with correct pronunciation of those sounds, it becomes an ambiguous garble. But it is also a problem of the Chinese having extremely strong priors against westerners speaking Mandarin well. I've heard stories of expert or even native-level western-looking speakers being unable to talk to some Chinese people outside of cities - they just stare perplexedly. Like when someone says something unexpected to you: even if it's said quite clearly, it doesn't 'click' until they say it a few times.

Expand full comment
B Civil's avatar

A lot like French Canadians in that respect.

Expand full comment
Forge_The_Sky's avatar

Hah, I've spoken (bad) French to both Frenchmen and Quebecois. It seems to me that Frenchmen tend to know you're trying to speak French and are appalled. Quebecois go from warm and friendly to extremely grumpy that you have elected to speak to them in some incomprehensible sister tongue.

Expand full comment
Nir Rosen's avatar

I would choose a language based on expected future use. For example, if you live in an area with a significant other language, learn that. Canada would have French, Texas would have Spanish, and if you're really into anime then Japanese is for you.

Expand full comment
atgabara's avatar

I would also say Spanish as a first choice.

Someone put together a Power Language Index that measures how "powerful" a language is based on number of speakers, size of the economy where the language is spoken, quantity of media, etc.: http://www.kailchan.ca/2017/05/the-worlds-most-powerful-languages/

English unsurprisingly comes out on top by a lot. If you didn't already speak that, that would be by far the best choice. Mandarin Chinese is a clear but distant second, and the top 6 is filled out by French, Spanish, Arabic, and Russian.

However, Russian takes longer to learn than Spanish/French, and Mandarin/Arabic take even longer.

So if you adjust for the amount of time it would take to learn the language (https://www.state.gov/foreign-language-training/), then the clear top 2 (besides English) are Spanish and French, in that order. In the second tier are Russian, German, Portuguese, Mandarin, and Italian.

In terms of audio programs, I would recommend going through two types in tandem:

1) Language Transfer, Michel Thomas, and/or Paul Noble

These all take a similar approach and teach the basic structure of the language. LT is completely free, and you may be able to get MT and/or PN from your library. If you can get all three, it may make sense to go through all of them, since they may cover slightly different areas or explain things in slightly different ways, etc. And the parts that are the same would be good for solidifying that knowledge.

2) Pimsleur

This is a different type of program and complements the first type well. It focuses more on key phrases that are generally useful but are especially relevant for travel, as well as pronunciation. You may also be able to get this from your library, otherwise they have a subscription option.

Expand full comment
Viktor Hatch's avatar

Learning by mostly listening to audio is very possible, but I think you would need a pretty decent floor of existing knowledge, or else you have nothing to bootstrap up from. With videos or even better, with real life, there's so much more context.

If you pick a language closely related to one you already know, you can probably get to that floor pretty quickly and do fine with audio. If you pick an unrelated language, you can still do it, but you'll have to seek out specific material (starting with something like the Pimsleur courses, which are audio-only and start from 0), and then probably some kind of learning-focused content made for learners.

That said, learning a language takes so freaking long that I can't imagine making this decision based on anything except that language containing experiences you will enjoy doing for the rest of your whole life. That could be TV shows, books, or other media, it could be countries you want to visit, even just restaurants, family or friends you want to communicate with, whatever.

I was considering learning Chinese and I sampled a bunch of Netflix TV content. Even after my learning was 'done', watching shows would be how I would likely maintain my new language knowledge. And I just wasn't enjoying the Chinese shows as much as other language content. The vibe just wasn't there. I could have found better stuff but with other languages, enjoyable content was effortless, and I was already watching it with subtitles.

It's kind of a meme that westerners learn Japanese for anime, but if you already spend a ton of your time in another language, that really is the best language to pick if you want to learn one. The same would go for Spanish soap operas, Chinese web/light novels, Korean TV/webtoons, Russian literature... any iconic language content.

Expand full comment
raj's avatar

Maybe my advice is too obvious, but chinese and spanish seem like the two candidates from a utility perspective, and spanish is significantly easier to learn. So, spanish. (Conditioned on where you imagined traveling in the future of course)

Expand full comment
dionysus's avatar

One that a lot of people speak, and that you can find plenty of interesting content in. Learning a language from textbooks gets old quickly; but talking to your friend, watching movies, and reading books in the target language may not.

Expand full comment
Resident Contrarian's avatar

Korean is fun, if only because it has a really good writing system, so you don't have to deal with the normal other-Asian-language cobbled-together-at-random-for-millennia brokenness.

Expand full comment
Erusian's avatar

Thanks! I've heard good things about Korean. Both in terms of the language itself and that they have a culture/economy that's both unique and pretty open to Americans in terms of learning/cultural exports/etc.

Expand full comment
Resident Contrarian's avatar

I'm broadly a fan of their culture, at least as I see it from a migug saram perspective.

Expand full comment
Erusian's avatar

Most of my experience of Koreans (and of most cultures) is on the trade routes. And yeah, they've seemed like decent people. And much less Galapagos syndrome.

It probably helps that I'm used to high context cultures and that drinking while barbecuing is important socially in both cultures.

Expand full comment
Emilio Bumachar's avatar

Well, which ones do you know already?

Expand full comment
Erusian's avatar

I'm purposefully keeping that vague so I can hear recommendations. If a bunch of people recommend languages I know and not many I don't, I can decide the enterprise is a bad idea. Though feel free to tell me if you think that's a bad approach.

Expand full comment
Wasserschweinchen's avatar

I think that's a bad approach, as both the benefits and the costs of learning a language are heavily context dependent. If you're Tunisian and you live in Malta, you should learn Maltese. If you're Kazakh and are thinking of emigrating, probably Turkish would be a good idea. If you're Estonian, learning Finnish and moving across the gulf might be a good career move. And so on.

Expand full comment
Erusian's avatar

Oh, I'm perfectly happy to tell you biographical details. I meant telling you the languages I already know.

Expand full comment
Wasserschweinchen's avatar

Biography affects the benefits whereas language knowledge affects the costs, so I think both need to be known for any reasonable input to be made. E.g. anyone who knows a mainland Scandinavian language should learn the other two, and almost no one who doesn't speak Tunisian or some closely related language should learn Maltese.

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Comment deleted
Expand full comment
Erusian's avatar

Thanks!

Expand full comment
jane flowers's avatar

at ~80 pages i’m not at all surprised my book review didn’t make the cut lol but i’m dying to know if anyone actually did read their way through the entire thing. hat’s off to you if you did

Expand full comment
dionysus's avatar

These book reviews really ought to have an upper limit. At 80 pages, you didn't write a book review; you wrote a whole book. While it's impressive, it's a qualitatively different type of artistic creation than a 5 page book review.

Expand full comment
jane flowers's avatar

fair enough, i mostly let it pour out of me, without much thought to length, or anything really. though to me at least, given its angle, the length of the review felt warranted. a bit from a summary i sent to a friend:

"...The book is well-done, and quite successful in the type of analysis it aims for, but after living in the world it studies for a while, more and more I felt the 10,000-foot view of nightlife was a not-even-wrong framing. Nothing I saw or experienced collapsed neatly into any one abstraction or school of sociological thought; it seemed to me that enumeration, rather than abstraction, was the only tool available to do sociology here, and this became the entire tack I adopted for my review (and is partially to blame for the length)..."

Expand full comment
LT's avatar

I did not read the whole thing, but I've already mentioned I really liked what I did read! See here:

https://astralcodexten.substack.com/p/open-thread-221/comment/6225378?s=r

By chance I had read the book already, so I wasn't as willing to read another 80 pages. But I can recommend it to others: if you haven't read Very Important People, instead of reading the book just read this 80-page review!

Expand full comment
jane flowers's avatar

aw thanks! glad to hear you liked it!

Expand full comment
xh79's avatar

Regarding the question of relative application performance, the "simple answer" is that this is an apples-to-oranges comparison, and the difference primarily stems from the way a Von Neumann machine works and the memory hierarchy of any modern computer.

The involved answer touches on many aspects of computer system design and architecture:

Stored programs need to be loaded into main memory before they can be run. The place that they are loaded from is the "disk" (persistent storage). The memory hierarchy in a typical computer has very fast registers directly on the CPU that can store values during computation, but very few of these, then a layer of RAM which is about 1000x slower to access than the registers (as a rule of thumb), and a layer of persistent storage that is about 1000x slower than the RAM. When you open the "boring software," some version of the program has to be copied from the disk to main memory before it can even begin to run on the CPU. (Moving data is the slowest thing that computers do, generally speaking.) The application designer gets to choose what the program loads when it is run, and depending on what language it was written in and how it was compiled, the language runtime may insert a lot of other stuff into memory in order for the program to actually run. This explains most of the startup cost of any program, nothing to do with graphics per se, just data transfer between the disk and RAM.
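
A rough sketch (Python; an editor's illustration of the disk-to-RAM cost, not from the comment) assuming "big_blob.bin" is a large existing file that is not already sitting in the OS page cache -- otherwise both timings will be "warm". The first pass is paid for by the disk; the second is served from RAM by the page cache, which is the same kind of gap a program pays at startup.

```python
import time

def read_all(path: str) -> float:
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):   # pull the file in 1 MB chunks
            pass
    return time.perf_counter() - start

cold = read_all("big_blob.bin")  # first read: mostly disk latency/bandwidth
warm = read_all("big_blob.bin")  # second read: mostly RAM (OS page cache)
print(f"cold: {cold:.3f}s  warm: {warm:.3f}s")
```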

Hardware device access is mediated by the operating system in a modern computer, essentially for security reasons. Programs can request services from the OS to do things like access the disk, allocate memory, etc., but these kernel calls are typically more expensive in terms of computation than other instructions a program might run. To the extent that a program needs to make a lot of kernel requests as it sets itself up for execution, this could marginally affect the startup time, though the disk access is the main element causing a delay.

On the other side of the comparison, GPUs are purpose-built coprocessors that are designed to be good at massively data-parallel tasks, like rendering 3D graphics or performing linear algebra computations on arbitrary data. They face their own startup costs due to the need to transfer data from the disk via RAM to the GPU's on-board memory, and this cost is similarly high in comparison to the cost of computation. For example, if you run a task on a GPU that doesn't take advantage of the massive data parallelism, you can wind up spending quite literally 99% of your wall time waiting for data transfer while the parallel computation itself takes microseconds. For an appropriate task, however, the data transfer costs can be amortized over more computation, and as new data need to be loaded onto the device, computation can continue to be performed on data that are already resident on the device, hiding the subsequent memory access latency. (This is why you have to wait while a 3D game starts up for various data to be loaded into the memory of the GPU and the computer itself, but then once it is running it can become quite smooth.)

As a final point, modern computing performance is being driven more by cache performance than computational speed or algorithmic complexity, at the margin -- because memories are much slower to buffer and use than on-chip caches or registers, applications that can be designed in a cache-friendly way and have higher hit rates on faster caches can experience better performance running a more "computationally complex" algorithm than code running a simpler algorithm but making poorer use of caches. Both the CPU and GPU have multiple cache layers. In the given example, a new program being launched is unlikely to be able to take advantage of any cached data (other than possibly shared libraries that it may link), but a graphical application that is in-flight is almost by definition operating on data that is either cached or loaded into one of the numerous memories that modern GPUs include on the board.
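
A rough illustration (Python + numpy; an editor's sketch, with numpy's default row-major layout standing in for "cache-friendly") of that last point: both sweeps do the same arithmetic, but the row-major sweep walks memory sequentially while the column-major sweep strides across it and misses the caches far more often. Exact numbers depend entirely on the machine.

```python
import time
import numpy as np

a = np.random.rand(4000, 4000)  # stored row-major (C order) by default

def timed(fn) -> float:
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

rows = timed(lambda: sum(float(a[i, :].sum()) for i in range(a.shape[0])))
cols = timed(lambda: sum(float(a[:, j].sum()) for j in range(a.shape[1])))

print(f"row-major sweep: {rows:.3f}s  column-major sweep: {cols:.3f}s")
```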

So, this is not particularly mysterious, but certainly opaque if one is not aware of what is happening under the hood of a machine.

Expand full comment
burritosol's avatar

Congratulations to all of the selections in the book review contest (and all the non-selections too)! I plan on reading all of the finalist entries as they appear on the blog.

Does anyone have any favorite entries among the non-finalists? I only read a few of the entries. Of the ones I read, my favorite was definitely The Outlier, but that one can't count towards my question, because it ended up being a finalist. Among the other ones I read, I most enjoyed the review of Cracks in the Ivory Tower.

Expand full comment
Ryan R's avatar

Is there a list of finalists somewhere? I seem to have missed the announcement...

Expand full comment
burritosol's avatar

Scott shared the list in a comment earlier in the thread. Here's the list:

Consciousness And The Brain, Making Nature, The Anti-Politics Machine, The Castrato, The Dawn Of Everything (EH), The Future Of Fusion, The Illusion Of Grand Strategy, The Internationalists, The Outlier, The Righteous Mind (BW), The Society Of The Spectacle, Viral

Expand full comment
burritosol's avatar

BTW, my own review was on Trans: When Ideology meets Reality. I'm happy with how it came out, but given the strength of the competition, am not surprised that I wasn't a finalist. I definitely plan on submitting again next year if there is another contest. My goals for next year will be (1) to invest more time, (2) choose a better book, (3) stay away from culture wars, and (4) focus on a less personal topic.

Expand full comment
Mickey Mondegreen's avatar

There certainly was some strong competition! I read & ranked all of 'em: thought your "Trans" review was quite good. I gave it (and "Cracks in the Ivory Tower," and "Albion: In Twelve Books") the same ranking as six of the finalists. (I scored "The High Frontier" one notch down from those, along with the two finalists I ranked lowest.) The choice of book definitely makes a big difference: notably, all the finalists are non-fiction, and many of them have some explicit tie-in to recurring topics on ACX, like Artificial Intelligence, Effective Altruism, or "Big Overarching Theories about Civilization and Where It's Headed." Aside from my own review ("1587"), the ones I liked best that didn't make the final cut were: "Now It Can Be Told," "Kora in Hell," "More Work for Mother," "The Condition of Postmodernity," and "The Reckoning."

Expand full comment
burritosol's avatar

Thanks! I'm happy at least somebody got to read it :)

Expand full comment
lalaithion's avatar

Software engineers do not write software in a vacuum, they write it as part of a business. If the business needs to render an entire 3d scene in under 20 ms, you spend 10 million dollars a year on optimizing render pipelines. If the business needs to render a couple buttons, and it's okay if they take a few seconds to load, you ask your engineers to spend 1 day on it.

Expand full comment
Elijah's avatar

As a game dev, exactly this. Software performance is driven by the requirements not the theoretical technical capabilities.

Expand full comment
nifty775's avatar

Do people have strong takes on the famous German apprenticeship system? How could we apply that here in the US? To my understanding the German state works hand-in-hand with local manufacturing employers to train future workers- an example of 'ordoliberalism' where the state & private industry work closely together on shared goals.

What's the reason we don't have anything similar in the US? Contrary to popular belief, the US is still the world's second largest manufacturer. I doubt the federal government would do a particularly good job of this, but individual states could certainly work hand-in-hand with say Boeing, Lockheed, various steel plants etc. to train skilled workers. I think state-level American politicians are a little more pragmatic and less ideological, so regardless of free market positioning, most states do in fact want more skilled blue-collar labor. What's the reason this hasn't happened in America in any kind of systematic, large-scale way like they do in Germany?

Expand full comment
sclmlw's avatar

I read a few years ago that, despite all the (true, at least at the time) claims by politicians that the US is bleeding manufacturing jobs, the manufacturing industry continues to grow. My understanding is that a lot of this is due to automation.

I think the problem with even state governments getting into the "let's grow jobs in this sector" approach is that they have an incentive to create supply but no incentive to respond to demand.

Expand full comment
Doctor Hammer's avatar

There is sort of a strange pattern with manufacturing jobs in the US. Since the 90's employment sees a series of short, sharp drops followed by steady growth, until fairly recently when the last drop takes us fairly close to zero, and there hasn't been a drop since, not counting the COVID valley. (You can see the BLS graph here, but you will have to play with the dates https://data.bls.gov/pdq/SurveyOutputServlet )

From a lot of companies I spoke with, increasing automation was a response to difficulty in finding workers, not the other way around. Almost no students go into the trades after high school.

Expand full comment
Schweinepriester's avatar

There's a historical background from medieval times that didn't seem to have made it to the new world. For hundreds of years, masters of some skill like blacksmiths, bakers, and masons accepted some kids who had no future in their own families' businesses as apprentices. The apprentices were cheap workers and got food, a place to stay and an opportunity to learn how to do stuff. After a few years they could graduate to "Geselle", which meant they got a right to higher wages and had to leave to make room for new cheap apprentices. The Gesellen took up some hiking from one place to another, seeking work. Within the "Zunft" there was some codex entitling them to a certain amount of help from masters to keep them from starving if there was no hiring. Some died, some took up something else, some happened to be in a job when the master died or retired and could take over the master's business, maybe by marrying his widow. This got more elaborate, and comparatively recently apprentices got sent to school alongside their work to get a better understanding of their trades. Worked out alright, I guess.

About the US, I remember Arnold Schwarzenegger describing building chimneys of sorts with his buddy Franco Columbu, who was a mason by trade. It looked like no one cared about their credentials then. In a "Zunft" system there would have to be a master of masonry about.

Expand full comment
Moosetopher's avatar

There is a variant here wherein a company drops a lot of money at a nearby community or four-year college to create a specific program to provide them with workers. Globalfoundries has done that in Albany, and I taught in the Laboratory Technology department at Del Mar College that generated lab techs for the refineries in Corpus Christi, TX.

GF (and I assume other waferfabs) also have explicit paid apprenticeship programs to create Maintenance Techs since there is no college degree that will prepare you for the actual job.

Expand full comment
Maynard Handley's avatar

The US (in the past...) has done things like this. An interesting example is the Civilian Conservation Corps, one of the New Deal programs. While this looks like basic labor work, in fact it acted as a training ground for future officers of the WW2 military, in that the program was designed, not exactly as a military setup, but with an essentially male membership working in hard, dangerous conditions, arranging much of their program by themselves, with leadership arising within the ranks and problems needing to be solved.

As for why the US cannot do this today, well apart from all the obvious issues I won't even list because it's too damn depressing, there is the overwhelming fact that local differences are no longer acceptable. A program like this cannot be created for the entire US because the US is so varied, but programs that differ from one locality to another are unacceptable because [well, you fill in the reasons]. The US is simultaneously

- massively different from one city to the next (let alone one state to the next) AND

- has a politico-cultural class unwilling to tolerate ANY difference in anything (behavior, outcomes, laws, rewards) from one hyper-locality to the next

This is clearly an impossible combination, and we are seeing that impossibility play out in real time before our eyes.

You might say that the New Deal programs were in fact the precise opposite of the above. Yes and no. A lot of terrible ideas evolved out of the New Deal (more at this meta-level of enforced conformity than in precise details), but most of the '30s (and even '40s) apparently-Federal-level stuff was actually implemented (and often implemented very differently) at the local level. This even went for things like draft boards, which were a lot more flexible than would be possible today, which was one of the reasons the WW2 draft was so unproblematic. The closest we have to that today is the still very local control of elections (and of course people are working hard to eliminate this).

The same people who scream the loudest about cultural diversity and suchlike are also the loudest voices for absolutely uniform laws and standards... Go figure...

Expand full comment
Doctor Hammer's avatar

I can’t remember the guy’s name, but the same fellow set up an apprenticeship program in the UK and South Carolina. I believe the website is apprenticeshipcarolina.com for the SC program. I don’t know why it hasn’t caught on more, as SC seems to be doing really well in industry, while e.g. PA is really struggling for workers and other states are even worse off. My sense is that the standards in k-12 are so low and colleges accept just about every person who graduates, so lots of kids go to college who are not going to benefit at all but are told that that’s how to get a good job. I have heard so many people calling technical jobs dead end, despite the fact that most pay as much or more than office jobs. Sure, your job title stays the same for a long time, but if you want money instead of a fancy title...

Anyway if I think of the fellow’s name I will post it here.

Expand full comment
Axioms's avatar

Plenty of left wingers, including some, but not all, debt cancellation advocates, support this. The problem is that the people with power in the media, academia, and government are PMCs and they really only care about PMC issues and PMC friendly solutions. Everyone should go to "college", ideally a 4 year sleepaway and not commuter, live the college lifestyle, and so forth. You'd have to build a pro-apprentice organization from the ground up and convince Democrats when most of them come from law, finance, or media. It would be very difficult.

Expand full comment
nifty775's avatar

I don't hate this argument, but you'd still think it'd benefit an individual state's politicians to have more good-paying jobs for blue collar workers. Also, the US used to not be so dominated by the PMC and in fact unions were quite strong here for a time. Politicians had to regularly court union support to win election. Yet we never developed Germany's system

Expand full comment
B Civil's avatar

The relationship between labor and management in Germany is much less hostile.

Expand full comment
Deiseach's avatar

I wonder if it's fear of unions behind it? No politician, regardless of party, is going to want to create a powerbase which will make demands and have enough clout to throw its weight around and get what it wants.

Expand full comment
Axioms's avatar

For Republicans sure but probably not for Dems. The issue is that there just isn't a constituency. Politicians don't initiate things in America. You need an organization with real support who pushes them to do something.

Expand full comment
Axioms's avatar

Unions were always much more under assault here than in Germany. All of Europe got a social democratic reset after WW2 whereas America didn't. Blue collar workers are often Republicans especially outside of the cities. Republicans do not want to move anywhere near to Germany.

Expand full comment
Nancy Lebovitz's avatar

PMC?

I've heard that there was a coalition of trade unions (who didn't want competition from new people in the trades or possibly people who weren't from union families), parents, and guidance counselors. Part of it was probably status issues, and I wouldn't be surprised if guidance counsellors didn't want their jobs to be more complicated.

Expand full comment
Joe's avatar

Professional-Managerial Class.

Expand full comment
Noah's Titanium Spine's avatar

Re: #2, the short answer is that most software engineers are very, very bad at making software.

See https://www.youtube.com/watch?v=ZSRHeXYDLko

Or see the minor kerfuffle over Casey Muratori's bug report against Windows Terminal regarding its poor performance, which culminated in this https://github.com/cmuratori/refterm along with a couple of videos diving into exactly what techniques were used to achieve ~1000x(!) performance gains.

The slightly longer answer is that a long time ago, programming was very difficult and required a high level of skill to build even basic software with an acceptable level of performance. Advances in hardware and software have made it much easier, the profession has expanded greatly, and so the average practitioner has _substantially_ less skill. Combine that with market dynamics that reward time-to-market and punish quality, and you get... today's software, which barely works at all.

Except in games! The highest-dollar games ("AAA titles") push the hardware to its absolute limits, as described in the quoted comment. Since the market rewards the "biggest" games with the best graphics, largest and most detailed worlds, etc etc, game developers have to be _actually competent_ in order to create successful products. Though even this market pressure is somewhat attenuated by Unity, Unreal, and other off-the-shelf game engines that do the hard parts and let smaller studios focus on gameplay, art, story, etc.

There's a lot more to it than this, but that's the basic reason games are awesome and get better all the time, and all other software is garbage and getting worse.

Expand full comment
Pete's avatar

Lots of latency issues for desktop software are caused not by incompetence but by intentional design choices to sacrifice huge amounts of performance for other needs - mostly, the speed and cost of developing new features or cheaper cross-platform compatibility by moving everything to a packaged web browser.

People making games start with a non-negotiable requirement to push fancy graphics at a good frame rate. People making desktop software don't have that strict requirement, so they eagerly trade off performance to optimize other things, and modern development techniques have provided a *lot* of new opportunities to trade away performance for cost/speed/simplicity of development, and even the most competent superstars would be doing that intentionally.

Many projects do have 'low hanging fruit' that might get huge performance improvements without a technical tradeoff, however, I'd argue that in most cases that again is an intentional organizational tradeoff (not incompetence) to have people not spend time even on these low hanging performance fruit because the organization has knowingly chosen to prioritize other tasks like new features, or a colorful redesign.

Expand full comment
Thor Odinson's avatar

I'll grant that it might be a 'deliberate' decision by managers, but I can still think the product quality would be much better if one spent slightly less time on colour schemes or pointless redesigns and more time on making the damn thing work

Expand full comment
Elijah's avatar

I pretty strongly disagree with Casey's take. The reality is that software performance is driven by the requirements not the theoretical technical capabilities of the hardware. Software runs slowly not because the engineers are bad, but because they are not tasked with making it run well.

Expand full comment
John johnson's avatar

I am extremely disappointed in ACX'ers making the argument that it's not due to engineer incompetence. It does not take longer to write performant code at the speeds we're talking about. This topic started from the fact that substack is unable to show black text on white background without several seconds of latency. That IS NOT something that comes from an actual tradeoff. That is just pure incompetence

Expand full comment
Elijah's avatar

The trade off is dev time (very expensive) for user experience (hard for companies to put a monetary value on). It's a bad tradeoff but it's not usually made at the individual engineer level.

Expand full comment
lalaithion's avatar

For those not in the industry, this is *definitely* an extremist take.

I partially agree with it, but it's still an extremist take.

Expand full comment
Noah's Titanium Spine's avatar

Yes it is, but extremism here is *necessary* because 99% of the profession is actively making things worse. That includes me, on a lot of days.

Expand full comment
eldomtom2's avatar

You must be living in a fantasy world if you think games are more polished and bug-free today.

Expand full comment
Nah's avatar

They are, in relation to complexity imo.

Expand full comment
Noah's Titanium Spine's avatar

I didn't make that claim.

Expand full comment
a real dog's avatar

The average practitioner is still quite skilled, but they don't have time to deal with performance. Software that would once require hundreds of people is now glued together from libraries and frameworks by a ten-person team. We live in the era of shovelware and MVPs, and no product manager would allow deferring features for performance work if the current performance is "good enough".

Expand full comment
Maynard Handley's avatar

"The average practitioner is still quite skilled"

I deny this claim.

I think that 30, even 20, years ago, most of those working in SW (at least on the PC and workstation side) had a sense of pride in their work, which exhibited itself in multiple ways: from trying to keep up with the state of the art, to caring about (and being at least marginally competent in) performance, to trying to search for and fix bugs.

But the business has been flooded by people with a different mindset, either bizarre expectations that (in spite of their mediocrity) they will make it rich in software, or people who see software as just another job to hate, not a calling. Both of these produce software every bit as terrible as you would expect.

In a perfect world at least the large companies would be able to filter these people out before hiring them; in our world, very different concerns have begun to dominate the hiring process, and once you no longer have A people hiring other A people, but B people (who can't even recognize the difference between A and C) hiring C people, you're basically doomed.

Software is an area where you can, in fact, tell objectively who is better and who is worse. But B-class people do not want such tests to be part of the hiring (or subsequent evaluation) process, and they have largely managed to impose their will on most of the large companies. If you think I am being melodramatic, look at how Twitter responds in fury whenever someone posts anything in the vicinity of claiming that some programmers are better than others, or that tech interviews (or equivalents, like a take-home project) should be more demanding...

This hasn't taken over the ENTIRE software world yet. Some places with a strong founder still demand (and achieve) excellence, for example Wolfram. But honestly, I'm amazed at how bad it is.

Apple is a particularly striking example because of the bifurcation -- one half of the organization can, in four years, design the best SoC in the world; while the other half of the organization, in four years, cannot fix multiple ongoing problems in areas like HomeKit or Shortcuts, and apparently does not even have the professional ethics and pride to try.

Expand full comment
Julian's avatar

>Software is an area where you can, in fact, tell objectively who is better and who is worse.

You are going to need to prove this out a lot more. I contend you cannot and can never tell objectively who is better and who is worse at developing software. I don't think we have the tools to measure such a thing and that the person that is best in one situation may not be the best in another.

Expand full comment
Thor Odinson's avatar

While ranking different flavours of "very good" might be hard, distinguishing between code monkeys and people actually using their brains isn't (some code monkeys could use their brains but don't care to, some just don't have the skills).

Expand full comment
Julian's avatar

What criteria would you use to distinguish between those two groups? I have worked with people I would say fall into both groups, but that hasn't been very predictive of their ability to produce the necessary software at the needed level of proficiency.

Expand full comment
Maynard Handley's avatar

So you contend that whether it's various ACM Awards, or internal company recognition (eg Apple Distinguished Engineer) what's happening is purely a game of politics and popularity, not some sort of recognition of unusual competence?

OK, then...

Like I said, A's can recognize A's.

https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine

Expand full comment
Julian's avatar

No what I am saying is that asking who is "better" at creating software is like asking who is the best athlete: Eliud Kipchoge or Michael Jordan. They are both the "best" depending on how you define athlete. You may define good software based on some technical standard while I may define it by user satisfaction or business objectives or some other standard.

What is the method you would use to "tell objectively who is better and who is worse"?

Expand full comment
Maynard Handley's avatar

Nobody asked for an *ordering* of skills, all that was asked for was a proof of above-average competence. This is trivial.

I have recently spoken to a number of college students, all taking some form of CS degree, asking each of them what specific areas they love. When someone cannot articulate that they're especially interested in some sub-field (compilers? codecs? voice synthesis? ...), cannot summon up any enthusiasm for anything, that's a clear sign that we're not dealing with above average.

OK, you say, some people are shy. Sure. And shy people can work on a project, by themselves if they like. The world is full of interesting computer-adjacent projects just waiting to be done. And VERY FEW people actually taking them up.

The M1, for example, is a fascinating chip. Virgin territory for understanding a completely new design. And yet I can count on the fingers of one hand the people who have actually made some attempt to figure it out. Hell, even after I and a few other pioneers have laid out a roadmap, it's been astonishing to me *just how* lethargic the take-up among others has been.

If you're a person who thinks "that's interesting, let me see what I can learn", you're above average; if you're a person who thinks "that's interesting, I'll wait, as long as it takes, for someone to tell me how it works", well...

Or, in a very different field, comskip is open source software that scans a video file to locate commercials. It's used by various products to skip the commercials when you play back recorded content. It's *OK* but not very good. It also seems like a prime candidate for machine learning in some way, a great starter project for someone wanting to experiment with various ML technologies and APIs.

Great people don't need to have their greatness exposed; it's obvious. Talking to them will expose some extreme enthusiasm; they don't need to be told to go do a project because they've been working on their own projects since they were teens and will happily show them to you.

If you want to deny that human excellence exists, and that it is not distributed equally, go have that argument with someone else. I've seen enough human excellence (rare) and mediocrity (common) in my life, and in history, to have no time for that nonsense.

Expand full comment
Gašo's avatar

>In a perfect world at least the large companies would be able to filter these people out before hiring them; in our world, very different concerns have begun to dominate the hiring process, and once you no longer have A people hiring other A people, but B people (who can't even recognize the difference between A and C) hiring C people, you're basically doomed.

Oh, yes, that! An excellent piece of related lore is the 2006 article by Spolsky, relating how in 1992 he was reviewed by the "type A" Bill Gates, rather than by the "type B" Jim Manzi:

https://www.joelonsoftware.com/2006/06/16/my-first-billg-review/

From 2006 onward, there's been an uninterrupted bloat of "type B" managers at the expense of "type A". Except that the MBAs of the 1990s probably had some actual skills...

the central bit, if you won't read the whole story:

--

“Can you imagine if Jim Manzi had been in that meeting?” someone asked. “‘What’s a date function?’ Manzi would have asked.”

Jim Manzi was the MBA-type running Lotus into the ground.

It was a good point. Bill Gates was amazingly technical. He understood Variants, and COM objects, and IDispatch and why Automation is different than vtables and why this might lead to dual interfaces. He worried about date functions. He didn’t meddle in software if he trusted the people who were working on it, but you couldn’t bullshit him for a minute because he was a programmer. A real, actual, programmer.

Watching non-programmers trying to run software companies is like watching someone who doesn’t know how to surf trying to surf.

Expand full comment
a real dog's avatar

I enjoy programming, perhaps as a fresh grad I'd still call it a calling. But right now it's just a job. My personal interests went elsewhere, and programming languages are just tools to build stuff. Do you expect your plumber to be incredibly excited about plumbing?

I see where you're coming from wrt the get rich quick people, though it surprised me that some of them are quite capable - it seems the starting point doesn't matter all that much, you either have the IQ/talent/neurotype to make it or you don't. I'd say web dev is full of really incompetent people but web dev needs so many employees they have to scrape the bottom of the barrel. Blame VCs.

The existence of 10X/0.1X contributors is common knowledge, Twitter is not real life. In every team I've been in, people have also been polite enough not to point to the 0.1X guy, because he occasionally has good insights anyway, has a family to feed, and as long as he doesn't get in the way of more capable people it's all good.

Expand full comment
Thor Odinson's avatar

the 0.1X people are only a problem if they're the whole team, yes. The -X people are the real issue, and they're more common than one thinks - someone who makes good code slowly is a .1X person, someone who makes terrible code is a net negative, a -X.

Expand full comment
Jeff's avatar

Today's software actually works much better than software did 10-20 years ago. My PC crashed at least weekly back then and macs, though better, still crashed pretty frequently. I rarely encounter OS crashes now and even application crashes occur much less frequently.

It's probably true that UIs were faster in the past but UIs today are better in general (you can find exceptions in the past if you cherry-pick the best examples) and are doing a lot more. I don't know what the situation is like on Windows currently, but the Mac OS and iOS GUI frameworks include a lot of animation support and other tricks that users find appealing, but that have substantial performance costs.

To the extent that games are more performant than GUI applications, it's because performance sells games while performance doesn't really matter in most apps (unless it is so bad that the app becomes hard to use), *and* UI programmers have basically no control over the performance of the framework they're using (you use what Apple or Microsoft gives you in almost all cases), while graphics programmers have essentially complete control, since the graphics system is written from scratch.

There is also an ease of use aspect from the programmer's perspective. To display the text "Hello World" in a game requires several thousand lines of code (the boilerplate required to draw a triangle on screen with a Vulkan renderer is about 1k lines alone). To display the text "Hello World" in an iOS application is maybe 50-100 lines of code at most.

Expand full comment
Paul Botts's avatar

Yea, this times a hundred. I am boggled at the claim that software works _worse_ today than during say the 1990s [when I was involved in tech support for a medium-sized U.S. office environment]. And my wife says that any suggestion of going back to the home and/or office software of the 2000s will be met with all the acceptance and agreement being displayed by the Ukrainian Army.

Also, being a PC gamer continuously from the days when SVGA was the hot display upgrade, I literally groaned out loud at this: "other off-the-shelf game engines that do the hard parts and let smaller studios focus on gameplay, art, story..."

As literally every game designer and company has learned it is the gameplay, art, story that are the "hard parts"!

Expand full comment
Doug S.'s avatar

I still use Word 2003 because I know how to use it, I can install the CD on any number of computers, and I don't want to spend money on a newer version of Word. Wtf are ribbons?

Expand full comment
Gašo's avatar

>As literally every game designer and company has learned it is the gameplay, art, story that are the "hard parts"!

Nah, only those who don't have an inkling of an idea of the skills it takes to write something like the Unreal engine.

Presuming that the Unreal engine "grows on trees" - which, for the purposes of many game companies, it effectively does - the things you mention are not the "hard parts" but the "remaining parts".

I've picked Unreal as the most refined gem. There's also stuff like Unity, which is a much less bright bulb but would *still* be quite hard to cover if it didn't already "grow on trees".

Step out of game dev, and you'll find that the central hard parts no longer grow on trees. Few companies employ the people with the skills to do those "core parts" right. The key thing is that *it doesn't even matter to them that they don't*. Once a SW company has conquered a market or niche with their product, they tend to end up with a captive audience who won't switch to another product "because it has better response times or more sensible UIs".

How to conquer that market or niche in the first place? It's a dark art in its own right - combined with doses of luck, of course - but often relies much more on being fast at shipping out crap, than being slow at shipping out gems. Once it's been conquered, further dev work (even if taken by more competent devs) is /rarely/ about refactoring crap into gems. It is mostly split between gilding the crap and bloating it further.

What makes the game dev substantially different, is that there's tons of new games coming out every year, which *do* share the central hard parts. This setup allows for gems like Unreal to crystallize (or breccias like Unity to accrete).

Expand full comment
Paul Botts's avatar

You and I apparently have rather different feelings about what makes a game good.

I have tried many more computer games over the years that were technically great but uninteresting in gameplay/art/story, than the reverse. Soooo many more...! So from the actual results [the games that get published] it is apparent which parts are the hardest to do well.

Expand full comment
Thor Odinson's avatar

I'm inclined to agree that good story and game design is more important to fun, but the technical side is what dominates sales figures - all those utterly brilliant indie darlings are considered wild successes for making a few million dollars while the annual CoD release clears billions

Expand full comment
Gašo's avatar

>from the actual results (...) it is apparent which parts are the hardest to do well.

No, it's not, as long as you remain oblivious to how hard it is to make the shared cores which most games use: Unreal, Unity and the like. The fact that they are trivially available to any game devs who want them occludes the fact that they are not themselves trivial in the slightest. If they did not "preexist", they would be the hardest part to get right.

The hardest part of Ethan Carter was doubtlessly the jaw-dropping photogrammetry work. But the results were then simply handed over to the Unreal engine.

The hardest part of Gris was the amazing artwork, including how it gets fitted with the music during the "bipolar mood swings" sequence. The results being of course just handed to Unity, which was quite sufficient for what Gris needed.

Ditto for Gorogoa - wonderful art. And practically nothing to do except the art, as the rest is dealt with by Unity.

Not everyone leaves the difficult core to commercial engines. "The Witness" apparently *had* to have its own engine, but that's because Jonathan Blow is Jonathan Blow. I don't know why he went and did that. It cost him *years and years*. The result is of course brilliant, but that's because Jonathan is a genius. People who are not can throw decades into their own engine, and it will still suck. And of the people who (wisely) opted for an existing engine, some will apparently say "the engine was not the hard part".

Again, my point is that having the *hard* part of the dev effort as a commodity that can be bought is characteristic of game dev. In industry dev, each particular product may have its own "core of hard dev problems", problems that are not common enough to have super-refined off-the-shelf solutions (except obviously database engines and such), and there "crap shipped fast" will often out-conquer "gems shipped slow". This is how we get to where we are.

Expand full comment
Julian's avatar

I guess Microsoft bought Minecraft for billions because of its advanced game engine??

Expand full comment
NoPie's avatar

Some software is much worse. For example, memoQ, which I have to use as a translator, is terrible. It is very slow and annoying. In comparison, 20 years ago I used simple software on an x86 computer that had to be launched from floppy disks, and it was fast. Granted, memoQ has thousands of features which may help, but in practice they are less useful, often get in the way, and slow me down even more. The first thing I want when I translate, edit or revise a text is to be able to move around the text quickly. In memoQ I sometimes have to wait a second just to go to the next line. And this software is considered one of the best translation tools. There are even worse tools like XTM, with a cloud version that is not only terribly slow but also has poorly implemented features. All these tools are good for MBA types who want to streamline processes but have no idea about translation quality and do not listen to feedback from actual translators.

In comparison, the software I use in pharmacy is much better and more responsive. Possibly because spending an extra second when pulling a patient record, or making a mistake due to a poor UI, costs the business more money and is a greater safety risk, so the incentives to make it better are much stronger.

I don't know what would be needed to improve the state of language translation software but the market is not really working in this case.

Expand full comment
Paul Botts's avatar

I've had similar feelings myself once or twice, but the confounder there is confirmation bias: we inherently tend to like the software that we first worked in; our brains are comfortably wired for it. Also we all have some rose-colored glasses (generally, not just about this subject). That's just normal, we all do it, myself included.

The real test is having someone who's proficient in a given field, but young enough that 1990s or 2000s software will be new to them, go back and try the relevant commercial software from 20 or 30 years ago. I've inflicted that test on younger colleagues in a couple of other software categories (not the one you describe) and they are always appalled at how much clunkier and generally "stupider" (their word) the older software is compared to what they're used to.

Expand full comment
NoPie's avatar

In the case of memoQ I don't see that this is confirmation bias. Many agencies and big companies (including EMA) have stopped using translation memory software and switched back to Word documents. Of course, we won't use software from the 2000s, for several reasons – maybe lack of Unicode support, unsupported screen sizes, etc.

The main thing is that the MBA-popularized idea that translators can work in a linear way, sentence by sentence, is not how efficient translators actually work. We need to read the whole text first, and then we often go back and forth to update and adjust the translation.

A lot of people or companies believe that machine translation is already good enough, and that they can use it, give the text to a proofreader (or "MT editor", as they are called), and it will be fine. In my field it never is, and all those MT projects fail. And yet some people never stop trying and don't listen to experts in the field.

Expand full comment
Nah's avatar

Software works worse, but operating systems and frameworks are better; so it evens out.

Also, software comes out fucking FAST these days.

Expand full comment
Maynard Handley's avatar

Yes, the above. The deepest parts of OS and APIs (and languages and tools) are substantially better than they were. But the apps themselves are in fact substantially worse in terms of terrible user-hostile behavior, and behaviors that are dismissed as "correct" even though they are terrible design.

Programmers themselves want to blame "designers" for this. Again I don't buy it. Designers have given us some terrible UI ideas, but most of the on-going crap I see on the Apple side (the platform with which I am most familiar) is not what designers have mandated; it's sloppy work put together by people who just do not give a fsck -- the equivalent of an American car in the 70s.

Expand full comment
Noah's Titanium Spine's avatar

In terms of making a fun game yes, they are the hard parts. They are not the _technically difficult_ parts though.

Expand full comment
Gruffydd's avatar

I've heard P100 masks basically fully protect against covid? Want to get my Grandma one. Is this true? Are there any places where I can read more about this, the best ones, filters, etc.?

Expand full comment
Eremolalos's avatar

Go to the Reddit sub masks4all and search P100 or ask about them. Lots of people on there are genuine experts, though of course most aren't and there's some nonsense.

Expand full comment
Metacelsus's avatar

Will your grandma actually wear one though?

Expand full comment
Gruffydd's avatar

She takes covid quite seriously so I’d say so yeah. Why?

Expand full comment
Axioms's avatar

Regarding the image thing: as someone working on an image-heavy software project using TGUI, a C++ UI library based on SFML, you still get that kind of hanging in the UI, and this should be much easier to deal with than web-dev UI stuff. Of course it isn't optimized yet, but even games on the market have this problem, especially Unity games. I mean the UI hang specifically. Major RPGs like Shadowrun have this just as bad as minor strategy games like Star Dynasties.

For my own project, my impression is that the issue relates to dynamic sets of images, which either need complicated code to check what has been added or removed, or force you to simply erase the container and reload/redraw every image each time you click something. I am 99% certain that a menu panel you only have to load once has 0ms hang time.
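
Roughly, the "check what is added or removed" option looks like this: a minimal sketch in Python rather than the C++/TGUI I'm actually using, with made-up names like ImageCache and load_image_from_disk standing in for whatever the engine provides.

```python
# Sketch only: cache decoded images so a panel rebuild pays the decode cost
# just once per file, and diff the wanted set against what's already shown.
class ImageCache:
    def __init__(self):
        self._images = {}                      # path -> decoded image

    def get(self, path, load_image_from_disk):
        if path not in self._images:           # decode only on first use
            self._images[path] = load_image_from_disk(path)
        return self._images[path]

def rebuild_panel(panel, wanted_paths, cache, load_image_from_disk):
    """panel: dict of path -> widget currently shown."""
    wanted = set(wanted_paths)
    for path in set(panel) - wanted:           # drop widgets no longer needed
        del panel[path]
    for path in wanted - set(panel):           # add only the genuinely new ones
        panel[path] = cache.get(path, load_image_from_disk)
```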

Expand full comment
Spiny Stellate's avatar

There are lots of local and state elections for which most of the candidates don't have any obvious strengths or weaknesses, but it's also not trivial to find out enough to know who to vote for. I don't want to spend hours doing research but I would pay to crowdsource that research, especially if it answered specific questions about my values (which candidate will best advance cause X?). Even for small races with a hundred thousand total votes I imagine there are at least dozens if not hundreds of people who would value that research enough to fund it. Mainstream media sources will never offer that perspective. It would have to be something like a paid, but one-off substack.

Does this exist? Could it exist?

Expand full comment
j_says's avatar

I want this too, and I plan to pay someone to build it once I find the right person. (I actually submitted this as a proposal to Scott's grant program).

Current plan is to talk to journalism professors and see if they'd mentor their students to do this for one town's ballot for $1000. Basic investigative journalism, nonpartisan. Explicit goal to build a reputation for even-handedness. If they're an assistant DA running for judge, look up their case history. Ask them politely but firmly to take policy positions. Write it all up somewhere online, possibly ballotpedia, possibly a custom wiki-style website.

Expand full comment
dionysus's avatar

Not exactly what you want, but Ballotpedia is a nice resource.

Expand full comment
lalaithion's avatar

You might have a local reporter or advocacy group who does a write-up for each election; I know my area does. These people can be hard to find, but I would usually start with an advocacy group for a cause I support (e.g. a state or local referendum) and try to look through their documentation, the twitter accounts of their staff, etc. to see if any of them link to local election guides.

Expand full comment
Thor Odinson's avatar

An advocacy group for policies you utterly hate is just as useful here, it's worth noting - simply reverse their advice.

Expand full comment
Axioms's avatar

The issue is that government is so convoluted you just can't actually figure this out. And figuring it out for local elections just can't be profitable. There are shit tons of these offices to elect and a pretty small total electorate, most of whom do not care enough to pay money for info. You can figure out to some degree who says they care about an issue, and maybe track a few votes on it, assuming they've previously held elected office, but it is much harder to say who will be *effective* in their advocacy.

This stuff is hard for federal office too of course. See the infamous DWNominate scandal where they rate AOC as relatively rightwing because their metric for liberal vs conservative is dumb as fuck. This is a major organization which is quoted in political media often. Now imagine the same thing but for local races. Very depressing.

Expand full comment
Axioms's avatar

So I'm developing a Map & Menu strategy game with a strong focus on social simulation on top of a strategy layer. Think a map painter but instead of always map painting you can be a spy master or a merchant or build tall in a way that is actually interesting.

I'm writing some of the basic AI code for NPC characters. What I'd like to ask for is interesting suggestions for distinct social actions, personality traits, ideological axes and also "Interests" for characters in a fantasy world.

Now of course I have my own list but I'm curious if there's some obvious great stuff I'm missing. So Social Actions would be like Gossip, Flatter, Rant, Debate, Empathize, etc.. Personality Traits would be like Family Oriented, Obsessive, Analytical, Diligent, Courteous, Impulsive, Vicious, etc.

Ideological axes would be stuff like Purity, Egalitarianism, Militarism, Traditionalism, and so forth. I have 20 split into 10 axes going from -40 to 40 but I think I might be missing some potentially prominent ones.

Interests are stuff like Philosophy, Gardening, Animal Handling, Athletics, Tactics/Strategy, and so forth. Basically more specific stuff rather than the broader categories of Ideology or Personality.
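
For concreteness, here is the kind of shape this data takes (a throwaway Python sketch purely for illustration; the actual game isn't structured exactly like this, the trait, axis and interest names are the examples above, and everything else, including the example character, is made up):

```python
from dataclasses import dataclass, field

IDEOLOGY_RANGE = (-40, 40)   # each ideological axis runs from -40 to 40

@dataclass
class NPC:
    name: str
    personality: set = field(default_factory=set)   # e.g. {"Obsessive", "Courteous"}
    interests: set = field(default_factory=set)      # e.g. {"Philosophy", "Tactics/Strategy"}
    ideology: dict = field(default_factory=dict)     # axis name -> value in [-40, 40]

    def shift_ideology(self, axis, delta):
        """Nudge an axis and clamp it to the allowed range."""
        lo, hi = IDEOLOGY_RANGE
        self.ideology[axis] = max(lo, min(hi, self.ideology.get(axis, 0) + delta))

# Purely hypothetical character, just to show usage:
npc = NPC("Example Courtier",
          personality={"Analytical", "Impulsive"},
          interests={"Gardening", "Tactics/Strategy"},
          ideology={"Militarism": -12, "Traditionalism": 25})
npc.shift_ideology("Militarism", 60)   # clamps at +40
```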

Expand full comment
Pete's avatar

Have you played the Crusader Kings series, which seems to have a lot of conceptual overlap with what you're proposing?

The current version is CK3, but for your needs taking a look at the earlier CK2 with all its bajillion expansion packs would illustrate a bunch of "some obvious great stuff I'm missing"; buying all of that might not be worthwhile but youtube has videos and all the traits should be documented in some wiki.

Expand full comment
Axioms's avatar

I started developing my project prior to CK2 releasing, though only barely, and bought it on a recommendation from several people. CK3 was the game where I decided never to buy a Paradox game again because they focused on all the stuff I didn't care about.

CK2 has a lot of mechanics but they are shallow, rigid, and unintegrated with each other. There's certainly some overlap.

I blogged about a system called Secrets back in 2014-2015. CK3 later shipped a system doing roughly the same thing, but again shallow and unintegrated, also called Secrets. I took a break between 2017 and 2022, and I might not have gotten back to working on my game if CK3 had been any good. Now of course it has fantastic production values, fancy modding tools, and way too much focus on graphics like the 3D model system; when I say "any good" I mean with mechanical depth. Paradox decided incest memes and streamer-friendly visuals pay the bills better.

I don't think there's anything in CK3 or CK2 that I'm unaware of that would help. I had a few hundred hours in CK2. Their system is quite different and perhaps you can argue inherently limited by being "real time".

I should maybe have specified my level of familiarity with Paradox because I think for casual gamers CK2-3 is the obvious comparison.

Expand full comment
Nah's avatar

No joke: include some Gandhi-style 'at some point my aggression will go from 10 to 2,147,483,647' characters, but not too many.

Those are fun.

Also, you should add a hidden preference/antipathy for border gore in some agents.

Like, Frederick the Great absolutely needs to take half of Poland so the MAP LOOKS NEAT GOD DAMN IT

Expand full comment
Doctor Hammer's avatar

One thing I thought would be useful, or maybe just funny, would be AI traits based on parts of the game they don't like. "The trade system is annoying" or "God, do I really need to build every damned water fountain?!" The more they hate it, the less they engage in it. It just occurred to me that there are often game systems I just ignore because they don't work, or are irrelevant, or are just a pain, and I know the AI is skipping around them to cheat. Don't cheat, but give AI players a tendency to ignore aspects of the game, especially if the player can figure that out in-game and exploit it.

Expand full comment
Axioms's avatar

In theory each NPC would emergently focus on particular stuff. You need a base amount of investment in certain systems because of the core design but other stuff is quite skippable if you care about interesting AI rather than getting the AI to min-max to victory. NPCs who are not social butterflies, NPCs who mostly care about magical research and knowledge, and so forth. Also the NPCs will be delegating stuff to each other such that they have something of a specialization of labor.

Expand full comment
a real dog's avatar

I have a pet idea of a "tradition tree" for a 4X modeled after Haidt's https://en.wikipedia.org/wiki/Moral_foundations_theory . I think it could also work as a base for political leanings of individual characters in a grand strategy game, and arguments/influence strategies such characters would find persuasive.

Expand full comment
Axioms's avatar

I've got some analogue of all of those except maybe harm. Including all the possible additions. I think they are all pretty useful. I'm not a fan of Haidt but certainly some parts of his theory are true and useful maybe even the majority of it.

Expand full comment
Dino's avatar

MetaFilter had a post about Every Bay Area House Party, and I made the mistake of reading the comments. Each of these is from a different person -

"lovable eugenicist asshole SSC fucker"

"eugenicist Scott"

"he pulls the trick of writing something endearingly witty and not obviously fascistic."

"The author is a eugenicist. You don't become that way if you don't think you're far above average while being incredibly self-unaware."

"he didn't want his employers and patients knowing he was a eugenicist."

"a eugenecist assclown" (sic)

Did I miss it when Scott said he was a eugenicist? Or maybe he isn't, in which case why do these people think he is?

Expand full comment
Edward Scizorhands's avatar

The world is full of shit and a good way to deal with it is to not spend your time looking at it

Expand full comment
magic9mushroom's avatar

Scott *is* a eugenicist. A few articles I remember that touch on this:

https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

https://slatestarcodex.com/2014/09/10/society-is-fixed-biology-is-mutable/

https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/

https://astralcodexten.substack.com/p/welcome-polygenically-screened-babies

https://astralcodexten.substack.com/p/please-dont-give-up-on-having-kids

Scott is not a *Hitlerite* eugenicist; he doesn't want to change the gene pool via murder or involuntary sterilisation, but rather by genetic screening and editing plus some amount of conscientious breeding by high-IQ people. I think the closest he's ever come to Hitlerite eugenics was in Meditations on Moloch...

>Or to give another example, one of the reasons we’re not currently in a Malthusian population explosion right now is that women can only have one baby per nine months. If those weird religious sects that demand their members have as many babies as possible could copy-paste themselves, we would be in *really* bad shape. As it is they can only do a small amount of damage per generation.

...and that's quite a way short of Buck v. Bell let alone Hitler.

Calling Scott a eugenicist is simply stating fact. Thing is, though, that the kind of eugenicist he is is the kind you actually have to make an argument against, as opposed to "point at Auschwitz" for the Hitlerite school; there's some equivocation going on there. And I dunno why #3 is calling him a fascist.

Expand full comment
Melvin's avatar

I'm not sure that a belief that eugenics is a generally nice (though obviously peril-fraught) idea makes one a eugenic_ist_, any more than believing in abortion makes you an abortion_ist_.

Overall the word "eugenics" is probably sufficiently tainted by the bad form (the many many bad forms, to be fair) that it's probably best to taboo it and come up with new words to describe the various ways of improving human genetics.

Expand full comment
thelongrain's avatar

There’s rather a strong difference between believing that people should be *allowed to do X* if they want/need to and believing that people *should do X* for the benefit of society.

I’ve been paying attention to abortion-rights conversations for thirty years, and have never encountered pro-choice advocates arguing that people *should* get abortions, only that the procedure should be legal for individuals who choose to use it. Scott’s writing on eugenics is quite clearly in the “it *should be done for the benefit of humanity*” camp.

Expand full comment
David Piepgrass's avatar

> Scott’s writing on eugenics is quite clearly in the “it *should be done for the benefit of humanity*” camp.

Umm, *should be allowed* for the benefit of humanity. Surely Scott would oppose any forced sterilizations, forced screening, or other such connotations associated with the term "eugenics". Also, this is not the abortion debate; advocating polygenic screening is not like advocating abortions. Except if you view zygotes as people. But that's just not reasonable.

Expand full comment
magic9mushroom's avatar

1) I can't say I've seen people arguing specifically "we should have lots of abortions" (primarily because "conceived and then aborted" is dominated by "conception prevented"), but I've seen quite a few anti-natalists going "fewer babies being born is good actually".

2) Melvin may be driving at a different point i.e. reserving "eugenicist" for people who actually do eugenics rather than simply those that support it. The last link I posted above, though, is clearly intended to have a eugenic effect, so even this is debatable.

Expand full comment
Maynard Handley's avatar

https://slatestarcodex.com/2013/04/13/proving-too-much/

Are these MetaFilter folks against ANY genetic tests before having kids? You know, as practiced by that well-known Nazi group, the Orthodox Jews, in the form of Dor Yeshorim?

Expand full comment
magic9mushroom's avatar

I don't know. I don't know what MetaFilter even is.

There are arguments that overuse of even this kind of eugenics could result in a societal failure state (the classic case is a parent who wants his kid to do well selecting against idealism because it might result in the kid being fired and blacklisted somewhere down the line). But you do actually have to make them, and they don't apply to everything (there doesn't appear to be any societal advantage to having Tay-Sachs babies exist).

My prior on remarks like some of those in the OP being essentially word association of eugenics = Hitler = bad rather than an actual ethical analysis - i.e. not correctly describable as a "policy position" at all - is rather high, though I can't speak to specific cases without a great deal more context.

Expand full comment
Nancy Lebovitz's avatar

Metafilter is a link discussion blog, or at least that's the main part. It's pretty progressive, though I see occasional complaints that it's too white/insufficiently black.

I enjoy a good bit of it-- it's a decent source for interesting links and discussion, but when it's running malicious, it's damned malicious. It's also a good source for links about the war on Ukraine.

There's a small one-time fee to be a commenter, and it's firmly moderated.

Expand full comment
Melvin's avatar

No, they are against Bad Things and Bad People.

Who and what the Bad Things and Bad People are is something decided by a sort of collective humming, it's not something derived from logic or consistent definitions of words.

Expand full comment
Anti-Homo-Genius's avatar

Metafilter is a woke toilet, disregard it as viable indicator of any reasonable human-heldable opinion about anything vaguely culture-warry

Expand full comment
Deiseach's avatar

Ah, the good old NYT article, the gift that keeps on giving.

Basically, because Scott once mentioned Charles Murray, or someone who mentioned Charles Murray, without doing the requisite "leper! unclean! fascist racist nazis!" disclaimer, this is enough to get him tagged as a eugenicist, which is shorthand for racist.

Then interested and malicious parties kept spreading that take, and as the saying goes "a lie can travel around the world before the truth gets its boots on".

Expand full comment
Schweinepriester's avatar

Agree. Problem is, when the truth gets its boots well laced, it tramples down a lot and somehow stops being the truth. Managing lies may be charitable, not that I like to admit it.

Expand full comment
Anti-Homo-Genius's avatar

I think Scott also supports the non-fascist meaning of eugenics if I remember correctly, something like would-be parents checking for genetic diseases before having babies (something which he, I remember, mentioned is done in the highly interbred Jewish communities) and possibly extensions to that to the same tune.

Wokes have extremely low capability for abstract thought, hypothetical what-ifs and steelmanning; Scott entertaining a version of $BAD_WORD, any version, is equivalent to being basically Hitler. A Jewish Hitler, to be sure, but hey, you work with what you have.

Expand full comment
Nancy Lebovitz's avatar

to be fair, there were some friendly comments, but the thread got more and more malicious, then mercifully got into what "house party" means, then died a merciful death.

Expand full comment
Essex's avatar

1. I think Scott's argued for some soft-eugenics positions at some point in the distant past (that rounded to "Maybe we should incentivize people with marker genes for serious medical conditions to not breed"), but AFAIK he's not some full-blown advocate for mass sterilization or anything.

2. MetaFilter seems to skew very strongly to the SJW-Left. The fact Scott refuses to bow before their idols and (more importantly) castigate others for failing to do so makes him a devil, and thus he must be ritually castigated himself to prove your own loyalty to the cause. See the number of people who mock the idea of the free exchange of ideas in the same comment thread.

EDIT: Oh yes, and also the accusation that making jokes about steppe nomads destroying industrial civilization is a direct equivalent to 19th-century Yellow Peril agitation.

Expand full comment
Nancy Lebovitz's avatar

Yes, that last was one of the worst comments, and evidence that someone didn't know it was satire.

Expand full comment
Essex's avatar

"Didn't know" is the wrong phrase, in my opinion. In my experience with these sorts of things, the correct phrasing is "didn't care to know". When it comes to purity spirals, the actual CRIMES of the accused are of tertiary importance, and in fact must be kept vague and primarily on the level of near-immutable personality flaws. The point is to 1. Demonstrate one's own purity by 2. Casting out and abusing those who are impure. Notice that the orientation of their arguments fundamentally aren't "Scott holds these bad beliefs", it's "Scott IS bad because he holds these bad beliefs, THEREFORE he must also hold every other belief we find bad and also do a bunch of things we find bad as well." It's a ritualistic activity, whereby, by denouncing Scott's evil, they denounce the evil itself and attain catharsis.

See how many of them fixate on trivial aspects and try to prove that THESE also prove his evil (not liking loud, obnoxious party music isn't a personal preference, it proves you're an elitist and probably racist as well. Lampooning x-risk obsession with sn-risk isn't a joke about how x-risks look like psychotic far-outside scenarios to anyone who isn't solidly invested in them, it's a racist attack against all Asians equivalent to putting in buck teeth, squinting your eyes, and going "me so solly, me love you long time!"). I don't think these people actually strongly believe these claims. If someone who wasn't Scott had written this article, they would not be brought up. But because an impure one wrote this article, one can demonstrate their purity by showing how EACH AND EVERY ASPECT of it is also impure- the less obvious and subtle the offense, the better, as it demonstrates a superior spiritual advancement to one's peers.

You see this behavior in every group that becomes trapped in an obsession with purity- hypervigilance for any offense, no matter how small, and constant attempts to demonstrate one's own purity making the threshold of acceptable behavior higher and higher until only the most monomaniacal and unhinged can reach it. In the best-case scenario, this leads to the group losing all power and silently crumbling. In the worst-case scenarios the group manages to retain enough members that they radicalize and start endangering themselves or others.

The good news is that MetaFilter's behavior (as well as wider social trends) indicate that the average netizen is no longer willing to play nice with the constant bar-raising of the SJW-Left, and signs indicate that similarly-totalizing far-right viewpoints are likewise losing what ground they have. I'm allowing myself some small hope that maybe the internet can, if not go back to the way it was, at least move forward with no more tiresome status games than one might have to deal with in a hobby club.

Expand full comment
Nancy Lebovitz's avatar

""Didn't know" is the wrong phrase, in my opinion. In my experience with these sorts of things, the correct phrasing is "didn't care to know""

This is actually a hard topic. One reliable way of being very angry at people is to assume that they're not just wrong, they're voluntarily wrong.

My impression is that people's minds skid away from what they don't want to think. It's possible to get out of that problem, but not easy.

Expand full comment
Schweinepriester's avatar

One guy crucified should be enough. There will probably be lots of people unable to reconcile the law and the love as long as humanity exists. I fail there more often than I should. It's not trivial.

Expand full comment
Sean O'Flaherty's avatar

When Joss Whedon fell out of favor, there was a lot of energy spent pointing out all the bits that could be seen as racist or sexist in his shows.

PS Still, it’s funny that no one noticed the neo-confederate elements in Firefly when it aired. They weren’t that subtle.

Expand full comment
Byrel Mitchell's avatar

I mean, when I was introduced to the show, the guy telling me about it described Mal as basically a former confederate. It just wasn't a big deal; this is sci-fi, and you can actually just have former rebels without the racism.

Expand full comment
Bullseye's avatar

Whether you catch the neo-confederate elements in Firefly is very much a matter of how familiar you are with southern conservatives' claims about what the war was about.

Expand full comment
Melvin's avatar

People did, they just didn't have the same insane reaction to anything vaguely Confederacy-tainted that they do nowadays.

Expand full comment
Eremolalos's avatar

Everybody commenting wants to have a sharp irritable take, but they don't know enough about Scott to have one, and unfortunately for them the piece they read is charming & doesn't supply anything to make mudballs out of. Not sure where the eugenicist slander comes from -- did that NYTimes piece about Scott say or imply something about that?

Expand full comment
Nancy Lebovitz's avatar

I'm reasonably sure a number of the commenters didn't understand that Scott's piece was satirical.

Expand full comment
Eugenia Boyer's avatar

How is it that so many comments at the same time say the same thing?

Expand full comment
Anti-Homo-Genius's avatar

$CURRENT_THING effect

Expand full comment
Axioms's avatar

Scott allowed people to talk about certain topics like HBD which many people have a strong negative reaction to.

Expand full comment
Brian Bargh's avatar

Why is it slow to load 2D images? It's standard practice in the software industry to write something functional without paying too much attention to speed as a first step. And it actually works pretty well. Most of the time your first try is good enough. Also, most of the time you're not working on the step that takes the most time, so it wouldn't make sense to spend time making that part of the code run faster. Why spend a week writing something in C when you could spend an hour writing it in Python if the difference in speed is only 50ms?

So almost all software is 1000x slower than it could be and this represents a much more efficient allocation of developer time than if all software were fast. (Obviously some things are slow that should be fast though.)

Expand full comment
Solra Bizna's avatar

> Why spend a week writing something in C when you could spend an hour writing it in python if the difference in speed is only 50ms?

Actually, choices of algorithm and data structure usually make a much bigger difference than language. Spend an hour writing it in Python and get a 3000ms hitch, spend a week writing it in C and get a 2950ms hitch, spend an hour thinking about the design and thirty minutes writing it in Python and get a 50ms hitch.

(Source: I am a programmer, trained in the era when each of those hitches would be multiplied 10-to-1000-fold.)
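
A toy illustration of the kind of thing I mean (my own contrived example, not anyone's real code): the same "which of these items are new?" check written twice, once scanning a list and once probing a set. The first version takes on the order of seconds, the second milliseconds, and rewriting the first one in C would barely dent that gap.

```python
import time

items = list(range(20_000))
seen_list = list(range(10_000))
seen_set = set(seen_list)

t0 = time.perf_counter()
new_slow = [x for x in items if x not in seen_list]   # list scan: O(n*m)
t1 = time.perf_counter()
new_fast = [x for x in items if x not in seen_set]    # hash lookup: O(n)
t2 = time.perf_counter()

assert new_slow == new_fast
print(f"list membership: {t1 - t0:.3f}s, set membership: {t2 - t1:.3f}s")
```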

Expand full comment
Thor Odinson's avatar

Agree that thinking about the design is the most important step, but it's far too often forgotten - in part, I suspect, because to a bad manager sitting and thinking looks like slacking off, while furiously typing terrible code looks like productivity

Expand full comment
Solra Bizna's avatar

When I'm my own supervisor, even I still fall into that trap. I have to remind myself that I should clock in even when I'm "just thinking".

Then again, I'm the kind of person who clocks out for bathroom breaks...

Expand full comment
Karen in Montreal's avatar

Also because people aren't thinking about everything the computer has to do to load those 2D images. Instead they're making a mental judgement based on how fast most of the buttons they usually deal with, and especially the ones they grew up with, responded; and those were PHYSICAL buttons. No loading delay!

Expand full comment
Axioms's avatar

The vast majority of delays in UI come from generating a new menu panel. That's when you are loading lots of images, often images you have to decide to load based on pretty complex function checks. Obviously the issue is worse in Unity and in high-level languages like Python generally, but you get it in C++ as well.
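
One mitigation that's independent of engine or language (a generic Python sketch, not Unity or C++ specifics; the paths and function names are invented): do the expensive decoding once on a loading screen, so generating the panel later is only layout and lookups rather than disk I/O.

```python
# Sketch only: front-load image decoding so opening a menu later is cheap.
MENU_IMAGE_PATHS = ["ui/icon_trade.png", "ui/icon_spy.png"]   # invented paths

def preload(paths, decode):
    """`decode` stands in for whatever the engine uses to load a texture."""
    return {path: decode(path) for path in paths}

def open_menu(preloaded):
    # By the time the player clicks, each icon is a dictionary lookup,
    # not a disk read plus a decode.
    return [preloaded[path] for path in MENU_IMAGE_PATHS]
```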

Expand full comment
unresolved_kharma's avatar

I'm quite desperate right now and I don't think I can make things worse by posting this comment here. Does anyone have short-term, concrete, actionable advice for EXTREME procrastination and emotional block to start intellectual work?

Alternatively, I'd enjoy recommendations for an online therapist who can help with this kind of issue. I'm already in therapy, but this is kind of an emergency situation and the slow-paced approach of my regular therapy is not very useful for these immediate problems (to be fair, I've focused mostly on relationships so far).

Apologies if this is not the right place to ask....

Expand full comment
Schweinepriester's avatar

Looks like it has been the best place to ask. How nice. Many useful hints. I can't really do any better but would recommend tweaking the system physically, too. Workout, cold water, stuff like that. Once the system is happy to work at all, it may be inclined to do something.

Expand full comment
unresolved_kharma's avatar

Indeed! It seems the issue struck a chord with quite a few people.

Thanks for your suggestion too, physical interventions along the line of what Christina suggested are something I've not tried enough, but it makes a lot of sense.

Expand full comment
Bldysabba's avatar

Listening to the huberman lab podcast helped me immensely - there are three episodes that are critical

- on the importance of sleep (you have to stop getting light in your eyes by 10:30pm),

- on the importance and mechanism of dopamine (https://youtu.be/QmOF0crdyRU), and

- on treating addiction by cutting off your easy dopamine reward sources for 30 days (https://youtu.be/p3JLaF_4Tz8).

The last is the most important to get done if you're in extreme procrastination mode, but sleeping right helps you do it, and understanding the dopamine cycle helps you understand why you need to do it. I have struggled for decades with procrastination, and only after listening to some of these episodes, and implementing the 30 day protocol am I finally feeling in control.

Expand full comment
unresolved_kharma's avatar

Thanks, I'll check these episodes. To be honest, I do have mixed feelings about his public persona that led me to mostly avoid his content, but I admit it might be purely a prejudice.

Expand full comment
Bldysabba's avatar

What about his public persona? (I only know about his podcast)

Expand full comment
Christina the StoryGirl's avatar

First, I'm cautiously seconding modafinil, but be aware it's like shooting an arrow from a bow - once you aim it, it's going to continue that direction. So don't take modafinil and go into your usual Dark Playground, because all the modafinil is going to do at that point is *keep you there.*

Second, if you don't know what the Dark Playground is and you haven't read this essay and its follow-up (https://waitbutwhy.com/2013/10/why-procrastinators-procrastinate.html) ...well, it's not a fix, but it will make you feel seen and not alone.

Last, and the epistemic status is "I heard this on a podcast somewhere and it seems to work for me..."

Suffering through a *very* cold shower or submersion/plunge of three minutes or so at the start of your day can weirdly make starting other unpleasant tasks that day *much* easier.

The water should be so cold it's *extremely* unpleasant verging into intolerable, but obviously not so cold that it triggers a physical shock response, cramping, shortness of breath, or anything like that. Just cold enough that you hunch up, breathe hard, and yell "Fuckfuckfuckfuckfuck!" a lot.

If you can't do three minutes, try 30 seconds or a minute and work your way up.

Good luck. I know how much procrastinating sucks.

Expand full comment
unresolved_kharma's avatar

Thanks, both for the word of caution about Modafinil and for the unorthodox cold shower suggestion. Sometimes I have the almost physical feeling that in order to get something done I need to "reset myself" in some way... probably a cold shower can help with that.

Expand full comment
Radu Floricica's avatar

I can confirm that resetting your idea of uncomfortable helps. Not a miracle cure, but helps. Once you have a bit of slack you may want to start lifting weights. It's not a short term thing, but will tackle the problem from several directions, some you don't expect (for example once you start seeing effects, you have a lot more confidence in your agency). But yeah, make yourself some slack first.

Confirm the thing about modafinil being an arrow. It's like ambien: works, except if you forget to go to bed :)

Expand full comment
Christina the StoryGirl's avatar

You're very welcome.

For what it's worth, I was being a little glib about my "I heard it on a podcast somewhere" epistemic status; I've actually been hearing about the benefits of cold immersion on *many* different podcasts and on internet articles from highly-credentialed sounding folk. Mostly they talk about the physical benefits - something something inflammation, something something brown fat (especially paired with sauna) - but increasing motivation / productivity is very commonly discussed as well.

And anecdotally, when the hot water heater went out in my Seattle house share in the winter, I had to take brutally cold showers several days in a row. They were brutally uncomfortable and I literally yelled "fuckfuckfuckfuck....uh.....fuckfuck!" the whole time I was scrubbing and rinsing, but seriously, nothing feels better than getting out of one and pointing yourself at the next thing.

If you're already having a feeling that a "reset" will do the trick, then this almost certainly will.

Expand full comment
Maynard Handley's avatar

Try the Pomodoro method:

https://en.wikipedia.org/wiki/Pomodoro_Technique

It's easy to use, and the 25-minute interval works well (in general, though for particular use cases you may want to alter it). You can find a zillion apps claiming to implement it, but honestly, you don't need an app; you simply need a phone or smart watch with an easy way to start a 25-minute timer.
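
If you want the barest possible version, a timer loop really is all there is to it; a throwaway sketch (Python, purely illustrative, and the four-round count is an arbitrary choice of mine):

```python
import time

def pomodoro(work_min=25, break_min=5, rounds=4):
    """Bare-bones Pomodoro loop: prints when to work and when to rest."""
    for i in range(1, rounds + 1):
        print(f"Round {i}: work for {work_min} minutes.")
        time.sleep(work_min * 60)
        print(f"Round {i}: take a {break_min}-minute break.")
        time.sleep(break_min * 60)
    print("Done. Take a longer break.")

if __name__ == "__main__":
    pomodoro()
```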

A second scheme that works well is to define one day a week (eg Saturday or Sunday) as the day for all the boring but essential work. Garden, laundry, paying bills, whatever it is you dislike. You are allowed to let that stuff pile up for a week and not feel guilty, because on Sunday you get to work, get it all out of the way, and let it start piling up again on Monday.

Expand full comment
a real dog's avatar

Time-box your shit. Once your timebox starts, don't necessarily solve the problem right away - if it were easy you'd have done it already, right? Look at the difficult hairy problem, try planning next steps, maybe explaining it in writing - this way it'll be easier to ask someone else for help, but the act of describing a problem will often get you unstuck anyway.

Uninstall everything that distracts you, if necessary, but honestly just scheduling time should be enough. Don't overdo it - an hour of good work is better than a day of fucking around and not getting anywhere.

Expand full comment
unresolved_kharma's avatar

Thanks for chiming in! Planning the steps to solve the difficult problem is actually good advice, as I often feel overwhelmed by the whole thing.

Expand full comment
Karen in Montreal's avatar

Uninstalling and/or blocking, for predetermined periods, all the distracting stuff on your computer, phone and tablet is super helpful. Use planning to manage later impulsivity/distraction. There's too much interesting stuff on our devices, way too distracting. As is this thread ... back to work!

Expand full comment
Joe's avatar

Promise yourself that you only have to work for five minutes. This is a bona fide real promise; I am telling you that if you hate what you’re doing then you’re allowed to stop after five minutes and you have fulfilled the above promise in full. You’re not obligated to stop, though, so if it turns out you don’t hate it that much, you can keep going if you want.

Also, find the smallest, tiniest, fastest task you can find that still makes progress towards the goal, and do that one first. You are encouraged to include "find tiny tasks" as a task that makes progress, given the extremity of your procrastination.

See a psychiatrist about ADHD.

Expand full comment
Phil H's avatar

Yeah, I agree with this one. It's in the same family as the pomodoro, and I have another similar one that I've used on occasion: a ten-minute-do-something rule. I set myself the goal that in each ten minute slot of my work time, I will do something, even if it's only one word. You can use a timer, or usually I just remember the time, and go, right, by 9.22 I need to have done one thing... Sometimes I do end up doing just one word (or looking up one thing, or replying to one work text, or whatever), and that's fine, but as you know, the hard bit is the getting started, so often it snowballs into a bit of real progress.

I find there's value in having a bunch of different approaches, because then you can chop and change between them.

Expand full comment
unresolved_kharma's avatar

Thanks!

Expand full comment
joe's avatar

FocusMate changed my life more than probably any other procrastination-related intervention, especially once I'd fixed some of the smaller, more obvious things

Expand full comment
unresolved_kharma's avatar

Thanks, I'll have a look at this service!

Expand full comment
Chezkele's avatar

I had this issue once after a stressful college semester. I had a long history of anxiety, and at that particular time I couldn't get past the anxiety paralysis to even start my final papers, even with an extension. I'd pull up Microsoft Word and my research tabs in Chrome, and I'd just get stuck, unable to continue. I found EMDR therapy to be quite helpful at getting past that particular mental block. It wasn't a useful treatment for general anxiety long term, but it was very, very helpful at removing paralysis anxiety associated with a specific cause.

Expand full comment
DecipheredStones's avatar

Get someone who you trust to sit with you as you attempt to do the thing (not necessarily watching over your shoulder or anything, just immediately available) so that you can't completely fuck off and not do it at all and so that you can vent your negative emotions out loud as you try to do it. Doing this with my girlfriend got me to start working on something I'd compulsively procrastinated on for months, and once you've started, it's far easier.

Expand full comment
unresolved_kharma's avatar

I've tried this with mixed success in the past... to be honest it's hard to not feel judged sometimes. Anyway, I don't have a way to make it happen in the immediate future, but thanks for chiming in.

Expand full comment
Thor Odinson's avatar

A friend or partner will have limited availability. Flat-out hiring someone may be worth considering.

Expand full comment
Karen in Montreal's avatar

Procrastination is about avoiding negative emotion. 99% of therapists know little to nothing about it, and think it's a time management issue, or an unconscious desire to self-sabotage, when it's neither. A good CBT therapist might be able to help, anyway, just by getting you to break things down further and further and try out strategies until you have a system that mostly works.

People who procrastinate tend to seriously over-estimate how long, boring or annoying a task will be. (And people with ADHD do that far more often than neurotypical people, therefore, more procrastination.) That over-estimation leads to a lot more dread about doing a task. (A medium-term procrastination approach helps people learn to estimate those factors better, and/or take their own estimations less seriously.)

If the task will be evaluated (as in pretty much everything we do during our educations, and many things we do at work), then anxiety also plays a role. More anxious, more procrastination. (Especially annoying because SOME people who are anxious about their work being evaluated channel that into getting everything done ahead of time and perfectionism. That anxiety management approach is encouraged and rewarded.)

Deadlines have a magical impact on our ability to get work done, because as the deadline approaches, our dread about NOT getting the work done grows, until it's stronger than our dread about doing the work. Unfortunately, when that finally occurs, we are often left with too little time to complete the task, or to complete it well, or to complete it and check it over well.

Stimulants help by something something something dopamine hand waving. Less dread, less procrastination.

Pomodoros, or making lists that break tasks into very small sub-tasks and tackling them one at a time, help because the dread of doing 20 minutes, or of just opening the instructions for an assignment, is much less than the dread of doing the whole task or assignment. And once we've started like that, the rest of the task often doesn't seem as intimidating. Those are super useful strategies, as are things like creating intermediate deadlines for subtasks, making a commitment to someone else around those intermediate deadlines, and deliberately recalling times we've experienced those negative emotions and it's been fine afterwards .....

Even just recognizing, labelling and accepting the negative emotions leading to procrastinating on a specific task can help. Sometimes that frees up enough mental energy to be able to choose to start the task despite the negative emotions. That's more of an ACT approach.

If you happen to live in Quebec, there's a researcher/clinician who does great work on this stuff, Frederick Dionne ....

Expand full comment
Thor Odinson's avatar

This matches what I understand of my own procrastination, yeah. What do you do when the task is genuinely horrid and the dread is warranted, though?

Expand full comment
unresolved_kharma's avatar

Thanks a lot for the detailed reply, this is very helpful and I'm grateful you took the time to write it. First of all, you made me realize that my therapist is probably not very knowledgeable on this particular issue.

However, personally I fully acknowledge that my behavior is due to avoidance of negative emotions, yet I'm not able to do anything useful with this knowledge.... sometimes I actually feel worse, because this absurd divide between knowledge/understanding and action makes me feel powerless.

Unfortunately I'm in Europe, not Quebec. By the way, even though English is not my native language, for some reason I feel I have better chances of finding people to help me tackle this problem in the anglophone world, and that's why I'm mostly looking for online sessions.

Expand full comment
Karen in Montreal's avatar

Look for a therapist, wherever they are, who does Acceptance and Commitment Therapy, then explain this to them. They should be able to help!

Expand full comment
unresolved_kharma's avatar

Thanks a lot, I'm reading about ACT and it seems quite suited to my needs! I'll look for an ACT therapist.

Expand full comment
Radu Floricica's avatar

Second modafinil, for "short term, actionable advice". Adrafinil is easier to get a hold of, if that's a problem, and almost the same thing. Peak effect is just 1-2 hours later.

On the therapy side, what helped me in similar situations was to take a long introspective walk and try to figure out why/if that particular task is important for me. Of course, you may discover it's not, in which case you'll have a different problem :) but either way, the point is to have a conversation with yourself with discovery as a goal.

Expand full comment
unresolved_kharma's avatar

Thanks!

Expand full comment
coffeespoons's avatar

The Pomodoro system has worked for me in the past. Set a timer for 25 minutes, and tell yourself that you'll only focus on writing until the timer rings. Don't worry too much about quality at first, just write until the 25 minutes are up. Then, take a five minute break, and set a timer for another 25 minutes. Repeat a few times and you should have a decent chunk of output. Good luck! and remember, done is better than perfect.

Expand full comment
unresolved_kharma's avatar

Thanks!

Expand full comment
Axioms's avatar

Modafinil is easy to get or some ADHD med if you can get that without a lot of trouble.

Expand full comment
unresolved_kharma's avatar

I don't live in the US and I think here it's a bit harder to get hold of those, but given the situation I will try.

Expand full comment
Maybe later's avatar

Nicotine patches are an option as well.

Expand full comment
Axioms's avatar

Modafinil ordered from ModUp or something is relatively easy. I guess it depends on the specific laws of the specific country.

Expand full comment
David Gretzschel's avatar

Uhm... how do you guys avoid this pesky extra whitespace in your comments, that I seem to create in longer comments? It does not show up for me like that in the edit-box.

Expand full comment
Lumberheart's avatar

Line breaks signal the start of a new paragraph to Substack - extra whitespace included. Unlike Markdown, it won't collapse them or require double line breaks. There's no option for adding a line break without that extra space.

Expand full comment
David Gretzschel's avatar

Thanks. Structuring my thoughts with linebreaks is muscle memory for me. Spacing it out also helps me read over my own text. I'll try to avoid doing that here then. I think it ends up looking pretty damn cringe, but the space-hogging is probably worse than text with no line breaks at all.

Expand full comment
Eremolalos's avatar

Here is a site created by an engineer at Microsoft that gives info about the current availability of drugs to treat covid (Paxlovid, Molnupiravir & Bebtelovimab) and the preventive drug Evusheld, given to people who are immune compromised. Info is only for US.

Evusheld locator: https://rrelyea.github.io/evusheld/

Paxlovid locator: https://rrelyea.github.io/paxlovid/

Bebtelovimab locator: https://rrelyea.github.io/bebtelovimab/

Molnupiravir locator: https://rrelyea.github.io/Molnjpiravir/

Site draws data from COVID-19-Public-Therapeutic-Locator, but is easier to use and gives extra information. You can get information about Bebtelovimab, which is not on the Therapeutic Locator, & see how much supply of each drug each site has. You can also see a graph showing what the supply has been over time. Seeing supply over time is useful if you want to make some noise about how terrible the distribution of these drugs has been, and how much is going to waste. For instance, here’s Paxlovid in freakin Alabama: https://rrelyea.github.io/paxlovid/?state=AL Notice how many pharmacies’ graphs are practically flat — they got in a supply of the stuff a good while ago, and it’s just sitting there.

Expand full comment
David Gretzschel's avatar

That's pretty damn cool. But I was just about to recommend that to some people I know, who are very worried about Covid, till I noticed that it's US-only. Please edit that in.

Expand full comment
Eremolalos's avatar

OK, did it. Are you in the UK by any chance? Just found out this week that Paxlovid is so scarce in UK that it is given only to the severely immunocompromised, and there's no Evusheld at all.

Expand full comment
David Gretzschel's avatar

Nope, I'm in Germany. No idea what our supply situation is. Just that I would have sent this along as a neat resource.

Expand full comment
Jonathan Ray's avatar

What often makes websites ridiculously slow is an excessive number of round trips to the server. First your browser asks the server for an html file, and then the html file tells your browser to ask the server for JavaScript file #1, and then JS#1 tells your browser to ask the server for JS#2, and so on up to JS#9. They could save between hundreds of milliseconds and several seconds (depending on how slow your connection is) by inlining all the javascript code in the first request.

Expand full comment
Retsam's avatar

I'm not sure this is actually a very common problem, as "bundling" - rolling everything into a single JS file - has been basically the standard way of building/distributing websites for close to a decade now. (And, yes, a quick check in the browser console tells me Substack is bundling their code, too.)

The more common issue is actually rather the reverse - where there's too much code in the initial bundle, like maybe the comment section code shouldn't be blocking the initial load of the article - so I'd guess more sites have issues from not splitting their bundles enough than from not bundling enough

Expand full comment
Jonathan Ray's avatar

Substack is bundling the code, but not the data. There's a clusterfuck of dozens of separate requests to the server for each page load.

Expand full comment
Retsam's avatar

It actually doesn't - at least, I'm opening the /comments view of the page, and it basically did one load of all the structure of the page, and then one API request for the comments, which resolved to a 650K character JSON file of all the comments.

Most of the API requests I'm seeing are loading people's icons - which are only loaded as they become visible - which definitely seems like the right way to handle it, even if it's 'more round trips' - as well as periodic polling for new comments (which if no new comments have been posted, comes back as an empty JSON object, not a repeat of the 650K initial load).

It seems, overall, like a fairly sensible way to do it, and I definitely don't think your initial point that there's "not enough inline JS" is accurate.

Expand full comment
Ferien's avatar

Now, a million-dollar question: why is Process Manager (the program you start to find an offending program which takes too much CPU/RAM/disk and to stop it) slow too?

Expand full comment
Thor Odinson's avatar

Presumably because you tend to only open it when your computer has no resources to do it with? If I open it with not much else running, it seems fast to me.

Expand full comment
Axioms's avatar

Do you use ProcEXP?

Expand full comment
Ferien's avatar

I did, in WinXP times...

Expand full comment
nifty775's avatar

So Honduras revoked their law that allowed new ZEDEs/charter cities (unanimously!), and we'll see how things shake out for Prospera and a couple of the other existing ones. I think charter cities are interesting and I'd definitely like to see them succeed - unfortunately, as I said at the time, you basically can't trust a 3rd world Latin American country to not go populist in the future. Even if you make an 'agreement' with one government at one point, the history of Latin America tells us that a future populist will come into power and rip it up. Prospera settling in Honduras was a bad idea; I would basically expect that a future government will shred their agreement and seize the land.

I continue to advocate for charter cities among Caribbean nations, probably an ex-British colony. While not perfect, they tend to be much more politically stable than poor Latin American nations like Honduras, with more respect for the rule of law, basic property rights, etc. The Bahamas, Barbados, Saint Kitts, Grenada, Saint Lucia..... all much better options. Let's hope that charter cities build on a more successful foundation for their next round.

Expand full comment
Dirichlet-to-Neumann's avatar

I doubt charter cities can really work unless backed - at gunpoint if necessary - by a foreign power. Which is what the successful historical charter cities did - think the East India Company, the Dutch East India Company and so on. They made a lot of money and had an 𝙖𝙢𝙖𝙯𝙞𝙣𝙜 human rights record, so I guess that's the way to go...

Expand full comment
Thor Odinson's avatar

Can't tell if your human rights record comment was sarcastic - to my knowledge, it falls under "the past is an awful place" - those colonies were terrible for human rights by modern standards, but the local governments they replaced were *also* terrible, and I'm not enough of a historian to assess which was worse

Expand full comment
Deiseach's avatar

Well, colour me surprised (not). New socialist government decides it wants to keep all territorial powers for itself; who could have seen that coming?

Charter cities are not going to work unless (1) you can outright buy the rights to the land, lock, stock and barrel, or (2) you are well tied in, paying reasonable but not extortionate bribes to a corrupt government that is not going to let itself be overthrown anytime soon in coups d'état, popular uprisings, or the like, because it can reliably send in the troops to squash those pests. The problem with option (2) is that a strong government may decide it likes what you are doing and take it over for itself (e.g. I'd be concerned that if China let you set up on its territory, after a while it would decide 'yes, we'll have all of that, thanks').

Expand full comment
Thor Odinson's avatar

What about (2b) well tied in to paying a retainer to a decently sized foreign military (mercenary or otherwise)? I guess that rather presupposes (1) in practice, maybe with scare quotes around 'buy'

Expand full comment
User's avatar
Comment deleted (May 8, 2022, edited)
Expand full comment
John Schilling's avatar

> Good. ZEDEs are a colonialist, imperialist, anti-democratic project.

You say that like it's a bad thing.

I tend to look at the modern world and see an awful lot of really good things that were the work of colonialists, imperialists, and some mix of autocrat/socialist/plutocrat/theocrat, then had the rough edges filed off by some local democrats. And I don't think we're even close to the place where all possible good things have been created in their basic forms and only need the rough edges filed off. So if someone wants to create dedicated zones clearly labeled "here there be colonialist, imperialist, anti-democratic dragons", that strikes me as a pretty good idea and I'd prefer they be left in peace.

Depending on exactly which brand of anti-democratic imperial colonialism they are involved in, I might make a point of never ever going there, and you may want to do the same. But they *are* clearly labeled, so what's your beef with the people who *do* want to go there?

Expand full comment
David Piepgrass's avatar

> You say that like it's a bad thing.

Umm, yes it is?

Seems to me that ZEDEs are good precisely because they offer a new and better way of doing democracy: you can choose to live in the ZEDE, or not, which means that good public policy does not have to come from convincing the entire country to vote for people who in turn vote for the thing you want, which is damn near impossible. Plus, a successful ZEDE can inspire the rest of the country to pick up some of the elements of the ZEDE. And an unsuccessful one can be dissolved. All this is good.

Expand full comment
Melvin's avatar

There might be corners of the internet where you can call something "colonialist" and "imperialist" and expect the dots to already be connected from those labels to "bad", but I hope this isn't one of them. Colonialism and imperialism have good and bad aspects.

But if you're opposed to colonialism and imperialism in general, surely charter cities are a good thing: they enable rule by local people rather than faraway imperial powers? The Honduran Empire isn't a huge one, but it's not a tiny one either; there's no good reason why people on Roatán should be ruled by people in faraway Tegucigalpa, is there? Charter cities provide a framework for people in small local groups to escape this sort of smaller-scale imperialism.

Expand full comment
a real dog's avatar

Colonialism, imperialism and anti-democracy seem strongly underrated in 2022. Especially given how the alternative works.

Expand full comment
Cire Barr's avatar

That video presents little or no information.

Expand full comment
nifty775's avatar

Seems like you're hostile to market systems. OK, no problem. What's your alternate public policy for how Honduras (GDP per capita $2400) is going to emerge from poverty? Honduras is regularly described as one of the poorest countries in Latin America. My read on the 20th & 21st centuries is that 100% of countries that emerged from poverty used market-based systems, and 0% of countries using other systems have done so. The record seems pretty clear. For example, South Korea was famously about as poor as Peru in the 60s, yet today is one of the wealthiest countries in the world. This was accomplished not only through special development zones similar to ZEDEs, but embracing capitalism in general.

Seeing as you don't like market mechanisms- what's your alternate path for how Honduras will emerge from poverty?

Expand full comment
User's avatar
Comment deleted (May 8, 2022, edited)
Expand full comment
nifty775's avatar

Deiseach summed up everything I was going to say. If some eccentric libertarians want to live on a separately built commune that's semi-independent, and have their own system of government that they voluntarily opted into, I don't see how that's 'non-democratic'. So long as they're always free to leave if they choose- what's the issue? Especially as, to my understanding, the criminal law code of Honduras was always supreme.

How is that different from, say, taking a cruise ship? You're subject to the rules of a non-elected corporate entity for as long as you're on their boat, their security could even throw you in the brig if you get too rowdy. It's not like the passengers hold an election to select their leader upon boarding. You voluntarily chose to be there. How long does your stay with a corporate entity have to be, to cross the line into being 'undemocratic'? Should my landlord have to earn my building's vote every year?

Most Western countries are ultimately ruled by a judiciary who are unelected and are deliberately insulated from public opinion. They can't be removed no matter how much 'the people' dislike their decisions, and even the government has to obey them. Are judges 'non-democratic'? Should we select legal or constitutional outcomes just based on what's most popular? Maybe we should decide civil or criminal cases via a referendum. Or even better, fundamental constitutional rights like the 1st or 4th Amendments can be up for a vote every time there's a controversy. We could hold a vote on whether Islam is permitted in the US right after 9/11. That'd certainly be more democratic!

Expand full comment
User's avatar
Comment deleted (May 8, 2022, edited)
Expand full comment
David Piepgrass's avatar

Umm... even if they're free to leave, that doesn't mean they're free after they leave. (Or that other countries will welcome them.)

https://foreignpolicy.com/2018/03/29/the-disappeared-china-renditions-kidnapping/

Expand full comment
Thor Odinson's avatar

That first comment just isn't true - even if China freely grants passports to leave (which I would tentatively bet it does not, it certainly didn't in the relatively recent past), somewhere else needs to let you in. By contrast, since ZEDEs are part of the country, you don't need to deal with any immigration laws to just walk out.

Expand full comment
Deiseach's avatar

"I'm opposed to Prospera because I believe that its proposed structure is fundamentally incompatible with democracy and that it would necessarily deprive its citizens of basic human rights."

The local Hondurans are not going to be the citizens of Prospera, those are the rich-by-local-standards foreigners that Prospera wants to entice in with promises of remote working for their IT/fintech jobs with Western companies, live in an agreeable climate in an agreeable location, pay few to no taxes, and have cheap local labour to be their servants who will then go home in the evenings and not hang around cluttering up the perfect city with their lesser economic status and need for cheap housing etc.

If the people who want to sign up to the charter city are willing to give their rights away to whatever governance structure will be in place, that's no skin off anyone's nose, given that if it's too intolerable they'll just fly home to their native Western cities. The locals might make some money out of being cheap labour, if there isn't alternative employment available. As to how valuable it would be to lift Honduras out of poverty, I tend to agree with you: the previous government saw this as a golden goose of funds to be funnelled into their own pockets, and to hell with the locals.

Expand full comment
User's avatar
Comment deleted (May 8, 2022, edited)
Expand full comment
Wasserschweinchen's avatar

Your opinion is difficult for me to understand. Is your point of view that someone who votes for the opposition in a democracy, or who doesn't vote at all, for that matter, has given consent to be ruled by the winner of the election?

Personally, I feel a greater degree of consent now that I live in a jurisdiction in which I do not have the right to vote, but which I freely chose to move to, than I did when I lived in a jurisdiction in which I did have the right to vote but which I had been born into.

Expand full comment
Deiseach's avatar

Oh, I dislike Prospera on its merits. But if people are willing to sign up and sign away their rights, because they think it will free them from the tentacles of intrusive government or it'll be like a big gated community or they will make more money or they can live like gentry complete with servants, then I don't see how you can stop them.

So long as they're able to say, at any point, "this sucks" and drive to the nearest airport to fly home, then I don't think you need worry too much about them and their kids. Of course, it would be a different matter if Prospera owners/operators went "Sorry, you signed a contract for X months; until that time is up and/or the money you agreed to pay in rent and services provision is paid, you can't leave", *physically* blocked them from leaving, and the Honduran government said "Sorry, we can do nothing to help you, this is its own little sovereign state outside of our jurisdiction".

That's why you should always read the small print of anything, especially when it seems too good to be true.

Expand full comment
DinoNerd's avatar

I've recently retired from a job in digital device performance. (Much of that time involved cell phones, with a little bit of work with computers near the end.) That doesn't make me an expert; I always felt more like I was just fumbling rather than knowing what I was doing.

In general, user-noticeable performance problems tend to be caused by a giant collection of tiny problems, rather than a single big issue. I call it a death of a thousand cuts. Occasionally the root cause is a single major design or coding mistake; we loved those problems, because we could pretty much always solve them. Whereas with the deaths of a thousand cuts, it would be tradeoffs all the way down - do we want 10% of users bit by *this*, or 11% bit by *that*?

At a high level, I'd say the twin root causes of much of what we dealt with were (a) excessive complexity and (b) cramming in at least 10% more features, more constantly-running processes, more memory use, etc than the device could handle comfortably, and relying on various tradeoffs to make it look happy *most of the time*. The cell phones I worked with lived on the edge - not just the older, slower models we were still supporting, but generally also the latest and greatest model we hadn't yet announced. (There was a big tradeoff between the cost of more powerful hardware and the features designers wanted to load onto it. Neither higher prices nor fewer features were going to keep profits high - but that meant everything was always running on the edge.)

Getting back to excess complexity - modern software development optimizes to reduce development costs. Everything's done with really really high level languages, with layers of frameworks. No developer understands all the things a particular line of code will do (or cause to be done). Moreover, the same line, in the same program, will do different things depending on the state of the device. 99 times out of 100, or even more often, a particular operation is non-blocking - and the other time, it sends a message to another process, which needs to be launched, which wakes up 5 more processes, which need to be launched, which runs the device out of free memory, causing the process that started it to be paged out (on a computer), or even killed and relaunched (on a cell phone or tablet).

One culprit was ever higher level languages. Swift uses more resources to accomplish things that could have been done in Objective C more cheaply. C++ or C would be even cheaper - in terms of performance. But not in terms of development time. Swift would also protect developers from some common coding errors Objective C could not. But when I write C, I pretty much know *exactly* what it will do - except for operating system behaviour, and whatever external routines I call. And when I write Swift, I really don't - I hopefully know what it will eventually accomplish, but not how it will be done.

Another culprit is that iOS, at least, and much of MacOS, does just about everything by sending a message to some other process, which does part of what you wanted itself, then sends out a few more messages to accomplish the rest. It's unlikely that the whole set will fit in memory. (And by the way, your cell phone lies to you about what apps are really in memory - sometimes it's metaphorically freeze dried and has to be reconstituted on the spot. But most of the problems come from daemons - non-app processes that the user never sees.)

Web browsers have all the problems of any other app, even before you get to the point where the web page itself is written in some language like javascript, which needs to be interpreted at run time.

Writing code to run on more than one browser, or more than one operating system, or both, adds its own complexities. Both web browsers and operating systems have things they do well/easily, and things they do badly, and what Safari on Macos does easily is probably a disaster on some other combo - but not because Safari or MacOS is objectively better. Some other thing you want to do is implemented in such a way that it's at its best on Microsoft Edge, running on the latest Windows system. (And this gets even worse on cell phones, which tend to be more constrained.)

I could go on, but this response is already in TL;DR territory ...

Expand full comment
Deiseach's avatar

Yeah, I have a ton of stuff running on my phone which I don't want or don't need or don't use and trying to get in and turn it off is very difficult because the phone manufacturer wants to run its own background stuff, Android is running its own, etc.

Expand full comment
Benjamin Jolley's avatar

Video games use way more power than my computer consumes at any other time. My tiny laptop heats up to the heat of the sun if I don't use an external fan while playing video games, but pretty much doesn't heat up at all if I'm browsing or using regular boring 2d programs.

Expand full comment
REF's avatar

Get an Intel based MacBook Pro. Then you can run at the temperature of the sun _all the time_.

Expand full comment
Phil Getts's avatar

Scott, I don't expect to be a finalist, but I've become aware since submission of many glaring problems with my review, and imagine other writers may have too. Is there a way for the finalists to revise their reviews before you post them?

Expand full comment
Generative Gallery's avatar

If we have an entry in the book club, but we don’t get selected, will there be any way to find out what ranking our review was? e.g 319/423

I want to get more into writing, so I want to know where I’m at compared to other people.

Expand full comment
Nechninak's avatar

I guess if my review is in the top two quintiles, then I'd like to know the rank, while if a review is in the bottom quintile that's probably enough information. Not sure about reviews in the other two quintiles.

Expand full comment
Resident Contrarian's avatar

Off topic, but I'd caution you from (caps for emphasis) EVER DOING THIS as a way of assessing something like "how good my writing is". Writing contests are TERRIBLE for this. They are the WORST FOR THIS.

The reason why is that writing in public is a game of finding the few people who will like you and ignoring everyone else. And I'm serious that it's just a few - a 1% view-to-conversion rate is actually really high unless you are writing in a specialty niche and getting traffic from other places working the same niche.

This is fine, because there's a negative relationship between conversion rate and how much value you provide to any particular confirmed fan. The more specific the need you fulfill (and I mean either in terms of what you write about or how you write) the more important you are to that person whose need you are serving.

And some people are very, very successful both in terms of reader counts and how much they get paid grinding very specific niches. I know a lady who writes a blog for a popular note taking app who makes my readership numbers look dumb, because she's really serving the living hell out of that community in a way they need and want.

A contest like this is the opposite of all that. Consider:

1. A contest like this rewards general palatability within the community, usually. I'm not sure how Scott scored it, but consider that he probably did something like a flat average score. There's nothing wrong with that, but it's going to discourage risk-taking.

2. A contest like this is *specific* and has you looking at previous winners to try to level-set winning strategies.

3. A contest in a specific space is going to reward people who do work that's very similar to the space - this is a self-selected group that explicitly likes Scott-like things. But unless you think you can out-Scott Scott himself, optimizing your writing for "how much did Scott-fans like it" is a really bad tactic. They are already well served in terms of their Scott-needs by Scott.

None of this is at all a dig on the contest, which is fun and good. It's just saying that high-performance in a contest like this really isn't generalizable to high-performance elsewhere in most cases for most writers.

It tells you *something* sometimes. For instance, if you scored well it would probably mean you could at least write clean, readable text. But that's only about 10% of the battle, at best. And scoring poorly doesn't tell you anything - it's too confounded by taste and selection effects, etc.

TLDR: "How did I perform in someone else's book contest with someone else's most dedicated audience members" doesn't tell you a lot about how well you'd perform in the wild when building your own audience. Only trying to build the audience for real would tell you that; that's the test you want to run.

Expand full comment
Froolow's avatar

In terms of feedback, the dream would be seeing the distribution of scores - low scores across the board would indicate a serious problem with my writing (failing at clean, readable text, for example). A mix of extremely high and extremely low scores probably means you've hit the sweet spot and just need to find your audience. Mid-scores across the board are probably an indication that you're just a boring writer and should stop!

Expand full comment
Froolow's avatar

I'd third this - it would be enormously useful to have objective measures like people's scores to understand what is working or not in my writing.

Expand full comment
McClain's avatar

I second that emotion - having ranked all of them, I’d be curious to see how my verdicts differ from the general consensus

Expand full comment
Deiseach's avatar

I'd appreciate that too, I've a feeling I'm 423/423 but it'd be nice to know!

Expand full comment
Dirichlet-to-Neumann's avatar

Yours was the one on who wrote the Torah, wasn't it? As a data point I gave it a 10.

Expand full comment
Nechninak's avatar

How did you guess which review Deiseach wrote? Or do you know each other personally?

Expand full comment
Dirichlet-to-Neumann's avatar

Well, as you can see from Evelyn's reply above, I was wrong, but it seemed to me the style and subject matched Deiseach's.

Expand full comment
Evelyn's avatar

Aw, thanks!

Though for future reference: folks, I screwed up: Mr Friedman says that the seven-day creation story is a *Priestly* account, but at one point I said it was from the Elohist. (Otherwise I correctly attributed the story.) There's always some silly mistake you overlook until it's too late...

Expand full comment
walruss's avatar

It takes orders of magnitude longer to get a response packet from a remote server than it takes to do anything local. There are ways to compensate for this in a video game context (e.g. streaming/network socketing and predictive rendering) but those methods have tradeoffs that aren't really worth the trouble when working with a web app.

Also...video games do regularly hang for hundreds (on my computer thousands) of milliseconds. These are called loading screens and they're ubiquitous. Designers have gotten very good at hiding these, but you might have noticed space warriors taking more elevators and ninjas spending lots of time in low-texture corridors before entering into a vast and gorgeous expanse lately. However clever these might be, at some point your computer has to move the information about how to render the world from the hard drive (which may as well be in China as far as the CPU is concerned) to RAM (which is more like next door).

With that said, I think the gist of the comment was that the user experience of the average video game is better than that of the average web app/word processor/photo viewer. That's hard to deny.

The thing is that games are made with billions of dollars by the best programmers on Earth, using programming languages that work closely with the machine. Blogging web apps are built in JavaScript.

Expand full comment
Radek's avatar

I think the "amazing game engines" argument underestimates how difficult it is to render fonts. Not only each glyph has many points of its own, but they also interact with each other with ligatures and kerning. I think this interaction may not be parallelizable, unlike rendering many triangles.

Expand full comment
Solra Bizna's avatar

There are techniques for doing text rendering very fast, and some of these can parallelize most of the work. Glyph shaping (ligatures, accents, etc.) is inherently serial, but also easy* compared to the actual rendering of glyphs.

My personal favorite fast but high quality technique is Viktor Chlumský's multi-channel signed distance field technique: <https://github.com/Chlumsky/msdfgen> (there's an abominably-long link to his thesis paper on the topic in the readme)

With MSDFs, your program has to do a lot of processing ahead of time to "render" the glyphs into the MSDF representation, and you (a human being) have to tweak some of the parameters on a per-font basis, but the result is a remarkably compact, highly-detailed representation of each individual glyph. The actual rendering operation is simple: sample a texture, take the median of the three channels, compare that to the edge threshold, done. (Optionally, repeat a few times per pixel for antialiasing.) GPUs, even crummy mobile GPUs, are blindingly fast at this part. Getting the glyph coordinates to the GPU efficiently is a challenge, but this is the same problem that you have with e.g. particle systems, so there are plenty of existing solutions.

*Easy for the program. Hard for the programmer, if you have to write the shaping code yourself. Please, for the love of all that is good, use HarfBuzz (or RustyBuzz) instead of trying to write shaping code yourself. Please. And I haven't even mentioned bidi or linebreaking...
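
For anyone curious what that per-pixel step looks like in code, here's a minimal sketch of my own (not msdfgen's actual API): sampleMSDF() is a hypothetical stand-in for a filtered texture fetch, and 0.5 is the conventional edge threshold. In a real renderer this runs in a fragment shader; it's written as plain C++ here for readability.

```cpp
#include <algorithm>

struct RGB { float r, g, b; };   // one filtered sample of the 3-channel SDF texture

// Hypothetical stand-in for a bilinear texture fetch of the precomputed MSDF atlas.
RGB sampleMSDF(float u, float v) {
    (void)u; (void)v;
    return {0.5f, 0.5f, 0.5f};   // placeholder value; a real fetch would read the atlas
}

float median3(float a, float b, float c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

// The per-pixel test described above: sample, take the median of the three
// channels, compare against the edge threshold.
bool insideGlyph(float u, float v) {
    RGB s = sampleMSDF(u, v);
    float sd = median3(s.r, s.g, s.b);
    return sd > 0.5f;
}
// Antialiasing: take a few such samples per pixel, or smooth the comparison
// over a small range around 0.5 instead of using a hard cutoff.
```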

Expand full comment
Sniffnoy's avatar

Dan Luu has a good piece on how computer interfaces have gotten slower over time: https://danluu.com/input-lag/

Expand full comment
David Gretzschel's avatar

That's an interesting approach. Personally, I've always needed beefed up gaming-capable computers, because I felt that anything else was just too slow for non-gaming purposes. I'm very sensitive to the micro-interruptions of load times, due to ADHD. If something takes too long, I'm liable to get distracted during the load. Or just forget why I wanted to do it in the first place. I also do love how much better high refresh rate panels feel. Though at the moment, I can't have both 4k and high-refresh rate :(

Expand full comment
Justin H's avatar

Economic reasons: On the labor supply side, the number of people employed as programmers has far outpaced the number of people who have the talent and expertise to write maintainable and performant software. And the training that many new programmers have is at a level of abstraction which completely obfuscates the systems you have to understand in order to write really performant code, like the rendering pipeline and how the CPU interacts with different layers of memory (cache, etc). And outside of the game industry, the best programmers usually end up working on the backend. Meanwhile, on the consumer demand side, most users don't complain about performance unless it gets really bad. Or unless they're gamers.

Institutional reasons: 98% of the time executives don't ask about performance at quarterly meetings with product managers, so PMs don't pressure programmers to focus on performance, or even have a reliable way of monitoring it. Instead, they're asking for more features faster, which puts pressure in the opposite direction, and performance slowly gets worse. Eventually, an executive might notice how bad performance is getting, or one star reviews might start showing up, then there's a scramble at the company for a bit, performance gets better - or not because an executive decided something else was even more urgent, but either way the development process usually doesn't change and poor performance slowly creeps back in.

Technical reasons: A lot of these have been mentioned already, but take React as a case study. You've got a virtual DOM (document object model) that has to be re-calculated whenever there's a change in state - there are a lot of footguns here, because you only want it to re-calculate the part of the virtual DOM that's relevant to the state change. Then when the virtual DOM changes, it has to be reconciled with the actual DOM (the stuff that gets specified with HTML tags), which often means the entire layout of the page has to be re-calculated and redrawn. Meanwhile, the JavaScript engine handles memory for you with a garbage collector, but you often don't know when new memory is allocated (which is slow), or when the GC will run, or whether or not the memory you're using is in the cache (fetching memory outside of the cache is ~30 times slower). JavaScript usually runs with "just in time" compilation, which helps, but it also gives the CPU extra work that it needs to do upfront whenever new JavaScript code loads. Meanwhile, your latest triple-A video game is written in C++. In C++ (as in C and Rust), you control when memory is allocated, when it's destroyed, and how it's laid out. You can check the assembly that it gets compiled into for different CPUs (godbolt is a great tool for this) and make sure that it's being optimized at that layer.
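
To make the "how it's laid out" point concrete, here's a toy C++ sketch (purely illustrative, not from the comment): both functions sum the same numbers, but the contiguous container keeps the data cache-friendly, while the node-based one forces a pointer chase, and likely a cache miss, per element.

```cpp
#include <list>
#include <numeric>
#include <vector>

// Elements sit next to each other in memory; the hardware prefetcher can
// stream them into cache ahead of the loop.
double sumContiguous(const std::vector<double>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0.0);
}

// Each node is a separate heap allocation; every step follows a pointer to a
// potentially far-away address, so the CPU stalls on memory far more often.
double sumPointerChase(const std::list<double>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0.0);
}
```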

Expand full comment
Micah Zoltu's avatar

The minimum viable video game requires extremely high performance. The minimum viable UI for an app requires "generally works, and is about as snappy as other things you use". Once the minimum viable product is achieved, resources are directed to adding features/content, rather than continuing to improve performance.

To put it another way, there is a cliff of profitability if your video game can't achieve sufficient frame rate to not make players sick. That cliff for a desktop app is many orders of magnitude slower.

Expand full comment
Jacob Steel's avatar

Why do economists draw supply/demand graphs with volume of trade as the X axis and price as the Y axis?

As a supplier or demander I can control the price I accept or offer, and the amount I can sell or buy is then a function of that price and everyone else's choices.

But supply/demand graphs are invariably drawn as though volume is the independent variable and price is the dependent variable, which feels weirdly counter-intuitive to me.

Similarly, "The supply curve is steep" feels to me like it ought to be saying that supply is very sensitive to price, whereas this way round is means that supply is very insensitive to price.

Expand full comment
Ghillie Dhu's avatar

Blame Alfred Marshall. His 1890 "Principles of Economics" introduced the graph with the axes in those positions, and the field has just gone along with it since.

Similar to how we can blame Ben Franklin for positive current pointing in the opposite direction to the flow of electrons.

Expand full comment
Paul T's avatar

I think the root of your confusion is that you think there is one dependent and one independent variable. But that’s not the case here; the relationship is a bi-directional link. So it’s just a convention, it doesn’t matter which way round you put them on the graph.

You can consider a demand increase as either a “shift right” (more quantity demanded at a particular price) or “shift up” (higher price bid for a particular quantity). Both of these are exactly the same mathematically. Eg see https://www.thoughtco.com/shifting-the-demand-curve-1146961.

A steep supply curve means that (mathematically, if you read the values off at each point on the curve) if you increase the quantity supplied by a little, the price goes up a lot. Or equivalently, if you increase the price a lot, you only change the quantity supplied by a little.
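
To put that in symbols (my own minimal formalization, not anything from the thread): with the conventional axes, the plotted supply curve is the inverse supply function, and "steep" is a statement about its slope in the (Q, P) plane.

```latex
\[
  P = S^{-1}(Q), \qquad
  \frac{dP}{dQ}\ \text{large}
  \;\Longleftrightarrow\;
  \frac{dQ}{dP} = \left(\frac{dP}{dQ}\right)^{-1}\ \text{small},
\]
```

i.e. quantity supplied responds only weakly to price, which is exactly the "insensitive to price" reading in the question above.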

Expand full comment
coffeespoons's avatar

The story I heard is that it was based on how goods were sold at farmers' markets: everybody shows up with their quantity of apples, then the price of apples is determined based on how many apples were brought to market. Agreed that today the opposite would be more intuitive.

Expand full comment
Jack Wilson's avatar

There are plenty of very active commodities markets today: oil, wheat, cattle, aluminum, etc.

Expand full comment
Mystik's avatar

I’m not positive that this is the answer, but I believe it’s because you calculate the gains from trade (the surplus) by integrating along volume, not price, and thus making volume the x-axis is more natural in that context.
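
For what it's worth, the integral in question is presumably the standard surplus calculation (a sketch under that assumption, with D⁻¹ and S⁻¹ the inverse demand and supply curves and Q* the market quantity):

```latex
\[
  W \;=\; \int_{0}^{Q^{*}} \left( D^{-1}(q) - S^{-1}(q) \right) dq
\]
```

That's the area between the two curves swept out along the quantity axis, which is why quantity on the x-axis makes the picture natural.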

Expand full comment
Lars Petrus's avatar

I find myself with a lot of extra belly fat after sitting on my couch during Covid.

What are the tradeoffs around removing this with diet vs liposuction?

Expand full comment
raj's avatar

Liposuction should be seen as a tool for redistributing fat for aesthetics, rather than controlling total amount of body fat. Fat in certain places is more aesthetically 'costly' than other fat (like a BBL is just transferring fat and has incredible results). A little bit of belly fat for a mostly fit person is a good example; 'a lot' of belly fat, maybe not so much.

Liposuction does not change the laws of thermodynamics. So moving forward it will not change how much weight you gain or lose due to normal factors. One potential downside would be that subcutaneous fat (what is removed) is often considered healthy fat, versus visceral/intra-organ fat, which cannot be removed and is correlated with mortality. So putting this together, it means you will end up carrying a relatively larger portion of unhealthy visceral fat in the future.

In short, you should only look to get a small amount of fat removed if needed to balance aesthetics when you are already in a healthy weight range.

Expand full comment
Coldinia's avatar

I don't know the whole context, but one of the things we were taught in my university biochemistry course that always stuck with me is that you have a limited number of fat cells in the adult body, and making new ones is non-trivial. Having some excess fat stored in fat cells has some disadvantages, but once they're full, any overflow fat molecules end up circulating in the body. As fat molecules of various sorts are extensively used as signalling molecules in your body's biochemistry, that can cause serious issues. (This was part of a lecture series discussing what the consequences of obesity for the body actually are, which had surprisingly many layers of increasing subtlety, many of which we do not understand particularly well yet!)

All that to say that liposuction does not only remove the fat content of cells, but the fat cells themselves, thereby reducing your safe fat storage capacity in the future. Also fat cells do other important things, like producing the satiety hormone leptin, which is what gives you a feeling of fullness after a meal (and surely many more I don't know about, and more no-one knows about yet), so removing a subset can also affect any of these processes.

Not to say that liposuction can't be a useful and health-promoting surgery, and I am not a medical doctor, etc etc, but it removes far more than just the excess fat itself.

Written from memory on a train, so caveat that I might have misremembered some details, sorry!

Expand full comment
Pepe's avatar

Who owns text generated by GPT-3, or images created by Dall-e 2? The person who wrote the prompt? Once AGI is a thing, who will own ideas and/or products generated by the AI?

Expand full comment
Nancy Lebovitz's avatar

I'd have thought that if a person has to go through a bunch of GPT-3 output to find the good stuff, the person should have copyright.

Expand full comment
Will W's avatar

I don’t know what GPT3 is. But a compilation of information can be copyrighted.

Expand full comment
Will W's avatar

Most courts that have considered this have determined that there is no copyright for works created by AI. Australia had an outlier opinion but that got flipped weeks ago.

Expand full comment
David Piepgrass's avatar

And it strikes me as weird that people even think there *should* be an exclusionary right by default. Copyright isn't a right to copy, it's a right to *prevent other people from copying*, or at least sue them. Why should society give anyone copyright over things they didn't create?

Expand full comment
Cry6Aa's avatar

Simplified answer: it depends on where you live. Some countries take the approach that the person who owns the software owns the copyright. Others take the approach that, where the input of the owner is reduced to 'press a button', then nobody owns it and it falls into the public domain.

Which approach becomes generally accepted is up in the air right now, but I'd put money on the program's author (in the legal technical sense) becoming the owner.

Expand full comment
Pepe's avatar

"Others take the approach that, where the input of the owner is reduced to 'press a button', then nobody owns it and it falls into the public domain."

I wonder how this would apply to, say, new drugs discovered via AI.

Expand full comment
Thor Odinson's avatar

Patents for drugs usually cover the whole manufacturing process, not just the molecule itself, and even if AI starts being responsible for finding the latter, we're a long way from it working out the former.

Expand full comment
Cry6Aa's avatar

Not really - you can have a patent that 'covers' (read: claims) just a specific molecule and its uses, with dependent claims (or even a separate application) for the process of production. But you're right that sufficiency of disclosure would most likely require some experimental data showing how the molecule is made (sufficient such that a person skilled in the art could replicate the results claimed). This is a separate issue from who the inventor is btw.

Expand full comment
Cry6Aa's avatar

As Pete says: that falls into a different bin (patent rights) with different rules about ownership and inventorship. Presently, the law is generally written so that inventors must be natural persons (i.e. human beings), but inventors also aren't very important (outside of the US) to the question of who owns a patent.

I'd say that, so long as there is a person willing to press the button, that person would be the inventor for the purposes of present-day patent law.

Expand full comment
Pete's avatar

All the above comments are about copyright, and drugs are uncopyrightable no matter by whom or what they're discovered.

Drugs are protected by patent rights, which have completely unrelated rules. In general, you'd have to name a human as inventor (see https://www.hklaw.com/en/insights/publications/2020/05/does-an-invention-discovered-with-ai-obtain-patent) however, you can probably get away with having someone take credit and write an application for something where 99.9% of the work was performed by an automated system.

Expand full comment
Essex's avatar

If it's a "genuine" AGI (intelligence generally on-par with a human, capable of behavior largely similar to a human, and indistinguishable from sapient life), I'd argue it does. It'd be morally inconsistent to deny AGI personhood in that circumstance.

If it's ASI, one way or another the concept of property will probably become irrelevant in short order.

Expand full comment
MondSemmel's avatar

Regarding the horrid performance of modern software relative to the incredible hardware, I recommend Casey Muratori's stuff (of Handmade Hero fame; he's also a former software engineer at RAD Game Tools, which produces high-performance tools like video game codecs used in gazillions of games, and which was recently bought by Epic Games for a bunch of money).

Here he illustrates how slow the Windows Terminal is, and how he made a version in a week that was orders of magnitude faster than what Microsoft manages to produce with all their time and resources: https://www.youtube.com/playlist?list=PLEMXAbCVnmY6zCgpCFlgggRkrp0tpWfrn

Or here he talks about "Where Does Bad Code Come From?": https://www.youtube.com/watch?v=7YpFGkG-u1w (there's also a follow-up Q&A video)

Or here he lectures on "The Only Unbreakable Law" of software architecture: https://www.youtube.com/watch?v=5IUj1EZwpJY - Namely Conway's Law. Quote from Wikipedia: "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."

Expand full comment
bagel's avatar

That observation is on the money.

https://computers-are-fast.github.io/

Computers are blindingly fast, if you use their resources properly. If you chain a bunch of requests over the internet together, say naively running a bunch of loads or library calls in sequence, you can make anything take a long time. (But it should be telling that speed of light latency at the range of a planet is considered slow in computing terms.)
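
A toy C++ sketch of that chaining effect (illustrative only; fetch() is a hypothetical stand-in for one blocking network round trip, not a real API): N dependent requests issued one after another cost roughly N round-trip times, while independent requests issued concurrently cost roughly one.

```cpp
#include <future>
#include <string>
#include <vector>

// Hypothetical stand-in for a blocking HTTP GET: one full network round trip.
std::string fetch(const std::string& url) {
    return "response for " + url;   // placeholder; a real version would hit the network
}

// N requests back to back: total wait is roughly N * round-trip time.
std::vector<std::string> fetchSequential(const std::vector<std::string>& urls) {
    std::vector<std::string> out;
    for (const auto& u : urls) out.push_back(fetch(u));
    return out;
}

// The same N requests overlapped: total wait is roughly one round-trip time
// (only possible when the requests don't depend on each other's results).
std::vector<std::string> fetchConcurrent(const std::vector<std::string>& urls) {
    std::vector<std::future<std::string>> pending;
    for (const auto& u : urls)
        pending.push_back(std::async(std::launch::async, fetch, u));
    std::vector<std::string> out;
    for (auto& f : pending) out.push_back(f.get());
    return out;
}
```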

Expand full comment
Nancy Lebovitz's avatar

Possibly naive question: I was reading that many (most?) computer languages don't handle complex numbers well, and I was surprised. It's just algebra, isn't it? Algebra is a mechanical process. Maybe it takes more computation than arithmetic, but once you've got the equation (for reasonably simple equations), you just turn the crank, don't you?

I realize that computer programs tend to use approximations rather than algebra, but I don't know why.

http://www.antipope.org/charlie/blog-static/2022/04/holding-pattern-2022.html#comment-2145812

Expand full comment
Anti-Homo-Genius's avatar

This comment is completely, without any additional context, false. Any language where any serious mathematics is being done has a complex numbers implementation that handles them at least as well as that language can handle real numbers. Fortran just so happens to be the oldest programming language in continued use (some would even say it's the first, but that's debatable), so of course there are tons of libraries for complex numbers lying around for it. But complex numbers aren't a particularly hard thing to implement, and languages can call each other's libraries quite well. It's entirely unclear what that commenter is referring to.

In an unrelated remark, wow look at that RvW post, Stross became an NPC. Shame, I used to respect the guy.

Expand full comment
Mark's avatar

I have no idea what that comment is getting at - not only do computers do complex numbers just fine (or at least floating point approximations thereof), pretty much every 3d game out there makes extensive use of quaternions for representing rotations.

Expand full comment
Nancy Lebovitz's avatar

Maybe what I was reading was about older languages.... I'm not sure why it seemed reasonable to ask here instead of at antipope.

"Until recently (in these terms), Fortran was the only language that handled complex numbers even half-competently. It still is probably the best for that. C is appalling beyond belief."

"The standard version of Visual Works Smalltalk does not handle complex numbers. But the Cincom Public Store Repository does have a package called SYSEXT-Complex. From its documentation:"

A counterexample:

"Smalltalk (have I mentioned Smalltalk before? :-) ) has handled Complex numbers perfectly well since about 1972. Along with proper handling of Fractions, which I think is rather uncommon."

"14 years after Fortran. But you (and AlanD2) seem to have missed "semi-competently". Plenty of languages have 'supported' complex numbers from the 1960s onwards. I have not used Smalltalk, so I am not saying that it doesn't, but my experience of over a hundres languages from 1972 onward is that most don't, and that is especially true for any language perpetrated by 'computer scientists'. Note that this is as much about the implementation as the language but, unless the language attracts competent people who write serious programs using complex numbers, the former is almost always dire.

It includes things like generating EFFICIENT code for complex and mixed-mode arithmetic, conversions, and the basic functions (and even ensuring they exist!) And ensuring that division and the basic functions give accurate answers, do not produce errors when they shouldn't, and DO produce errors when they should. If you have never done that, I doubt very much that you know what is involved."

"I was not aware that those applications used complex numbers heavily. As I said, complex was typically misimplemented in the 1970s and 1980s, not least because almost no 'computer scientists' understood it, even then, and that is based on experience with many dozens of languages. As I also said, I did not use Smalltalk and so cannot comment on it's support for them; from your response, I doubt that you have written serious applications that depend on complex arithmetic in Smalltalk, and almost certainly not in the 1970s and 1980s.

Yes, since 1980 is 'recent', for technical reasons that I could explain if needed."

""It’s done well enough to run sophisticated machines like the world’s chip-making machinery and the CanadArm, to operate power grids, to make billions of buckaroonies for banks, so I’m going say we got the maths decently correct."

I doubt that any of those computations require complex numbers, which is what EC is talking about.

Plain floating-point calculations are hard enough to get right, mostly because what floating-point data types model is not the field of real numbers but a finite subset of them, which among other things is neither associative nor distributive. Also the "true" results of most computations are not in the set of possible numbers and have to be rounded.

Complex calculations are still harder, even if the implementation doesn't make gross errors[*]: again, "complex" data types don't model the field of complex numbers, and the interaction between real and imaginary parts complicates the rounding operations.

[*] I have seen a production "complex" C++ library that did something like this (in pseudocode, to avoid C++ template syntax):

double abs(complex x) { return sqrt(x.re * x.re + x.im * x.im); }

Working out what's wrong with that is left as an exercise for the reader."

"I just found this: GNU Smalltalk Users Guide

"One can reasonably ask whether the real and imaginary parts of our complex number will be integer or floating point. In the grand Smalltalk tradition, we’ll just leave them as objects, and hope that they respond to numeric messages reasonably. If they don’t, the user will doubtless receive errors and be able to track back their mistake with little fuss. "

If the code there really is how it's implemented and not just a simplification for didactic purposes, it is not "semi-competent" for the same reason that my example above isn't.""
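The non-associativity point in the quoted passage is easy to see directly. A minimal sketch in Python (whose floats are the same IEEE 754 doubles discussed above); the two groupings round differently:

a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False -- the grouping changed the rounding, so addition isn't associative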

Expand full comment
Pete's avatar

I don't want to delve into detail, but I get the impression that you're worrying about an argument in which "Until recently (in these terms)" means time before I was born, and it shouldn't be assumed that the statements made there are valid or relevant after all these decades.

Expand full comment
Davis Yoshida's avatar

Thanks for the context. I don't know why I didn't think of this earlier, but I found the reference below within 15 seconds of searching "numerical recipes in C complex modulus"

https://i.imgur.com/NPFKQNo.png

I don't know if it's fair to call this a difficulty of doing complex arithmetic on a computer. This would equally crop up for doing the norm of a vector.

Expand full comment
Davis Yoshida's avatar

Well the assertion of one of the later comments is that (as an example) sqrt(re * re + im * im) isn't a good way to compute the modulus. I'm not sure why though. Squaring could make something inf if it's already at half the max order of magnitude, but that's not really particular to this.
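For what it's worth, here is a minimal sketch of the failure mode and one standard remedy, in Python (whose floats are IEEE 754 doubles, so the same overflow applies as in the C++ pseudocode quoted upthread); scaled_abs is just one common fix, not the only one:

import math

def naive_abs(re, im):
    # Same shape as the quoted pseudocode: the intermediate squares overflow
    # to inf long before the true modulus exceeds the float range.
    return math.sqrt(re * re + im * im)

def scaled_abs(re, im):
    # Common fix: factor out the larger component before squaring.
    a, b = max(abs(re), abs(im)), min(abs(re), abs(im))
    if a == 0.0:
        return 0.0
    return a * math.sqrt(1.0 + (b / a) ** 2)

print(naive_abs(1e200, 1e200))     # inf
print(scaled_abs(1e200, 1e200))    # about 1.414e200, the correct value
print(abs(complex(1e200, 1e200)))  # Python's built-in abs also avoids the overflow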

Expand full comment
Davis Yoshida's avatar

I actually don't know the details that that person is getting at, but the gist is that while a complex number can naively be represented by two real numbers, doing so leads to numerical problems.

Computers do things with approximations instead of algebra because if you did math with exact fractions, a single number could take up an unlimited amount of memory.

Expand full comment
Anti-Homo-Genius's avatar

Exact fractions are not that infinite, they are simply pairs of integers. 1/3 is not the infinite decimal expansion 0.3333..., it's the data structure Fraction(1,3), which takes just double the memory of a single integer. The reason exact fractions aren't usually used is that basic arithmetic on them is slow: addition and subtraction reduce to three integer multiplications plus an addition/subtraction of integers, and every single operation that produces a fraction (whether a +-*/ operation or anything else) has to perform the GCD algorithm to normalize the fraction into its simplest form. There is no specialized hardware for this, unlike that other format which most programming languages use to represent (a subset of) the real numbers, IEEE 754 floats. This means slowness. Nevertheless, it's done, and it's done a lot. It's the default result of dividing two integers in Perl 6, now known as Raku. It's an exercise done in many a programming book, most famously Structure and Interpretation of Computer Programs.
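To make the pair-of-integers point concrete, a minimal sketch using Python's fractions.Fraction, which is the standard library's implementation of exactly this idea; the GCD normalization described above happens automatically on every operation:

from fractions import Fraction
from math import gcd

a = Fraction(1, 3)     # stored as the integer pair (1, 3), not as 0.3333...
b = Fraction(1, 6)

# a + b = (1*6 + 1*3) / (3*6) = 9/18, then reduced by gcd(9, 18) = 9
print(a + b)           # 1/2
print(gcd(9, 18))      # 9
print(Fraction(2, 4))  # 1/2 -- construction normalizes too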

Expand full comment
Davis Yoshida's avatar

I said infinite memory because the denominator can grow without bound (e.g. 1/2 * 1/3 * 1/5 * 1/7 * ...). You have to be careful writing code which uses "Fraction" as a type, since as you do successive computations, your data might grow in size. If someone writes f: Fraction -> Fraction, I need to check the output to make sure the numerator and denominator are reasonably sized. On the other hand, with fixed-width data types you know what you're getting, which I think is a big advantage. Also, for SIMD stuff I believe being fixed width helps even more.
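A quick sketch of that growth in Python (fractions.Fraction sits on arbitrary-precision integers, so nothing overflows, but the representation keeps getting bigger, while a float would stay a fixed 8 bytes):

from fractions import Fraction

x = Fraction(1, 1)
for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
    x *= Fraction(1, p)   # denominators multiply and nothing here ever cancels

print(x)                           # 1/6469693230
print(x.denominator.bit_length())  # 33 bits already, after just ten multiplications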

Expand full comment
Anti-Homo-Genius's avatar

Well, the underlying integer types should of course be well documented and emphasized in the Fraction datatype docs. Even better if the language has generics or similar: the Fraction datatype should permit you to supply your own integer types.

What I'm saying is that Fraction isn't any more or less risky to overflow than an int: any code that manipulates a fixed-width int is prone to overflow in exactly the same way when it manipulates two fixed-width ints, which is exactly what a Fraction is. I can't think of something specific to Fraction that makes it overflow in a way that ints can't.

Expand full comment
Davis Yoshida's avatar

Fractions aren't an int replacement though, they're a float replacement. Floats really don't overflow as easily as a Fraction does. Floats have the property that their resolution gets coarser as they get larger. Fractions do not have this property, unless you're basically implementing a float on their back with some sort of auto-rounding, so for a given number of bits their dynamic range will be much worse.

You essentially have N * 1/M, so the denominator bits are playing the role of the exponent in a typical float, as they're what let you manage the scale of the value. Unfortunately, for a 32-bit numerator + 32-bit denominator, you only have a range of (positive values) from roughly 10^-10 to 10^10. Compare this to the range of a typical 64-bit float, which runs from 10^-308 to 10^308. The typical mantissa/exponent representation has _way_ higher coverage. For 64 bits, you get exactly 2^64 values, and need to choose how to distribute them. Double floats do that very well; the fraction method doesn't.
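A back-of-the-envelope check of those ranges (a sketch; the exact endpoints depend on signedness and on how you normalize, but the orders of magnitude are the point):

import sys

# 32-bit numerator over 32-bit denominator: magnitudes span roughly 2**-32 .. 2**32
print(2 ** 32)             # 4294967296, about 4.3e9 -- the ~1e10 ceiling quoted above
print(2.0 ** -32)          # about 2.3e-10 -- the ~1e-10 floor

# 64-bit IEEE 754 double: normal numbers cover a vastly wider dynamic range
print(sys.float_info.max)  # about 1.8e308
print(sys.float_info.min)  # about 2.2e-308 (smallest positive normal)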

Expand full comment
Lambert's avatar

It's not about the number-crunching, it's about presenting a sensible interface to the programmer. My guess is that matrices are more powerful than complex numbers (a + bi can be represented as [[a, -b], [b, a]]) and a more natural way to think in terms of computing. So if your language's complex number handling is bad or non-existent, you can use matrices instead. If you really need the kind of concision that complex numbers bring, you're probably doing scientific computing in Fortran or, nowadays, numpy.
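For anyone who wants to see that the matrix representation really does behave like complex arithmetic, a small sketch in plain Python (no libraries assumed):

def to_matrix(a, b):
    # a + bi  <->  [[a, -b], [b, a]]
    return [[a, -b], [b, a]]

def matmul(m, n):
    # 2x2 matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# (1 + 2i) * (3 + 4i) = -5 + 10i
print((1 + 2j) * (3 + 4j))                       # (-5+10j)
print(matmul(to_matrix(1, 2), to_matrix(3, 4)))  # [[-5, -10], [10, -5]], i.e. -5 + 10i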

Expand full comment
pale ink's avatar

There is a great article/rant about software bloat that basically lists everything that is wrong with our various software products and urges developers to spend more time optimizing them.

It mentions the seeming performance difference between 3D games and text editors too, although it doesn't explain it.

https://tonsky.me/blog/disenchantment/

(his blog also has some more examples of software disenchantment, like https://tonsky.me/blog/good-times-weak-men/)

Expand full comment
Lambert's avatar

I read about a problem recently that might be a good application for prediction markets or Social Impact Bonds/Retroactive Public Goods Funding/whatever you want to call it.

Growing a tree takes decades, so it's possible to fund a reforesting initiative that looks good at first but fails further down the line, due to either a lack of maintenance or corners being cut during planting (e.g. using a cheaper species of tree which is unsuitable for the habitat being afforested). Principals can't be expected to learn everything about forestry, so they have to trust the agent doing the planting.

https://www.bbc.co.uk/news/science-environment-61300708

What you need is to pay for there to be a forest in an area in 10, 30, 50 years' time. This could be assessed through remote sensing such as the Sentinel 3 Chlorophyll Index and backed up by in-person inspections of reforested areas.

Investors will solve the problem of the cost of reforesting being front-loaded while the results can only be seen decades later. They will be incentivised to make sure that the planting is done in a way that will result in a forest that survives.

Expand full comment
Pete's avatar

I'm seeing local forests being treated effectively as equivalent to wheat fields, with the only difference being that you reap them every 20-30 years instead of every year.

So the key issue is how do you handle people who genuinely want to plant a forest for 30 years, and then harvest it before replanting it - if you pay for there to be a forest in an area in 30 years, they'll gladly take that free money and harvest it in year 31 instead.

In essence, if you want to ensure that no one harvests the primary economic output of a piece of "farmland" (i.e. managed forest), which is the primary thing setting the value of that piece of land, then effectively you want to own that forest almost completely. If it's allowed to cut the trees then after 30 years almost all of the value is in the accumulated lumber, not the land itself; and if it's not allowed to cut the trees then that land is economically worthless.

So why not go all the way and buy it directly, since you'd have to pay for almost all of its value anyway? Then control and incentives become trivial, just have a trust that owns a lot of forests and simply does not harvest them.

Expand full comment
Nah's avatar

There is already such a trust: The Nature Conservancy.

One of the things they do is buy/ take donations of land and sit on it.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Why do we not have politicians or journalists as good at their jobs as Ronnie O’Sullivan is at his?

Expand full comment
Phil H's avatar

One possible important difference is the continuity of the job. Elite sportspeople train continuously, but compete only in very short bursts, at tournaments. What goes wrong in training doesn't matter; the public only sees their performance at the tournaments.

Another important difference is the consequences of bad performance. When O'Sullivan loses, it just means that he loses. When a politician loses, their political party loses power, and potentially millions of their constituents suffer. As a result, all we remember about O'Sullivan is the memorable parts (the wins); and all we remember about politicians is the memorable parts (big losses and big wins).

Taken together, I think these two go a long way to explaining why politicians don't *look like* they're as good at their jobs as elite sportspeople are at theirs. As POTUS, every decision that Joe Biden makes is scrutinised, so he doesn't get to go through the normal process of training, where you initially accept that you're not yet good at skill X, practice for it, and develop better ability. Moreover, the record books (if you can find a relatively unbiased one!) contain lists of both his achievements and failures. If you go and check the wiki pages for great athletes, they might mention slumps, but they definitely don't list every tournament that the athlete loses in; whereas every tournament they win is typically mentioned.

Expand full comment
Jacob Steel's avatar

"Being good at snooker" is a fairly well-defined skill, and it's easy to measure relatively accurately.

"Being good a politics/journalism" is neither, but for the sake of the argument let's assume you've specified one of the many possible disambiguations.

Obviously, being good at skills like snooker or politics only makes sense as an ordinal, not as a cardinal - you can say "Alice is the best at X in the world, whereas there are 7 people better at Y than Bob", but you can't say "the best person at X in the world has Skill Level 8, whereas the best person at Y only has skill level 5".

So I think the right question isn't "why don't we have people as good at politics or journalism as O'Sullivan is at snooker?", it's "why doesn't the person who is best at politics or journalism in whichever sense you've picked stick out obviously the same way O'Sullivan does?"

And I think the answers are pretty obvious - being good at ambiguous things with a large element of luck is much less noticeable than being good at rigorously-defined, clearly-measurable things with less luck.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

A lot of people are saying this but I don't see why being a great politician, way above the norm, is that hard to measure. Quite frankly, I think you could replace either of the last two US presidents with a mid-level manager from a small corporation; Obama was slightly more above average, and Bush II the same. Remember, he was considered low-wattage at the time.

Clinton was the kind of speaker and politician that was way above average, but probably not at the relative level of O'Sullivan.

Expand full comment
Thor Odinson's avatar

I think part of it is that there's not nearly as much competition in the world of Snooker - a lot of people will sacrifice a hell of a lot for a chance at becoming POTUS, while only a relatively small number of people will ever elevate snooker beyond a hobby

Expand full comment
Nolan Eoghan (not a robot)'s avatar

How does that explain that O'Sullivan is way better at his role than the best politicians are at theirs? (At least that's my claim, but I think it holds up.)

Earlier posters have said that snooker is easier to analyse on this - that's true, but a really good politician should be obvious enough. The presidency has been held by mediocrities for a while now, or people slightly above mediocre, like Bush II. The fact that he's the second Bush is telling, though. It's not genetic but a form of nepotism - his name - that helped with that job. That tends not to hold for sports. In fact, is there a great athlete who has had a great athlete as a child? Only Schmeichel comes to mind immediately.

(I could have moved outside snooker to Messi, or Tom Brady, or Woods, but the idea is the same.)

Expand full comment
Tasty_Y's avatar

Ronnie has advantages others don't. He got to practice his game since he was a tiny kid. The game remained exactly the same throughout. He could practice it as many hours per day as he thought optimal for his improvement. When he made mistakes he got feedback a fraction of a second later - it was immediately and unambiguously clear that a mistake had been made. In those cases he had the luxury of restoring the situation and trying again (at least during training) as many times as was needed until he got the result he wanted. Mistakes did not prevent him from trying again. His game is very simple and there are very few variables in play, so he got to try out most plausible situations lots of times. It also is a game of perfect information, so his performance is going to be especially impressive because he's not going to get randomly hit by 20 different factors he had no way of predicting.

Same reasons why we can make computers super-humanly good at Chess or Go, but it's much less convenient to teach them to drive cars, do surgery or become cult leaders.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Ronnie shares those advantages with everybody who ever played snooker from a young age. But I wasn't asking why Ronnie is so good but why we don't have a Ronnie O'Sullivan in politics. What would that look like? Well, the politician would be a brilliant speaker, probably off the cuff, but also have a brilliant systematic mind.

Expand full comment
Thor Odinson's avatar

Nah. The world's most successful politician is the one with the vault of blackmail and a web of connections and lots of backroom deals so that he never has to rely on anything so fickle as the votes of the general public.

Mitch McConnell is a much better example of a successful current US politician than any recent president, in terms of actual power over the nation, both per year and cumulative - he's not term-limited, either. Looking at less civilised countries, Putin has done very well for himself, albeit terribly for Russia and its neighbours. Xi seems to be doing well, though it's too early to say for sure.

Expand full comment
Melvin's avatar

Being a brilliant off-the-cuff political speaker and having a brilliant systematic mind are probably contradictory requirements, much like it's impossible to be both a champion weightlifter and a champion marathon runner.

I once saw the late philosopher David Lewis give a talk about many-worlds. At the end, he took questions. What really struck me was that after each question he would sit there in perfect silence looking at the floor for a good twenty to thirty seconds before answering. I loved this, because any philosophical question worth asking is probably a question worth thinking about for twenty or thirty seconds before shooting off an answer.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Feynman could talk your ear off.

Expand full comment
garden vegetables's avatar

I'm interested in what makes those contradictory requirements. Surely "brilliant off-the-cuff political speaker" doesn't require that the politician think up all their ideas during the speech, just how they want to propagate said ideas to their audience. In that case, they can use their systematizing mind prior to public interaction to come up with a set of ideas, and analyze their audience on the fly to see how they can best communicate these ideas in a way that they won't be rejected.

I think that it's simply that heavily systematizing people tend to not enjoy public speaking (possibly due to high levels of audience analysis being overwhelming or inability to edit one's past statements) or that they are drawn to other fields besides politics or journalism. It's not an unlikely idea that a brilliant systematic mind will see the level of cooperation required for good politics/good journalism and realize "this is a coordination problem I can't solve on my own; I'm going to go become an aerospace engineer/particle physicist/computer scientist/AI investigator".

Becoming a brilliant politician or journalist *never* allows escape from the coordination problems inherent to it; as an independent journalist, a platform is needed (Substack's a good example; even here it's fairly hard to get an audience!). And as a politician, the problem becomes even worse; as an authoritarian, in a government with the bare minimum of coordination required, you still need to dedicate valuable time and effort to watching your back and avoiding assassination or imprisonment by your subjects. In a democracy, you're working together with dozens to hundreds of other people, and in order to get them all to agree with you, have to come up with a position that appeals to the majority of them to get them to vote your way to pass laws.

Compared to that, for a brilliant systematic mind, revolutionizing space travel or solving aging may seem like much greater impact for much less effort. I exaggerate, of course, but "getting one's way in politics" is difficult due to reasons entirely unrelated to the potential for a systematic mind and an appealing presence on stage at once.

(Also, an academic talk really isn't the best comparison for a political speech, due to the differing audiences and thus differing methods of audience appeal.)

Expand full comment
Pete's avatar

The contradiction is essentially in the willingness to confidently proclaim strong opinions without a proper analysis, so they may turn out to be wildly wrong. If you're willing to do so, you're probably not a systematic mind; if you're unwilling to do so, you're not an effective off-the-cuff (with off-the-cuff being the key part) speaker.

Expand full comment
Tasty_Y's avatar

What I mean is that thanks to the advantages I listed, snooker is something that's convenient to practice and become good at. Politics and journalism lack these, so the best journalists and politicians won't become as absurdly good as the best snooker players (or the chess players or whatever). Games are very learnable while, say, interviewing people isn't.

Expand full comment
Justin H's avatar

The snooker player? We do have journalists and politicians who are as good at their jobs as Ronnie is at his. The difference is that we know exactly what Ronnie's job is: Winning at snooker. Whereas what exactly it is to be a great politician or great journalist is opaque - there are hidden incentives involved in advancing one's career as a journalist or politician, ones that aren't always aligned with the greater social good.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Your second argument seems to refute your first.

Expand full comment
Davis Yoshida's avatar

His point is that there are politicians who are as effective at accomplishing what _they_ want, as Ronnie O'Sullivan is at snooker. Unfortunately what they want is different from what we want them to do.

Expand full comment
Neike Taika-Tessaro's avatar

Many good comments have been made on the observed 2D / 3D divide (the most important factor usually being disk I/O or network I/O), but I just wanted to add: only one of these is likely using your computer hardware to near-full capacity and spinning up the fans. You typically don't want that, period.

But it's only the 3D AAA video game that needs to draw upon all these resources or fail. It is indeed remarkable that it can, but a lot of hacks have had to happen to hardware and software over the years (even with computing power increasing) to allow it to render that much at all. The trade-off is that it uses approximately all resources.

If your office software did the same thing you'd prefer a different one, which can give you several windows of the same thing at once, let you run your web browser in parallel, possibly listen to music as well, rapidly tab between applications, et cetera.

The video game has the benefit of having your full attention, so it can afford to be greedy.

(Yes, if you have a good desktop computer, you can tab between the game and whatever else you're doing. But even if you have a good gaming PC, you should be able to tell the difference in system load between booting up a high-polygon 3D game and other usage patterns.)

Expand full comment
Justin H's avatar

It's true that most modern desktop computers have way more computing resources than are needed by most 2D software, but a frequent result is that companies making 2D software don't care how much of the computer's resources they take (making it less likely you'll be able to rapidly tab between several programs), and don't know how to handle the performance issues that do arise - because eventually, if you keep adding features and abstraction layers, performance problems will arise.

Expand full comment
Neike Taika-Tessaro's avatar

Yes. I'm just supplementing the other comments, I'm not trying to cover all the reasons in my post. This was just one I hadn't seen yet, but is also absolutely a component of this.

Expand full comment
Muster the Squirrels's avatar

Reading about the mathematician Gauss, I was reminded of Secrets Of The Great Families (https://astralcodexten.substack.com/p/secrets-of-the-great-families).

Gauss's grandson claimed that Gauss "did not want any of his sons to attempt mathematics for he said he did not think any of them would surpass him and he did not want the name lowered" (https://en.wikisource.org/wiki/Charles_Henry_Gauss_Letter_-_1898-12-21).

This suggests that Gauss saw family reputation, in his field, as an average (with all non-mathematicians excluded, rather than assigned a 0).

I suppose that I see family reputation in technical fields as cumulative, like knowledge itself. Parent proves 5 theorems + child proves 1 theorem = family proves 6 theorems. Unimportant, yet difficult to imagine otherwise.

Secrets Of The Great Families discussed families with accomplished ancestors and accomplished descendants. In contrast to those ancestors, there are also accomplished people without accomplished (or any) descendants. Now I'm wondering whether these two contrasting notions of family reputation might have some small influence.

Or is Gauss's view truly rare?

Expand full comment
Thor Odinson's avatar

That seems like an incredibly rude reason. I could see a much more defensible argument that doing the same thing as your dad means you're perpetually in his shadow, being compared to him, such that people will think you a failure if you're less brilliant even if you're the best in your generation; that reasoning points towards pushing your kids to excel in a slightly different field where the comparisons are less direct (Are 5 new important chemical reactions documented more or less impressive than 5 new important mathematical theorems?), but still pushing them to excel.

Expand full comment
Muster the Squirrels's avatar

First family with at least one of every Nobel prize is the winner!

On reflection, another possibility is that Gauss's son made this up as an excuse for why he didn't follow his father's career, and the grandson (the letter-writer) accepted it uncritically.

Expand full comment
Mystik's avatar

Gauss was notoriously a jerk, so I’d consider him an outlier

Expand full comment
BoppreH's avatar

The truth is that we don't know how to build good software. The term "software crisis" was coined 54 years ago, and every developer knows in their bones it's still the status quo. Our profession just feels *wrong*, like natural philosophers making stuff up loosely based on observations.

We keep making the same mistakes, reinventing wheels and grasping at straws, then getting desperate and releasing something that barely works. Some other poor soul has a similar problem, starts with that partially broken program in hopes of saving effort, and builds another layer of this Jenga tower.

Eventually the tower gets too unstable, working on it is too hard, so people build a new tower somewhere else, with a base that is unstable in different ways. But not before 10 other people make the same decision, so now we also have 10 new towers dividing our efforts.

Performance is only one of the facets of this problem, though it's an easily measurable one. I just wrote two equivalent programs[1] that count to 1 billion. One is written in C, and the other in Python. Neither one uses parallelism, the GPU, caching, or does anything clever.

The program in C executes in 0.52 seconds. The Python one takes 1m42s, for the exact same results.
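For readers who skip the gist, a hypothetical reconstruction of the Python side of that kind of comparison (not the author's actual code, just the shape of it; the C version is the same loop with a typed counter):

import time

start = time.perf_counter()
count = 0
while count < 1_000_000_000:   # count to one billion, nothing clever
    count += 1
print(count, f"{time.perf_counter() - start:.1f}s")  # pure interpreter overhead dominates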

That's 200x slower, and just for the language! There will be libraries written in this language, and frameworks built using the libraries, and applications developed using those frameworks. The performance sacrifices scale multiplicatively, and that's where your performance went.

The same mechanisms also make our software less reliable, harder to customize and interoperate with, and worse along basically any axis that is not mission-critical. Because software developers don't have our shit together.

If you'd like to know more and hear about counter movements, look into the Handmade community, and look for talks from Jonathan Blow and Molly Rocket.

[1]: https://gist.github.com/boppreh/e9110afa1077c6329de3f47041e90646

Expand full comment
David Piepgrass's avatar

Totally with you. I wrote this article on the key principles and practices of programming...

http://loyc.net/2020/principles-of-programming.html

...and I feel like

(1) most developers have not figured out what the most important principles are (they are taught lots of ideas, like SOLID and design patterns, etc., but they are not taught, and rarely learn, a good prioritization of such ideas), and

(2) I haven't learned all the necessary lessons either, even though I've been much more vigilant than typical developers at pursuing excellence.

Also, the foundations of the software industry are a bit rotten, as we rely so much on unpaid efforts of open-source developers to make the foundations of our software. e.g. no one wants to pay for a good math library, so we mostly rely on open source stuff, whose quality is limited because the developers are working weekends for free. Why more people don't talk about government or philanthropic funding of open engineering, I do not know.

Expand full comment
Jacob Steel's avatar

Sure, but how easy were they to write? Actually, to be fair, for something that simple the C isn't much harder, but - as I'm sure you know better than I do, if you're a proper software engineer rather than a mathematician who dabbles - for more complicated tasks that discrepancy will grow. I learned to program in C, and coming to python for the first time was like having weights taken off - except when I wanted to do anything where performance was an issue, when it was like having weights put on!

I guess in principle a future aggressively-compiled language could be as easy to write and maintain as code written for humans to be able to interact with efficiently in today's high-level languages and also as fast as code written for optimum performance in today's low-level languages, but for as long as what you write and the sequence of operations the computer performs are always closely related in the same way I don't see much hope of getting around that tradeoff.

Expand full comment
David Piepgrass's avatar

> in principle a future aggressively-compiled language could be as easy to write and maintain as code written for humans to be able to interact with efficiently in today's high-level languages and also as fast as code written for optimum performance in today's low-level languages

I've been saying that for 23-24 years. C# was designed well enough to come close but not quite reach the finish line. Rust... well, not exactly easy, but a step above C++. Go maybe? D v2+ was pretty good. I'm still waiting for an easy-to-use language that works well for both CPU and GPU programming. (Also waiting for lots of other features too.)

Expand full comment
BoppreH's avatar

For these specific examples, the Python and C code were equally easy to write. I guess the biggest "weight" I had to put on was declaring the type of the counter variable to be an int, but even that is something that you want to do in Python sometimes[1].

Some of the performance difference is simply lack of funds. Java and Javascript (completely unrelated) are also languages running on virtual machines, and their default implementations are an order of magnitude faster than Python's. They got there through obscene amounts of engineering effort (JIT).

But most of the performance difference is simply not caring. Compiling your program is obviously much faster, but almost no one is trying with Python. Declaring the types of your variables and using that for optimizations is obviously much faster. Having objects maintain a consistent shape is obviously much faster. But they would make the programmer's life a tiny little bit harder, so...

What really bothers me is that the features that make Python slow -- virtual machine, dynamic typing, and dynamic objects -- are not even that good of an idea. They make the humans struggle to read, troubleshoot, and deploy programs.

Probably 99% of my Python code could be machine translated to C for instant performance gains, but that would lock me out of the ecosystem of libraries.

[1]: https://docs.python.org/3/library/typing.html

Expand full comment
joe's avatar

Prompted partially by a recent Buck EAF post - does anyone have any interesting reading about the potential value of doing psychotherapy-type analysis on AI?

Not so much as in pure Freudian analysis on 1s and 0s, but more like- as deep learning models become more and more convoluted and impenetrable, could a valuable route of analysis be interrogating their outputs using the same tools we use to interrogate the outputs of the (also impenetrable) human mind?

Expand full comment
Vim's avatar

I can only do a poor job at conveying the depth to which things in software are critically bad, but here's a couple talks that go deeper on why we got in this mess:

Jonathan Blow's

"Preventing the Collapse of Civilization" https://www.youtube.com/watch?v=ZSRHeXYDLko

Casey Muratori's "The Thirty Million Line Problem" https://www.youtube.com/watch?v=kZRE7HIO3vk

Both of these talks are by people who are working very hard to improve the situation : you might recognize Jonathan Blow as the game designer of Braid and The Witness ; he has been working since ~2014 on a new programming language meant to be a replacement for C++, which is now in closed beta. Casey Muratori is the host of Handmade Hero, a 650+ video series on building a game engine, and who somewhat recently has put Microsoft to shame by coding in a few days a terminal that runs several orders of magnitude faster than Windows's new terminal, with the same amount of features - after being told by Microsoft developers that what he was suggesting was "an entire doctoral research project in performant terminal emulation" (https://github.com/microsoft/terminal/issues/10362, https://www.youtube.com/watch?v=hxM8QmyZXtg&list=PLEMXAbCVnmY6zCgpCFlgggRkrp0tpWfrn).

Expand full comment
Matthew Talamini's avatar

Just wanted to circle back and thank you for the Jonathan Blow link. I'm a front end web developer, and I can see first-hand the complexity increase he's talking about and how it corrodes the transmission of knowledge.

Expand full comment
Davis Yoshida's avatar

Loved watching the terminal videos, thanks for linking

Expand full comment
Amadeus Pagel's avatar

I made a web app for one-on-one voicechat with other ACX readers.

One-on-one, because it's simpler:

- When one stops talking, the other starts.

- No groups, no hierarchy, no status.

Voicechat, because it's more intimate than text, but more private than video.

With other ACX readers because that creates some common ground.

It's called coffeehouse: https://coffeehouse.chat/acx

Expand full comment
Davis Yoshida's avatar

I'm not sure there are enough users for people to be able to match on average. Maybe setting some daily or weekly times in order to get more people to be present simultaneously would be good?

Expand full comment
Amadeus Pagel's avatar

Maybe, though I'm not sure how to coordinate that, and it seems easier to get people to try it right now than to get people to plan to try it at some specified time. A possible UI might be that if you visit a coffeehouse outside the specified time, it asks you for permission to send a notification at that time.

Expand full comment
burritosol's avatar

This is cool. I don't get how it works though.

Expand full comment
Deepa's avatar

Hello folks! I've had 3 doses of the Moderna vaccine. It has been 6 months since the 3rd. I've been pondering whether to get the 4th that is now approved for 50+ healthy adults. I was wondering if there were statistics experts here who could comment on this. Is this good science? Thank you!

https://youtu.be/o_nKoybyMGgh

Expand full comment
coffeespoons's avatar

I'm no expert, but I found that video intriguing. I was surprised to learn that live measles vaccines had beneficial effects on mortality beyond merely preventing measles - a rudimentary Google search confirms that it is indeed accepted scientific knowledge.

As far as the Moderna vaccine specifically, I would think it is the covid vaccine with the highest risk of side effects, and you're probably better off getting one of the others if you have the option.

Expand full comment
Byrel Mitchell's avatar

I'm also no expert, but I seriously doubt this is a real effect. This is probably the same quirk that makes every study show drinking a little alcohol associates with better outcomes than no alcohol. There's a group of people with serious medical conditions who don't drink (and don't get measles vaccines) because of those conditions. The positive effect of a drink of alcohol is just in excluding that group from the population. I suspect that the non-measles benefits of the measles vaccine are due to the same artifact.

Expand full comment
coffeespoons's avatar

Definitely can't rule that out. I'm looking at this 2016 BMJ paper which claims such effects exist, but I haven't dug into it so it may well be the kind of association you're talking about.

https://www.bmj.com/content/355/bmj.i5170.long

Expand full comment
JohanL's avatar

I think this looks like the kind of research that raises questions rather than proves a result. The numbers are tiny, many of the numbers aren't statistically significant, the trial groups aren't comparable, and the situation was unusual. The study hasn't been peer-reviewed (which it needs to be - this kind of comparison looks like something where it would be easy to make mistakes or introduce bias). And this is pretty much what the researcher herself claims in the clip.

It's probably worth taking a look at this with ten to a hundred times the participants, but 16 compared to 30 out of tens of thousands just isn't all that strong. For the time being, we know it works against Covid, and we don't know that it has any non-specific risks.

I had my third dose in January, and with summer coming I would personally look to pick up dose four in October in time for the next Covid season (and plus, it's possible the new Moderna version has arrived by then).

Expand full comment
Gruffydd's avatar

Are there any Effective Altruists/Rationalists/Lesswrong users at Lancaster University, going to come here next year, or in Lancaster? I’m starting up an EA group with two others here and we’re starting the process of getting members, anyone that’s around feel free to reply here or send an email to gruffyddgozali@gmail.com

Expand full comment
Gregorian Chant's avatar

It's because your graphics card is a completely separate computer. The PC and the graphics card are well tied together so you don't normally notice the seams but if you know what to look for they are visible.

Quite often games have a 'loading' stage. This is where all the data is being passed across to the graphics card and the shaders (programs) are being set up. It can take a while because there is a lot to do and you have to shift things across the computer/card boundary and that is slow.

Once it's there, the graphics card can do its own thing with occasional nudges from the main computer. It has everything it needs in its own memory and it can run its own programs. There is little to hold it up.

Your main computer is a much bigger thing. It has a variety of types of memory of varying speed from registers, to cache (levels 1, 2, 3), to RAM, to SSD to hard disk. Most operations are bound not by CPU but by memory speed. If it's all near to hand, things are fast. If the thing you wanted is on disk then that requires loading into RAM, cache and then register. It can be a physical thing and that's not fast.

There are other reasons too. UI design is just not sexy like 3D and it's actually relatively complex. A graphics card does the same thing over and over again and to do that quickly it needs to be relatively simple. A UI involves many more moving parts going down into the depths of the OS and it is a tangled web. Perhaps for that reason we don't get the best engineers working on it.

It would be nice if we did. Then we could have folders which showed their size, or file copies that worked reliably, or folder columns that persisted each time the folder was opened. Instead we have a menu bar in the middle. Hallelujah!

Expand full comment
Gergő Tisza's avatar

Others have already mentioned the distance component of website vs. 3D video game performance (one mostly has to transfer information between parts of the same silicon chip, the other between the browser and the web server, which might be in different countries; the speed of electromagnetic waves can't really be optimized) and cost tradeoffs (for a website, excess funds are better invested in more content / more features / fewer bugs, because the user cares a lot more about those than performance).

The other big thing is control - web technologies have been optimized to be transparent and easy to interfere with, so the website owner can override the advertiser's decisions on how the ad can look (e.g. whether it is allowed to show phishing popups), your browser can override the website owner's decisions, and browser plugins can override the browser's decisions. That's a win for accessibility and user control, but a loss for performance, since information needs to be available for manipulation at some early stage of processing (the DOM tree) so your adblock plugin can interact with things like "link" or "popup", not pixels on the screen (at which point identifying and removing ads would be rather impossible). You will never have a tool that can remove ads from a 3D video game - it would be performance-prohibitive to expose the 3D processing pipeline to such manipulation.

Another aspect of control is that a video game mostly gets to own the relevant resources, while browsing the web means interacting with massively multi-tenant systems - both on your computer, where you want to be able to have more than one website open at a time, and want the OS and other applications to still work as well, so the resources available to a single website are fairly limited; and on the network, where, from the provider's point of view, you are one of a million clients trying to transfer information through them, and they need to figure out how to do that fairly and safely, taking into account that some of the clients are malicious or overly greedy. That's achieved with various communication protocols that add even more overhead to already slow network requests; and then, since you need interoperability in the network, you end up with standards used by billions of people, which are very slow to evolve.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Based on my comment below on non-native apps not being that fast on the Mac, I ran a test (on second and subsequent launches) of Books, Kindle and Teams on a 2021 M1 MacBook Pro.

The test was from double click to being able to use the app:

Results on average, in seconds:

Books: 0.65

Kindle: 3.8

Teams: 16.2

If there's any bias, it's toward adding time to Books, as I had to react to the launch by hitting stop; my reaction time matters here.

Expand full comment
David Gretzschel's avatar

2.

They say that one of the causes is a "mobile first"-paradigm.

In some sense we do have one. But imagine what a world would look like where software developers actually cared about a good mobile experience.

Let me describe an actual "mobile first"-paradigm.

Your apps would automatically download tens to hundreds of megabytes of data every time you're on wifi. For example, all the websites and blogs I commonly visit would auto-refresh.

There is no reason I need mobile internet when I'm on the go if the couple of kilobytes could have been automatically preloaded. If I read maybe... five blogs and five newspapers, why do I not automatically get them updated in the background?

If I cannot reliably read in the forest or on the train, then that's a sedentary paradigm.

Not a mobile one.

A "mobile first"-approach would also instantly detect a slow connection and offer a text-only website. Cutting out all the overhead, cookie prompts, advertising and tracking junk, so that only the actual information gets to the user (that's in the kilobyte range, yet somehow we need 5g, lol).

What we actually see is "cloud first", which preloads nothing and demands your phone to be permanently and reliably online. Obviously this is not what you'll have, when you're "mobile".

For the longest time, there was not even automatic full cloud download for Android.

OneDrive has it now, but it's second-class.

OneNote won't even sync in the background. Many apps only will sync their content once you open the app, forcing you to wait.

"Mobile-first" would see local storage in the terrabytes for full cloud sync. Instead we've been letting phone manufacturers play stupid price games with minuscule storage sizes for a decade now.

Expand full comment
Michael Kelly's avatar

Every developer should be using a dog-slow, dodgy proxy so that they can see through the user's eyes.

Expand full comment
Acymetric's avatar

I think the main reason this doesn't happen regularly is that it significantly increases testing time (because everything is loading 5-10x slower or however much). All about the bottom line as they say.

Expand full comment
Pete's avatar

Why are you equating "mobile first" with "offline first"? In many markets (not many parts of USA though) it's perfectly reasonable to expect 100% coverage of good mobile data literally everywhere, including the middle of any forest or park. And in the markets where it's not, that's something that should be solved by the telecommunications industry (perhaps Starlink?), instead of the software or devices.

If people can do internet videochat in Mariupol bunkers with all the infrastructure being bombed to smithereens, perhaps it's time to assert that first world consumers should be able to go "mobile first" with full connectivity, instead of having to figure out how to degrade the experience less while disconnected.

Expand full comment
David Gretzschel's avatar

"Why are you equating "mobile first" with "offline first"?"

I would not say that I am doing that. Being "mobile" implies that you might move between markets and between places/situations with and without great coverage. "Mobile first" to me implies that you enable the user to move freely and still get the most reliable experience that's technically possible in any circumstance. Automatically preloading content is using the reliability of wifi for when you don't have it. That's not an "offline first" approach. You're still downloading things when you are "online".

"In many markets (not many parts of USA though) it's perfectly reasonable to expect 100% coverage of good mobile data literally everywhere, including the middle of any forest or park."

Yes, and I'm happy for them. But why should that matter? Our software/hardware stack should work as reliably as it can far beyond those places.

"If people can do internet videochat in Mariupol bunkers with all the infrastructure being bombed to smithereens, perhaps it's time to assert that first world consumers should be able to go "mobile first" with full connectivity, instead of having to figure out how to degrade the experience less while disconnected."

Well, I won't have reliable internet in my gym in Germany. "Downgrading" the experience should/would be a fallback in case you don't have great conditions. Low-data, text-only is an upgrade over nothing.

"perhaps it's time to assert that first world consumers should be able to go "mobile first" with full connectivity, instead of having to figure out how to degrade the experience less while disconnected."

You also are not disconnected if you only have "Edge". But since pages won't load, you effectively are. This has been an ongoing problem since the first smartphone came out. Yes, future technology might change that. The situation does get better (though I still don't have good internet in my gym in Germany, which is behind tons of concrete). But why should we wait for that? And why should we have suffered this needless dependency on wifi, or on being in a superduper metropolitan area or South Korea, in the past? This is also not an either/or proposition. You can make phones less dependent on centralized infrastructure and build up centralized infrastructure at the same time!

Expand full comment
Acymetric's avatar

These sound like nice ideas but they aren't economically viable (generally, certainly there are specific apps or markets where it could make sense). Not enough people need what you're asking for badly enough for it to be worth the investment.

Expand full comment
David Gretzschel's avatar

I don't know why we don't have those common sense features.

But your explanation feels wrong:

Pulling data over wifi also indirectly increases battery life, and that has always been a major concern for consumers. IIRC, wifi usage is also more energy-efficient than using 3G/4G/5G. There have been 2.2 billion iPhones made, which are high-end specced phones. I'd say the people who have reliable internet in every daily situation have been below 5% on average over the 15 years we've had them. These days, I'd guess 15% of iPhone/high-end-smartphone users can say that they ALWAYS have internet. I could go on and on about why those features would sell more phones and even how to market them. As for what the actual reasons for this lack are, I have an incomplete model (or a bunch of suspicions and inferences), but that's a bit too long to post here.

Expand full comment
Acymetric's avatar

Well, either you've found your million* dollar idea or you're missing something.

*Do we need to update this phrase to "billion dollar idea"? Million just doesn't hit like it used to.

Expand full comment
David Gretzschel's avatar

No, it's not a billion dollar idea. It's just how you'd make smartphones work if you cared about making the best possible and profitable product. But the incentives don't align for Google/Microsoft/Apple/Samsung to make that product a reality (even though I think the consumer demand would be there), or they would have already done it. Free-market competition ideas don't map too neatly to how giant corporations work in our world. They are propped up, protected and heavily intertwined with big government and each other. It's a weird oligopoly situation with some less than great dynamics.

Why this specifically never happened, I'm unsure. I am certainly missing lots of things. I have some suspicions that I think are more or less plausible, but don't want to peddle intuitions without further research. Good analysis is hard.

Expand full comment
Kenny Easwaran's avatar

This is a good point. Cloud first is a better description than mobile first.

Expand full comment
Valerio's avatar

As for the videogame comment, that's not what's going on. Unless the world we are talking about is really small or simple, no game engine is generating everything 60 times a second. They might not even be generating stuff right behind you or a few meters from your camera, if they use the occlusion culling technique (e.g. https://docs.unity3d.com/Manual/OcclusionCulling.html ).

Also, most 3D engines use massive parallelization thanks to GPUs (we know very well how to parallelize computer graphics operations) - most common 2D UIs don't.

Expand full comment
Godoth's avatar

Nothing you’re saying is wrong exactly, but I have vintage Macs, original hardware, that have more responsive UI than Electron apps in active development. It’s not that 2D can’t do this because of 3D HW advantage, it’s that efficiency used to happen by accident due to simplicity of the system and serious constraints, and now efficiency must be designed into the system as a priority and isn’t.

Expand full comment
Bill Benzon's avatar

Does anyone know current Japanese sentiment about the possibility of Rogue AI posing an existential threat? I ask because the Japanese certainly seem to have different attitudes about robots, which isn’t quite the same thing, but very close. They’re certainly much more interested in anthropoid robots and have spent more effort developing them.

Frederik Schodt has written a book about robots in Japan: Inside the Robot Kingdom: Japan, Mechatronics, and the Coming Robotopia (1988, & recently reissued on Kindle). He talks of how, in the early days of industrial robotics, a Shinto ceremony would be performed to welcome a new robot to the assembly line. Of course, industrial robots look nothing like humans nor do they behave like humans. They perform narrowly defined tasks with great precision, tirelessly, time after time. How did the human workers, and the Shinto priest, think of their robot compatriots? One of Schodt’s themes in that book is that the Japanese have different conceptions of robots from Westerners. Why? Is it, for example, the influence of Buddhism?

More recently Joi Ito has written, Why Westerners Fear Robots and the Japanese Do Not,

https://www.wired.com/story/ideas-joi-ito-robot-overlords/

He opens:

“AS A JAPANESE, I grew up watching anime like Neon Genesis Evangelion, which depicts a future in which machines and humans merge into cyborg ecstasy. Such programs caused many of us kids to become giddy with dreams of becoming bionic superheroes. Robots have always been part of the Japanese psyche—our hero, Astro Boy, was officially entered into the legal registry as a resident of the city of Niiza, just north of Tokyo, which, as any non-Japanese can tell you, is no easy feat. Not only do we Japanese have no fear of our new robot overlords, we’re kind of looking forward to them.

“It’s not that Westerners haven’t had their fair share of friendly robots like R2-D2 and Rosie, the Jetsons’ robot maid. But compared to the Japanese, the Western world is warier of robots. I think the difference has something to do with our different religious contexts, as well as historical differences with respect to industrial-scale slavery.”

Later:

“This fear of being overthrown by the oppressed, or somehow becoming the oppressed, has weighed heavily on the minds of those in power since the beginning of mass slavery and the slave trade. I wonder if this fear is almost uniquely Judeo-Christian and might be feeding the Western fear of robots. (While Japan had what could be called slavery, it was never at an industrial scale.)”

As for Astro Boy, which Osamu Tezuka published during the 1950s and 60s, there are some robots that go nuts, but they never come close to threatening all of humanity. But rights and protection for robots was a recurring theme. Of course, in that imaginative world, robots couldn’t harm humans; that’s their nature. That’s the point, no? Robots are not harmful to us.

But those stories were written a while ago, though Astro Boy is still very much present in Japanese pop culture.

What’s the current sentiment about the possibility that AI will destroy us – not just take jobs away, that fear is all over the place – but destroy us?

Expand full comment
LadyJane's avatar

I think the whole Singularity narrative isn't popular outside of the West because it's basically just secularized Christianity. "The world is a terrible place now, but if we [pray to God and follow His commandments/work really hard on the Alignment Problem], then eventually [Jesus/Friendly ASI] will arrive to save his devout followers and whisk us all away to [Heaven/Techno-Utopia]. Unless we're [not virtuous enough/fail to align its values properly], in which case [Satan/Unfriendly ASI] will [consume the world in fire/turn the world into paperclips] and [torture our souls forever in Hell/torture simulations of us forever in Virtual Reality]."

Granted, Scott had a very clever response to this line of argumentation: "It would be foolish to think that the Wright Brothers couldn't fly because Daedalus could." But ultimately, I don't think it holds up, because the point of these Singularity/Christianity parallels isn't to critique the technology itself, but to critique the narrative. It's more like the Singularity believers are claiming that the Wright Brothers will inevitably end up flying too close to the sun and being immolated, and we're trying to point out "hey, that seems pretty unlikely, just because it happened that way in the story doesn't mean that's how it'll happen in real life."

Some form of ASI may indeed be possible (at least eventually, though probably not any time soon). But the idea that it will either bring about Utopia as a benevolent Philosopher God-King or become an unstoppable Devil that completely destroys all life on Earth? That's clearly a prediction more inspired by Christian eschatology than any realistic assessment of the possible consequences.

Expand full comment
Erusian's avatar

That thing about Daedalus is reasoning by analogy. It's a logical fallacy. And it WOULD be foolish to think that being too close to the sun would be a serious concern for aircraft.

Expand full comment
Erusian's avatar

As I've said a few times: AI catastrophism is basically not a thing outside of the US. Or at least I've never encountered it, despite extensive engineering contacts abroad. It's really not much of a thing outside of a few subcultures mostly concentrated around northern California. Most places with significant computer science disciplines write on AI risk but they limit it to stuff like, "What happens if AI breaks all our encryption?" The idea, "What if we create a malevolent AI that takes over the world?" is pretty unique.

Anyway, if I had to guess: robots in the west were a literary invention before they actually existed. The idea of clockwork people in the 19th century and later the original robots from the 1920s both preceded real robots. Further, fears of workers being replaced were translated into fears about everyone getting replaced, and mixed with ideas of Social Darwinian superiority to create the idea of a superior robot species wiping out humans or colonizing them as the Europeans colonized others.

East Asia had the opposite experience: automation came BEFORE it was imagined. Their fear was focused not on the machines but on the Europeans. Which was probably the right call. The machines were not to be feared but a path to salvation. European guns fired just as well in Japanese or Chinese hands as in European ones. This is why so many stories (like Neon Genesis Evangelion) have an undercurrent of getting your hands on the enemy's technology and turning it against them. Because that was, in fact, what they had to do. It's also why you have a few defectors (cultural memories of Europeans who helped modernize the East Asians) and important prototypes (cultural memories of when they might have had a few modern ships/guns/whatever supported by a bunch of old-style ones).

Basically, the European story is, "In our hubris we imitated God and made robots who we thought were better than us. They rose up to overthrow us. But in the end we proved we were really superior and overcame them!"

The East Asian story is, "Out of nowhere, scary aliens showed up with advanced technology. We fought a desperate rearguard action to keep them from overwhelming us completely while we attempted to steal/copy their technology. And once we do, we can turn the tide and overcome them!"

Gundam SEED is an interesting example. In it, genetically modified advanced humans (coordinators) are fighting non-genetically modified humans (naturals). The coordinators explicitly think they're racially superior and within the narrative are stated to actually be faster/stronger/smarter.

Despite this, the plot follows the same basic idea. The coordinators are winning not because they are racially superior but because they have better technology. When the naturals steal it and start producing their own versions, they become, basically, capable of fighting toe to toe. Natural pilots aren't quite as good as coordinators but they make up for it with numbers and some new innovations. And the fact the coordinators are actually, objectively superior doesn't mean they can count on winning one on one. It's not even always a struggle: experienced natural pilots easily shoot down inexperienced coordinator pilots. While the coordinators are portrayed as correct in their belief that they are faster/stronger/smarter, the belief that they're intrinsically better or superior to naturals is portrayed negatively as just racism.

I can't imagine that story getting told in the west. In Andromeda the genetically superior humans rebel and destroy the entire state and divide it up into fiefdoms. In Star Trek they take over significant portions of the world and basically destroy each other (or something like that). This leaves a memory so traumatic that the Federation bans genetic engineering. The Japanese apparently thought a good answer was basically to tell the objectively superior superhumans, "Get along with the normies and stop being a racist."

Expand full comment
Lars Doucet's avatar

> I can't imagine that story getting told in the west.

I can! X-COM is a really successful and long-lived video game franchise that's just that, and would be my go-to Western (specifically British) example of the Asian storyline of "Out of nowhere, scary aliens showed up with advanced technology. We fought a desperate rearguard action to keep them from overwhelming us completely while we attempted to steal/copy their technology. And once we do, we can turn the tide and overcome them!" Unless you specifically meant the genetic angle of it.

But nitpick aside I enjoyed this take and largely agree with it.

Expand full comment
Erusian's avatar

Oh, I agree that story gets told in the west (though less than in the East, I think). What I meant was that the west tends to give conflicts/advantages a totalizing quality. Like, I can't imagine a world where a western story has genetically modified superhumans and then gives them the lesson, "Stop being racist!" as a resolution rather than the narrative itself believing in some kind of inherent conflict.

Expand full comment
Dirichlet-to-Neumann's avatar

"AI catastrophism is basically not a thing outside the US".

Speaking as a French person, I think this is mostly true but misleading. It's not that people had a good look at AI capability trends and thought "yeah, should be fine"; it's that they just don't get what AI is and will be capable of - they think of AI as a glorified Excel sheet. So for French people all the worries get concentrated into "wealth and power concentration for Musk/facebook/google + social control software in China".

Expand full comment
Erusian's avatar

Yeah, see, I disagree that French and Indian and Chinese engineers specializing in AI are all so far behind the US that they haven't considered the possibility of AI catastrophes. Especially when they do put funding into various kinds of AI risk. I've heard the "they're all just so far behind us" argument. It's entirely possible. But I don't see any evidence that it's more likely than "the US has an obsession with a peculiar theory that other groups don't see as plausible."

Expand full comment
Guy's avatar

Do experts in those countries actually achieve any really novel AI stuff though, and if not why would we think that they know where AI is heading?

Which ethnic groups in the US are into AI risk (Ashkenazim? Anglos? Scandinavians?), and what do those groups think outside the US?

Can rote-learning, holistic-thinking people intuitively understand something from first principles, like instrumental convergence, or do they just adopt a blanket skepticism towards any ideas with overly big consequences until they've already happened?

https://www.unz.com/akarlin/salon-demographics/

Expand full comment
Bill Benzon's avatar

It seems to me we’re looking at two different things: 1) general themes widespread in a culture, and 2) the beliefs of specific (highly intellectual) subcultures. The Terminator films, for example, were broadly popular here (and in Japan? I don’t know) and are certainly about AIs going crazy. Apparently there are six of them:

https://en.wikipedia.org/wiki/Terminator_(franchise)

I’ve only seen the first three (1984, 1991, 2003). AI catastrophe wasn’t much of an intellectual concern at that time. Things were changing by the time the most recent three came out (2009, 2015, 2019). I know nothing about the relative popularity of the six films.

But then *2001: A Space Odyssey* came out in 1968 and it was enormously popular. Back in 1956 we have *Forbidden Planet* (where Lucas cribbed the idea of scrolling text at the beginning of a film). There a scientist, Dr. Edward Morbius, hooks into the mind-amplification technology of an advanced, but now defunct, civilization and, as a result, a Monster from the Id stalks the (all but deserted) planet, killing people and eventually turning on Morbius himself. That film is based loosely on Shakespeare’s *The Tempest*. Incidentally, I’ve written an article linking *The Tempest*, *Forbidden Planet*, the Terminator films and a bunch of other stuff: From “Forbidden Planet” to “The Terminator”: 1950s techno-utopia and the dystopian future,

https://tinyurl.com/2b6sr3m5

That’s all pop culture. That is to say, independently of the transhumanist/rationalist/EA communities, rogue AI is a theme in American culture, and it has deep cultural roots.

The Astro Boy stories I mentioned in my original comment are pop culture, as are Neon Genesis Evangelion and Gundam. Those are Japanese and show a different pop culture.

The world of general popular culture is one thing. The concern of a particular intellectual community is something else. We can ask why that community has emerged. One answer, of course, is that AI really does present an existential risk. That presupposes, moreover, that AI will eventually develop to a human or even super-human level. That’s far from obvious, at least to me, though I recognize that others think differently.

Expand full comment
Guy's avatar

Yeah, I wasn't talking about pop culture but just responding to this part:

"I disagree that French and Indian and Chinese engineers specializing in AI are all so far behind the US that they hadn't considered the possibility of AI catastrophes."

If you want my take, the differences in culture are a symptom of different ways of thinking rather than the cause of westerners worrying about AI risk. Unrealistic Hollywood movies could just as easily have the opposite effect and make westerners take "skynet" less seriously.

"That presupposes, moreover, that AI will eventually develop to a human or even super-human level. That’s far from obvious, at least to me"

Well, can you come up with a plausible reason why AI progress would permanently stagnate at some point, barring civilizational collapse? Are human brains just magic that can't be surpassed, except in all the ways they already have been?

Expand full comment
Erusian's avatar

Yes, experts achieve really novel things. They're behind the US (because everyone is) but not maximally in all ways.

I have no idea about the demographics of AI risk. Anecdotally it seems like it's just a cross section of Bay Area types. So various East and South Asians, Jews, and Eastern Europeans. With the Eastern Europeans, Jews, and East/South Asians you could match them back to their home countries and see that the demographics back home aren't in as much agreement.

I don't know how to parse that last question. If it's something about IQ then I don't really know the IQ statistics. But is there a huge gap between Chinese in the US and back in China? Or Jews in the US and Israel?

Expand full comment
Guy's avatar

"Yes, experts achieve really novel things."

In AI you mean? I don't think experts in most fields succeed in doing really novel things.

"They're behind the US (because everyone is) but not maximally in all ways."

Right, but are they behind the US because they're innovating a bit slower, or are they behind the US simply because it takes a certain amount of time to copy/replicate exactly what the US is doing while doing nothing novel themselves except what they need to do to replicate US results?

No, that last question wasn't about IQ but about different ways of thinking. East Asians have higher IQs but innovate less. The ones in the US are more educated and yet less likely to participate in blogs like this, according to my link in the last comment. The sort of people who are famous for making good cars, smartphones and video games, but who didn't invent those categories of things (or any modern category of thing?).

My impression is that Jews and northern Europeans are more likely to think systematically and take things to their logical conclusions. Analytic philosophy, game theory, bayesianism, natural selection, utilitarianism and utility functions seem like examples of that sort of thing.

These ethnic groups were, AFAIK, the first to worry about things like fossil-fuel-driven climate change and dysgenics in the 19th century, so that's why I don't find it comforting that others aren't worrying about AI risk. The people I can think of who are worried about AI risk are of the same ethnicities, like Scott, Yudkowsky, Sam Harris, Elon Musk, Stephen Hawking and Nick Bostrom. But maybe that's just my bubble.

Just FYI there's a pretty big IQ gap between Jews in Israel and the US actually, possibly because US Jews are mostly Ashkenazi while Israeli are a hodgepodge.

Expand full comment
Dirichlet-to-Neumann's avatar

I was speaking of the broader cultural background - should have made it clear. I don't know what the French AI community is thinking on those matters, I'm not an insider.

Expand full comment
Erusian's avatar

I'm not either though I do read some papers. That's what I'm drawing from. If some insider wants to correct me I'll take the note. Maybe they all secretly think it and can't get funding or something. But I've seen no sign of that.

Expand full comment
Bill Benzon's avatar

Thanks, very helpful. Do you mind if I post your comment to my blog, New Savanna?

https://new-savanna.blogspot.com/

Of course I’ll credit you and link back here.

On robots, here’s a Wikipedia entry that explains the origins of the word:

https://en.wikipedia.org/wiki/Robot

“The term comes from a Slavic root, robot-, with meanings associated with labor. The word 'robot' was first used to denote a fictional humanoid in a 1920 Czech-language play R.U.R. (Rossumovi Univerzální Roboti – Rossum's Universal Robots) by Karel Čapek, though it was Karel's brother Josef Čapek who was the word's true inventor.”

Expand full comment
Schweinepriester's avatar

The newt guy? "Válka s Mloky" was a creepy book to read.

Expand full comment
Erusian's avatar

Sure, if you'd just add a note that I'm not an expert on any of this and it's just speculation/half-remembered TV shows. (One thing I've learned on the internet: any claim about sci-fi and anime will get picked apart in a two-hour youtube video.)

Expand full comment
Bill Benzon's avatar

No problem. I'm not an expert either, but I know a lot about how culture works. And if someone wants to roll by and dispute it, that's fine with me. I'll learn something, maybe.

Expand full comment
Lain Steiner's avatar

Vim is indeed very correct in their observation (on top of being a text editor that I very strongly like). I experienced the same frustration during my PhD in real-time distributed simulation and kept thinking about this for a long time. I feel like I have several elements of explanation worth sharing. Two of them stem from architectural and philosophical considerations, and the rest stem from personal experience.

This is my first serious post here, expect some heavy rambling and feel free to correct/comment anything that seems off.

Architectural reasons:

1. Time is not a first-class citizen in computer programs. Execution time is not a property embedded in the source code of a program: it is the combination of what the source code says, how it was compiled (compilers have a lot of different options allowing for speed vs. size tradeoffs), the specs of the computer running the program, how system resources are shared between concurrent programs, etc. It is mind-bogglingly difficult to determine how fast a piece of code will run, as there are too many moving parts for static analysis to produce meaningful results. Therefore, the act of optimizing for speed is a tedious one: it requires profiling the running program, determining which functions consume the most CPU cycles, and revisiting those functions, refactoring them and hoping that the effort does indeed lead to improvements (a minimal sketch of this measure-then-optimize workflow follows after this list). Most programmers don't take it that far: if it works, albeit slowly, they'll leave that section as is and start working on something else.

2. Parallel vs. sequential. It is quite unfair to compare the rendering prowess of a modern GPU with the inner requirements of business-grade software. In the case of a video game, screen rendering is offloaded to the GPU, whose specialized architecture has been designed to process ludicrous numbers of instructions in parallel. In contrast, CPUs get far less parallelism and mostly process data one piece at a time. If there are, say, 3000 pixels to render, a GPU can handle them in a single pass; if there are 3000 items of data to review, a CPU will take 3000 steps to go through all of them. Most business-grade software isn't written in a parallel manner, because parallel processing is finicky to get right and comes with its own caveats requiring specialized knowledge.

"I-feel-like" reasons :

3. Switching to slow languages: 8 years ago came a framework called Electron. It lets you take code written for webpages and build a desktop program out of it, so an application can be developed once and deployed both as a webpage and as desktop software. However, this adaptability comes at the expense of speed, as web languages such as Javascript are much, much slower than languages such as C. Plus, the compatibility layer provided by Electron makes use of a browser: for each Electron-based piece of software running on your computer, a new browser is spawned, increasing the load on your machine tenfold. At work I write code in Visual Studio, check my mail in Outlook, Teams starts automatically, and I read documentation online in yet another browser. So at all times I have at least 4 browsers running in the background, each spending time running slow Javascript (several orders of magnitude slower than C). Browser slowness is leaking onto desktop applications as well.

4. Disdain for execution time: During my initial studies, we were told not to optimize too much, since computers are already very fast and optimization wouldn't bring much value. This is true for small, distinct programs but false for large software. As 100 different individuals work together, they each bring their own non-optimizations, citing how fast computers are and how they don't need to optimize. But as software grows over time, slowness compounds: the codebase looks more and more kafkaesque, and "feature creep" forces functions to handle more and more weird edge cases, with patches upon patches upon patches, each degrading execution time a little bit. At the tipping point, throwing the entire codebase away and rewriting everything from scratch appears to be the only viable solution. Said solution will seldom be implemented, as it requires extra time, effort and money the company would rather invest somewhere else.

5. Size growth: In the past we worked with small batches of data because storage space was limited. Now that storage is vastly cheaper and larger, we process larger and larger batches of data, requiring more processing power.

6. Mandatory internet connectivity: As Vim pointed out, software often performs network transactions when a feature is activated. Network exchanges are several orders of magnitude slower than fetching something from memory, especially when the work is delegated to servers outside our own country. And as stated before, execution is sequential, so everything hangs until the server replies.

7. Size and scope: Software back in the day was quite primitive in terms of features and was designed around carefully selected use cases. Due to storage and processing-power limitations, it had to be extremely well written to even fit in memory, and every aspect was tightly controlled. Nowadays, software is bloated with hundreds of functions that will almost never be used, but which still affect how data is structured and increase the number of operations needed.
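
To make point 1 concrete, here is a minimal sketch of that measure-then-optimize workflow, written in TypeScript for Node.js. The pipeline functions are hypothetical stand-ins; the only point is that you time the running program first and let the numbers tell you which stage is worth refactoring.

```typescript
// Minimal profiling sketch (Node.js + TypeScript). The pipeline below is made up;
// the workflow is: measure each stage, then optimize only the hot spot.
import { performance } from "node:perf_hooks";

function timed<T>(label: string, fn: () => T): T {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  return result;
}

// Hypothetical stages of a program; only measurement reveals which one dominates.
function loadData(): number[] {
  return Array.from({ length: 1_000_000 }, (_, i) => i);
}
function transform(xs: number[]): number[] {
  return xs.map((x) => Math.sqrt(x));
}
function summarize(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x, 0);
}

const data = timed("loadData", loadData);
const shaped = timed("transform", () => transform(data));
timed("summarize", () => summarize(shaped));
```

In real code one would reach for a proper profiler (Node's --prof, Chrome DevTools, perf, etc.), but the principle is the same: measure before optimizing.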

Expand full comment
David Gretzschel's avatar

2. I think of this as cost disease in software.

economics:

There's a standard of living that we accept and put up with. Work 40 hours per week for so many decades. GDP/capita may be however many times higher than fifty years ago, but the pressure is much the same.

software:

There's a standard amount of lag/sluggishness that people are used to putting up with.

Our computing devices actually never end up becoming faster on the user interaction level.

economics:

Surplus gets captured by more bureaucracy, regulation and other overhead. As standards of living remain constant, nobody notices what we're stealing from ourselves.

software:

Any hardware innovation will be used to fuel the demands of an ever-growing stack of software complexity. If there's time freed up, it gets eaten by regulation again, like cookie prompts or mandatory/non-optional 2FA authentication. As the speed remains constant, nobody notices the time we're stealing from ourselves.

Meh... not a perfect isomorphism. I give it 6/10.

If I figure out how to translate the Baumol-effect to that dynamic, I'll rate it higher.

Expand full comment
Brendan Richardson's avatar

The phenomenon is called "Wirth's Law:"

https://en.wikipedia.org/wiki/Wirth%27s_law

Expand full comment
David Gretzschel's avatar

From 1995... woah.

Expand full comment
Erica Rall's avatar

That's a special case of Parkinson's Law: work expands to fill the time (and resources) available for its completion.

Expand full comment
David Gretzschel's avatar

Probably stems from the general human tendency of satisficing and something like "low endogenous aspiration levels", I suppose.

https://en.wikipedia.org/wiki/Satisficing#Endogenous_aspiration_levels

Expand full comment
Nolan Eoghan (not a robot)'s avatar

No operation on the UI thread should take anything close to even 200ms. If the network or disk need to be accessed, that should happen on another thread. It is a cardinal sin, in my opinion, to make networking calls before the app has finished launching, but games are a likely culprit in that regard.
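
A minimal sketch of that separation in a browser app, assuming hypothetical element IDs, URL and response shape (in the browser, the analogue of "another thread" for I/O is simply an asynchronous call; a Web Worker is the analogue for CPU-heavy work):

```typescript
// Keep the UI path instant: give feedback immediately, do the slow I/O asynchronously.
const openButton = document.getElementById("open-report") as HTMLButtonElement;
const statusEl = document.getElementById("status") as HTMLElement;

openButton.addEventListener("click", async () => {
  // This part runs on the UI thread and takes microseconds.
  statusEl.textContent = "Loading…";
  openButton.disabled = true;
  try {
    // The slow part does not block the event loop, so the app keeps repainting
    // and responding to input while the request is in flight.
    const response = await fetch("/api/report"); // hypothetical endpoint
    const report = await response.json();
    statusEl.textContent = `Loaded ${report.rows.length} rows`; // hypothetical shape
  } catch {
    statusEl.textContent = "Failed to load report";
  } finally {
    openButton.disabled = false;
  }
});
```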

It's therefore a bit disingenuous to compare start-up times to frame rates: games can take minutes to load, and network and disk are much slower to access than RAM.

Added to that the games are doing their calculations on the GPU, which is dedicated to the kind of floating point calculations that enable fast graphics.

For most application developers writing native code the launch time is beyond their control, and a lot is going on as the OS brings the application and its frameworks and libraries into memory. Often a cause of slowness is the need to swap out older memory; that is an OS-driven operation and beyond the control of application devs. Nevertheless, it's often fast.

This doesn’t justify all long launch times however, on my M1 mac some of the applications from the usual suspects (Adobe and MS) take seconds.

This is because the developers are often using a non-native framework, which also needs to be loaded in order to get the app running. Electron allows developers to create applications in JavaScript that run on any platform, so that whole environment (which is Chrome) needs to be loaded. Apple's native apps are ready to use in less than half a second: no dock bounce, or one. Something like Slack does bounce once, but even then the main window takes ~1 second to populate. Teams takes 2 bounces to show a small window telling me that it is loading, then about 3-5 seconds until I have a workable window.

Expand full comment
Edward Scizorhands's avatar

> No operation on the UI thread should take anything close to even 200ms

When OSX decides to shit the bed, it can take several seconds for it to even echo back the characters I'm typing into a terminal.

Expand full comment
Crazy Jalfrezi's avatar

What's happening is that the GPU is, by far, the most heavily optimised piece of hardware in your computer. It is really good at performing the linear algebra required to fill those pixels with such desperate alacrity (or for deep learning, for that matter).

Every other component in your pc is a sluggard by comparison.

Expand full comment
Erusian's avatar

There's a bunch of technical answers about load speeds (and I can get into the technical side if you want) but the fundamental reason is economic: engineering time is orders of magnitude more expensive than computer time. How much does an engineer cost per hour? $50 on the low end? So $8,000 a month. How much does it cost for a basic Watson plan per month? $500. And that's unusually high.

Optimizing for speed is less economically efficient than throwing processing power at the problem. This is a pretty generally accepted principle: you minimize your engineering time above all else because engineers are the most expensive resource. (Now, of course, you also want to minimize FUTURE engineer time, which is where tech debt comes in. But fundamentally you're still minimizing engineer hours.) This includes creating features that are as slow as users will tolerate. Any excess time invested in making them faster is wasted engineering hours that could be better used elsewhere, like developing new features or fixing bugs.

How much will users tolerate? People are generally willing to tolerate a few seconds of latency on websites or for things like Word. (And they'll tolerate it much, much more than bugs or lacking features.) If Amazon takes a few seconds to paint you're generally not going to complain. Videogames are in a different market. If you shoot someone and miss because the game is off by a few seconds then you're very upset.

Expand full comment
Michael Kelly's avatar

An engineer ... at a desk costs about $200 per hour when you consider that he makes $50, there's another $50 or so in taxes, insurance, etc., and then there's the server room, networking, maintenance, office space, janitorial, HR, and all the other overhead supporting him.

Expand full comment
Austin Chen's avatar

This is the answer I was going to give - the economic argument mostly explains the gap in video game vs web latencies. Web developer hours are super expensive and we optimize accordingly.

Another point: video games are usually released once and never really updated afterwards (like a movie), which means that the devs can aim for one specific target vision and spend a bunch of time optimizing for it.

In contrast, most web software is continuously improved upon, and all the time we're producing new features or removing old ones. This means that optimizing the performance of any one feature is rarely worthwhile, because there's a good chance that feature will be cut, or go unused by users.

Expand full comment
Jared Smith's avatar

I too came here to give the "because economics and differing expectations" answer and found someone beat me to the punch. That being said, video games being released and never updated is how video games _used_ to be. Now they're more like traditional desktop software in that respect: they get substantial post-launch patching and even new features (it's just that most other software doesn't call it "DLC" or charge for it). Still quite different than the continuously upgraded world of the web, but now it's a question of degree rather than kind.

Expand full comment
Austin Chen's avatar

Agreed, it's definitely the case that video games are moving towards the "as a service" model, but there's still a long ways to go.

Video games patch, say, monthly, mostly for balance tweaks or cosmetic changes; the best websites update weekly or even continuously on every new commit (which is how Manifold operates).

Expand full comment
Jared Smith's avatar

Yeah, that's why I likened them more to traditional desktop software, as opposed to the way it used to be with e.g. a SNES or PS1 game in the 90's. We release to our main site 2x/week at work and are generally (appropriately?) ashamed we can't manage to release *more* often yet.

Expand full comment
Austin Chen's avatar

Curious, what kind of product do you build slash what's the bottleneck to more frequent releases?

Manifold has a huge advantage in terms of being a pure, standard web product, on a monorepo: https://github.com/manifoldmarkets/manifold

So every push to `main` branch leads to a new build by Vercel, and a new deployment in ~5min. It's developer nirvana 😁

We also don't have tests lol - I think that actually helps for UX stuff, but unclear if it's long term sustainable.

Expand full comment
Jared Smith's avatar

Ecommerce retail, billions in annual revenue from the site. Barriers to more frequent releases: organizational inertia. Currently transitioning to a more devops culture but not there yet. Conservatism in ops (lots of revenue means no one wants to risk being the dev who cost the business millions and millions of dollars by breaking something). Flaky integration tests (unit tests are pretty solid). If we were building it from scratch today we'd do it a lot differently, but it was built almost a decade ago, which is forever in IT years.

As for your tests: for a younger product there's not much sense in going overboard with UX tests, your UX is likely changing too fast to really know what you're testing yet.

Expand full comment
Resident Contrarian's avatar

Sometimes I find I have a piece of context that seems like it's universal that some people don't have, like getting choked unconscious or hit with an airbag or something, and it's always jarring for me. I'm curious to know what experiences people have that others don't that are jarring for them in the same way - where you go to talk about an experience you unconsciously felt was universal and it turns out to be less than universal.

Expand full comment
Zærich's avatar

Not sure about experiences. I probably have some, although most commonly I'm the exception to the supposed universality of the other. I definitely have a lot of "Wait, I thought everyone knew that!" moments, I seem to have a broader knowledge base than many.

Anon's walking experience definitely rings true for me as well.

Perhaps the following: growing up, there was music almost constantly in my house. Classical, Jazz, Rock, Christian Pop, Musicals; later, I started hearing video game music frequently, and also started listening to more things myself (harder rock, metal, electronica from my brother, swing...). I still listen to music all the time, and when I'm not actively listening, I've probably got a song in my head. Musicality is, to me, a deep universal.

My default assumption is that people can sing, if not well, and could be taught to sing well if they wanted. This, despite growing up with several close friends who absolutely couldn't carry half a tune if their life depended on it. Funny how stubborn parts of the mind can be.

Expand full comment
Eremolalos's avatar

Seeing the "cheat factor" in my dreams. While a summarized version of one of my dreams sounds like a story or a slice of real life (albeit a slice with some odd elements), I'm always aware as I remember dreams of their chaotic deep structure. It's as though in the dream state a lot of random stuff is pouring through, and my mind is making a story of it on the fly as best it can. Say there's a dream that can be summarized like this: I'm in a store with Joe, looking for something, and he tells me to go look on the back shelf for the mosquitos. So this sounds like a real event, right?, except for the oddity of the store having a mosquito shelf. But in the actual dream Joe didn't look like Joe all the time, he looked like Bonnie but I somehow knew he was Joe. And at one point Joe was the cashier. And actually the part where we were looking for something was not exactly in the store but outdoors somewhere -- no, maybe some of it was inside the store. And Joe didn't actually say "go look on the back shelf for the mosquitos," in fact I don't remember his saying any actual words I just knew he wanted me to go in the back to find the mosquitos . . .

I thought everybody's dreams were, like mine, nonsense on which my mind imposed a layer of sense, but lots of people have told me that in their dreams people look like their real selves and speak in actual sentences, and the dreams take place in actual settings. Still don't know whether my dreams really are more of a mess than other people's or whether I'm just observing them more closely. Would be curious to know about others' experiences here.

Expand full comment
David Piepgrass's avatar

I'm pretty sure my dreams swing both ways, making more or less sense at different times. Sometimes Joe looks like Joe, other times he doesn't but I know it's Joe somehow.

Expand full comment
Maybe later's avatar

Having been prescribed an exciting variety of psychoactive medications, I can reliably state that dreams can vary wildly in intricacy-of-plot and self-consistency.

Expand full comment
Bullseye's avatar

Sounds like a regular dream to me. I don't think I've had a dream where people didn't look like themselves, but it nevertheless seems like the sort of thing that would happen in a dream.

Expand full comment
Arbituram's avatar

Fights at school. I've moved social class over my life (although this also appears to be part of a broader social shift) and I just took it as standard that boys/young men got into punch-ups every once in a while. Turns out... that's not quite true, especially above a certain income level/class.

Expand full comment
Andrew Flicker's avatar

Yeah, this is a big one for me. I got in quite a few fights in middle school, and a few "almost-fights" where a punch or kick was attempted, etc. in high school- and any time I talk about that with my upper-middle-class peers, there's some degree of shock.

Expand full comment
Erusian's avatar

I've survived like more than a dozen natural disasters/mass casualty events/etc. Whenever the next one hits I'm reminded that most people have never even been in one. I guess part of me just thinks it's normal to have at least one devastating storm or earthquake or whatever in your life. I'm also often surprised at how many people just haven't had much experience of death. I attended my first funeral when I was only a few years old. I recently comforted someone who'd never been to one before despite being in their thirties.

Fun fact: If you survive the first day and the power's out Ben & Jerry's has a standing policy of giving out free ice cream before it melts. Don't worry about the calories either. Food's usually about to get scarce and unappetizing.

Expand full comment
covethistorical's avatar

Interesting! Do you find people act very selfishly in such a mass event or do they actually help each other more than in normal circumstances? And do you take any particular precautions that you see people without the experience miss?

Expand full comment
Erusian's avatar

What's your baseline for how selfishly people act? People look after themselves and their families first (or just themselves for some people), other people they care about second, and strangers third. Most people are as helpful as they believe their circumstances allow. I think that's normal. The difference is that everyone's capacity to help goes down due to resource scarcity. If there's not enough food then people will step over the starving in the street. If there is they'll give them food.

What does happen is people reorganize around immediate needs in a sort of primitive social/barter economy. Lots of favor trading and doing work for social capital and the like. But I find people tend to overestimate how altruistic such an economy is. Yes, sometimes someone helps you just because they have the free time. But even that really only goes to people who might someday return the favor. People with nothing to offer can get shut out completely and left to survive on their own. In the worst case that can even be the majority.

Events are heterogeneous so it's hard to write a general guide. I could probably answer specific questions. But not something so vague as "what should I do to take precautions for an unspecified disaster in an unspecified place?"

I guess I could say: if you keep a basic strainer, a pot that can go over a fire, and some honey and vinegar, then you've got non-perishable ingredients that can purify freshwater. Strain the water to remove anything big, boil it for fifteen minutes (as in, a full boil), and add one tablespoon of vinegar and two of honey for every bottle (~12 ounces; it should be mostly water). You can drink it yourself or sell it. You can also add tea or whatever if you have it. It's also not that hard to get more honey/vinegar. They're not perishable or high-value items.

If you're picking a water source then pick one that has life in it, algae especially. If the water is pure looking then something is killing whatever would otherwise live in it. Oh, and you can smell salt content. This process won't remove salt. If you don't have or can't find a freshwater source and water's not coming in then leave.

Of course, most of the time aid organizations will dump water bottles in the area. And plumbing tends to be pretty resistant to most disasters.

Expand full comment
PeterM's avatar

Why do you add the honey and vinegar? What does that do?

Expand full comment
Erusian's avatar

Mild astringents that can help kill some things and keep the water good longer. Or at least that's what I was told. More importantly from my point of view, it helps with the taste to cover up the flatness of boiled water and anything else that might be flavoring it in a way you don't want.

Expand full comment
Michael Kelly's avatar

How is it you've experienced such a disaster-prone life?

Expand full comment
Erusian's avatar

I don't know. I think I must have pissed off some deity in a past life. Seriously, I once went on my first vacation in like a decade and the place got hit by a Category 5 Hurricane while I was there. It's like someone's deliberately screwing with me.

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Andrew Flicker's avatar

I can't find the Watership Down line, but I love the book. If you find it, let me know.

Expand full comment
Deiseach's avatar

"He disappears. Wore black clothing, black bike, no lights, no reflectors, no helmet."

Oh, gosh. Terrible for him (and for you watching this), but I don't know why people won't have lights on their bikes, or at the least reflectors. Visibility in low light when cycling/walking is not at all the same thing as visibility for someone in a car, who will *not* see you in the dark, even if it's only "the sun is just going down" twilight, until they are on top of you. Just because *you* can see them does *not* mean they can see you.

Add in dressing all in black and that's just a recipe for disaster.

Expand full comment
Schweinepriester's avatar

Except you ride ninja mode. No one sees me, fine. Just have to keep moving and watch out. Stopping at red lights is a bad option, of course.

Expand full comment
User's avatar
Comment deleted
May 8, 2022 (edited)
Expand full comment
Doctor Hammer's avatar

I had a similar experience with regard to the absence of road signs. I grew up in the middle of nowhere, where roads were unmarked save for the occasional state route, so I was used to giving directions based on landmarks and rough estimates of distance. “Drive through the woods, then just past the third red barn on the right, turn left. About a mile past that there is a field with a big rock and a long-haired cow. Turn right there. Don’t worry, the cow will be there.”

That made people nuts when I went to college. It took a while before I could remember to look for the name of the roads I was on, pay attention to intersection combos, etc.

Expand full comment
billymorph's avatar

I found living in London to be fascinating for what it does to your perception of distance and time. The city is large enough to disfavour walking and busy enough to discourage driving, but it's well connected by public transport, so after a while you tend to think in stops and lines rather than physical distance. Living at Mornington Crescent I could get to a pub at Tower Hill in twenty minutes and two trains, but going to the board games cafe in Hackney was up to an hour, even though it is about the same distance as the crow flies.

Expand full comment
Valentin's avatar

There are already good answers on the 3D vs 2D question. I'd add: the UI part is the wrong framing. Your 2D app lags not because it's hard to render, but because it's waiting for the data.

Also, the 3D case is described incorrectly. The computer doesn't compute everything from scratch for every frame: most of it (like 99.9%) is cached, and for each new frame it mostly computes what has changed since the previous one.

Expand full comment
Ben Labowstin's avatar

It's worth noting that when a 2D desktop application is just sitting there idle, it's still being drawn 60/144 times per second, so it's evidently not the 'draw' that is causing difficulties. You can occasionally catch programs that do have some weird difficulty with the draw, as they will hang when you try and drag the window around, for example. As others have pointed out, it will be the logic behind what it is deciding to draw.

Even for something as simple as a menu, the program might decide to check what items should be in it, look up your settings, and pull those from disk or via some other slow method. Even then, we're probably just talking about a hang on the first load, which you also see with games (the initial loading screen).

Some of these 100ms hangs might be fixable by having your application do a 60-second initial load, like a game might do, but I doubt that's a desirable behaviour.

Expand full comment
MartinW's avatar

Part of the story is that a video game is installed locally, it runs directly on your CPU, and it has full access to all the capabilities of your video card. Whereas most boring 2D apps nowadays are just websites, so they are written in Javascript which gets translated on-the-fly to run on your actual CPU. It’s like speaking in your native language versus going through a translator.

That’s not the whole story, however. Modern computers are fast enough that even in Javascript you can write an app which responds to any user action within tens of milliseconds, not hundreds or thousands. You just need to spend some effort on it. And the people who like to obsess about shaving the last microsecond off a piece of code, tend to go into video game development (or high-speed trading) rather than web development.

But even then, if you look at Substack’s Archives page (which shows the first 10 entries and then waits for you to scroll down before fetching the next 10 from the server) versus the Archives page of the old SSC blog (which simply sends you the entire archive so that you can scroll through all entries immediately), that’s not a matter of "not spending enough effort on optimizing it".

The reason the Wordpress site that SSC is built on is so much faster is not that the Wordpress people knew some incredibly clever, advanced tricks to optimize it. The Substack page is much more complicated and, in a way, more “advanced”. The Wordpress page is faster and more responsive exactly *because* it is simpler: it just sends all the data to your browser and leaves it up to the browser to handle scrolling, searching etc. And your browser runs natively on your CPU, and also probably had a lot more effort (and possibly also more competence) spent on optimizing it.

(Edit: which is also how 3D games do it. Most of the credit for rendering those millions of triangles 60 times per second, goes to the video card hardware. The game’s job is mostly to *not get in its way* and to make sure that the data for the next room is ready to be loaded into the video card’s memory by the time the player enters that room.)
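
A rough sketch of the two strategies being compared, assuming a hypothetical archive API. The first mirrors the old-SSC/Wordpress approach (one payload, then the browser's own scrolling and Ctrl-F do the work); the second mirrors the fetch-10-on-scroll approach, which pays a network round trip every time the reader nears the bottom:

```typescript
type Post = { title: string; url: string };

// Strategy A: fetch everything once, render it all, let the browser handle the rest.
async function renderFullArchive(list: HTMLUListElement): Promise<void> {
  const posts: Post[] = await (await fetch("/archive/all.json")).json(); // hypothetical endpoint
  list.innerHTML = posts
    .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
    .join("");
}

// Strategy B: fetch 10 at a time and go back to the server as the user scrolls.
async function renderPagedArchive(list: HTMLUListElement): Promise<void> {
  let offset = 0;
  const loadMore = async (): Promise<void> => {
    const page: Post[] = await (
      await fetch(`/archive?offset=${offset}&limit=10`) // hypothetical endpoint
    ).json();
    offset += page.length;
    list.insertAdjacentHTML(
      "beforeend",
      page.map((p) => `<li><a href="${p.url}">${p.title}</a></li>`).join("")
    );
  };
  await loadMore();
  window.addEventListener("scroll", () => {
    const nearBottom =
      window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
    if (nearBottom) void loadMore(); // a real version would debounce and guard against overlap
  });
}
```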

So then the remaining question is, *why* do website developers nowadays so often insist on doing things the hard way, when the result is so obviously inferior? That is more a psychological question than a technical one, so Scott this is actually your area of expertise..

Expand full comment
Kenny Easwaran's avatar

My impression is that Substack thinks of its content like a social media feed, where it’s potentially infinite, but ephemeral. So it only ever sends you what you are looking at now, and expects things to change while you are browsing.

Expand full comment
Ashwin Narayan's avatar

As someone who works with embedded systems that rely on updates happening every 2ms without fail, building a responsive, fast UI is absolutely possible using the same techniques (i.e. hard real-time operating systems) but takes far too much effort to make sense for everyday computing. Most computing that interacts with humans is on a "best effort" basis. Any effort put into optimizing UIs beyond "usable" is going into luxury territory. Nice to have but maybe nobody wants to pay so much for it. Although, I do hear Apple runs their UI updates on something close to real-time priority and that's why their UI feels relatively better compared to the competition. So maybe people are willing to pay the premium?

Expand full comment
Resident Contrarian's avatar

There might be very good reasons not to do it, but if other people are like me you might consider making that finalist list public - knowing you didn't get on a list lets you close that tab and free up some RAM.

Expand full comment
Scott Alexander's avatar

Consciousness And The Brain, Making Nature, The Anti-Politics Machine, The Castrato, The Dawn Of Everything (EH), The Future Of Fusion, The Illusion Of Grand Strategy, The Internationalists, The Outlier, The Righteous Mind (BW), The Society Of The Spectacle, Viral

Expand full comment
duck_master's avatar

Without the context of the comment above it, your comment reads like a bad GPT-n generation that got cut off before it could become even worse.

Expand full comment
duck_master's avatar

Actually I just realized that this sort-of applies to my comment too. *And* this one too. So meta

Expand full comment
AV's avatar

For some GPT-n, it applies to all comments!

Expand full comment
Resident Contrarian's avatar

Thank you! I appreciate it!

Expand full comment
Oleg Eterevsky's avatar

I believe the main reason games apparently work faster than other software is that it's a requirement for them. If something can't be rendered at 30 fps, it's either optimized or removed. As a result you end up with whatever you _can_ do in 10-40 ms. In most other types of software, on the other hand, there is no significant difference between the user waiting half a second and a full second for a relatively rare operation.

On a more technical level the things that games do 60 times a second usually only rely on in-memory data. Most real-world operations on the other hand involve disk operations, network requests, RPC to other services etc. It is not impossible to optimize, but it's not easy. I'm working on a service that for every request calls ~40 backends (secondary services), runs several machine learning models and a metric shitton of business logic. It usually finishes within 30 ms.
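
For what it's worth, here is a sketch of that per-frame discipline as a browser game loop (the world shape is made up): everything inside the frame callback is pure in-memory work that has to fit the roughly 16-33 ms budget, and anything slow happens ahead of time or off the critical path.

```typescript
// A toy game loop: per-frame code touches only in-memory state.
interface World {
  positions: Float32Array;
  velocities: Float32Array;
}

function step(world: World, dtSeconds: number): void {
  // No disk, no network, no RPC inside the frame, just arithmetic on RAM.
  for (let i = 0; i < world.positions.length; i++) {
    world.positions[i] += world.velocities[i] * dtSeconds;
  }
}

function runLoop(world: World): void {
  let last = performance.now();
  const frame = (now: number): void => {
    const dt = (now - last) / 1000;
    last = now;
    step(world, dt);
    // Drawing would go here; loading the next level's assets or talking to a
    // server is done in advance or on a worker, never inside this callback.
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
```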

Expand full comment
Pycea's avatar

Based on the recent discussion about the blog layout, I'm going to plug a browser extension I created to help make the site better. It's called ACX Tweaks and it works on Firefox and Chrome-like browsers. Find it in the extension store or at https://github.com/Pycea/ACX-tweaks.

A few of its features:

- Can restore the old SSC theme

- Can highlight new comments

- Can add back the comment like button

- Adds keyboard shortcuts for comment navigation

- Fixes the stupid header that appears whenever you scroll up

Hopefully this makes the site more palatable for some of you, and let me know if there are any other issues with it that you think can be addressed.

Expand full comment
Dino's avatar

Can you make it so that clicking on the vertical line to collapse a thread works?

Expand full comment
Pycea's avatar

That should already work even without the extension. Can you tell me what version/browser/os you're using?

Expand full comment
Dino's avatar

It works without the extension, it doesn't work with the extension. MacOS, Firefox, don't know which version of the extension. Maybe I should re-load the latest?

Expand full comment
Pycea's avatar

It works for me. Can you make sure you're updated to the latest version?

Expand full comment
Dino's avatar

I just got the latest version. Things look different - there's a green box around each post, and a new "parent" thing. When I click on the vertical line, the vertical line (and nothing else) disappears.

Clicking on the "Parent" thing does nothing. What is its purpose? Every post either has that or "Top", and clicking on "Top" scrolls to the top of the window.

Also now not seeing an edit button for my posts.

Expand full comment
demost_'s avatar

Thanks a lot! This is a great service to the community!

Expand full comment
Paul's avatar

I'd suggest adding a screenshot in your README.md file

Expand full comment
Neil Scott's avatar

UX designers spend a lot of their time doing user testing or studying usage via analytics. User testing questions are things like: “show me how you would share this with a friend?” They aim for frictionlessness, obviousness, and simplicity. Analytics will show how much time people spend reading, how many shares and subscribers an article gets. Numbers must go up! Neither of these ways of designing would capture the sentiments described by your longtime readers who are nostalgic for the old, clunky site.

(Though, personally, I am nostalgic for LiveJournal so ...)

Expand full comment
Nancy Lebovitz's avatar

LiveJournal still exists, and so does a clone, DreamWidth.org.

I'm nostalgic for trn, which I believe is the best format for long discussions.

Expand full comment
Kenny Easwaran's avatar

The old site is less clunky than the new site. On the old site it was easy to find an article from a specific day and share it with a friend and browse to another article and link to a specific comment.

Substack is good at many other things (probably including things like moderation, that readers don’t see) but treats content as more ephemeral rather than something you want permalinks to.

Expand full comment
David Piepgrass's avatar

Note that the date on a substack comment is its permalink.

Expand full comment
Neil Scott's avatar

Yeah, it is a shame not to have date/tag archives, but the search/archive is ok, no? https://astralcodexten.substack.com/archive

Expand full comment
Marian Kechlibar's avatar

Starting a program up will always incur some delay due to disk access. Programs often need to load a lot of libraries from the disk into RAM. (Note that SSDs are visibly better in this regard, program startup is so much faster than with classical HDDs.) Video games are no exception, they do not start instantly either.

On the other hand, displaying a command menu should not take long. If it does, either the framework is shitty, or the software is doing something over the network and tries to refresh some menu-related data from a remote server.

Expand full comment
Pete's avatar

The disk access delay is mostly determined by the quantity of disk access required, so it's not really caused by disk access per se but by the developers' design choice of how much the program needs to load: if a relatively simple program needs to load a bajillion things, including a browser with its Javascript engine and the kitchen sink, that design choice is what causes the startup delay.

30 years ago an MS Office install was 40 megabytes (or less). Current MS Office with comparable functionality (back then it included MS Access, but now there's a bunch of other add-ons) takes something like 1200 megabytes, so 30 times more. It's likely that an app 30 times smaller would load from an SSD roughly 30 times faster.

Expand full comment
MartinW's avatar

Many frameworks are shitty. But also, many devs aren’t proficient with the framework they’re currently using. Because there’s a cool new framework out every week, and everybody wants to use the latest hot thing, so people rarely stay on one framework long enough to become truly familiar with it.

E.g. let’s say you want to display a list with 2000 items in it. So you create a List object, and you call the addItem method on it 2000 times. It works, but it’s slow and glitchy, because after every item it does a full redraw of the screen, and you can add items faster than it can redraw them. Probably there’s a better way to do it - maybe you can suppress the intermediate redraws until you’re finished adding items, or maybe there’s a way to add your whole set of items to the List object in one call instead of individually.
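
A minimal sketch of that difference using the plain DOM (the List/addItem API above is hypothetical, so a plain list element stands in for it): the first version touches the live document 2000 times, while the second builds everything off-screen and adds it in a single operation, which is the "suppress the intermediate redraws" idea.

```typescript
const items = Array.from({ length: 2000 }, (_, i) => `Item ${i}`);
const list = document.getElementById("the-list") as HTMLUListElement; // hypothetical element

// Naive: every append mutates the live DOM, which can trigger extra style/layout work.
function addOneAtATime(): void {
  for (const text of items) {
    const li = document.createElement("li");
    li.textContent = text;
    list.appendChild(li);
  }
}

// Batched: build the items off-screen, then insert them all at once.
function addInOneBatch(): void {
  const fragment = document.createDocumentFragment();
  for (const text of items) {
    const li = document.createElement("li");
    li.textContent = text;
    fragment.appendChild(li);
  }
  list.appendChild(fragment);
}
```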

So you go to your project lead and say "look, it works, but it’s slow and glitchy - give me another 30 minutes and I can probably figure out how to make it smooth." And they say, nah, this is good enough for the first release, we have more important tasks left to do, let’s put “make it smoother” on the backlog so we’ll pick it up again later. And on most projects “the backlog” may as well be called the Graveyard Of Eternal Slumber, so…

Expand full comment
Marian Kechlibar's avatar

True. Also, profiling your code for obvious bottlenecks has gone out of fashion, so many people do not even know where the bottleneck is.

Expand full comment
David Manheim's avatar

"the framework is shitty" <-- This. Even when it is because of network latency or remote data, or whatever, if UI / software responsiveness was treated as a key requirement, these could be fixed. You'll notice that game servers also lag, but they do an amazing job of pre-fetching and similar to minimize perceived non-responsiveness.

Expand full comment
nobody's avatar

This has been on my mind for a while, and I'm convinced I've got a stringent and nonobvious theory. But over the course of the last few days I got increasingly convinced that something in there is off, and maybe you can tell me ;)

I would really like to not waste any more energy on something really wrong

Some people play weird status games in their head. They desire to 'win' them. And also, whatever the desire is about isn't the point; the point is to win the status game. So the goal is not the 'real' desire. It is extrinsic motivation that makes us pursue this desire. This kind of desire wouldn't exist without other people (or the imagination thereof in whatever form == the Other). I would like to call this social desire.

Some people (if I had to bet, most people) do sometimes put up effort for things that don't let them win any status games. They simply want whatever the desire is about.

Please note that this includes all desires that aren't social desires. I would like to claim that those come from within your mind and your brain (as the two are pretty intertwined).

Let's take a look (no particular order)

* hunger

* thirst

* libido

* desire to avoid pain

* drug addiction

* aesthetics

Please note, none of these, in their most basic form, happen for status reasons.

Obviously, there are evolutionary reasons for most of these desires. But that is not why you desire this in a certain moment. We all desire these because *our body/brain/mind demands it*. (this distinction is very important!)

As for aesthetics, I'd like to claim that it is the result of some weird feedback loop in the brain, and therefore, at the very moment of experiencing something as beautiful or not, coming from within the brain.

And because these desires are independent of other people, they are intrinsic desires. Real desires. I'd like to call these personal desires.

Most desires are partly personal, partly social. Like eating 'good food' when you are slightly hungry. Or you want to learn something about butterflies because their beauty fascinates you, and now you can't stop, because otherwise you'd be a loser who failed.

A well-functioning society supports desires that are "helpful" (for the society/the Other), thus levelling up your status and giving you resources. This strengthens the community.

Therefore you would be a loser if you failed. This means that the more your goals align with society's, the harder it becomes to have a purely personal desire. In fact, nobody has a purer personal desire than the homeless person wanting heroin. (No attached status, etc.)

So, what to take from here?

What is healthy?

(This is my opinion, feel free to argue)

People need personal desires, that's what makes us individuals.

We don't exactly *need* social desires (that's why they are looked down upon) but they are healthy both on an individual level (society provides resources for playing the game) and on a societal level (the rules make us play together rather than against each other).

I don't know what the healthy ratio is.

Expand full comment
George H.'s avatar

Hmm, I was listening to Lex Fridman and David Buss, and from that conversation I would say that high status is partially about positioning yourself in the mating hierarchy... access to 'better' mating partners. And that it is mostly men who seek status, because high-status men are something women desire. (After all, since Eden, it has all been women's fault... :^) (that's meant to be funny.)

Expand full comment
vriendothermic's avatar

I think this pretty much makes sense, with a few caveats.

1) I'm not sure if this really contradicts anything you've said: there are good reasons to believe that the desire for status (i.e., a safe and meaningful position within your social matrix), which is what supplies "social desires" with so much motivational weight, is itself an "intrinsic" desire, in that it is intrinsic to being human. It is more like an *organ* than a *parasite*. We are social animals, after all.

2) You seem to assume that social desires, in comparison to personal desires, are superfluous at best and pathological at worst. I don't think there is good reason to believe either thing. Humans are a cooperative species just like honeybees or wolves are cooperative species. Social desires are a part of what it means to be a healthy member of that species. As for the supposed virtue of personal desire, doesn't this break down pretty easily once you run up against heroin addiction? If "being an individual" is best expressed by heroin addiction, then it can't matter all that much. I think you should replace your emphasis on individuality with an emphasis on *autonomy*, meaning a state where your motivations and actions align with the roles and values you identify with. Both personal and social desires can play a part in expressing/cultivating your autonomy. And, according to Rawls and others, self-respect (which may or may not be identical to status) is a precondition for autonomy for philosophical reasons I don't want to go into unless someone asks, so desiring self-respect/status is perfectly healthy and rational *as long as you don't go overboard*.

Expand full comment
Boinu's avatar

Is this inspired by Scott's 'Sadly, Porn' review and/or the stab at Lacan?

I've just gotten around to reading the former book, and so far its main thesis is that real desires are impossible to act on, and most of us require surrogate reasons and external permission structures to get anything significant done - and usually end up with surrogate, sanctioned desires anyway.

Wirehead/heroin/etc. nullify all this, but then they nullify individuality, too.

Expand full comment
Xpym's avatar

> Most desires are partially personal, partly social.

Yep, and I don't see why this wouldn't include aesthetics too. People's tastes are plenty influenced by society.

>So, what to take from here?

Have you read this old post of Scott's? It seems to have much in common with your framework here. https://slatestarcodex.com/2013/03/04/a-thrivesurvive-theory-of-the-political-spectrum/

Expand full comment
Xpym's avatar

Programmers are as lazy as they can get away with, no real mystery there.

Expand full comment
Nowis's avatar

There is this old adage that software speed halves every 18 months, exactly compensating for Moore's Law.

https://en.wikipedia.org/wiki/Wirth%27s_law

Quoting qntm https://twitter.com/qntm/status/1502014304043864069

"

There are exactly two forces at work in software performance:

1. Developers' ability to consume unlimited amounts of processing power (this is unbounded)

2. What users will put up with (this is a fixed constant)

That's all.

These forces reach a point of natural equilibrium where developers put in the bare minimum amount of performance work so that users don't give up and use something else instead.

Note that the location of this point of equilibrium has nothing to do with the absolute performance characteristics of the hardware or the engine or whatever. We can saturate anything. We don't even have to think about it.

(This is a cynical, hyperbolic thread, there's a lot more to it than this)

"

Expand full comment
Michael Kelly's avatar

Absolutely, and the best programmers are the laziest.

In business, what's the goal? Better-Faster-Cheaper. Deliver a better product to market faster for less money. This means don't do shit that doesn't deliver to those goals. Does your TPS Report deliver to those goals? Then don't do it. When I worked for INTC, we had a motto "do more by doing less." Do more of the shit that matters and less of the shit that doesn't matter.

Do you need to count some shit on paper every day (to make sure it got done)? Make the computer do it at 5AM and send you an email report where you can read a line or two and say "yup, done," or "nope, redo."

Yup, I'd spend half a day automating a 5-minute job down to a 1-minute job, but two things happened. One, after 60 days, my automation had paid for itself in efficiency, meaning I was now more productive than before. Two, I became that much smarter and could now automate other tasks more quickly. After a couple of months, my days evolved from hustle to days with my feet up on my desk, reading and studying how to automate things better.

My boss came to me and said "Michael, I see you with your feet on your desk all day reading google"—it was 1999, that was a thing. I said "everything is getting done." He said "Dave struggled to get all this done every day." I said, "Dave did everything manually, I've automated everything. Now I spend my day learning more things."
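
For the curious, a minimal sketch of that kind of 5AM count-and-email job in Python; count_widgets, the addresses, and the local mail relay are all hypothetical placeholders, and the 5AM part would live in cron or Task Scheduler rather than in the script itself.

```python
# Hypothetical daily check: count something, mail a one-line verdict.
# Run it from cron, e.g.:  0 5 * * *  /usr/bin/python3 daily_check.py
import smtplib
from email.message import EmailMessage

EXPECTED = 100  # whatever "done" means for this job (made-up threshold)

def count_widgets():
    # Stand-in for the real count (database query, file scan, etc.)
    return 100

def main():
    n = count_widgets()
    verdict = "yup, done" if n >= EXPECTED else f"nope, redo ({n}/{EXPECTED})"
    msg = EmailMessage()
    msg["Subject"] = f"Daily count: {verdict}"
    msg["From"] = "robot@example.com"
    msg["To"] = "me@example.com"
    msg.set_content(f"Counted {n} items this morning. Verdict: {verdict}")
    with smtplib.SMTP("localhost") as s:  # assumes a local mail relay
        s.send_message(msg)

if __name__ == "__main__":
    main()
```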

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Programmers aren’t working from their bedrooms these days. It’s the companies that release the product who are responsible. In fact, given that they’re not using native apps, it’s who the company hires that matters.

Expand full comment
Xpym's avatar

Sure. I guess, to generalize, you might say that ensuring it works fast wasn't a priority for the developer.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

What I’m saying is that the developer, unless that’s a synecdoche for the entire company, isn’t the primary responsible party here.

Expand full comment
Acymetric's avatar

Yes, usually when someone has a hard-hitting complaint about "the devs" the people they actually have a grievance with are the people the devs work for.

Expand full comment
Ch Hi's avatar

While this isn't wrong, you also need to consider speed of development. E.g. it's a lot faster to develop in Python than in C, and I know both languages. If I need to develop something quickly, I'll choose Python. If I need to develop it to be quick, I'll choose C (well, actually C++, because of libraries and because private/public variables are *really* useful).
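
To make the tradeoff concrete: a quick-to-write (definitely not quick-to-run) Python version of a typical small task, word-frequency counting, fits in a dozen lines, which is roughly why development speed pulls me toward it. Just an illustrative sketch, nothing more.

```python
# Quick-and-dirty word frequency count: fast to write, not fast to run.
from collections import Counter
import sys

def word_counts(path):
    with open(path, encoding="utf-8") as f:
        return Counter(f.read().lower().split())

if __name__ == "__main__":
    for word, n in word_counts(sys.argv[1]).most_common(10):
        print(n, word)
```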

Expand full comment
The Solar Princess's avatar

Supplementing with Citicoline cleared up my brain so much within a week that if I had been around when Scott did his survey on nootropics, I would have rated it as "10 -- life changing". My cognition, mood, memory, reading comprehension, creativity -- all skyrocketed so high I wonder how I ever got things done before that.

Reading studies and reports, I don't see anyone reacting to Citicoline that well. Nobody is taking it and becoming superhuman. What's going on?

My first thought is that normal people feel like that *all the time*, and I'm just deficient in something or am otherwise "damaged" in a way that Citicoline can fix, but it wouldn't further buff anyone who's not "damaged".

Could this be true? Or is it something else?

Expand full comment
Scott Alexander's avatar

Yeah, you'll see that choline is the fifth lowest-ranked nootropic in https://slatestarcodex.com/2016/03/01/2016-nootropics-survey-results/ .

I think it's plausible that you were doing abnormally poorly because of some choline deficiency - but also that on average most people have a couple of similar problems in some system or other, so maybe you weren't any worse off than anyone else.

I also think it's possible choline has made you hypomanic. This isn't something I've ever heard of before with choline in particular, but when someone says a nootropic had amazing unbelievable effects for them, it's usually about 50-50 they're hypomanic. Please come back in six months and let us know how you're feeling then.

Expand full comment
Phil Getts's avatar

Scott, what are your thoughts on triggering hypomania deliberately and temporarily, for some particular task?

Expand full comment
The Solar Princess's avatar

I will try to keep an eye on this and I will get back in touch when I have more anecdata, but reading articles on hypomania, no, this does not look like hypomania. I don't feel any more energized or wired than before. I'm not irritable and my thoughts are not racing. Quite the reverse, I feel Zen and in control. But I guess that's exactly what a hypomanic in denial would say, so whatever.

If most people are so deficient in things that a simple supplement would increase their performance so much, why don't we investigate this harder? We would all look very stupid if the reason we're not all super geniuses is that we don't have enough iodine in our diet or something. I'm recalling your recent update on the biodeterminist guide to parenting, where most of it focused not on what to do to make your child smarter, but on what not to do, so as not to make your child stupider. "Do not drink vodka when pregnant" is not a nootropic, but it's pretty effective. No amount of things that end in -racetam will help as much as not having fetal alcohol syndrome.

What other things are there that could impair one's mental functioning? Fixing something that is broken is much more effective than improving something that is already functional. So why don't we start fixing things?

Expand full comment
nifty775's avatar

Re: your last paragraph- as I've said here before, taking magnesium supplements low-key changed my life. It resulted in a permanent 20-30% drop in anxiety, plus it also had the unexpected effect of sharpening my long distance vision. I've been taking magnesium for about a year now and there hasn't been any tolerance or drop-off in effect- it's just a permanent improvement to my quality of life.

I only know about magnesium supplementation because there was a Hacker News post on how modern agricultural practices leach the magnesium out of foods (which I would've ignored, sounds very Joe Rogan-esque). But then the comment section had a dozen-plus people saying 'I started taking magnesium and it changed my life'. That's been my experience too....

Expand full comment
nelson's avatar

What do you take? How much?

Expand full comment
nifty775's avatar

I take Magnesium L-Threonate supplements from Amazon. I just take whatever the recommended dosage is

Expand full comment
Neike Taika-Tessaro's avatar

"If most people are so deficient in things that a simple supplement would increase their performance so much, why don't we investigate this harder?"

I've asked myself this same question ever since supplementing vitamin B12 stopped me from sliding into complete abject misery and reversed my course. (I was deficient, but my doctor didn't manage to diagnose it, because my folic acid was fine and that masked the symptoms, and I wasn't vegan and shouldn't have had the problem in the first place. Took a neurologist suggesting I should get a blood check to find. Wish I'd noticed sooner; I took some permanent damage from this mess.)

It basically instantly switched me from "person experiencing constant struggle, like many of my friends describe having to deal with" to "high-functioning, happy and naturally optimistic person".

Ever since then I wonder how many of the struggling people (who also constantly ask themselves "but I don't really have any objective major problems in life at all, I'm in a good place with good social support, so why am I struggling so hard?") have deficiencies they don't know about, assume the answer is 'not all, but probably a lot', and wonder why this isn't, at least on some level, something we're trying to solve.

Naively, to me, it seems like there'd be various low-hanging fruits for general well-being and productivity to pick here, but maybe I just can't appreciate how hard it would be to test everyone for even the full gamut of just the most common deficiencies (vitamins and minerals).

Expand full comment
Evesh U. Dumbledork's avatar

I got a blood test result 10 days ago which tested for B12 for the first time. It came low. I started supplementing and had a much better week than usual wrt general stress and productivity. My toilet bullet chess rating went up >150 points. I think it was chance / placebo / natural variability (like, this fast an effect?), but this comment makes me hopeful the changes are here to stay. (also, fine folic acid levels, constant struggle for no good reason)

Expand full comment
Neike Taika-Tessaro's avatar

The effect I observed from supplementation was drastically fast and extreme (where I had expected close to none - I had mainly expected it would just stop things from getting worse), probably because I'd been so critically low.

My symptoms were that I was sensitive to everything - light was unbearable, sound was unbearable, I couldn't hear myself think while someone else was talking, I had memory issues, I had very bad constipation, I could barely sleep because everything and anything would wake me up. After a week the subjective changes were so strong that I was *this* close to swearing the lighting in the bathroom had been changed by my boyfriend, but of course he'd done nothing at all. Sleep quality went from none to 'stable sleep for several hours' (still with interruptions at first, but far fewer of them).

Things kept steeply improving for several weeks, then the improvements slowed more and more, and eventually I hit a plateau, about a year or so later. But it was a really, really good plateau to land on.

If your symptoms were less severe I would expect the effect of supplementing to be less pronounced, as well. But I'm extremely glad you figured this out and you're on the road to recovery! Hope this turns out to be your magic bullet.

Expand full comment
Nancy Lebovitz's avatar

I don't know why this isn't addressed more but I think it's related to the lack of interest in modest intelligence increase.

People who believe in IQ also believe that people with IQ below some limit (90?) are apt to have bad lives, but (I've asked) there's damned little interest in cranking up low IQs by 10 or 20 points. Instead, the interest is in getting people who are already smart to be 50 or 100 points smarter.

It's possible that this is because being smart is fun and presumably being smarter would be more fun. It's also possible that people think the problem of people having low intelligence would be easier to solve if smart people were smarter.

My cynical guess is that there's little interest in the problem because solving it would take dealing with large numbers of low-status people, but I'm guessing.

I also suspect there's a background pessimism, a belief that how people are doing is what they really are.

Expand full comment
Jonathan Ray's avatar

there have been lots of studies and lots of funding aimed at raising lowish IQs, but aside from avoiding iodine deficiency and fetal alcohol syndrome and stuff like that, none of the interventions had a big effect that stuck around long-term. Head Start IQ gains faded out over time.

Turning a few doctors into Elon Musks would have a pretty big impact on the world, while turning a few janitors into doctors would have a much smaller impact. Economic productivity appears to be exponentially related to IQ, so if we had a fixed number of IQ points to allocate we should max out the Intelligence attribute of a few Wizards in our D&D party.

But nobody's found environmental interventions that will make smart people much smarter, either.

Expand full comment
Sleazy E's avatar

IQ has no moral valence. Increasing it is way, way down the list of priorities we should be setting as a society.

Expand full comment
Sleazy E's avatar

Intelligence is massively overrated, and those labeled as having low intelligence are seen as less worthy because our society has a problem: the cult of smart, as Freddie DeBoer titled his excellent book on the subject.

Expand full comment
Michael Kelly's avatar

I think we often put too much into "intelligence." For starters, there are many types of intelligence; I can count 6, Woods (I think) enumerates 7, but I think he had to conjure up one to make a Christian-based, culturally pleasing odd number. But still, what makes you think someone less educated than you has less life satisfaction, is less happy? The guy who goes out and builds a house, I'm sure, has a life satisfaction level which is way off your life satisfaction scale. I'm a scientist now, but in a previous life I had a herd of 66 cows. The life satisfaction in those days was way off the scale compared to today. I often consider going back to that life ... yet it has its downsides.

Expand full comment
Nancy Lebovitz's avatar

It may have been a mistake to write about both IQ and being smart.

People who believe in IQ believe in g, general intelligence.

The person who's bad at school but can build a house isn't the kind of person with low IQ leading to bad outcomes, I think. They seem to be talking about people who have poor impulse control and can't manage ordinary tasks like making change, though I grant that making change is less needed these days.

Expand full comment
Neike Taika-Tessaro's avatar

I don't think this necessarily has much to do with intelligence, I was more than fine on that axis even during my deficiency (although it was certainly continually declining as the deficiency continued undiagnosed and unchecked) -- and as upper middle class, financially self-sufficient and with a stable job and (several) stable relationship(s), I probably didn't register as 'low status' to anyone in particular.

Nonetheless, a definite part of the problem was that people thought it was an inherent personality issue - even the neurologist who eventually told me to get a blood test was *first in the process of dismissing me* with "you need a psychologist, not a neurologist, you have some deep-seated issues", and only then, in an off-hand comment as I was leaving, suggested I get a blood test, which literally saved my life.

(B12 deficiency is brutal. You can absolutely get dementia from a bad B12 deficiency, I was experiencing that progression, and it felt absolutely, mind-bogglingly terrifying. The effect that made me finally see a neurologist was that I far too frequently realised "I just made a decision actual, literal five seconds ago - I remember making the decision, but I don't remember what it was". No, "five seconds" is sadly not an exaggeration. It really was terrifying.)

There was definitely surprise that this fixed up everything, but it seemed the sort of surprise people have when they learn something unexpected about science (biology in this case), not something unexpected about how people function as individual social creatures. No one seemed to be suspicious about how I was suddenly behaving much more upbeat, or expecting me to revert to "how I really was".

Still, it's of course entirely possible that the overall thought was "well, Neike is just emotionally unstable and (increasingly) forgetful, that's how it is" and after that was fixed, the realisation "huh, I guess it was just a vitamin deficiency!" was firmly applied to my case, but didn't make anyone update on their prior about people as a whole generally being how they are. But I think that would at least point to that it's not a very *strong* prior, which in turn should mean there ought to be enough people questioning it out there that this should be a solved problem, if that sort of thinking were indeed the bottleneck.

But also, my case is a sample size of one. ¯\_(ツ)_/¯ e.g. maybe in other cases people do remain suspicious of the change, in which case this hypothetical prior would be more firmly entrenched than my observations would suggest.

Do you maybe have ideas of how we could test your hypothesis?

Expand full comment
Deiseach's avatar

I think the trouble is that the default assumption is "if you're not morbidly obese or anorexic, you have a reasonable diet so you shouldn't be deficient". Most blood tests are basic ones because doctors aren't looking for zebras, so they only test for the basic deficiencies. If that comes back okay, they don't bother searching further unless you are literally spasming on the floor of their office.

Also, the lack of knowledge about deficiencies. People on here have talked about metformin as a miracle drug but I was on it a few years before, quite by accident, I learned that it strips out magnesium:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3760872/

(I had leg cramps at night and looked up what was good for that, magnesium was recommended and it mentioned by-the-bye that if you're on metformin, you're likely to be deficient). My doctor never mentioned that because I'm fairly sure she has no idea about it. I'm taking B-complex supplements for the same reasons - can't hurt so long as I don't overdose on them.

Getting thyroid tests is more trouble than it's worth; the basic test comes back "no, you're fine" even though I suspect I do have some hypothyroidism going on. My sister, who is a normal weight and has no other physical health problems, eventually had it diagnosed because she got referred to a psychiatrist and that finally got specialised blood testing done, and she has hypothyroidism. I just don't have the energy to try and fight my doctor into getting these tests done, so I'm sticking to supplements and reading advice online.

I do think it's entirely possible that people on seemingly adequate diets are deficient in some elements.

Expand full comment
Majuscule's avatar

I second this. It’s easy to hear “look for a physiological explanation” as a brush-off, but seriously, 90% of my anxiety and depression were fixed by things like eating more protein, getting more sunshine, and treating a thyroid condition so mild I never felt major symptoms. From now on I’ll always check what’s going on with my habits and my body when I feel the kinds of feelings that pervaded my mind from childhood until my 20s. A huge chunk of what ailed me was headdesk levels of silly.

Expand full comment
The Solar Princess's avatar

> "but I don't really have any objective major problems in life at all, I'm in a good place with good social support, so why am I struggling so hard?"

Feeling this in my bloody bones. Maybe I should take the Pascalian medicine approach and supplement with *everything*?

Scott, you're in an excellent position to investigate this, and it could be extremely useful! What are some low-hanging fruit we should pick in this regard?

Expand full comment
Neike Taika-Tessaro's avatar

I wouldn't recommend that - you *can* overdose on many micro-nutrients, i.e. experience bad side effects from oversupplementing. (Pure B12 (i.e. NOT the supplements that also give you *other* B vitamins) is actually one of the few things you can effortlessly experiment with without having side effects, and it's over the counter pretty much everywhere, I think, so I do recommend trying that if you want to experiment, because it literally cannot hurt and after a week of supplementing you should know if it's making you feel better.)

But I do in the strongest possible terms recommend getting a blood test for vitamin and mineral deficiencies!

Expand full comment
Phil Getts's avatar

Can you provide a name for a specific general screening test?

Expand full comment
Robert Mushkatblat's avatar

Most "slow" UI experiences in modern software are the result of bottlenecks other than "how long does it take to render these pixels", such as "how long does it take to get this data from the server (which relies on a poorly optimized database query)", or some similar failure mode.

Expand full comment
Bernie's avatar

Expanding on this: most of those bottlenecks are network IO, e.g. you click on a drop-down menu and you have to wait until your browser fetches a list of items from the server. Rendering those items is less than 1% of the total time. The other common bottlenecks that aren't network-related are typically things like filtering, sorting and searching. If you have a big enough data set, there's only so much you can do to optimize the kind of interactivity expected from modern software.

The main difference between rendering graphics and traditional software lies in parallelisation. Turning geometry into an array of pixels is the textbook example of a parallelizable computation: in broad strokes, each pixel can be computed independently of the others, and GPUs are basically designed to do this. Traditional software, on the other hand, is mostly non-parallelizable stuff. Take a big spreadsheet where many cells' values depend on the values of other cells, and updating the data will be pretty slow; the actual rendering time of the menus and whatnot will be a rounding error.
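
A toy Python illustration of that last point (cell names and formulas invented): spreadsheet-style recalculation has to respect the dependency order, so unlike per-pixel rendering, the steps can't all run at once.

```python
# Toy dependency-ordered recalculation: B1 depends on A1, C1 on both.
# Unlike per-pixel rendering, these steps can't all run in parallel.
from graphlib import TopologicalSorter  # Python 3.9+

formulas = {            # cell -> (dependencies, function of their values)
    "A1": ((), lambda: 2),
    "B1": (("A1",), lambda a: a * 10),
    "C1": (("A1", "B1"), lambda a, b: a + b),
}

def recalculate(formulas):
    deps = {cell: set(spec[0]) for cell, spec in formulas.items()}
    values = {}
    for cell in TopologicalSorter(deps).static_order():
        dep_names, fn = formulas[cell]
        values[cell] = fn(*(values[d] for d in dep_names))
    return values

print(recalculate(formulas))   # {'A1': 2, 'B1': 20, 'C1': 22}
```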

Expand full comment
Erica Rall's avatar

Besides disk and network latency, the other major way to waste billions of CPU cycles is carelessness with computational complexity. Pretty often, a naïve algorithm's runtime might scale with the square of the size of the input (or worse), while a smarter algorithm might get the same work done in a way that scales linearly or with N*log(N). This'll work just fine on small data sizes, but as the data gets bigger it will take many, many more CPU cycles.

A program intended from the start to push the limits of modern hardware will force the programmers to be aware of this, because using an algorithm with bad computational complexity will simply be unacceptably slow. But if you're doing something much less inherently demanding, then you'll be able to get away (up to a point) with letting the hardware brute-force its way through a bad algorithm.

Modern hardware makes it quite a bit easier to get away with bad algorithms on small to medium data. Try running an n^2 algorithm on hundreds or thousands of pieces of data on an Apple II or a Commodore 64, and you'll be waiting until the state withers away and true Communism arrives.

Conversely, modern hardware and increasingly mature content ecosystems also facilitate throwing more data at existing algorithms, which can get you into situations where your algorithm is now processing enough data for its bad scaling to become a real problem.
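
A quick, hypothetical Python illustration of the quadratic-vs-linear gap: the same "any duplicates?" question answered two ways. On a few hundred items both look instant; on a few thousand the naive version already visibly chugs.

```python
# Same task, two complexities: the naive version is fine for 100 items
# and painful for thousands.
import time, random

def has_duplicates_quadratic(items):        # O(n^2): compare every pair
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):           # O(n): one pass with a set
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = [random.randrange(10**9) for _ in range(5_000)]
for fn in (has_duplicates_linear, has_duplicates_quadratic):
    t0 = time.perf_counter()
    fn(data)
    print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")
```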

Expand full comment
Retsam's avatar

The other half of this is just that optimizing takes a *lot* of work, and the incentives almost never align to do more optimization than necessary.

Businesses and customers both tend to prioritize new features and bug fixes over improving performance - and even then most projects have a list of known bugs they just don't have bandwidth to fix.

Expand full comment
Michael Kelly's avatar

Computer slowness is typically anti-virus software scanning the programs after they're loaded into memory, but before they're executed.

Heavy duty graphics processing happens in the GPU on the graphics card.

Expand full comment
apxhard's avatar

Came here to say this.

Expand full comment
Pycea's avatar

The other thing is that most everything on a webpage is done in javascript, which means it has to go through about 14 layers of abstraction before anything happens. Games are usually compiled so they don't have to deal with any of that.

Expand full comment
Anti-Homo-Genius's avatar

Javascript is not actually that slow. It's indeed very dynamic (a death sentence for performance in a programming language in general) and interpreted, but the JS engine in mainstream browsers is a marvel of engineering and represents person-centuries of work by some of the most capable language implementers. People have been routinely doing cryptography and graphics (very math-heavy shit) in vanilla JS since 2015, and it's pretty good. To reiterate, JS is very slow next to C++, Rust or even Java, but not "Substack-UI" slow.

No modern programming language on the planet can be blamed for the atrocity of a slow UI; this is entirely the fault of the programmer(s). Even "too many libraries and frameworks" is the fault of the programmer who decided the architecture (or lack thereof) of the application. A modern computer being slow is like a country sitting on oil fields from border to border and still having a starving population: you know there is some abnormally obscene shit going on behind the scenes.

Expand full comment
Erica Rall's avatar

It's not quite as bad as that, at least in terms of the inherent perf hit of using JS. In general, browser JS is about 50% slower than native code running the same logic, and NodeJS or V8 is about 20% slower. Modern JS engines are heavily optimized to minimize language overhead, and Node and V8 do quite a bit of Just-In-Time compilation.

The bigger problem is that JS programming patterns tend to be much less perf-friendly than native coding. For one thing, the language is much more opaque about what's happening under the hood than something like C++, so it's less apparent to a programmer when they're doing something expensive.

For another, JS programmers tend to make heavy use of third-party modules (this being strongly supported by JS toolchains), many of which are built atop multiple layers of other third-party modules, and often aren't particularly optimized or well-documented in terms of perf characteristics (and it now occurs to me that this might be most of the "14 layers of abstraction" you were referring to). Native coders tend to use mostly built-in language and platform libraries and perhaps a handful of well-chosen middleware libraries, all of which are generally designed with performance in mind and well-documented in terms of computational complexity.

Expand full comment
Ferien's avatar

Where are you getting these 20% figures from? Yes, the job done by the developers of JS engines is amazing, but these small overheads hold only for some specific tasks (small, synthetic benchmarks which do nothing useful except report cycles per second), and even then such cases are cherry-picked.

JS is a dynamically typed, garbage-collected language, and those costs cannot be optimized away by a JIT. A JIT, by itself, works well only when you have much better hardware than the program would actually need if it were compiled natively.

Expand full comment
Erica Rall's avatar

I got the numbers from a quick googling. I did a slightly more in-depth googling just now, and will revise my claim to 20% being Google's claimed perf numbers for V8. Perf comparisons depend on workload, of course, and I expect Google did some cherry-picking to get that number, or at a minimum focused their benchmarks on the kinds of workflows that they optimized the engine to handle well.

The worst-case perf hit my quick googling turned up seems to be around 2x to 4x for heavy number-crunching workloads, which is a lot worse than the 20% claim I'd previously uncritically repeated, but nowhere near enough to be the sole or primary reason that modern UIs are often perceptibly slow on hardware that seems like it should be able to handle that sort of thing very easily.

Expand full comment
Lain Steiner's avatar

Let's not forget that "thanks" to Electron, Javascript is becoming omnipresent. Windows 11's UI is at least partly Javascript, meaning that your OS is becoming a browser.

Expand full comment
Godoth's avatar

Thank heavens that MacOS will never go that way. It’s bad enough having to deal with horrifying Electron apps like Discord.

Expand full comment
Yitz's avatar

What about offline software then? I’m often in places with no internet, and I still get noticeable lag on 2d software (or 2d menus within 3D software!)

Expand full comment
Sylvilagus Rex's avatar

Well there's still time needed to hit the disk, usually. In video games, when you hit a new area it tries to load as much crap into VRAM ahead of time (which is also way faster than system RAM!), or if it's open world, it tries to stream in what you need ahead of time in order to make the application perform better in real time. This is what you want in a game, but if your browser/mp3 player/etc behaved like this, your ability to multitask would go down precipitously and you'd wonder why you chugged horrendously with more than 3 windows open.

Expand full comment
sclmlw's avatar

Most of this lag (that which doesn't come from internet download speeds) comes from simply copying from non-volatile memory to volatile memory. RAM's read/write is several times faster, but it goes away when you cut the power. Even the best SSDs are several times slower than RAM.

The holy grail would be system memory that operated at RAM speed, so you wouldn't need this hybrid system, which incurs the time-wasting step of copying from one type of memory to the other. Instead of the constant auto-save feature, there would be no such thing as 'saving' because you'd never lose anything from a sudden loss of power. Instead of program loading times - time to load instructions into the 'working' RAM - you'd just launch an application in the same state you left off last time. Same with 'boot' times. No need for all that copying of the OS instructions into the RAM from the hard drive, just press the button and you're back in business.

Expand full comment
sclmlw's avatar

There are some candidates for non-volatile memory with RAM-like speeds. I remember when the memristor was first announced and was pretty excited about the prospect of finally realizing this kind of computing. The 'problem' is that volatile RAM speeds keep increasing, so we can't transition from the two-step hybrid memory model down to a single system-memory solution.

Some day, maybe? Maybe not? It's hard to say. We've been working with this kludged-together system for decades now, and nobody seems to care about it beyond a few hundred ms of delay every once in a while, or a few seconds in reboot times. People really care when they lose a document they were editing but hadn't saved, but we've got passable solutions to that as well. So if we do ever get non-volatile RAM/system memory, by the time we get there it might not be that transformative anymore.

Expand full comment
Ricardo Cruz's avatar

VRAM is faster than RAM? Why do only games use VRAM then?

Expand full comment
Erica Rall's avatar

VRAM is faster for the GPU to use than system RAM, for a couple reasons that don't apply for CPU usage.

One is the location of system RAM vs VRAM in the computer architecture. VRAM is located on the GPU card and is connected directly to the GPU's processor. Likewise, the system RAM is on the motherboard and is directly connected to the CPU via a dedicated memory bus. The cross connection between the motherboard and the GPU card, however, has to go through the PCIe bus, which is also pretty snappy in a modern system but which is less optimized for memory workflows than the memory busses and which might bottleneck if there's enough other stuff going on.

The other issue is what we mean by "fast". VRAM is designed to be fast in the ways that matter most for central examples of GPU workloads: massively parallel vector operations on huge chunks of data. And system RAM is designed to be fast for central examples of CPU workloads: between one and a couple dozen independent threads each working on a little bit of data at a time. As a result, VRAM is generally much much faster in terms of parallel bandwidth, while system RAM is faster in terms of clock speed and first-byte latency.

To oversimplify hideously, system RAM is a sports car and VRAM is a freight train. A sports car will get one passenger to an office in the suburbs much faster than the freight train will, but the freight train will get a dozen boxcars worth of cargo to the industrial center of downtown much faster.
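
Some rough, order-of-magnitude numbers (from memory, so treat them as illustrative rather than exact specs) that put the sports-car/freight-train picture in terms of moving a gigabyte:

```python
# Back-of-envelope: time to move 1 GiB at rough, order-of-magnitude bandwidths.
# Figures are approximate and from memory -- treat them as illustrative only.
GIB = 2**30
links = {
    "DDR4 system RAM (dual channel)": 50e9,    # ~50 GB/s
    "GDDR6X VRAM (high-end GPU)":     900e9,   # ~900 GB/s
    "PCIe 4.0 x16 (CPU <-> GPU)":     32e9,    # ~32 GB/s
}
for name, bytes_per_s in links.items():
    print(f"{name:32s} ~{GIB / bytes_per_s * 1000:6.1f} ms per GiB")
# The ordering flips for tiny accesses: system RAM wins on first-byte latency
# (tens of ns), while VRAM is tuned for huge, parallel streams.
```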

Expand full comment
Sylvilagus Rex's avatar

Huh, I did think VRAM was actually clocked faster as that used to be true a decade ago but TIL. Great description, good analogy.

Expand full comment
Erica Rall's avatar

Yeah, an RTX 3090 (near the top of the line for consumer gaming cards) has a designed effective clock rate of 1219 MHz, while system RAM is available that supports up to 6400 MHz (DDR5, which only recently rolled out and is only supported by the newest Intel CPUs and chipsets) or ~5000 MHz for DDR4. 3200 MHz seems to be a price/performance sweet spot for DDR4, especially because much faster than that and you're trading off between clock speed and cycles of latency, and because of other potential bottlenecks in the system.

Expand full comment
Domo Sapiens's avatar

In line with Vim's comment, does this really explain the several orders of magnitude of performance loss? Or are you just another "Software developer [that] will often be quick to come up with excuses as to why it's actually reasonable, that everything is more complex now, and have you thought of X and Y"?

Expand full comment
Sylvilagus Rex's avatar

I don't really work in software related to either of these questions, I do boring infrastructure things. Just attempting to give a 100-yard view as to why. sidereal gave some good specific details. I second what he said, that these latencies and inherent characteristics of the system are not insurmountable things to overcome, but at most companies you're not going to get PM to sign off on spending lots (or any) of development time optimizing things that aren't going to make a noticeable dent in sales if they are left the way they are. As long as it passes whatever mirror-fogging UX testing it goes through, no one is really going to care that it occasionally takes a few 10ths of a second for a dialog or a menu to pop up if the data is not cached in RAM. If customers tomorrow suddenly stopped using software because of these reasons, then people would really care and you might even see big OS and hardware architectural revamps to make these kinds of applications more performant.

Expand full comment
Sylvilagus Rex's avatar

In other words, if you rephrase the question "Why is the software that needs to be fast the most performant software?" the question kind of answers itself. This obviously loops in other kinds of software, as well, though mostly not COTS things

Expand full comment
sidereal-telos's avatar

You might find "Latency numbers every programmer should know" worth reading: https://gist.github.com/hellerbarde/2843375

Modern CPUs are absurdly fast, but *everything else* isn't. If you have a 5 GHz processor, then just reading from main memory, which you probably think of as "free", costs 500 cycles. Reading from your "pretty fast" SSD costs 750,000 cycles. And if you have to read from a hard drive, that's 50,000,000 cycles, which is most of your frame budget right there.

And these are not always avoidable or predictable - for example, most systems will try to evict things from RAM that they think won't be used soon, to make space for things that will be. This actually reduces latency and improves performance on average, but when you do inevitably need that memory you get occasional latency spikes while it gets loaded back.

None of this is impossible to avoid. You can structure your application so that blocking operations only happen in the background so at least the UI will continue to update, you can pin data in memory so it won't be evicted (but that slows down the user's other applications, so maybe you shouldn't), you can prioritise tasks so important things run first.
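
A bare-bones Python sketch of the first of those (blocking work moved off the foreground loop); slow_load is a stand-in for any disk or network hit, and the print loop stands in for a real UI toolkit's event loop:

```python
# Keep the "UI" loop ticking while a slow, blocking load runs elsewhere.
import threading, time

def slow_load():
    time.sleep(2)            # stand-in for disk or network latency
    return "big blob of data"

result = {}

def background_load():
    result["data"] = slow_load()

threading.Thread(target=background_load, daemon=True).start()

# The foreground loop keeps "rendering frames" instead of freezing.
while "data" not in result:
    print("frame drawn, still loading...")
    time.sleep(0.1)
print("loaded:", result["data"])
```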

Chrome, as in the native browser UI, actually does a pretty good job at this internally, for all it gets criticised a lot as "bloated and slow". I think the only times I've seen it actually drop frames are just after starting (when you can't avoid loading program code from disk) or during a heavy compile (which takes up most of the system's resources).

Web pages, on the other hand, are much worse. I don't fully understand why, but "runs at too high a level of abstraction to do those things" is probably a good approximation.

Expand full comment
Edward Scizorhands's avatar

I was just thinking that the Apple 2 had much better latency than my OSX machine from nearly 40 years later, so I'm glad to see it proven.

Expand full comment
Sylvilagus Rex's avatar

Modern OSes also seem to be more abusive to spinning rust: every Windows 10 install I've seen on a disk has either been annoyingly hitchy or been tweaked with various types of regedit-fu to keep the disk from being constantly slammed. I think MS just focuses on SSDs and doesn't care if they thrash a disk with a lot of random reads/writes.

Expand full comment
Robert Mushkatblat's avatar

To be clear, video games probably do cause your computer to render pixels substantially faster than most other kinds of software, but your browser renders pixels much faster than humans are capable of perceiving a delay, so the issues are primarily elsewhere.

Expand full comment
Sylvilagus Rex's avatar

Not to mention all the dedicated hardware/drivers/APIs designed to do the triangles thing in real time. Not to say no one spends time optimizing UIs, but it doesn't make the software unusable if it hitches for a tenth of a second unlike a video game, so the level of concern over optimizing the experience isn't there.

Expand full comment
User's avatar
Comment deleted
May 10, 2022
Expand full comment
dorsophilia's avatar

As you say, the articles are "intended more to draw attention than to offer much insight."

This is newspaper reporting. I think you are expecting too much if you want a deep dive into a poorly understood topic. Calling this a "media narrative" makes it sound like you don't think there is a problem, so perhaps the NYT provided the statistics and anecdotes in order to nudge you towards considering the issue more deeply, which is the reason for the article.

Expand full comment
User's avatar
Comment deleted
May 8, 2022 (edited)
Expand full comment
Jam's avatar

Find something to talk to them about using open-ended questions (yes/no questions are dead ends) and pursue the convo thread to the exact extent they seem engaged that day. Repeat on subsequent meetings, and desist when they signal disinterest or discomfort.

Expand full comment
Joshua William's avatar

Yeah, I do that. Thanks for commenting though.

Expand full comment
Deiseach's avatar

Today I had to learn the term "polar-gender".

I don't have any strategies for approaching people because I'm not interested in approaching people, but I might suggest that by dumping the necessity for mental hoop-jumping about "do I find this person attractive in the sense of my taste as deduced from their look, and is that in a polar-gender or intra-gender sense, and how am I myself doing gender?" you will save valuable time; at the very least, the attractive person will not have left the vicinity by the time you finally conclude "Yes, I think they're hot".

It's very hard to ask someone who is not there out.

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Jam's avatar

Tongue-in-cheek, actually. I think you grossly misinterpreted and came off as the asshole here tbh.

Expand full comment
Deiseach's avatar

Don't worry, it was me at a hulking 5' 4" bullying this delicate little 6' 4" shrinking violet, no wonder he lashed out defensively 😁 We've since shaken hands like ladies and gentlemen and put this to bed.

Expand full comment
Joshua William's avatar

Fucking lolll.

Expand full comment
User's avatar
Comment deleted
May 8, 2022 (edited)
Expand full comment
a real dog's avatar

Based on this comment thread, I'll make an educated guess that getting over yourself could help your game. I'm not even joking.

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Deiseach's avatar

Annnnd here we have the real reason why you find it hard to approach people and have them reciprocate your interest.

Expand full comment
Joshua William's avatar

I don’t find it hard to approach people. That’s not why I’m asking. I seek to perfect my skills, and people might have nuggets I don’t know.

Expand full comment
Deiseach's avatar

You certainly could use help perfecting your skills, and here's a free tip: if a woman says something you don't agree with, don't call her a cunt. Not unless you know her *very* well and mutual exchanges of jocular insults are already established between you.

Expand full comment
User's avatar
Comment deleted
May 8, 2022 (edited)
Expand full comment
Ben W's avatar

Identify something they've done deliberately that you admire - one apparel choice, attending a particular event, etc. Genuinely express appreciation and curiosity to find out more about how or why they made that choice.

If there's not really anything standing out (maybe you're both waiting somewhere), tell them upfront that you're interested in just killing time, and ask their opinion on something (existentially lightweight) going on in your life. Drama, something you've learned, an upcoming decision.

Both of these get more difficult if that attractive person is in a group; that's where "pickup artistry" actually starts to have some insight.

Expand full comment
Joshua William's avatar

What's your approach to singling out someone from a group? I'd generally avoid approaching at this level, but if I had to, I'd address everyone first, introducing myself and warming them up; then, if the vibe seems relaxed, I'd isolate the person of interest, and if the group seems bothered I might even ask them [charmingly] whether the person I'm interested in would mind moving away a little so the group isn't disturbed by my interest, and then thank them when returning the target after some minutes or so. What do you think?

Expand full comment
Joshua William's avatar

This is insightful. Ama’ sit with it for a while and I’ll get back to you if necessary.

Expand full comment
Mystik's avatar

I don’t, but that’s also not an optimal strategy probably

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Mystik's avatar

In college, I just passively met and got to know enough women that I didn’t really need a particular strategy. Now that I’m out of college, I’m still looking for a viable strategy.

The thing I optimize against is basically “making the woman uncomfortable (bad)” vs “getting a date.” My college had a very strong “talking to women uninvited is borderline harassment” vibe, and I have yet to figure out a strategy around it, even now that I’m graduated.

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Lars Petrus's avatar

> first see if they’re attracted to you via. eye contact

How on Earth do you do that?

Expand full comment
Viliam's avatar

Eyes are typically located about 20 cm above the boobs. Smile happily and look into her eyes. When she looks back, keep looking for a second or two, then break eye contact by moving your sight sideways (not downwards) while still smiling.

At some later moment, look at her again. If she is smiling at you, she is attracted to you. If she calls the cops, she is not.

The most successful pick-up line is: "Hello." Generally, the less you talk, the better. Most people prefer if you let them talk; be a good listener. Also, if she keeps talking to you and smiling, she is still attracted.

Expand full comment
Mystik's avatar

Yeah, I agree that that seems like the standard strategy. It’s more just that I have yet to overcome the fear of seeming like I’m harassing them that sort of got instilled into me. Which is why I think my strategy is suboptimal; I just haven’t gotten around to adopting a more optimal strategy yet.

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Pete's avatar

Contrary to others, I'd say #3. In the traditions of sauna-going where it's not a "dry sauna" (90°C / very dry) but a somewhat less hot sauna combined with periodically throwing some water onto the heated rocks, the sudden change to much more humid hot air can literally scald/burn within a second if you're too eager with the water, and no amount of sweating can counteract it rapidly enough.

Expand full comment
Kenny Easwaran's avatar

Wet-bulb temperature is how they usually measure this: https://en.wikipedia.org/wiki/Wet-bulb_temperature

Expand full comment
a real dog's avatar

#2.

In 100% humidity you cannot lose heat by sweating, which makes 40°C dry alright and 40°C wet potentially lethal.

Expand full comment
magic9mushroom's avatar

Basically #2. Evaporating water takes 2 kJ/g, so until you run out of available water to sweat and to evaporate through the lungs, you can keep your body temperature relatively low.
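
Back-of-envelope, using that round 2 kJ/g figure and a sauna-ish sweat rate of about a litre per hour (both assumptions, obviously):

```python
# Back-of-envelope: heat dissipated by evaporating sweat.
latent_heat_j_per_g = 2000       # the round 2 kJ/g figure above
sweat_rate_g_per_hour = 1000     # assume ~1 litre/hour in a hot sauna

watts = latent_heat_j_per_g * sweat_rate_g_per_hour / 3600
print(f"{watts:.0f} W of cooling")   # ~560 W, several times resting metabolism (~100 W)
```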

There's also the issue that 100% humidity air at 90°C would drown you as it deposited its water into your lungs (along with searing them, of course).

Expand full comment
Lambert's avatar

Probably 3.

I found a psychrometric chart that goes up to 120°C (https://www.ashrae.org/File%20Library/Technical%20Resources/Bookstore/UP3/SI-3.pdf).

If you find the point at 90°C on the x-axis and 5% relative humidity on the curved lines, then follow the enthalpy isolines up and left to 100% relative humidity, it shows that latent heat of vaporisation is enough to cool the air below 40°C. (I think. I've not done any thermo for like 2 years)

The opposite of this is why steam causes such bad scalds.
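
Here's a rough numerical version of that chart-reading: a Magnus-type formula for saturation pressure plus the usual constant-enthalpy approximation for adiabatic saturation. It's ballpark only, but it lands in the same place, a bit under 40°C.

```python
# Rough adiabatic-saturation estimate: how cool does 90°C / 5% RH air get
# if you saturate it by evaporating water into it at (approximately)
# constant enthalpy? Magnus-type formula; treat the result as ballpark only.
from math import exp

P = 101.325  # total pressure, kPa

def p_sat(T):                      # saturation vapour pressure, kPa (Magnus)
    return 0.6112 * exp(17.62 * T / (243.12 + T))

def humidity_ratio(T, rh):         # kg water per kg dry air
    pv = rh * p_sat(T)
    return 0.622 * pv / (P - pv)

def enthalpy(T, rh):               # kJ per kg dry air
    W = humidity_ratio(T, rh)
    return 1.006 * T + W * (2501 + 1.86 * T)

h0 = enthalpy(90, 0.05)            # hot, dry sauna air

lo, hi = 0.0, 90.0                 # bisect for the saturated temp with h == h0
for _ in range(60):
    mid = (lo + hi) / 2
    if enthalpy(mid, 1.0) < h0:
        lo = mid
    else:
        hi = mid
print(f"saturation (wet-bulb-ish) temperature ≈ {mid:.1f} °C")   # roughly 38 °C
```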

Expand full comment
User's avatar
Comment deleted
May 8, 2022
Expand full comment
Cry6Aa's avatar

Not speaking towards your comment here, but the idea that Japan will die out is inane for the same reasons that fears of overpopulation were in the 70s.

People aren't just monotonically going to have the same number of kids forever until the last Japanese child inherits an empty island chain, they're responding to their environment.

Japan is 'overpopulated' (too expensive, too crowded, lack of opportunities for advancement, harsh working hours, etc.) in terms of having kids right now, and so people aren't having any. When that changes (presumably due to all the old boomers finally kicking the bucket and making demographic space) then people will start having more again (although likely not as many as during the early days of the demographic transition).

Turns out that we're just boring mammals that end up with a sigmoid growth curve after bumping up against our local carrying capacity.

Expand full comment
dionysus's avatar

The quality of life in Japan has never been higher in its entire history, yet the birth rate is at a historic low. You can't explain that by carrying capacity or things being too expensive.

Expand full comment
Cry6Aa's avatar

People aren't homo economicus - we don't care about GDP, development indices or 'quality of life' when having kids. This should be obvious with even a moment's contemplation of history.

As it stands, Japanese citizens (especially those living in large cities) are crowded together, pay a lot for food and (especially) shelter, and work incredibly long hours (often well into the evening). They have a spectacular suicide rate and a word for death by overwork, for goodness sake. All of which is what matters to our mammal brains - we don't want to have kids when we're pressed together cheek by jowl, or when we're stressed out (which is just saying the same thing twice, really). And the demographics bear this out - the parts of Japan with higher birth rates are invariably more rural, more spaced out and enjoy lower-stress lifestyles.

Again - I'm not saying that giving everyone space and setting working hours to 4 a day would result in families having 8 kids again - that was also an artefact of the demographic transition from a pre-industrial regime of high childhood mortality and no birth control to a post-industrial paradigm of low mortality and choice in child-rearing. But I do think that the birth rate will stabilize around the replacement rate after the big demographic bulge that the boomers represent has passed and the population has dropped. Guessing further, I'd say that the 'sustainable' population is something like 80-100 million.

Expand full comment
dionysus's avatar

Right, people aren't homo economicus, but your comment and your previous comment both posit that economic reasons are central to the drop in birth rate. I'm the one disputing that.

Is food and shelter more expensive in Japan relative to income than it was in, say, 1600 AD? Of course not. The crowding explanation might make more sense, but I'm not convinced that Japanese people hundreds of years ago had more personal space than people today, given that living with extended families (not to mention with a spouse and children) is less common now than it was in the past.

I also don't think fertility rates are stabilizing at replacement. In almost every developed country and many developing ones, they've dropped below replacement and show no sign of recovery. The rate is 1.84 in France, 1.77 in the US, 1.74 in the UK, 1.61 in Germany, and 1.45 in Poland. Even the world fertility rate is only barely above replacement, at 2.44, and it's still falling. What I predict will happen is that the ideologies dominant in today's world will lose out with the demographic decline of their followers, leading to a future that looks more like the past: rigid gender roles, high inequality, high religiosity, and above all natalism. I'm not in favor of those things, but they are on the right side of history, because their believers will numerically dominate the world and end up writing the history books.

Expand full comment
Cry6Aa's avatar

My comments posit environmental/biological reasons as central to the drop in birth rates. Which is how I end up arguing with both lefties and righties on this point.

Simply put, I think nearly all developed countries are "overpopulated" in a way that our ape brains can recognise, and that instinctively makes us want to have fewer kids as a result. Nor do I think that this is new - cities have been both densely populated and demographic sinkholes since forever (albeit that one can argue as to the contribution of disease here). The big counter-example is, of course, the US - which is economically developed but doesn't have the same magnitude of issues that developed Europe and Asia have. Because it's big and relatively "underpopulated".

I also don't disagree that social factors could change this somewhat, but I do note that some very authoritarian states (cough China cough) can't seem to increase their birth rates on a whim. Even the Saudis (which are your demographically 'ideal' authoritarian hell-hole) have a falling fertility rate.

Expand full comment
magic9mushroom's avatar

"'Till our women bore no more children, and the men lost reason and faith,

And the Gods of the Copybook Headings said: 'The Wages of Sin is Death'."

-Rudyard Kipling, 1919

The big question is to what degree automation changes the paradigm.

Expand full comment
magic9mushroom's avatar

Birth rate and death rate are not really what matters. What matters is fertility rate - the number of children born per woman. If it's over 2.1, the population will go up. If it's under 2.1, the population will go down (it's 2.1 rather than 2 because slightly more men are born than women, although if you started screwing with the gender ratio it'd change).

Reducing death rate doesn't matter unless it affects fertility rate (the big thing here would be that if you extended the fertile period of women, the natalist sects like Quiverfulls would simply multiply their fertility by the improvement; right now the Quiverfulls manage ~8 children per couple with ~25 years of female fertility, but if there were 200 years of female fertility they'd presumably manage 64 children per couple).
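
A toy generation-by-generation illustration of the 2.1 point (ignoring age structure, migration, and deaths before the end of childbearing, and assuming ~105 boys born per 100 girls; with no child mortality the break-even in this toy model is about 2.05 rather than 2.1, the rest of the gap being mortality before childbearing age):

```python
# Toy model: population per generation given a constant fertility rate.
# Ignores age structure and migration; assumes 105 boys born per 100 girls.
def project(fertility, generations=5, start_pop=1_000_000):
    pop = start_pop
    for _ in range(generations):
        women = pop * (100 / 205)          # share of each generation that is female
        pop = women * fertility
        yield round(pop)

for f in (1.3, 2.05, 2.6):
    # 1.3 shrinks fast, 2.05 stays flat in this toy model, 2.6 grows.
    print(f"fertility {f}: {list(project(f))}")
```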

Expand full comment
Pycea's avatar

> it's 2.1 rather than 2 because slightly more men are born than women

Also because fertility rate only counts women of child bearing age, so infant mortality skews the replacement rate higher.

Expand full comment
Pycea's avatar

I think that increasing lifespan doesn't actually decrease death rate in the long term. Everyone dies eventually, so unless you're continuously increasing lifespan towards infinity, it will eventually stabilize back to what it was.

More intuitively, to maintain a population, every person has to have an average of one kid before they die. Whether they live 100 or 1000 years doesn't matter, if the average kid count is below 1, eventually the population will shrink.

Expand full comment
demost_'s avatar

"I think that increasing lifespan doesn't actually decrease death rate in the long term. "

This is just factually wrong, but probably due to a misunderstanding of the terminology.

"Death rate" is defined as the fraction of people which die *per year*. That's why it is called a rate. So if 1% of the population die per year, this corresponds to a life expectancy of 100 (very roughly). If 0.5% of the population die per year, this corresponds to 200.

(Of course, usually we compute death rates stratified by age and gender, e.g., we compute the fraction of all 60-year old females which die within one year.)

Expand full comment
Arbituram's avatar

This is incorrect. Let's work through it with some example societies, assuming an even spread of population from 0 to life expectancy.

Society 1: Fertility rate 2 (which we'll assume is replacement for this society), people die at 30. Each year, 1/30th of the population dies, for a death rate of 3.3%.

Society 2: Fertility rate 2, people die at 90. Each year, 1/90th of the population dies, for a death rate of 1.1% (which, incidentally, is almost exactly Japan's - https://data.worldbank.org/indicator/SP.DYN.CDRT.IN?locations=JP).

Both of these societies have a stable population with very different death rates (although Society 2 would stabilise at a population 3x that of Society 1).
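
A minimal sketch of the same toy model (constant births, everyone dying at exactly the stated age) gives the same numbers:

def society(births_per_year, age_at_death):
    # Stationary population: one equally-sized cohort per year of life.
    population = births_per_year * age_at_death
    deaths_per_year = births_per_year          # each year the oldest cohort dies
    return population, deaths_per_year / population

for lifespan in (30, 90):
    pop, crude_death_rate = society(1000, lifespan)
    print(lifespan, pop, f"{crude_death_rate:.1%}")   # 3.3% vs 1.1%, and 3x the population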

Expand full comment
Pycea's avatar

You are right of course. I should have just left it at fertility rate is what determines population growth or shrinkage long term.

Expand full comment
magic9mushroom's avatar

>every person has to have an average of one kid before they die

Two, not one, because each kid has two parents. If everybody has one kid (and there's still a 50:50 ratio), the population halves every generation.

Expand full comment
Pycea's avatar

Right - I guess I was simplifying to the case where humans reproduce by budding.

Expand full comment
demost_'s avatar

I don't know good resources, but there is a helpful invariant: Women above ~45 don't reproduce. So the death rate in this model does not influence how many children are born (unless you have substantial child mortality, which has become less common). It's just that every newborn is around for a longer time.

So if you double the average lifespan, this just means that the population ends up twice as large as with the shorter lifespan, but it does not affect any population trends. If everyone born before 2050 has a lifespan of 100 years, and everyone born later has a lifespan of 200 years, then the factor of two appears in the period from 2150 (when the last cohorts with the old lifespan die off) to 2250 (when those born just after 2050 start dying at the new lifespan). It's easiest to see in the scenario where exactly the same number of babies is born each year, but the math extends to scenarios where this number changes: you simply get a factor of 2 (eventually) between any two scenarios where the lifespans differ by a factor of 2.

In reality, things smear out even more because the lifespan increases gradually, by a little bit in every (non-corona) year.
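
A toy simulation of the sharp scenario above (arbitrary 1,000 births every year; everyone born before 2050 living exactly 100 years, everyone born later exactly 200) shows the factor of two appearing between 2150 and 2250, with nothing else changing:

def population_in(year):
    alive = 0
    for birth_year in range(1800, year + 1):
        lifespan = 100 if birth_year < 2050 else 200
        if birth_year + lifespan > year:   # this cohort is still alive in `year`
            alive += 1000
    return alive

for y in (2100, 2150, 2200, 2250, 2300):
    print(y, population_in(y))   # 100k, ~100k, ~150k, 200k, 200k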

Expand full comment
sclmlw's avatar

I'm curious what priors people on this thread have for lifespan increasing to anything over 120 years. I'm not saying it's impossible, but there doesn't seem to be a precedent for extending lifespan appreciably.

Most people conflate lifespan and life expectancy. The distribution of age at death has always been bimodal, with higher mortality rates at both ends of life and lower rates in the middle. Most of the improvements in life expectancy in the 20th century came from dramatic decreases on the young end of the distribution, with relatively minor gains on the older end - mostly driven by control of high-mortality infectious diseases through water treatment and vaccination.

It seems the expectation for the next few decades should be losses, given the increases in heart disease, diabetes, and cancer. We should hope to get better at treating chronic disease, but I see incidence rates of chronic disease - which are now the leading causes of death - rising, not falling. I'm not saying we'll never get there from here. I'm just trying to figure out why people assume we're headed in the opposite direction from the one the numbers are pointing.
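
A toy illustration of the lifespan/life-expectancy conflation (made-up numbers): cutting infant mortality moves life expectancy at birth enormously even if nobody lives a day longer as an adult.

def life_expectancy_at_birth(infant_mortality, adult_age_at_death=70):
    # Assume infant deaths happen around age 0.5 and all survivors die at exactly 70.
    return infant_mortality * 0.5 + (1 - infant_mortality) * adult_age_at_death

print(round(life_expectancy_at_birth(0.30), 1))  # ~49 years
print(round(life_expectancy_at_birth(0.01), 1))  # ~69 years, with the same 70-year lifespan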

Expand full comment
demost_'s avatar

I agree that a lot of the increase in life expectancy comes from decreased infant mortality. But I would be very surprised if that was the only improvement. I expect that if we compared the death rate of a 90-year-old now and 50 years ago, the difference would be quite dramatic.

Edit: I checked, and it is correct: https://ourworldindata.org/life-expectancy#life-expectancy-by-age-in-england-and-wales

My prior for "anything over 120" is very high (>90% if there is no X-event or collapse of civilization or something like this), because I believe that this will be obtained by following the general trend. The lifespan almost everywhere in the world has been steadily increasing for decades. (Except for the corona dip, which is an outlier. And as so often, the US were a sad exception before corona, too, due to drug abuse.)

And this is despite the fact that the obesity wave has been sweeping through the industrial world for decades, as have cancer and other age-related diseases. The trend points very clearly towards longer life, not shorter life.

But there is another reason to be optimistic: there have been some real breakthroughs in research on this subject in the last 5-10 years. In mice, we can now increase lifespan dramatically with a cocktail of reprogramming factors (the Yamanaka factors). Even more important is the way this treatment changes the physiology of the mice. It's not just that they stop ageing and dying; in many respects they apparently become young again. Atherosclerosis goes away, arteries become flexible again, and so on. This suggests not just that ageing can be stopped, but that it might be a reversible process. (Parts of it, anyway. Age-related farsightedness is unlikely to be reversible. But the important things might be.)

You can read more here: https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging

Success in mice doesn't mean that we have working treatments, and there is a good chance that we will never have them. So my prior that we can go for >200 years due to these lines of research is moderate, perhaps 20-40%. But this estimate is a wild guess, and there are more knowledgeable people on this.

Edit: Yuri Deigin has a blog where he sometimes comments on scientific progress related to ageing and the Yamanaka factors.

https://yurideigin.medium.com/epigenetic-clock-of-aging-709c1fe1e554

https://yurideigin.medium.com/death-becomes-her-or-why-aging-is-an-epigenetic-program-c55a32fd8c74

Expand full comment
sclmlw's avatar

To be clear: I didn't claim that all improvements in life expectancy over the 20th century came from infant and child mortality improvements. I said, "with relatively minor gains on the older end - mostly driven by control of high-mortality infectious diseases through water treatment and vaccination".

Thanks for those links. I'll look at them a little more. I'm open to the possibility that new, as-yet unimplemented discoveries will open novel pathways to improving lifespan. I'm not as bullish on them as you are, but I can respect a difference of forecasting here.

As to the idea that lifespan is also increasing appreciably, the data you linked show a very modest improvement. Over a 160+ year period (1850-2013), the life expectancy of a 70-year-old went from 79.1 to 85.9 - an increase of 6.8 years, or about 0.42 years per decade, with the improvements weighted toward recent dates. During that same period, life expectancy at birth roughly doubled, from 41.6 to 81.1, supporting my point that most of the 20th-century gains came at the low end of the distribution, not in old age. I still think this colours how people perceive "gradual increases in life expectancy" in historical terms, and leads them to expect commensurate improvements in the future. It's possible that the curve will continue its upward trend - driven by entirely new technologies - but it's also possible we will see sigmoid growth take over.
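
To make the contrast explicit, here's the same arithmetic on the figures above:

decades = (2013 - 1850) / 10                  # decades elapsed
print(round((85.9 - 79.1) / decades, 2))      # ~0.42 years/decade gained at age 70
print(round((81.1 - 41.6) / decades, 2))      # ~2.42 years/decade gained at birth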

The projected life expectancy graph shows an increase of less than 10 years for the highest country (Japan) by the end of the century. That would seem to require more people regularly hitting 120 years (a lifespan improvement), or perhaps just the elimination of many of the chronic diseases that are currently the leading causes of death.

I will say that the graph on "years of healthy life expectancy" appears to demonstrate a constant increase in healthy life commensurate with overall life expectancy, which is a good sign, and evidence that the rise in chronic disease incidence is not having the kind of impact I would have expected.

Expand full comment
Arbituram's avatar

Although the two variables (birth rate and death rate) seem symmetrical, in the longer term they are not, under the following assumptions - all of which I currently believe are likely for industrialised societies that have already undergone the demographic transition to low-birth/low-death demographics:

1) Birth rates do not 'rubber band' - there's no intrinsic tendency for low birth rates to increase or high birth rates to decrease.

2) Further reductions in death rates occur almost entirely among the elderly, who are past child-rearing age.

3) Reductions in death rates are individually small and occur gradually, with improvements occurring mainly at the disease level rather than at some fundamental 'aging' level. This type of progress may asymptotically approach life expectancies of 100-120.

Under these assumptions, death rate improvements have a linear impact on populations, while birth rate changes have an exponential impact. The birth rate changes will always dominate over generations.
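
A rough sketch of that asymmetry under these assumptions (my own normalisation, with ~30 years per generation): a lifespan gain rescales a stationary population once, while an above-replacement fertility ratio compounds every generation.

YEARS_PER_GENERATION = 30   # assumed

def population_after(years, lifespan_gain=0.0, tfr_over_replacement=1.0):
    # Start from a normalised stationary population of 1.0.
    level_shift = 1.0 + lifespan_gain                         # one-off rescaling
    compounding = tfr_over_replacement ** (years / YEARS_PER_GENERATION)
    return level_shift * compounding

print(round(population_after(300, lifespan_gain=0.10), 2))          # ~1.1x: +10% lifespan, once
print(round(population_after(300, tfr_over_replacement=1.10), 2))   # ~2.6x: TFR 10% over replacement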

Expand full comment
Arbituram's avatar

#3 is the interesting one here, of course, and the one that people are trying to change. The reason so many researchers are now focusing on aging in general is the very high disease comorbidity in old age, which means that curing one thing extends life by less than you would expect. In the UK, where cancer survival rates for young people are now quite good, curing *all* cancers would improve life expectancy by... about two years (https://publishing.rcseng.ac.uk/doi/pdf/10.1308/rcsbull.2016.190).

Not only that, but those two years would be more likely to be accompanied by dementia, osteoporosis, and other chronic diseases.
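
A competing-risks toy model shows why (the hazards here are invented for illustration, not UK data): with a roughly constant annual hazard, remaining life expectancy is about 1 / (total hazard), so deleting one cause of death among several buys surprisingly little.

cancer_hazard, other_hazard = 0.02, 0.08   # assumed annual risks for an elderly person

remaining_with_cancer = 1 / (cancer_hazard + other_hazard)   # ~10 years
remaining_without_cancer = 1 / other_hazard                  # ~12.5 years
print(round(remaining_without_cancer - remaining_with_cancer, 1))   # curing cancer buys ~2.5 years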

Expand full comment
NoPie's avatar

Surprising. That wouldn't even bring the UK up to Japan's level of life expectancy (currently a 4-year difference). The war on cancer was a big overreaction.

Expand full comment
Arbituram's avatar

I'm not sure that follows - one can be less healthy and not be dead (even if you never get lung cancer from smoking, it is still likely to mess up your lungs). Very, very few children die of asthma in the UK, but it would be much better regardless if fewer children *had* asthma (this is top of mind for me, having just had a child in a fairly inner city area).

Expand full comment
NoPie's avatar

True, but that only supports my point. I meant to say “war on cancer”, not “car” - sorry for the mistyping. Cancer kills quite fast compared to other conditions, and yet if we had no cancer at all, the benefits measured in life expectancy would not be that significant. This makes me think that cancer by itself is not that big a part of the problem, and that the resources allocated to researching it could have been better spent elsewhere.

You mention smoking, and certainly if everyone stopped smoking there would be many benefits: fewer lung cancer cases, but also better cardiovascular health. Possibly those other benefits are even more significant, and our primary emphasis on cancer was misplaced and not supported by data. Cancer attracts more attention because it is such a nasty disease, one that strikes unfairly even people with healthy lifestyles. And yet other measures can give more return on investment.

We spend so much money on “improved”, costly (more than $50,000 per course) cancer drugs that sometimes prolong an individual's life expectancy by only a few months. People beg for those drugs to be financed, charities get involved, etc. But when it comes to statins, which also prolong life and cost comparatively little even with long-term use (30 tablets of atorvastatin cost about $1 in the UK, so roughly $12 per year, or $360 over 30 years), there is so much resistance and doubt. I've read the calculation that the average benefit from statins is not that big either, about 4 months (it varies individually). And yet in one case those 4 months are bought for $360 and in the other for more than $36,000, a hundred times the cost.

Sorry for all those numbers - I just feel the need to keep the cost-effectiveness of every medicine in perspective.
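
As a rough cost-per-life-year comparison using the figures above (all approximate, and the benefits are population averages, not guarantees):

def cost_per_life_year(total_cost_usd, months_gained):
    return total_cost_usd / (months_gained / 12)

print(round(cost_per_life_year(50_000, 4)))   # cancer drug course: ~$150,000 per life-year
print(round(cost_per_life_year(360, 4)))      # 30 years of statins: ~$1,080 per life-year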

Expand full comment
User's avatar
Comment deleted
May 8, 2022 (edited)
Expand full comment
George H.'s avatar

Hmm, I'm going to say that internalizing your mistake is not something under your direct control. It seems like something that happens unconsciously: it goes from the slow part of my thinking to become part of the fast part... to use the slow/fast thinking metaphor.

Expand full comment
Kenny Easwaran's avatar

That point about opponents is only relevant in specific status competitions over positional goods (like societal power). It's not too important in most people's private lives (though we might be tuned to see status competitions in more situations than actually involve them).

Expand full comment
Sarabaite's avatar

I think that acknowledging mistakes can be so difficult/rare that I am unwilling to chastise many people - including myself - for getting there, and then taking a little while to get to the second step.

Also, sometimes actions are experiments. One does a thing in the hypothetical expectation of outcome A. If the experiment fails, does that mean that a repeated effort also won't work? If it generally works but didn't this time, does that mean that one should give up on the experiment?

As for opponents - eh. Life goes on. One must constantly update in the face of more information. If they never admit mistakes, they are going to be left behind eventually. Have patience and charity, and play the long game.

Expand full comment
Michael Kelly's avatar

This is enlightening... We often tend to think that people who score lower on intelligence tests are less intelligent, when in fact they're just less educated. But they're less educated because they're less willing to acknowledge and correct mistakes... i.e. more stubborn. They proceed less far in education. Does this make them less intelligent, or just less flexible?

Expand full comment
Sarabaite's avatar

Eh, I am pretty convinced that intelligence tests actually measure a real capacity, and that this capacity affects educational attainment. But curiosity and openness also track with intelligence, so things are intertwined.

Expand full comment
Michael Kelly's avatar

Yes, but cultural exposure, cultural values, family values, and family wealth also affect educational opportunities.

Expand full comment
Sarabaite's avatar

Absolutely! In the most fair world...well, maybe I would have to think about it more, but I think that in the most fair world, the role of basic intelligence would far overshadow those other factors.

Expand full comment
User's avatar
Comment deleted
May 8, 2022 (edited)
Expand full comment
skybrian's avatar

I guess it depends on the app, but modern browsers do use the GPU quite a bit, and they are largely vector-based (for example, fonts are, and lots of graphics are too). I don't know if rendering is parallel, but it's pipelined. Web page rendering can be very fast if the application isn't bogging it down with lots of network requests.

Expand full comment