Links For December
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
2: Claim via NPR: When Brazil had high inflation in the 1990s, some economists developed a plan: price everything in inflation-adjusted units, so that people felt like things were “stable”, then declare that the Inflation Adjusted Unit was the new currency. How Fake Money Saved Brazil. Also interesting: they tried it because the new finance minister knew no economics, recognized his ignorance, and was willing to call up random economists and listen to their hare-brained plans.
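The mechanics of the plan can be sketched in a few lines. This is just an illustration of the idea described above (a stable unit of account whose exchange rate against the old currency is re-pegged as inflation runs); all numbers are hypothetical, not historical data:

```python
# Illustrative sketch of a stable "virtual unit" under high inflation.
# All figures hypothetical.

daily_inflation = 0.02   # suppose prices rise ~2% per day
unit_rate = 1.0          # old-currency units per virtual unit, day 0

price_in_units = 10.0    # a good priced at a stable 10 virtual units

for day in range(3):
    price_in_old_currency = price_in_units * unit_rate
    # The sticker price in virtual units never changes...
    print(f"day {day}: {price_in_units} units = "
          f"{price_in_old_currency:.2f} old-currency")
    # ...while the peg quietly tracks inflation each day.
    unit_rate *= 1 + daily_inflation
```

The point is that shoppers see the same number every day; once people trust that number, you declare the virtual unit to be the real currency.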
3: In the 19th century, a group of Tibeto-Burman-speaking former headhunters along the India/Burma border declared themselves the descendants of Manasseh (one of the Ten Lost Tribes) and converted en masse to Judaism. In 2005, the Chief Rabbinate of Israel accepted their claim and expedited immigration paperwork for several thousand of them.
4: John Wentworth on How To Get Into Independent Research On AI Alignment. "I’m an independent researcher working on AI alignment and the theory of agency. I’m 29 years old, will make about $90k this year, and set my own research agenda. I deal with basically zero academic bullshit...best of all, I work on some really cool technical problems which I expect are central to the future of humanity. If your reaction to that is 'Where can I sign up?', then this post is for you."
5: Related: AI Safety Needs Great Engineers. “If you could write a pull request for a major ML library, you should apply to one of the groups working on empirical AI safety: Anthropic, Cohere, DeepMind Safety, OpenAI Safety and Redwood Research.”
6: Aella's twitter polls on eugenics. EG: "A lesbian couple is looking for a sperm donor, and choose their donor based on how healthy, smart, and happy the donor seems to be. Is this: A) not eugenics / B) eugenics I don't support / C) eugenics I support?"
7: Related: I’ve previously written about why selecting for intelligence doesn’t necessarily mean that animals will get worse on other traits. But here’s a story about someone selecting guppies for intelligence (successfully) and finding that they had smaller guts and lower fertility. I still think tradeoffs aren’t inevitable, but it looks like if you just breed kind of randomly, some of the lowest-hanging fruit will be tradeoff genes.
9: Best of Twitter, 2021 edition:
(see also this comment)
10: WSJ article on the early days of Amazon. Great source of funny stories, eg:
Among the early mistakes, according to Mr. Bezos: ‘We found that customers could order a negative quantity of books! And we would credit their credit card with the price and, I assume, wait around for them to ship the books.’
One of his more controversial early decisions was to allow customers to post their own book reviews on the site, whether they were positive or negative. Competitors couldn't understand why a bookseller would allow such a thing. Within a few weeks, Mr. Bezos said, "I started receiving letters from well-meaning folks saying that perhaps you don't understand your business. You make money when you sell things. Why are you allowing negative reviews on your Web site? But our point of view is [that] we will sell more if we help people make purchasing decisions."
11: Why COVID variants skipped from Mu to Omicron: “In a statement, the WHO said it skipped Nu for clarity and Xi to avoid causing offense generally.” Rolling my eyes at “offense generally” and the idea of deliberately averting nominative determinism.
12: In 1799, British-American fugitive William Bowles fled to Florida, moved in with the local Indians, became their chief, led a series of raids on the US, and declared independence as the State Of Muskogee (he was defeated by a US/Spanish alliance in 1803).
13: Claim of the first successful deepfake-based hack. Looking through comments elsewhere, I think this claim falls apart, which means that AFAICT, after several years of the technology existing, I still know of no instance of any deepfake actually fooling anyone and causing damage.
14: An attempt to replicate various “poverty causes cognitive problems” studies goes…well, about the way replication attempts usually go. I was always suspicious of these, people got too excited about this field for political reasons. Related:
15: This series of tweets makes an interesting case study on science communication. An anti-incarceration group reviews the evidence on recidivism, which they summarize as "our report shows that people convicted of homicide are extremely unlikely to commit another violent crime after release". But someone reads the report, finds it says there’s a 22% chance they do, and calls them out for lying. I would have been willing to let this pass if they had just said “unlikely” - somebody might honestly think 22% is unlikely compared to some hypothetical belief that it’s near-certain. At “extremely unlikely”, yeah, I agree they’re pushing it.
16: Related: eigenrobot vs. bad critiques of predictive policing.
18: Does “Moore’s law of genome sequencing” still hold? If not, whom should we blame? Here’s a Twitter discussion.
19: Lots of people supported me when NYT doxxed me. I feel like I should pay this forward by signal-boosting when other people are going through the same thing. So: the news magazine Toronto Life doxxed some people running a local Instagram account who preferred to remain anonymous. I think this is bad. In the extraordinarily unlikely event that I ever care about anything in Toronto, I will try to find and link sources other than Toronto Life.
20: Markus Strasser on why projects along the lines of “use AI to extract insights from journal articles” are doomed. I read this the week I was considering lots of ACX Grants applications about these, so if I didn’t fund your brilliant AI journal extraction idea, blame Markus.
21: Noahpinion on new technologies to be excited about for the coming decade. I’m split on this, because I agree that many things look promising. But also, if all the promising things pan out, there will be many more new exciting non-information technologies in the 2020s than the 2010s or 2000s. That suggests that maybe we’re being too optimistic and most of them won’t pan out, unless we have some reason to think advances will start coming faster now than in the past generation. Theories I’ve heard along those lines include: we’ve spent the past few decades “paying off” the “debt” incurred by our old technologies being environmentally unfriendly, and now that we’ve solved environmentalism (wait, what?) we can start advancing again. Or, maybe we got really excited picking the low-hanging fruits in information technology these past few decades, and now that we’ve saturated that space (wait, what?) we can move back to the physical world again. Or maybe Silicon Valley has been building a new tech ecosystem separate from the old dinosaur one, and now that it’s fully mature (wait, what?) it can start working on big physical-world projects.
22: Sorry for getting too optimistic there, we now return to our regular doomerism:
23: EA Forum: Movement building at top universities
24: Ask Hacker News: Are most of us developers lying about how much work we do? “I have been working as a software developer for almost two decades. I have received multiple promotions. I make decent money, 3x - 4x my area's median salary, so I live a comfortable life. I have never been fired or unemployed for more than a few months total over my entire career. Through most of that time I have averaged roughly 5 - 10 hours of actual work a week…Are most of us secretly lying about how much we are working? Have I just been incredibly lucky and every boss I have had is too incompetent to notice?”
25: Another study suggesting microdosing doesn’t really work.
26: Mormon and Utah readers, do you know what’s going on here? Please only give answers that explain why this has happened in the past 10-15 years specifically, not vague “rise of secularism” or whatever.
27: Vitalik: the bulldozer vs. vetocracy political axis. This is a really good crystallization of a line of thinking that’s been vaguely floating around the political/economic blogosphere recently.
28: Divia Eden has been cataloguing inappropriate uses of the phrase “no evidence” since April 2020.
29: Matt Levine wrote some good stuff (which I can’t link directly) arguing that although lots of crypto projects are Ponzi schemes, that might be good for certain applications. The usual problem with social media is that nobody wants to join new things: it’s pointless to be the fifth user of a new social media site that doesn’t have anyone else you want to talk to, and much easier to just stay on Facebook where your friends are. Ponzi schemes have the exact opposite property: you always want to be one of the first few users, and it’s useless getting in on the same one everyone else is. So social media sites that are also sort of Ponzi schemes might align incentives better than either of those things alone, and that’s what a lot of new crypto apps are. More at Dror Poleg: In Praise Of Ponzis.
30: Glad to see the “we should try to stop global warming for altruistic reasons, but it’s not going to destroy humanity or kill your family” perspective picking up more traction:
31: The Vitamin D / COVID debate continues, with a recent meta-analysis finding no effect but a Phase II trial of a patented formulation seeming to be successful. I have to admit I’ve kind of clocked out at this point and have no strong opinion on recent developments.
32: Nate Silver, Tyler Cowen, and Garrett Jones come out in favor of “the public health establishment deliberately delayed the COVID vaccine by a month so it wouldn’t make Trump look good before Election Day”. I haven't checked if it’s plausible that public health officials had political motives, but the fact is they made a deliberate decision to make the process take an extra month, and that some four-to-five-digit number of people died because of this decision. Even if we conclude they made this decision for less sinister reasons (like being over-cautious), it deserves to be scrutinized with the same rigor as other decisions that have killed this many people, like the decision to ignore intelligence warnings about 9-11.
33: Ten Minutes With Sam Altman. A weird cute vignette by a would-be entrepreneur about his Y Combinator interview. I always like unusual experiences related by good writers, although this is weirdly short and leaves me wanting more.
34: Medieval Asian incense clocks: