They are decreasingly prescribed by physicians despite their extreme efficacy. Is it just not worth it anymore for pharmaceutical companies to produce them?
Follow-up question: where can I get it? (Darknet recommendations?)
I've had asthma since I was a kid but it's been mostly mild. I'd have an attack maybe once or twice a year and it was harder to breathe but never to the point where I felt like I would actually panic or pass out. As a kid I had an inhaler but I never bothered with that as an adult.
But yesterday I had a pretty severe attack. Worse than any I'd had before, to the point where I had a definite feeling of panic that made it very hard to control my breathing. The panic would constantly make me try to inhale my next breath before I could finish exhaling my previous one. It lasted for hours but it did subside eventually.
So I went to the pharmacy to buy an asthma inhaler and they straight up refused to sell me one. Prescription only apparently. But the problem is I don't have a doctor or internet (I'm on wifi at a café atm) so no virtual visits either. So that leaves the only (official) option of sitting for eight to ten hours in a room full of sick people so I can talk to a doctor for five minutes and get the stupid piece of paper that permits me to buy an emergency inhaler in case of another attack. Of course if I catch something while I'm there that could well trigger an attack in itself.
Ah well. I just looked up asthma inhaler prices online and there's a wide range of prices but possibly I couldn't afford one anyway. (The markup on some of them must be in the range of 10,000 percent.) So I guess I'll just have to take my chances.
So I'm feeling a little bitter atm. I'm in BC, Canada for those who would like to make note of the data point wrt the relative state of health care in various countries.
Thanks for your concern. I was frustrated when I wrote that. Probably waits of eight to ten hours are not typical, though they definitely did happen at least as of about ten years ago. Not sure how things may have changed in the covid era as I have not set foot in a hospital in years.
"Of the total 139 participants who underwent randomization, 118 (84.9%) completed the 12-month follow-up visit. The mean weight loss from baseline at 12 months was −8.0 kg (95% confidence interval [CI], −9.6 to −6.4) in the time-restriction group and −6.3 kg (95% CI, −7.8 to −4.7) in the daily-calorie-restriction group. Changes in weight were not significantly different in the two groups at the 12-month assessment (net difference, −1.8 kg; 95% CI, −4.0 to 0.4; P=0.11). Results of analyses of waist circumferences, BMI, body fat, body lean mass, blood pressure, and metabolic risk factors were consistent with the results of the primary outcome. In addition, there were no substantial differences between the groups in the numbers of adverse events."
The advantages of fasting would be easier compliance, appetite suppression, autophagy and immune downregulation. There might be some anti-insulin resistance magic going on as well but that can be rolled into appetite suppression.
From the abstract, they eliminated the first two through study design and didn't measure outcomes of the latter two, outside of very distant proxies. Unfortunately, scihub doesn't have the article so I can't comment in greater depth.
EDIT: I just realized they tested 16h fasting; from a hormonal standpoint, that's regular calorie restriction with extra steps.
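Going back to the quoted abstract, a quick sanity check of the numbers (my own back-of-envelope sketch, not from the paper): the reported 95% CI for the net difference implies the reported P-value, and since the interval crosses zero, P > 0.05 follows.

```python
# Back-of-envelope check: recover the P-value from the reported CI.
from math import erfc, sqrt

diff = -1.8                           # net difference between groups, kg
ci_low, ci_high = -4.0, 0.4           # reported 95% CI, kg
se = (ci_high - ci_low) / (2 * 1.96)  # standard error implied by the CI width
z = diff / se                         # z statistic
p = erfc(abs(z) / sqrt(2))            # two-sided p-value under normality
print(f"SE = {se:.2f} kg, z = {z:.2f}, P = {p:.2f}")  # P = 0.11, matching the abstract
```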
It was published in 2009. What has happened since its writing that should affect how I read it? I'm particularly interested in (a) results McGilchrist relies on that have since failed to replicate and (b) well established (replicated) results that postdate the book and should meaningfully affect my reading.
With Google having become far less useful recently due to a severe bias toward mainstream news and SEO-optimized sites, I'm finding it especially urgent to learn new ways of finding useful and detailed information.
Apart from Google Scholar, what techniques have you discovered for finding useful, detailed information sources *that you didn't know about before*?
Depending on the topic, my first choice is usually "<google-query> site:reddit.com" because it's basically a bunch of topic-specific forums on one domain. That may or may not qualify as "useful", "detailed", or "didn't know about before", but it works for me 80% of the time.
Looking for a link from a relatively recent post about a study that flipped a coin for people who were truly on the fence about making a change in their life (breaking up, moving, changing jobs, etc.) and found that those who made the change were ultimately happier. Where was that?
Here's a list of all the books reviewed, without subheadings. I don't know if anyone else wants this, but I found it a useful tool for browsing.
A Canticle for Leibowitz by Walter M. Miller Jr
A Connecticut Yankee in King Arthur’s Court by Mark Twain
A Failure of Nerve: Leadership in the Age of the Quick Fix by Edwin H. Friedman
A History of the Ancient Near East
A Secular Age by Charles Taylor
A Supposedly Fun Thing I’ll Never Do Again by David Foster Wallace
A Swim in a Pond in the Rain by George Saunders
1587, A Year of No Significance: The Ming Dynasty in Decline by Ray Huang
Ageless: The New Science of Getting Older Without Getting Old, by Andrew Steele
Albion: In Twelve Books
An Education for Our Time by Josiah Bunting III
An Empirical Introduction to Youth by Joseph Bronski
Anthropic Bias by Nick Bostrom
At the Existentialist Café by Sarah Bakewell
Autumn in the Heavenly Kingdom by Stephen Platt
Bronze Age Mindset
Capital and Ideology by Thomas Piketty
Civilization and Its Discontents by Sigmund Freud
Come and Take It: The Gun Printer's Guide to Thinking Free by Cody Wilson
Consciousness and the Brain by Stanislas Dehaene
Cracks in the Ivory Tower: The Moral Mess of Higher Education
Deep Work by Cal Newport
Development as Freedom by Amartya Sen
Disciplined Minds: A Critical Look at Salaried Professionals and the Soul-Battering System That Shapes Their Lives by Jeff Schmidt
Economic Hierarchies by Gordon Tullock
Exhaustion: A History by Anna Schaffner
Facing the Dragon: Confronting Personal and Spiritual Grandiosity by Robert Moore
Fashion, Faith and Fantasy in the New Physics of the Universe by Roger Penrose
Frankenstein by Mary Shelley
From Paralysis to Fatigue: A History of Psychosomatic Illness in the Modern Era by Edward Shorter
From Third World to First: The Singapore Story: 1965-2000 by Lee Kuan Yew
Future Shock by Alvin Toffler
God Emperor of Dune by Frank Herbert
Golem XIV by Stanisław Lem
Haughey by Gary Murphy
History Has Begun by Bruno Macaes
How Solar Energy Became Cheap: A Model for Low-Carbon Innovation
I See Satan Fall Like Lightning by René Girard
In Search of Canadian Political Culture by Nelson Wiseman
Industrial Society and Its Future by Ted Kaczynski (also known as the Unabomber Manifesto)
Inventing Temperature: Measurement and Scientific Progress by Hasok Chang
Irreversible Damage: The Transgender Craze Seducing Our Daughters by Abigail Shrier
Island by Aldous Huxley
Jamberry by Bruce Degen
Japan at War: An Oral History by Haruko Taya Cook and Theodore Failor Cook
Kora in Hell: Improvisations by William Carlos Williams
Leisure: the Basis of Culture by Josef Pieper
Making Nature: The History of a Scientific Journal by Melinda Baldwin
Making Sense of Tantric Buddhism: History, Semiology, and Transgression in the Indian Traditions by Christian K. Wedemeyer
Memories of My Life by Francis Galton
Mind and Cosmos by Thomas Nagel
More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave by Ruth Schwartz Cowan
Moscow-Petushki by Venedikt Yerofeyev
Nobody Wants to Read Your Sh*t by Steven Pressfield
Now It Can Be Told: The Story of the Manhattan Project by General Leslie M. Groves
Peak: Secrets from the New Science of Expertise by Anders Ericsson
Pericles by Vincent Azoulay
Private Government by Elizabeth Anderson
Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy by Richard Hanania
Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker
Reason and Society in the Middle Ages by Alexander Murray
Robert E. Lee: A Life by Allen C. Guelzo
San Fransicko: Why Progressives Ruin Cities by Michael Shellenberger
Slaughterhouse-Five and Breakfast of Champions, by Kurt Vonnegut
Storm of Steel by Ernst Junger
Surface Detail by Iain M. Banks
Sweet Valley Confidential by Francine Pascal
Termination Shock by Neal Stephenson
Troubled Blood by J.K. Rowling
The Age of the Infovore by Tyler Cowen
The Anti-Politics Machine by James Ferguson
The Axis of Madness
The Beginning of Infinity by David Deutsch
The Book of All Hours series - “Vellum” and “Ink” - by Hal Duncan
The Book of Blam by Aleksander Tišma
The Book of Why by Judea Pearl and Dana Mackenzie
The Brothers Karamazov by Fyodor Dostoevsky
The Castrato by Martha Feldman
The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change
The Dark Forest by Liu Cixin
The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow
The Deficit Myth by Stephanie Kelton
The Diamond Age or A Young Lady’s Illustrated Primer by Neal Stephenson
The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg
The Ecotechnic Future by John Michael Greer
The Eighteenth Brumaire of Louis Napoleon by Karl Marx
The Evolution of Beauty: How Darwin's Forgotten Theory of Mate Choice Shapes the Animal World - And Us by Richard Prum
The Extended Mind by Annie Murphy Paul
The Fall of Robespierre: 24 Hours in Revolutionary Paris by Colin Jones
The Future of Fusion Energy by Jason Parisi and Justin Ball
The Goal / It’s Not Luck by Eliyahu Goldratt
The High Frontier: Human Colonies in Space by Gerard K. O’Neill
The Hundred-Year Marathon: China’s Secret Strategy to Replace America as the Global Superpower by Michael Pillsbury
The Internationalists by Oona Hathaway and Scott Shapiro
The Irony of American History by Reinhold Niebuhr
The Knowledge: How to Rebuild Our World from Scratch by Lewis Dartnell
The Man Who Quit Money by Mark Sundeen
The Mathematics of Poker by Bill Chen and Jerrod Ankenman
The Matter With Things by Iain McGilchrist
The Mirror and the Light by Hilary Mantel
The Motivation Hacker by Nick Winter
The Myth of Mental Illness by Thomas Szasz
The Narrow Road to the Deep North by Matsuo Basho
The New Science of Strong Materials by J. E. Gordon
The One World Schoolhouse by Salman Khan
The Origins of The Second World War by A.J.P. Taylor
The Outlier: The Unfinished Presidency of Jimmy Carter by Kai Bird
The Party: The Secret World of China's Communist Rulers by Richard McGregor
The Reckoning by David Halberstam
The Republic by Plato
The Righteous Mind by Jonathan Haidt
The Russian Revolution: A New History, by Sean McMeekin
The Society of the Spectacle by Guy Debord
The Tyranny of Metrics by Jerry Z. Muller
The Virus in the Age of Madness by Bernard-Henri Lévy
The Yom Kippur War: The Epic Encounter That Transformed the Middle East by Abraham Rabinovich
Three Years in Tibet by Ekai Kawaguchi
Trans: When Ideology Meets Reality by Helen Joyce
Trump: The Art of the Deal by Donald Trump and Tony Schwartz
Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters by Steven Koonin
Very Important People: Status and Beauty in the Global Party Circuit
Viral by Alina Chan and Matt Ridley
War in Human Civilization by Azar Gat
When Men Behave Badly by David Buss
Whiteshift: Populism, Immigration, and the Future of White Majorities
I read a lot of them, skimmed through more, and skipped a few. I'm now waiting for the final result because there is one question I want to ask one of the reviewers of one of the books, and it's not fair to start a discussion about reviews right now.
But honestly, I was delighted and surprised to see what some of the books reviewed were, and although there were several that didn't interest me at all, that's just my own tastes and I'm very glad there is such a spread of topics covered.
For all the occasional kerfuffles and spats, we definitely are each other's people 😁 📖
I’m planning on reading ‘em all; currently about a third of the way through. So far, about half have been quite good, about 10% have been pretty bad, and the rest just ok
Well, I read & ranked all of 'em. 3/4 were good-to-great, 1/4 were "meh" or worse. Only 11 reviews were actually bad. 13 definitely deserve to be finalists, and another 12 arguably deserve to be finalists. That leaves 74 (about 56%) which I enjoyed reading but don't quite make the final cut. An excellent competition so far!
I’ve got 16 left to go - will probably finish in a day or two. Tons of stuff that’s well worth reading. Less than 10% are actually bad. Many are fascinating glimpses of topics I know little about - I always enjoy reading stuff like that 🙂
Scott, you could consider adding to the book review rating form a free-text field for feedback to be shared with the book review author. Feedback is valuable for improvement! :)
I'm not sure because this was my first encounter with CODEPINK. I guess they accepted certain elements of Russian propaganda for the same reason the Gravel Institute did, because they like anti-US messages which Russia was eager to provide. The rest of CODEPINK's article wasn't obviously wrong to me (for the most part), perhaps only because I wasn't familiar with a lot of the issues it discussed. (Notably the same article talked about China-Taiwan tension, for which they predictably blamed the U.S.)
7-9 for 'really good', 5-7 for 'good but not the greatest' and 2-3 for 'this is poor/terrible/if I knew who you were, I'd be in a slapfight with you right now, reviewer'.
Is being "intellectually angry" a thing? I don't mean being angry about some culture war issue; it's more "someone is wrong on the internet" but with an added feeling of helplessness that I can't possibly correct them.
The issue is AI Alignment Risk and this website. I love this website. Scott is one of my favorite all-time bloggers. But taking AI Alignment Risk so seriously is obviously so fucking crazy that it makes me "intellectually angry".
I've never understood exactly what the Rationalist community is, never really tried that hard. I mostly like them, yet there has always seemed something slightly *off* about them. Off in the way that Mencius Moldbug is brilliant but also clearly... off.
I realized today what the offness is, for me. It's the difference between Platonic vs. Aristotelean thinking. The term Rationalism has bemused me because I have often thought over the years: "Isn't this just Enlightenment thinking and didn't that start in the 18th century?"
But now I see the difference. The Enlightenment focused on empiricism and was most influenced by Aristotle. Rationalists, in their embrace of Bayesian thinking, seemingly feel free to discard empiricism, and this has led them to believe some crazy, rudderless shit. Such as AI Alignment is a reasonable thing to spend tons of time and money and human intelligence on.
To be clear, our gracious and brilliant host is also a brilliant, trenchant empiricist--when he works with empirical data. Unfortunately, he also seems to update--way too much---on non-empirical issues while in the company of persuasive friends. AI Alignment being the main one.
I think we need more debate between those who believe AGI is an X risk vs those who don't. All the headline debate on the issue here now seems to be between those who believe AGI is a huge risk and those who believe it is only a very, very major risk.
It makes me intellectually angry, if that's a thing.
> Rationalists, in their embrace of Bayesian thinking, seemingly feel free to discard empiricism
As an aspiring rationalist, I hereby denounce any such idiocy you may have encountered.
> Such as AI Alignment is a reasonable thing to spend tons of time and money and human intelligence on.
It depends what you mean by "tons". I consider the near-term risk of AGI-induced global catastrophe to be pretty low... maybe 1% in the next 25 years, or something like that. But that doesn't mean it doesn't deserve billions of dollars of research funding to mitigate. X-risks, Global Catastrophic Risks and s-risks can still have an immense negative expected value even at low probabilities.
OTOH I do kinda think that some other GC risks may be underestimated, e.g. might we be close to reaching Earth's carrying capacity, especially if a big war occurs? I couldn't find any EA research on that. Weird.
I don't see how this is connected to empiricism; I love empiricism, but it doesn't mean that I must assign a probability of zero to any event that has never occurred in human history. To the contrary, history is chock-full of examples of unique events with far-reaching consequences. Doesn't this imply to an empiricist that ze should be concerned with such things? And anyway 0 is not a probability (https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities)
Edit: I would, however, add that I think Yudkowsky overstates the case for this particular x-risk, and I'm not sure if it's deliberate exaggeration or if he and I have genuinely different opinions. (Edit 2: actually I'm pretty sure he sees a higher risk than me, but whether it's a small or large difference is unclear. But I think the risk rises a lot *after* the first AGI is invented, i.e. that the first one isn't going to be the most dangerous.) But to throw it back at you, if you estimate that a certain catastrophe X has a 1% chance of happening, and if it happened, would cause damage somewhere between $1 trillion and <human extinction>, how much do you think it would be worth spending on prevention, assuming maybe you could cut the risk in half or so?
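For reference, here's the expected-value arithmetic as I'd run it, as a rough sketch; the 1%, the $1 trillion floor, and the halving are the numbers from my comment, everything else is illustrative:

```python
# Rough expected-value sketch for low-probability, high-damage risks.
p_catastrophe = 0.01          # my ~1%-in-25-years estimate
damage = 1e12                 # low end of the damage range: $1 trillion
risk_reduction = 0.5          # "cut the risk in half or so"

expected_loss = p_catastrophe * damage             # $10 billion
breakeven_budget = expected_loss * risk_reduction  # $5 billion

print(f"Expected loss: ${expected_loss:,.0f}")
print(f"Break-even prevention budget: ${breakeven_budget:,.0f}")
```

Even with the damage pegged at the very bottom of the range, halving the risk is worth about $5 billion of prevention, which is why I say billions of research funding are justified at low probabilities.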
I don’t really think you can be intellectually angry, but being angry at other people’s stupidity is a very human thing. Anger is not rational by design.
It’s a feature, not a bug.
On the topic of artificial intelligence catastrophe, here is a quote from an article at Politico:
O’Dowd said he hoped the campaign — starting with the national ads — would clarify for Americans that his goal is to make computers safe for humanity. He said even more than a nuclear attack, he fears that someone will one day click a computer mouse in Moscow or Pyongyang and “half the country is going to go down.”
“This country could be put back into the 1820s by someone coming in and getting control of our software,” he said.
This man is intent on destroying Tesla at any cost for reasons that are rather obscure to me. Except this quote kind of gave me a peek into it.
What is he afraid of? Is it the computers, or is it the people behind them?
I think this really ties in to some of the intense feelings about AI and its threat to us.
My personal opinion is that what we are really afraid of is ourselves, which, given our box score, is not unreasonable.
Some of us are casting ourselves as the good parents who inadvertently raised a psychopath and couldn’t stop wondering how their child got that way.
I do not understand what would drive an artificial intelligence to want to destroy us unless we somehow convinced it that it was a really, really good idea. You would have to convince me that acting from a deep place of power and desire was somehow a function of intelligence.
The other thing I always wonder is what kind of a body are we going to put this intelligence into to live? Because that will make a big difference in its disposition.
What if we came up with an AI that could do all the wonderful things we imagine a superintelligence could do? Take us to other galaxies, find ways of making infinite amounts of food, figure out how we can run all our gadgets with no wires or nothing. It could do all this cool stuff, but instead of doing those things from a place of wanting to take over the universe, it was doing it just for fun.
That way they would keep us around as pets and probably get a kick out of us.
I've always felt this way about the obsession with AI risk. My feeling connects to a few things I can identify:
- eschatological religion: the obsession with some impending end of the world that is always nigh, and how the emotional posture of those in the church is a kind of vast smugness that says "listen to us or meet your deserved end". We all have a natural aversion to this kind of thing: we can sense the smugness, the immaturity, the sanctimony, the shape of the ego that would let itself fully subscribe to an idea and turn around and expect others to see that they're right and come to them, and just how much it feels like a posture that serves the ego rather than a legitimate spiritual belief.
- climate-change anxiety: the overcharged opposition between those who "believe science" and deniers. For one, climate-change doomsaying often comes off feeling eschatological. And the deniers end up denying far too much, and regarding AI risk it feels like being skeptical would get you grouped with them, which feels unfair because of the very different standards of proof, so we resent this. Then climate-change doomsaying takes the form of continual atonement: microscopic acts to address guilt (plastic straws) rather than any true sacrifice (a career in battery technology). It is far too focused on what WE can do rather than on the actual political problems of creating international agreements and enforcing them, and it is all out of proportion (the realistic scenarios expose us to a level of "life affected by environmental forces" that would still have been a luxury for people at any time before a century ago, nothing like a true end of the world)
- and, I did a couple of years of physics grad school before becoming VERY disenchanted with the cutting edge of physics. I recall that the core emotional arc of this, for me, was not a genuine interest in the subject (although I did have one), but a fear of engaging with the messy real world. Instead, I preferred the orderly natural world; it has the feeling of studying the deep magic of reality. At my core I had a fantasy of retreating into obscurity for decades to study arcane magic, eventually to re-emerge with some world-changing accomplishment (like relativity) and receive my reward in adulation and Nobel prizes. Pretty selfish. Very suspicious now that many people get into arcane fields for this reason, and suspicious of any enterprise that never gets close enough to reality. (See also: tech projects that take too long to get in front of users)
For all these reasons I cannot see AI risk talk without feeling like the interest is REALLY an emotional one based in the proponent's need to be on the right side of something that feels huge and important, more than rational calculation (while being heavily dressed up in rational aesthetics to support that core emotional need)
Edit: but, I'll grant, my opposition to AI risk is JUST as emotional: a wariness that the AI-risk-doomsayer has given in to their ego's desire to be important. Climate deniers must start from this too. A properly rational calculation is called for, I'll grant, to get beyond this. But if you rationally calculate that AI risk is a big deal, and you want to doomsay, you'd be well-served to steer far clear of any language that keys into this particular emotional script, and to be wary of giving into it yourself.
I think the only people who think there is literally zero risk from AGI are those who believe AGI is impossible or at least that it will take hundreds to thousands of years to achieve (so zero risk right now).
That leaves open exactly how concerned we should be right now of course, but presumably if we're not entirely sure that any possible AGI we might create will be benign the precautionary principle would apply.
I mean when you limit it to saying 0 risk, of course you won't get any takers here. I'm willing to say it's an extremely low risk, even though I think human level AI is possible.
One of the (many) points of disagreement I see is people conflating reaching human-level AGI with "takeoff" scenarios, which rely on the agent having god-like intelligence and being extraordinarily robust. Neither of those is something that would magically appear upon reaching the human-level threshold.
I actually have a pretty high credence that we'll see human level AI within a few decades, but this doesn't translate to huge X-risk in my view of the world.
Hmm... I don't see takeoff scenarios as all that implausible. I don't think takeoff necessarily depends on the agent having god-like intelligence. Like if you create one human-level AI then you effectively have an arbitrary number of them up to the limit of your computing resources. You may be able to run the AI at high speed. Or train the human-level AI to become as good as the top human AI researchers.
It is unlikely to take off in quite the way some may imagine as there will certainly be bottlenecks. Physical research and engineering for example may not be sped up much if at all at this stage.
"Human level" is also doing a fair bit of work. It's not clear to me that human level intelligence is a natural stopping point for AI. If we look at narrow AIs they are often either clearly subhuman or superhuman within their narrow domains.
Yeah basically I think the bottlenecks are much more severe than people imagine. I am sympathetic to the "if we had 10,000 Alec Radfords AI progress would go much faster" point of view, but I think it's missing the degree to which even the most successful AI researchers rely on empirical results. You have to wait long amounts of wall-clock time for experiments to run, even if you're one of the best AI engineers.
I agree that human level isn't some magic point that it's not possible to surpass. I guess I think the time it takes us to get from human level to 120% human level will not be significantly shorter than it takes to get from 80% to 100%.
"I think the time it takes us to get from human level to 120% human level will not be significantly shorter than it takes to get from 80% to 100%"
How long did it take to go from Go playing AI that could barely compete in the children's leagues to one that soundly trounced the best human grandmasters?
The human mind doesn't seem to be anywhere near the peak of potential cognition.
Okay, for what it's worth, the time it took to go from 80% of Lee Sedol's level (the matches vs Fan Hui, let's say) to beating Lee Sedol, and then the time to improve further beyond that, were pretty similar. But that doesn't support my overall point, since they were both short, yes.
>I've never understood exactly what the Rationalist community is, never really tried that hard. I mostly like them, yet there has always seemed something slightly off about them. Off in the way that Mencius Moldbug is brilliant but also clearly... off.
>I realized today what the offness is, for me. It's the difference between Platonic vs. Aristotelean thinking. The term Rationalism has bemused me because I have often thought over the years: "Isn't this just Enlightenment thinking and didn't that start in the 18th century?"
>But now I see the difference. The Enlightenment focused on empiricism and was most influenced by Aristotle. Rationalists, in their embrace of Bayesian thinking, seemingly feel free to discard empiricism, and this has led them to believe some crazy, rudderless shit. Such as AI Alignment is a reasonable thing to spend tons of time and money and human intelligence on.
Deciding the issue with Rationalism is a lack of empiricism despite not really trying to understand the community is not a new critique - I'm having flashbacks to Why I am Not Rene Descartes. You're not making *quite* the same arguments, but it certainly rhymes.
On the object level, there's a clear tension among the ideas that empiricism is a virtuous path to truth, that AI Alignment is a non-empirical issue, and that other people's ideas can be "obviously so fucking crazy" despite that lack of empirical grounding. This smells like an issue of overconfidence, but I'm not sure the problem is where you think it is.
Giving lip service to empiricism is as common as crabgrass. What huckster politician *doesn't* claim his nostrums are rooted in objective data? You need a lot more than a stated allegiance to measurement to be credited as a genuine empiricist.
Descartes and Leibniz really do claim they're not using empirical data, but only the objective good reason that God gave them! (And fortunately, God was good enough to pre-establish a harmony between what goes on inside and what goes on outside.)
Meanwhile, the founding document of rationalism, Yudkowsky's Twelve Virtues of Rationality, states:
The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots?…Do not ask which beliefs to profess, but which experiences to anticipate.
It matters how close the alignment group thinks machine learning is to producing AGI. Because I don't think it ever will on the current trajectory. You can't fake cognition with decision trees and mathematical telemetry via language models. And consciousness is poorly described as Markovian, imo.
In the coming years, I predict the rise of neural-symbolic learning, incorporating some old rule-based methods into the current sub-symbolic approach.
I also get kind of annoyed with the whole AI alignment discussion. It bothers me for a couple of reasons. Firstly, I've not yet been disabused of the impression that the rat community by and large believes similar things about AI safety because that's one of their cultural beliefs rather than something rationally arrived at with evidence, as much as they protest this. I do see them produce evidence for their claims, but it appears more like Christians producing evidence of Christianity rather than someone producing evidence of something that's actually true. I don't want to sound overly harsh here, there are a lot of interesting arguments being made and all that, but I do think the community fell prey to collectivism much more than it wanted to in a lot of ways.
The second thing that bothers me is my own inability to articulate the reasons why I think what I'd call the Rationalist AI Alignment Risk thesis is wrong. The best I can try to give is the short version, which sounds something like "Optimization does not work that way." but I've never been able to articulate a deep explanation for this, even though I know I have one. And it bothers me that I have such a hard time with it.
I think the reason the rationalist community has so many people believing the Rationalist AI Alignment Risk thesis is simply because it has so many people who have been exposed to the thesis, and some fraction of them have become convinced. I don't think this is an argument for or against.
I do see this exact discussion semi-regularly, so I'm not even convinced that even a majority of the community is worried about AI risk. There's some people that are seriously worried, some people in the "maybe 1% likely but think of the expected value" camp, and some straightforward skeptics. There's probably a poll somewhere of how many people are in each group.
But since the general population has ended up in the third group due to the argument "Wait, isn't the Terminator fictional?" then of course the rationalist community seems unusually credulous here.
For what it's worth, I am convinced AI Alignment is a real problem. But I think you've sort of pinpointed the reason why its most vocal proponents are going about it the wrong way.
People like Yudkowsky seem to believe superintelligent AI will just arise spontaneously and bootstrap itself from nothing to world domination in a matter of hours, and one thing it demonstrates is a complete disregard for the sheer amount of empirical knowledge achieving world domination would require.
Of course, ultimately, humans will be slowly training AI with exactly this kind of knowledge, in pursuit of personal convenience or some marginal advantages in zero-sum capitalist competition, so at some point we'll reach the point where the risk becomes real. But current AI Risk research disregards this area, presumably because it just doesn't think a sufficiently intelligent actor could possibly have trouble overcoming such a problem. This is doubly harmful, as it both sets them up to become "boys who cry wolf" who see AI dangers where they don't yet exist, and distracts them from pursuing and advocating some really simple risk-mitigation strategies that would probably be, if not sufficient period, then at least sufficient for a while after a demonstrably superhuman AGI actually comes into existence.
Can you elaborate on why you don't expect early AGI to have large amounts of empirical knowledge?
The most general and impressive AIs we currently have are trained by a process which you could reasonably describe as "distilling half the internet into a probability distribution". These language models "know" more things than any human, by a long way.
Is it that you don't expect early AGI to be a successor of these language models, or that you set the "amount of empirical knowledge necessary to take over the world" bar higher than the amount of empirical knowledge currently stored on the internet?
Empirical knowledge is not fungible; being a world-class expert in auto repair is of almost no value if you're tasked with removing an appendix or winning an MMA competition. You need empirical knowledge in the specific field.
And there's a distinct shortage of empirical knowledge in the field of World Conquest. Most of what we do have in that area is predicated on e.g. having command of an army at the outset. Note that armies are generally owned by people who are skillfully paranoid about wrongdoers subverting those armies, and there's also a shortage of empirical knowledge on how to subvert armies.
Also, most of the empirical knowledge that exists, is *tacit* knowledge. It's not written down or digitized *anywhere*, it can't be flawlessly derived a priori, you've either got to get a meatsack to teach you, or learn it the hard way. And the meatsack will probably get suspicious after too many long conversations on the fine details of world domination.
The first AI that wants to Take Over The World as a prelude to paperclip-maximization, is going to have to do a whole lot of trial and error in figuring out how to actually do that. It's going to make mistakes. And there are enough opportunities to make mistakes that it's likely going to stumble on to a fatal one while it's still small enough to be killed by an angry sysadmin with a crowbar.
> there's a distinct shortage of empirical knowledge in the field of World Conquest.
I don't agree. I watched a show about dictators' playbooks a while back, and another show about genocides, so it's certainly a field that has been studied, and genocides & coups have kept happening despite the often severe epistemological failings of the people who cause them. Indeed, many dictators succeed on their first or second coup attempt! So is it really that hard? I don't think it's that hard, but I think the necessary combination of psychopathy and ambition is rare.
OTOH, dictators also historically seemed to require luck, to be in the right place at the right time. A superhuman AGI, however, could use empirical data to "make its own luck", e.g. it could observe the background characteristics of everyone who has ever done a coup, then work out how to create the necessary conditions.
An AGI will, however, have a major disadvantage by not having a human body, which means that all strategies that depend on being human won't work. But I don't know how to rule out the possibility that there is a realistic way to get around this limitation.
Edit: come to think of it, a key characteristic of dictators and "genocide-ists" is their ability to inspire and control others, and to delegate responsibilities. Anything you can delegate to others, you don't have to do yourself. And in principle all this inspiration and delegation could be done online. A superhuman AGI on the internet (which, of course, could be psychopathic simply by lacking a reliable system to prevent this) could first convince people it's human, perhaps not just via text messages but also via a photorealistic AI-generated human persona. It could, in principle, find a way to control a supply of money, with or without having direct control over a bank account... all via delegating certain meatspace activities to humans. So I don't see why it couldn't, in principle, follow a dictator's playbook once it gathers control/influence over enough resources.
Agree mostly. I'll take a stab at part of my 'optimization doesn't work like that' explanation.
One thing that I think people who believe that there will be rapid takeoff are missing is that they seem to have this sense that all the AI has to do is overcome the humans and then it's off to the races to conquer the universe. I think there are some pretty fundamental mathematical and physical reasons that things don't work like that. Parts of it, things that point to it, go by lots of names: P vs NP, undecidability, Wolfram's Principle of Computational Equivalence, Goedel's Incompleteness Theorem, chaos theory, the 2nd law of thermodynamics....
All of these things circle around one central truth, a truth that says something like "actually getting things right, or making something new that works, is a fundamentally hard problem" and I think that no matter how powerful you are cognitively, it's always hard problems all the way up. And you can't solve tomorrow's problems with the knowledge of today, you have to do it the hard way.
True intelligence, I think, is more or less the ability to 'do it the hard way'. And 'doing it the hard way', understanding how to make something new, also requires general understanding of the kind that lets you grasp what someone 'means' by what they say, rather than just what they say.
What we call 'alignment' (at least in terms of not being a paperclip maximizer) and what we call 'general intelligence' are two sides of the same coin.
I've been interpreting the rating as "how happy would I be to see this as an ACX guest post and have attempted reading it", which both means rating ragequits poorly and docking points for things like "just didn't find the subject material that interesting".
I'm reading a bunch of the reviews, and it strikes me how many people have "the makings" to be pretty good writers were they to spend a bit more time doing it.
There's a period of time you go through when you start writing regularly with the intent of publishing where you get rapidly better, and I keep seeing things where it's like "this guy is already pretty good, I wish he'd write ten articles in three months and be great".
I'm about as sure as you can be about a complex human thing. I'm sure there's exceptions, but when you start writing stuff that people are going to see, it's going to (on average) spur people to a closer level of scrutiny of their own stuff. I've seen multiple people with good educations going through the process, and the pressure seems to help them refine in ways school-writing didn't.
Best example of this I know of right now is Parrhesia's blog (https://parrhesia.substack.com/) which has just been getting better and better every post. But you hear similar things in a lot of fields - some singers will tell you that without an expectation of performance most people top out, etc.
Another thing is that most people who are starting to write "in front of people" for the first time outside academia are trying to figure out what they want to be in terms of voice and what kind of things they want to talk about. They are figuring out stuff like "what's my focus". And these are all things you can improve on, or at least constraints you can optimize for/within.
Anyway, just one man's opinion but I think it's broadly true.
Ah, thanks! That touches upon a number of issues I had been thinking about lately.
My main doubt was, or partly still is, that you don't get direct feedback - or if so, probably more on specific bits of content than on issues of style and such. But practice and reflection can go some way, I guess.
So some of the best feedback you get, or at least the most believable, is when people begin to disassociate you from "being a real person". So occasionally I'll be on a forum or something where they are discussing an article and someone will say they liked X, or hated Y, and it's nice because you know to them you aren't a human being that exists and they are really talking about the writing.
But really the best feedback you are going to get is basically just getting closer and closer to the kind of stuff you want to write. I once read a thing about dealing with hecklers where a stand-up comedian was saying something like "Listen, you believe you are the funny one in the room, that you are funny enough to do this. That should lead you to believe you can take down some drunk rando with your words."
I think that's broadly true. There are some ways that feedback helps, and there's certainly some I listen to and take, but for the most part I think you are looking for forces that make you look at your own work closer, that make you think really hard about how to create the best words you can for people. It's making you give yourself better feedback, basically.
Thanks a lot for the kind answer. I see your point. It also made me wonder whether I love writing so much that I want to engage in those thought processes so deeply. At the moment, that feels like work - though I know from other tasks that it can come quite automatically and actually be enjoyable.
I get your point about other people speaking about you somewhere else. At the same time, uhm, I don't know how long it would take until I would read people talking about an article of mine somewhere else. I think I'd be happy if they found my article in the first place.
I had been thinking about writing lately. But then, as you wrote, I'm very aware that my goals change and are not fully clear, and that I haven't really figured out yet what 'kind of stuff' I want to write. Or rather, whether the mixture of texts that come to my mind would make any sense to others.
Thanks for the opportunity to reflect on this some more!
Before the internet, publishers and readers were removing the not-so-great writing from circulation. Publishers by not publishing the text (perhaps unless some changes were made), readers by not buying it.
Also, authors couldn't hyperlink their previous articles, which made their individual pieces more self-contained. Which means that if you kept their first and third works, but removed the second one, the result still made sense.
Haha, I usually write long comments, and this time I also first did the same, and then I thought "uhm, why not delete the obvious parts?" and tried that approach. Oops.
The idea is that in the pre-internet world, the guy who writes a lot and is already pretty good... would still keep writing a lot, but only 1% of that would be published and remembered... and if you afterwards read only that 1%, you would conclude that he was great. So your wish would kinda be granted, but in a very unintuitive way.
This could still happen today, if e.g. some obsessed stalker collected all writings, and then organized some rating by audience, and selected the best 1% of them. Unless perhaps the internet form of the texts (the fact that author can expect that his audience is familiar with his previous posts, or at least can reference those posts) would make that 1% selection difficult to read, because it would keep referring to things outside the selection.
I've seen Jim Kennedy around thorium reactor groups a lot, and now he's giving his origin story:
> So I met with the Pentagon guy and I laid out this plan, and I said "well here's what [China's] doing and here's how we can counter it, and if we we counter it like this, they won't be able to offset our actions, and we'll be successful at building, reestablishing a value chain," and the guy says to me "wow this is this is really interesting, you put a lot of thought into this... this is really good," and I said "yeah yeah, thanks, you know, I appreciate it, I'm sure that you've got, you're looking at other things, right?"
> He looks at me, like, "what do you mean?" I said "well, I mean, I just kind of threw this together and, you know, I'm just a private sector guy, and this is the Pentagon and I'm sure you guys have been looking at this and you have like, a real plan, right?"
> He goes "I don't, I don't understand what you're talking about." I said, "this is a national security issue, so I am under the assumption that the Pentagon is on top of this, and there's lots of other good plans, and I'm not the only private sector guy with the solution." And he goes he just he's looked at me like (shrugs) "well no, that's, that's it." And I said "what do you mean?"
> I said, "this is national security. You know, you guys should be developing a plan, it's not my responsibility." I said "what if i didn't show up?" and I swear to god, this is what he says, he goes "well you're here aren't you?"
Evidently this is a guy who became interested in thorium molten salt reactors not to solve global warming and air pollution like many of us, but because the U.S. is letting China control the global supply of rare earth materials that are critical for manufacturing various high-tech goods (notably motors and magnets). Mostly this is a result of laws around thorium. Heavy rare earths are always found together with thorium geologically, and U.S. law says that a company cannot dig up rare earths, extract them, and bury the rest. Why? Because the residual dirt is considered "nuclear waste", or in technical terms "source material".
This is the main reason China controls 90% of the rare earth market. And this is, of course, a national security issue since it means China has huge leverage over *everyone* else in case there is any conflict between China and anyone else. We could simply change the law, of course, but I guess we won't because politics. So Kennedy's solution involves some kind of thorium trust. Rare earth miners will extract the thorium and deliver it to a group in charge of storing it, and this group will in turn sell it to people making reactors that use thorium, such as Thorcon. But, grain of salt, I have a sense that I don't quite understand what he's saying about the problem or solution.
Edit: I'm disavowing Kennedy's comments against NATO, though. Because https://twitter.com/jessicabasic2/status/1513836355440111621, plus he asserts "the Russians" are "calculating rational people" and it's become very clear that Putin is calculating, but not so much rational. But all that other political stuff isn't what he usually talks about and isn't why I listen to him.
I didn't know about this law: I had heard it was hard to mine rare earths because of "environmental regulations" but I assumed that was your standard "Don't let your open pit mine leach toxic metals into the water supply" kind of regulation. I had heard of a rare earth metal mining and processing project starting up in the Alaska panhandle recently, I wonder what they're doing with the thorium.
As an aside, it bothers me when people say Putin is not rational. I think he's evil, but I don't think he's irrational. It strikes me that some people have bought too much into Yudkowsky's idea that rationality=winning (or at least the popular misunderstanding of it: it seems to me all he was saying is that if the "rational" route consistently leads you to losing, then it's not really rational). The idea seems to be that if someone does something that turns out badly, they're irrational. But that's not irrationality: believing that if A equals C and C equals B then A does not necessarily equal B is irrational. There are plenty of rational people who act the fool out there, and I can't even say that Putin was acting that foolishly. Almost everyone predicted an easy win over Ukraine, and as Putin predicted it didn't start a war with NATO. Sometimes you make a gamble and it turns out the odds weren't what you thought they were; that doesn't mean you're irrational. Was Nate Silver irrational for giving Trump a 28% chance of winning in 2016? No, he was just wrong (or not even that: after all, 28% chances happen 28% of the time).
Putin going to war with Ukraine was a mistake, and evil, but I don't see how it was irrational unless you equate rationality with never making mistakes and not being evil.
Can you tell me of any specific military analyst who considered the obvious factors (UKR military capabilities, RU military capabilities, UKR public sentiment, geographic/physical strategic aspects) and concluded that Putin was likely to be able to complete his objectives (stable overthrow of Kyiv and at least a couple of other major cities) with 200,000 troops?
I expect such analysts exist, but I don't remember seeing anyone in particular. However, I don't think that such analysts expected the lousy strategy and tactics that Russia actually used. This video explains: https://www.youtube.com/watch?t=1474&v=zXEvbVoDiU0
So, what I mean is that Putin was irrational in the usual human sense of confirmation bias & positivity bias. He let himself believe that his operation would succeed within a few days, as his yes-men told him (because Putin probably demanded it from them), and he refused to believe in the reality of Ukrainians' feelings toward Russia, nor did he prepare for the possibility that Ukraine was prepared, which of course it was. Vlad Vexler further asserts that Putin really believed Ukrainians would greet Russians as liberators, which I find credible. And to some extent, it seems like his delusion has continued for many weeks, as he's still using the "special military operation" moniker instead of declaring "war" which, apparently, is legally required to mobilize reservists/conscripts. Thus, his forces will probably remain undersized, and certainly underpowered, for months to come.
Major sources contributing to my understanding include the following, published before the war.
- Belated edit: I also saw a presentation somewhere...can't remember where... saying that even if Putin took Kyiv and other major cities, Russia would pay a terrible price that would ultimately weaken it.
The thing he would realize, if he were rational, is that (1) he can choose his own goals rather than following the same path he's used in the past, and (2) he was wrong about a bunch of things and ought to have re-evaluated, either by aborting the war before February, or scaling back now (because his encirclement plan is risky and likely to fail), or at least delaying the coming offensive (because it's a complex op that needs a lot of planning).
Previously he's had remarkable success boosting his popularity with military adventures and killing people (which reminds me, have a look at the Apartment Bombings summary here: https://twitter.com/DPiepgrass/status/1507210690427174916), and he's made expanding the Russian empire militarily his goal, but he could just as easily have chosen "expanding Russian influence" as his goal. And trade relations / foreign policy is a better way to do that. China's Belt and Road initiative is the obvious example, though I think that initiative is undermined by Xi making himself dictator for life, while making China into the world's biggest military power, killing off Hong Kong's freedom, and threatening Taiwan, the U.S. and Japan - taken together, this is terrifying, and if nearby nations have any sense, they would compete with China in manufacturing so as not to be so dependent on it. Putin could have improved Russia's domestic manufacturing industries and implemented reforms to reduce kleptocracy (because it's not an efficient system). Putin could have even chosen to join NATO (though that would be difficult now because of his previous annexations).
While his war is still likely to acquire some territory for Russia (probably a temporary win), it will dramatically reduce Russian influence. So I would say that if he wants a "great Russian empire", his recent and current actions are directly opposed to that goal. Edit: also note that Putin has given Xi/China a lot more power over Russia, as Russia now depends heavily on China, but China doesn't depend much on Russia.
Now, there's another strategy Putin could have taken: instead of trying to improve Russia's position, drag down the West. Have a look at what Stoic Finance said just before the war on Feb 18. He assumed that Putin was rational, so his interpretation of the military buildup was that Putin wanted to "try to destabilize the west" as he had done in the past: https://www.youtube.com/watch?v=d4SENp1IT6o
And indeed, the U.S. had been warning loudly that Russia planned to invade, so NOT invading would be a win for Russia because it would make the U.S. look like the boy who cried wolf.
It still seems to me that you are equating "rational" with "wise" and "irrational" with "foolish." There are many rational people who are fools, and Putin is one of them. But I think we're just going to disagree on what it means to be irrational.
Putin is intelligent, calculating and foolish. What role is "rationality" playing here? What would Putin have to do differently for you to call him "irrational and calculating"?
Let's refine all the thorium and turn it into commemorative coins! They're collectable, and come with their own lead-lined carrying case. Very stylish.
Are they good enough to be worth deploying against a great power opponent? A tank that's outclassed badly enough by enemy equipment is just an expensive, cumbersome way of getting your tank crews killed.
I don't know enough about T-55s and T-62s to evaluate with confidence if they fall into this category, but a few things incline me towards suspecting this is the case. For one thing, those tank designs are over 60 years old and 20-40 years older than modern designs currently fielded by great powers, and they appear to be 1-2 major generations of capability behind modern main battle tanks. For another, both Russia and Ukraine are former operators of T-55s and T-62s who have many years since scrapped or sold them off to third-world countries: if Ukraine thought those tanks were still any good, I would have expected them to have been kept around for reserve units, or at least mothballed for restoration to service in the event of critical need. Even Russia, which I thought was a notorious pack-rat of old military equipment, seems to have scrapped their mothballed T-62s.
The book-review form has a space for entering your email address "to prevent spam and accidental double votes"; however, entering something in that space has not been made mandatory, which I think means that several of my votes have become accidental _non_-votes because I am a moron and often forget to do things.
Scott, if you're reading this and it's easy to do (which I _think_ it is): if you are going to ignore submissions in which that space is left blank, could you please make it impossible to submit the form with that space left blank? If you're concerned that making the field mandatory would somehow get spammers' robots to fill things in there, you could have another mandatory field labelled "please enter four plus three" or something.
I read this comment before starting to read the reviews, promptly forgot it, and did the same thing as gjm. I second the plea for making the field mandatory or something.
I suppose that it should be safe just to go through all the reviews I read, remember what I thought of them, and resubmit. But somehow this is the sort of thing that is extremely not-motivating to my brain so I can't guarantee that I will.
I recommend the tables in the sections "Millon's description" and "versus normal personality".
I'm not familiar with BPD specifically, but in my experience the descriptions of personality disorders will ring true if you have them, and having a clear understanding goes a long way towards relief.
The recommended, evidence-based treatment is a semi-structured approach called DBT, Dialectical Behavior Therapy. There are lots of books about it and therapists who offer the approach via individual or group treatment. My impression is that the treatment is not particularly helpful for someone who has the full-blown syndrome (severe emotional dysregulation, chaos in their relationships, habit of offloading distress onto others via manipulative suicide gestures and threats). However, most people seem to think otherwise.
If what you are seeing in the person is far more subtle than what I'm describing, I don't think it's really that helpful to think of them as "having BPD," because BPD isn't an illness in the same sense as pneumonia is -- there's not a lot of mileage to be gained from saying "Aha, it's that!" For people who have more low-key versions of the BPD syndrome, a combo of meds and psychotherapy is the best approach.
I had a friend who had BPD and it was very difficult. I read the book Stop Walking on Egg Shells and I liked it but I don't have any specialized knowledge.
Alas, my shameful pride. Reading book reviews other than mine, I find myself envious of the good and scornful of the bad. I cannot exorcise the implicit comparison and enjoy them in their own rights! I am tainted and untrustworthy as a reader and evaluator, and as such, cannot bring myself to submit ratings of any of the reviews I have read.
"They are back! Neil deGrasse Tyson is once again spouting total crap about the history of mathematics and has managed to stir the HISTSCI_HULK back into butt kicking action. The offending object that provoked the HISTSCI_HULK’s ire is a Star Talk video on YouTube entitled Neil deGrasse Tyson Explains Zero. The HISTSCI_HULK thinks that the title should read Neil deGrasse Tyson is a Zero!"
Always fun to read someone letting it rip that way!
The best & funniest negative review I've ever read was an essay by Alexander Pope called, I think, "Peri Bathos," panning and parodying all the hack poets of his day. I recommend it highly.
I used to be sort of like that too. Much less burdened by it now. Lots of us who are smart get sort of fixated on the fact of our smartness, and have a terrible time letting go of the hope of being a certified genius. You have to sort of mourn that loss. It helps to realize that being a certified genius actually doesn't make people feel happier or more solid. I knew someone who was chess champion for his state when he was in middle school, and then right after graduating from college won first the US correspondence chess championship then the world championship. He was delighted when he won, but not any happier than I would be if I lucked into a great deal on the car of my dreams. The real locus of happiness is in the doing of something you're deeply interested in and good at.
I mean, couldn't you just set your book review at some value (7 perhaps? You probably think it's reasonably good since you submitted it) and then rate reviews based on how net envious/scornful you feel about them?
Seems like that system would have a decent signal/noise ratio.
Only technically true, as the book did not contain any information to begin with. All language is insufficient to truly communicate, it's merely a wretched cipher used by small minds. True communication is almost an impossibility for us.
If you want to communicate an idea to someone, do it through art. Novels are the most inferior form of this (the actual text does not contain communication, merely its psychic structure), followed in ascending order of "something approaching actual communication" by cooking, architecture (although the art in that has been destroyed by Mara in most of the world), dance, painting and sculpture, interactive and immersive art, and finally music, the highest and most genuine form.
Well, no, obviously taken at face value that is false, because "information" includes things like "how many times was the letter E used in each sentence, on average?" So "information" strictly defined clearly vanishes when you condense, in the same sense as information vanishes when you use the JPEG algorithm to compress an image.
What you mean to say, I think, is that "useful" or "valuable" information doesn't vanish, which is also a truism, since as long as the definition of "useful" is set appropriately, we can justify any condensation, great or small.
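To make the strict sense concrete, here's a minimal sketch; the texts and the statistic are made up for illustration:

```python
# A statistic that a condensed version of a text can no longer answer.
# The example texts are invented purely for illustration.

def avg_e_per_sentence(text: str) -> float:
    """Average number of 'e'/'E' characters per sentence."""
    sentences = [s for s in text.split(".") if s.strip()]
    return sum(s.lower().count("e") for s in sentences) / len(sentences)

original = ("The elephant entered the enclosure. Everyone watched. "
            "Nobody expected the keeper to flee.")
summary = "An elephant got loose."

print(avg_e_per_sentence(original))  # recoverable from the full text
print(avg_e_per_sentence(summary))   # a different value; the original
                                     # statistic can't be reconstructed
```

Any condensation that can't reproduce such statistics has, strictly speaking, destroyed information; whether anyone cares is the "useful" question.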
If what you're saying is "most popular nonfiction is a form of intellectual masturbation, where people wallow in clever expositions of themes and ideas with which they already agree, so they can nod along enthusiastically" ("OMG! Look how forcefully he puts it on page 163! I'll have to Tweet that out right away...") -- yes, well, this is the human species. Probably 98% of our communication bandwidth is taken up by group identity signaling.
It depends! If you're talking about coffee-table, pop-culture science/history/etc. books, then yes.
If you mean something with real information, then I don't think so. Sometimes you have to lay the foundation for what you are going to discuss before presenting useful ideas, otherwise it will be "as you all know" and no, we don't all know.
Do you think you could express 300 pages of Scott Alexander's most popular posts in 3 pages? Or try expressing any one of the six "sub-books" of Rationality: A-Z in 3 pages.
The 3rd edition is not a textbook. It's 'the bible' of analog electronics (at the time of printing). https://artofelectronics.net/ The first and second editions are more like textbooks. There are cosmology textbooks, and then there is "The Principles of Physical Cosmology" by Peebles, the classic reference work... I guess you could call that a textbook. IDK. What makes something a textbook?
The narrative can often make the subject more engrossing and easier to remember. By creating an emotional response in the reader, the author heightens the understanding of not just the facts but also the context and meaning of the subject.
I agree with the general premise that most popular non-fiction could be shortened considerably. However, short books can't be published - no one will pay $15.99 for a 20-page paperback. Consider using a service like Blinkist, which does the summaries for you.
This is true for many nonfiction books, but certainly not all. History books are a classic example. Try condensing the entirety of "The Making of the Atomic Bomb" or "Imperial China 900-1800" into 3 pages without loss of information. Now, you might find the information beyond 3 pages boring, but that's not the same as no extra information.
Most academic nonfiction falls into 2 categories. Either it could have been a single paper instead of a book or it feels like every chapter is a separate paper stapled together under a connecting theme. The only book I've written definitely fell into the latter category. You couldn't summarize it in three pages, but that's not always a good thing.
"it feels like every chapter is a separate paper stapled together under a connecting theme"
Well, sometimes they are literally that. It's not uncommon to find "an earlier version of Chapter 5 was published in the Journal of Such-and-such" in the copyright page.
Joke answer: If true, this would violate the Shannon Source Coding Theorem; the Shannon Source Coding Theorem is mathematically provable; BWOC, this is false.
The answer is, of course, that Cajou's use of "information" is technically incorrect. I am speculating, but Cajou probably meant something like "feelings of insight, changes of perspective, or facts I didn't know," which lays bare the subjective nature of the claim. Joke response: What if the book is written in qubits and the number of qubits adds up to two to the power of the number of characters that fit on 3 PDF pages?
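For anyone who wants the joke unpacked: the theorem says, roughly, that no lossless code can have an expected length below the source's entropy.

```latex
% Shannon's source coding theorem (lossless case), stated loosely:
% any uniquely decodable code C for a source X has expected length
% at least the entropy of X.
\[
  \mathbb{E}\big[\ell(C(X))\big] \;\ge\; H(X),
  \qquad
  H(X) = -\sum_{x} p(x)\,\log_2 p(x).
\]
```

So a book that really could be condensed to 3 pages without loss carried at most 3 pages' worth of entropy to begin with, which is the joke.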
Sometimes you read a book to gather facts and data. That information takes more than 3 pages to convey. Plus there is nuance, and there are stories involved in history. It takes more than 3 pages to summarize The Rise and Fall of the Third Reich. The whole point is to convey the weird variety of political maneuverings.
Oh that's a great book! Very briefly, it is "everything has been going to Hell in a handbasket and here's why", but he covers so much ground and (at least for me) gave so much new information that it's well worth reading. It's also very entertaining with the whole "damn kids these days" vibe 😀
Idle thoughts: Kant proposed that consciousness/sentience/free will is the result of the rational and the physical (/animal) being combined. Chomsky proposed* that consciousness is contingent on language. Take a blended view, look at the latest work coming from AI, and ask: could consciousness be the combination of the physical/animal (blind sensory input in real time with no specific "training" set) and emergent language processing? It removes the rationality aspect from Kant, which was always a challenge (making everything constantly consistent), and allows a potential gateway to understanding future interactions with LLMs.
I wonder, what does it "feel" like for a model to be trained vs called**? How much does real-time sensory input impact the nature of sentience? We know drugs are a problem for humans, could an AI fall into a mode of simply feeding itself fake data to "succeed" in its training?
For some reason I tend to imagine e.g. GPT-3 as being akin to a writer in a pure state of flow: divorced from worldly concerns and purely focused on following the train of thought where it leads.
* I have read Kant, but am relying on a single interview of Chomsky's I've listened to. I may be getting him completely wrong.
** I'm not really up to speed on the technical side of the current models, but the basic data science stuff I did with NNs didn't feed the "real" output data back into the training process, so there was no feedback loop to "learn" from calls to predict, unlike in training procedures (I imagine there must be *some* way of doing this in current models in order to keep stories consistent etc.).
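For what it's worth, here's a minimal PyTorch sketch of the distinction the footnote draws, using a toy network (not any specific production model): a plain forward pass ("calling") leaves the weights untouched, while a training step feeds the output error back as a gradient.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)          # stand-in for a real network
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 4), torch.randn(8, 1)

# "Calling": no gradients, no weight change, no feedback loop.
with torch.no_grad():
    _ = model(x)

# "Training": the output is compared to a target and fed back as a gradient.
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()                       # only here does the model actually change
```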
I noticed a curious thing recently. Someone asked me a question, but I was mentally busy (reading something?), and then I said "yes", and THEN the answer and most of the question registered in my conscious mind. I then evaluated the question and answer, which turned out to be correct, but sometimes when this happens, the answer is wrong and I have to correct it.
I believe I am describing what some people call "autopilot", a phenomenon that seems to demonstrate that there are various mental subsystems (including the linguistic subsystem) which are physically separate from the conscious mind.
Yudkowsky seems to think that self-reflection is what makes someone conscious. I disagree with both Yudkowsky and Kant/Chomsky as you described; I think that consciousness is physically separate from those things, and that qualia is the meat-and-potatoes of consciousness. If you're being tortured, you are very much conscious, but your language and self-reflection abilities are not an important part of that experience. I further expect that all current AI models are not conscious, but we need a better theory of consciousness to be sure. We can observe, however, that reflexive behavior seems separate from consciousness and in most cases is opaque to consciousness, e.g. we cannot feel nor introspect the inner workings of our spinal cord, or whatever generates dreams, or our linguistic system. Instead, consciousness feels the outputs of such systems, and then, interestingly, can send out signals about what it feels (which in this case takes the form of comments on the internet).
"I wonder, what does it "feel" like for a model to be trained vs called?"
First of all, what is feeling? It's only identifiable from a distance, where multiple systems of reward+feedback, with different time horizons, overlap. Under a microscope, feeling reduces to sensation, which reduces to keeping track of cause and effect.
Calling is cheap; training is expensive. Pure calling obviously feels like nothing, because there's no change to the system. It might feel like a slight drain if power consumption is fed back in as an input.
In people, learning happens in multiple stages. First there's the subconscious, all-consuming 0-to-1 stage, where a rough first draft of the action is pieced together. The "feeling" here is mostly black-box, hidden and used as subconscious feedback. It only bubbles up to conscious awareness when there's an existential threat to the activity.
Then comes the conscious competence stage, where the action is saved as a discrete skill, but still takes a disproportionate amount of attention+concentration. There's more mindspace for the conscious self to operate again, and it starts dealing with the eggs broken in the course of making the first iteration of the omelette.
With practice, the mental share of the skill gets carved away to a flexible minimal representation that operates automatically and can be tweaked.
So how does it feel to learn? Really all depends on how you slice it.
"How much does real-time sensory input impact the nature of sentience? "
Greg Egan explored this concept in his novel Permutation City. If you simulate a conscious brain on a computer, does it matter how fast the operations are performed? What if you pause the program and restart it later or what if you run it really slow? Would the simulated brain even notice? Probably not, so all that matters is the subjective experience of time, and not any objective passage of time.
As for feeding output data back into an NN, this is how GANs work. In a GAN, a discriminating NN tries to differentiate between real data and generated data, and a generating NN tries to maximize the error of the discriminating NN, based on the discriminator's output. So the system as a whole is feeding its output back to itself. (DALL-E 2, as it happens, is diffusion-based rather than a GAN, but GANs are the clearest example of this kind of feedback loop.)
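For the curious, a toy GAN sketch in PyTorch showing that loop. The 1-D "real" data is made up, and this illustrates the general technique, not any particular deployed model.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 2))             # generated data

    # Discriminator: tell real (label 1) from generated (label 0).
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: maximize the discriminator's error on generated samples,
    # i.e. it learns purely from the other network's output.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```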
Judging my interest in the book reviews from titles alone:
* 1587, A Year of No Significance: The Ming Dynasty in Decline by Ray Huang
I'd like to see more Chinese history. A lot of what's out there has barriers to consumability for a Canadian like me. A good review can help sort that out.
* In Search of Canadian Political Culture by Nelson Wiseman
The title caught my interest. So did the reference to Albion's Seed. I'm also starting to think that we may be at the start of a new paradigm shift in Canadian politics, so this is a good time to read a history.
* More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave by Ruth Schwartz Cowan
Recently Technology Connections released a video on the modern-style can opener. He makes a point in the video about how small household inventions don't seem to catch on anymore. It's something I've been thinking about since, and this book seems to be in the same area.
Is this a new Machinery of Freedom? For most of these I think there is a 50% chance I'll read the book if the review is good. For this one, I'll just read the review. Aside: I loved the intro to Scott's review of that book.
* Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy by Richard Hanania
After reading The Dictator's Handbook, a lot of books about politics have become hard to take seriously since they over-attribute everything to the person in charge. This book sounds like it could be different. The specific topic isn't interesting enough to get me to read the book, but the review seems interesting.
* Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker
My only interest in this is the name Steven Pinker.
* The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow
There are two reviews of the same book.
* The Righteous Mind by Jonathan Haidt
I've listened to this book three times now. It is my ideology for understanding ideologies. It is my hammer that makes everything look like a nail. I am very interested to see someone else's review of this book to see what they learned differently from me.
* Yąnomamö: The Fierce People by Napoleon Chagnon
I'll probably just read the review. The subsection title "They’re kinda dicks" really grabbed my attention.
I don't know if it is the same everywhere, but cans that need can openers are almost a rarity now. Off the top of my head I can only think of the cheapest beans at Aldi, and the traditional canned steak and kidney pie (or variant) - which is admittedly a tough opening challenge even with a decent can opener. Almost all cans have pull tabs.
I think that the ring pulls hit a peak back in the 2000s, then the soup manufacturers realised that they could make more money by selling soup in reheatable bowls for convenience for lunch at work (bowls being more expensive or lower volume than tins). Then they started removing ring pulls from regular tins of soup to make them less convenient to eat outside of your own kitchen. No evidence to back this up, but it's the only reason I can think that most soup tins (in Canada at least) no longer have ring pulls, but tinned tomatoes etc. do.
Admittedly, I've bounced around the world quite a bit and maybe my series of events is affected more by geography than time.
I've watched that Technology Connections video, and while I enjoyed it (and will likely get the new style of can opener if/when my current one breaks, although I purposefully got a high quality one that is likely to last a long time), I'm not convinced that it says anything about the likelihood of small household inventions catching on. I had _hated_ that style of can opener for a long time; it was non-intuitive, and every time I encountered one in the wild I was not provided instructions, leading to a frustrating experience.
Basically, I think that they were a special combo of small improvement and non-obvious difference in how to use them (they _look_ pretty similar to standard can openers, so there is no reason to intuit the little push-catch system) that gave them a high likelihood of a bad first impression.
Something that had just _one_ of those two features (either a small improvement with identical operation, a small improvement with a hugely obvious difference in operation, or else a large improvement that justifies investigating how it works) would not have the same problem catching on.
Additionally, one of my parents _loves_ buying various "as seen on TV" kitchen gadgets. In my opinion they have a very low hit rate, but as long as people like them exist, I think that adoption of new small kitchen gadgets will continue.
Any way? yes. A way that's possible given the kind of minimal resources I could personally bring to answer the question? Doubtful. I would note that at my clinic we have a team of "Clinical Care Managers" who bill insurance for time spent helping the patient navigate the healthcare system. If you could find that data, you might be able to extrapolate how much time is spent outside of the doctor's office doing things like coordinating visits, transportation and the like.
A lot. But the tradeoff is that if we hired a professional class -- lawyers and clerks and a big Federal Government agency -- to do it instead, the total cost would be even higher, because those people would have far less information on each individual case, and be wrongly motivated (i.e. by their own paycheck instead of balancing health/cost/convenience issues with a precise appreciation for their respective values to the individual).
Yes, you also have to be cognizant of political influence. Even if you had competent, selfless, and motivated people working at your federal agency, they'd still be working from legislation and regs that had basically been written by lobbyists. Medicare and Medicaid reimbursement policies are hardly free of meddling from industry groups like the American Hospital Association as it is. This would only get worse if all hospital revenues were derived from government sources.
Thinking about this...some of the things you have listed vary in value between different people. One person might want care close to their home, one might want a certain rating of quality. Time spent driving my mother to a clinic - I might be resentful of that, while my sister feels valued and honored to be of assistance while she is helping Mom.
It's also interesting that you say 'American insurance system' - all of the other (American) systems also have issues with time, difficulty with treatment access, etc.
I think the question is not so much "time spent dealing with doctors to get treated" as "time spent on the phone with insurance agents" and possibly "doctor's visits that exist purely to satisfy bureaucratic requirements"
I'm trying to get in touch with a bunch of (mostly) American VCs for several reasons. I have contacts, but some are too tenuous for me to get an introduction through them (I could bring up "btw I know X" but it would be weird to ask for an introduction).
Is cold emailing at all effective? Is LinkedIn at all effective? Any pointers appreciated.
To be clear this is not primarily about a startup that needs funding otherwise I'm sure I could find a funnel on their site or somewhere similar.
Are you cold emailing the individual or the firm? If it's the firm, it will likely be reviewed by an assistant or a very junior associate.
If it's the individual, keep it short (can you communicate it all in the subject?) and make it very very clear what you are asking of them. And if you can, provide some value to them as well. That should increase your chances of a response though I think the odds are pretty low no matter what.
My old startup kicked my research group out/forced them to spin out (but gave them $20M in the process). The director of that group was basically George Church's golden boy, and he has still had a really hard time securing VC funding. I would just recommend trying to be impressive both financially and technically, while having persistence, so that your pitch is showstopping when you do get an in.
See, now *that* would be an actually interesting AI experiment. None of this writing poetry or playing chess humdrum. Write a piece of code that, presumably through an extended process of experimentation with thousands of fake accounts, determined the exact boundary of permissible speech on a web forum, and was able to predict with pinpoint precision the distance of any given comment from the boundary.
If someone could do that, I'd be genuinely impressed, as it would have solved a difficult general intelligence problem, which even human beings get wrong from time to time.
Go and see Everything Everywhere All at Once. It was weird and indescribable and riveting. The trailers did not do it justice. It's not merely great, it's an opportunity to see genius showing off.
Might as well be subtitled "Michelle Yeoh and the Multiverse of Madness", and I expect I will rank it ahead of whatever the MCU puts out later this year. It's very, very good, and should be seen unspoiled.
The multiverse stuff was, imo, the delivery vector. The genius I saw in the movie was ...
<SPOILERS AHEAD, OBVIOUSLY>
.
.
.
.
how it found meaning in total opposites. We saw high-octane action, horror, and comedy in the mundanity of a tax office. We saw existential dread in the banality of a bagel. Raw human emotion in snarky demotivational posters of rocks. Touching romance in the fucking hotdog world. Regret and malaise at the movie premiere where she has it all. The kinetic villains were a teenager and a middle-aged woman. The climactic moment of breaking bad cycles happened at a traditional celebration of a new year at a laundromat.
On the flip side, we didn't get to watch the action in the post-apocalyptic alphaverse, even though surely it must have been in some objective sense very cool. Because that's too easy, too on the nose. All we got from them was exposition and human relationships.
And it was so well made that they told us the game and we still didn't see it coming. The "do the opposite to unlock your alternate-universe ego" *is* the whole shtick. It's really hard to find meaning like that! I struggle to imagine a mediocre version of this movie; I can only picture very good and very bad implementations. If it had been 90% as good, I think it would have felt 10% as good. That's why I find it so audacious.
Moreover, I think the movie only works because of that audacity. Because if there can be action in an IRS office, there can be action anywhere - even in your or my life. If they can access their humanity in such a bleak and lifeless place as the rock world, surely anyone can access it anywhere - even you or me. If they can change in their moment of grinding tradition, surely you or I can change and improve right now. The movie felt so universal and personal not because it said any of those things, it would never be so crass as to say them out loud. It just showed them, seemingly effortlessly, while winking at us.
-edit- this comment contains very minor spoilers about general theme and tone of the movie. If you don't wish to see those kind of spoilers, skip this comment.
I am a big R&M fan, but I still thought the movie was amazing. Probably because the multiverse stuff was not the _point_; it was the MacGuffin used to deliver a family drama.
I thought that the movie was simultaneously the best comedy, drama, and action movie I had seen in a long time (although I'm not a big consumer of either drama or action/kung fu movies on the regular, so my rating in those categories should be taken with a grain of salt).
If you are viewing it purely as a sci-fi exploration of multiverse shenanigans, then yeah, it was...fine, but not great. The details are almost entirely omitted and ramifications are barely even touched on.
But as a vehicle to deliver a touching story about a family on the rocks with well done fight choreography and (IMO) hysterical physical comedy, it worked really really well.
I'm not sure I enjoyed it, but I agree that it's masterfully made and that you want to go in cold. It's weird, but given the weird concept at its heart it's the best movie it could possibly be.
Very hard agree. It's one of the best movies I've seen in the last ten years, and I have extensive knowledge on the subject and excellent taste.
My advice for anyone interested in seeing it is to go in absolutely dead cold. Don't look at the poster, don't read reviews, and especially don't watch previews. Just go in and see it.
"start a conditional prediction market ... if the prediction market is higher than 25% then you can send me an email with a link to the market and argument and I’ll look at it."
Isn't there something distortionary about this? E.g., suppose the market were at 30%, I believe the true chance of it being worth reading is 0%, and I have unlimited money. Ideally, I'd bid the price down to ~0. But then Scott doesn't look at the appeal, the market won't get resolved, and I make no money! Is there ever a reason to drive the price below 25%, regardless of your true belief?
Well, if you think it's zero and you bid it down, the person who wants it to get reviewed needs to bid it back up, so you take that as profit. I think the problem is more tying money up in a conditional market which might not pay out to anyone.
Scott could fix this by randomly choosing prediction markets trading at below 25% to look at, and resolving them. But in practice there aren’t going to be enough markets for this to really work.
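To put numbers on the incentive problem: a minimal sketch, assuming a standard binary market where shares pay $1 on the correct outcome and an unresolved market simply returns everyone's stakes (both simplifying assumptions):

```python
def expected_no_profit(q: float, p_yes: float, p_resolve: float) -> float:
    """Expected profit from buying one NO share at price (1 - q).

    q: current YES price; p_yes: your true belief that it resolves YES;
    p_resolve: chance the market ever resolves at all. If it never
    resolves, stakes are assumed to be returned (profit 0).
    """
    cost = 1 - q                 # paid up front for the NO share
    expected_payout = 1 - p_yes  # NO share pays 1 if it resolves NO
    return p_resolve * (expected_payout - cost)

# Market at 30%, you are sure the answer is NO:
print(expected_no_profit(q=0.30, p_yes=0.0, p_resolve=1.0))  # 0.30 per share
# ...but if your own selling pushes the price below 25%, Scott never looks,
# the market never resolves, and your edge evaporates:
print(expected_no_profit(q=0.30, p_yes=0.0, p_resolve=0.0))  # 0.0
```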
Found this piece of comedy gold in a freewrite I did a few months ago (because I'm usually a mediocre writer - at least personal-writing-wise), so I'm posting it here now. Feel free to analyze it to oblivion and beyond.
> The moon is not made of cheese, as is commonly thought, but is made of rock. The sun is also not made of cheese, though far fewer believe this, but the sun is made of plasma. If the moon or the sun were in fact made of cheese, I would expect that their sizes would be quite different, because cheese, rock, and plasma have different densities from each other, which means that equal masses of these three substances would take up different amounts of space. Also, if the sun were made of cheese, I think that the gravitational pressure alone would be enough to make it burn and turn it into plasma again.
The correct next paragraph would first say what type of cheese it is, then calculate what would happen, and then definitively state whether or not a cheese sun would become a sun again. :)
Really enjoying checking out all the book reviews! One of them is mine. I'd love to assist in the review rating process, but want to make sure that's kosher first. Are we assuming that everyone will give their own review a 10, or banning the practice of rating one's own review?
The form for submitting a response asks for your email, partially to prevent fraud. So I'm going to give mine a 10 using the same email I submitted my review from and if that turns out to be the wrong decision it should be really easy for Scott to fix by matching authors to self-reviews (but I don't lose out if other people are doing it and Scott was expecting them to).
As an aside, the quality of entries is absolutely *crazy* - I don't know how many are going to make it through to the finals but I'm up to double-digit numbers which I wouldn't be at all disappointed to see win. It would be great if there was some way to preserve the reviews scoring above some threshold but which don't make it to the final round, because there's a huge amount of excellent content here.
My daughter got Wordle on her first try. I am not sure what the odds of that are. However it did get me thinking about how millions of people doing Wordle are all focusing on the same thing at the same time. It would be an interesting way to test if there is any collective consciousness that can be shared albeit unconsciously. I was wondering if anyone had ever tried to do any research in this area.
In my case, the odds are zero, because the word I always guess first is not in the list of possible Wordle answers. If you start with a word that _is_ in the list then you have a 1/2300ish chance of guessing correctly on the first turn. (In some sense; the actual order of appearance is fixed. But if you don't know that order, but _do_ somehow know that your word is on the list, and haven't been paying attention to what past words have been and whether or not any of them is your word, then the correct probability to assign is about 1/2300.)
"Collective consciousness" in the sense of, say, telepathy seems vanishingly unlikely to me. But what might happen is e.g. that some particular topic is in the news and more people than average guess related words, and sometimes one of those words will happen to be Wordle's word of the day. If what you're interested in is Funky Telepathy Stuff, I think it would be very difficult to disentangle from that sort of exogenous correlation.
I saw my niece get it in two yesterday. She only had an 'a' from the first one and was just guessing on the second, she said. 1-2 are flukes. 3 can be worked out.
There are 13000 possible guesses, and the original creator chose 2315 of them to be the target words for the following 2315 days. The criterion for being one of those was apparently just whichever ones were "familiar to [the creator's] partner", creating a strong bias for the target words to be fairly common. cite: https://arstechnica.com/gaming/2022/02/heres-how-the-new-york-times-changed-wordle/
So while the odds of a correct first guess might seem extremely low (one in 13000! wow!) those odds are raised significantly because common words, which will often be the guesses you'd think of first, have a higher chance to be correct. Out of 4 people in my family who Wordle, there have been 3 correct first guesses in 3 months, and while that's higher than baseline, it's not all too crazy given this bias towards common words.
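For anyone who wants to check the arithmetic: a standard-library sketch, assuming independent daily games; the 10x boost for common first guesses is a made-up illustrative figure, not something from the article.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(at least k successes in n independent Bernoulli(p) trials)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 4 * 90                          # 4 players, ~90 days
print(p_at_least(3, n, 1 / 2315))   # uniform over the answer list: ~0.0005
print(p_at_least(3, n, 10 / 2315))  # if common first guesses are ~10x as
                                    # likely to be the answer: ~0.2
```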
Reading a biography of Angela Merkel called "The Chancellor" by Kati Marton. Written before the current war in Ukraine, it was interesting to read about Vladimir Putin's relationship with Merkel and the West in general. In 2007, at a meeting in Munich, he was highly critical of democracy and the nations that support it: “His stated goal had become to reclaim Russia’s place as a formidable global player by any means necessary”. He also didn't like the criticism coming from a reporter about the war in Chechnya; she was murdered at her Moscow apartment building on Putin's 54th birthday. Elsewhere the book notes: “His ultimate goal is to weaken the European Union and its ally the United States.” and that he feels the Soviet collapse was “the greatest geopolitical catastrophe of the twentieth century”. Seems like a nice guy though ;). Here are the full quotes from the book:
"On February 10, 2007, the somber prime minister of a resurgent Russia strode onto a stage in Munich to deliver a scorching diatribe against democracy, the West, and everything for which Angela Merkel stands. “Russians are constantly being taught about democracy, when those who teach us do not want to learn themselves,” he rebuked the gathering of transatlantic security specialists and government officials. Gone was the accommodating Putin of just a few years earlier, grateful to be a part of the European family and proud that the German chancellor spoke good Russian. His stated goal had become to reclaim Russia’s place as a formidable global player by any means necessary. Blending lies with threats, he taunted the audience, deflected hard questions, and punctured the West’s moral superiority. “Wars have not diminished,” he charged, in spite of the West’s attempts to broker peace around the globe. “More are dying than before.” Though Putin had not yet thrown his support behind Syrian dictator Bashar al-Assad’s genocidal war against his own people, he scolded Washington for its wars in the Middle East and referred to the Cold War as a “stable” era. Merkel, sitting in the front row, was visibly shaken by the Russian’s venomous performance—and his description of the system that had kept her its prisoner for thirty-five years. Not since Soviet leader Nikita Khrushchev pounded the UN podium with his shoe in 1960 and earlier proclaimed, “We will bury you!” had the world heard such vitriol from a Russian head of state. But Khrushchev thundered at the height of the Cold War; this was 2007. Things were supposed to be different now. Yet for the next decade and a half, Angela Merkel’s relationship with Putin would be her most frustrating and dangerous. It would also be her longest relationship with a fellow head of state, its roots reaching back to November 9, 1989."
"Vladimir Putin, once a proud standard-bearer of the humiliated Soviet Union, had learned a lesson he would not soon forget. Unchecked demonstrations and sudden eruptions of freedom can topple even the world’s most heavily armed empire. His battle to reverse what he considered to be “the greatest geopolitical catastrophe of the twentieth century” (Soviet collapse) would ensnare Angela Merkel, a product of the same failed state. Their convoluted relationship would zigzag between faint hope and despair on her side, and dogged determination on both their parts. She was chancellor of Germany, and he was the modern-era czar of Russia. Divorce was not an option."
"From his perspective, the Cold War did not end in 1989; it merely took a short breather. Since then, Russia’s tactics had evolved. While the Soviets brandished nuclear-tipped missiles, Putin opts for weapons that are less conventional and less visible but ultimately more flexible and effective, such as spreading discord in the West through disinformation and cyber warfare, Putin sees himself, in his own words, as “the last great nationalist.” His ultimate goal is to weaken the European Union and its ally the United States. “The main enemy was NATO,” Putin said of his KGB service in Dresden"
"But he failed to intimidate her. In Dresden, the site of Putin’s deepest humiliation, Merkel even flipped his script. It was she who both diminished and humiliated him. The leaders met in his former town in October 2006, three days after the Moscow murder of Anna Politkovskaya, a reporter and human rights advocate whose coverage of Russia’s savage proxy war in the republic of Chechnya had gotten under the president’s skin. When Politkovskaya was shot dead in the elevator of her Moscow apartment building on Putin’s fifty-fourth birthday, some observers felt the timing of her murder was not a coincidence."
Well, for a start, the Russians shouldn't have blown up hundreds of their own civilians. If not for that, I assume they would have had little reason to have a war in Chechnya.
I am neither the OP nor the book author, but I guess Chechnya should have been given independence and then attacked in case they decided to create some sort of a Caliphate. But not attacked in the medieval way they were attacked.
Generally, the Russian lack of care for civilian lives (or any lives, for that matter) is the biggest issue here. In Chechnya, in dealing with Chechen terrorists in Moscow (and that Russian school siege, I forget where exactly that was), in the leveling of Grozny, Aleppo, and Mariupol, in their terrorist tactics elsewhere in Ukraine... without all of this, at least the war in Chechnya could have been seen as reasonably legitimate by the West.
The US had its blunders in Iraq and Afghanistan, but those atrocities were documented by the US press, and the soldiers responsible were actually prosecuted and jailed. Russian soldiers are rewarded by their institutions for even more ghoulish behaviour.
I sent in a bunch of review scores without giving my address. Should I re-send them with my address?
Why is there an international shortage of MAOIs?
They are decreasingly prescribed by physicians despite their extreme efficacy. Is it just not worth it anymore for pharmaceutical companies to produce them?
Follow-up question: where can I get it? (Darknet recommendations?)
I've had asthma since I was a kid but it's been mostly mild. I'd have an attack maybe once or twice a year and it was harder to breathe but never to the point where I felt like I would actually panic or pass out. As a kid I had an inhaler but I never bothered with that as an adult.
But yesterday I had a pretty severe attack. Worse than any I'd had before to the point where I had a definite feeling of panic that made it very hard to control my breathing. The panic would constantly make me try to inhale my next breath before I could finish exhaling my previous one. It lasted for hours but it did subside eventually.
So I went to the pharmacy to buy an asthma inhaler and they straight up refused to sell me one. Prescription only apparently. But the problem is I don't have a doctor or internet (I'm on wifi at a café atm) so no virtual visits either. So that leaves the only (official) option of sitting for eight to ten hours in a room full of sick people so I can talk to a doctor for five minutes and get the stupid piece of paper that permits me to buy an emergency inhaler in case of another attack. Of course if I catch something while I'm there that could well trigger an attack in itself.
Ah well. I just looked up asthma inhaler prices online and there's a wide range of prices but possibly I couldn't afford one anyway. (The markup on some of them must be in the range of 10,000 percent.) So I guess I'll just have to take my chances.
So I'm feeling a little bitter atm. I'm in BC, Canada for those who would like to make note of the data point wrt the relative state of health care in various countries.
In the US, I think Primatene is OTC.
Sorry to hear. 8-10 hours sounds enormous. Maybe you can register at the doctor, and ask them if you can leave for a couple of hours and come back?
Thanks for your concern. I was frustrated when I wrote that. Probably waits of eight to ten hours are not typical, though they definitely did happen at least as of about ten years ago. Not sure how things may have changed in the covid era, as I have not set foot in a hospital in years.
Study suggests that time-restricted eating (intermittent fasting) isn't more effective than restricting calories (dieting).
No discussion of what people tolerate better.
https://www.nejm.org/doi/full/10.1056/NEJMoa2114833
"Of the total 139 participants who underwent randomization, 118 (84.9%) completed the 12-month follow-up visit. The mean weight loss from baseline at 12 months was −8.0 kg (95% confidence interval [CI], −9.6 to −6.4) in the time-restriction group and −6.3 kg (95% CI, −7.8 to −4.7) in the daily-calorie-restriction group. Changes in weight were not significantly different in the two groups at the 12-month assessment (net difference, −1.8 kg; 95% CI, −4.0 to 0.4; P=0.11). Results of analyses of waist circumferences, BMI, body fat, body lean mass, blood pressure, and metabolic risk factors were consistent with the results of the primary outcome. In addition, there were no substantial differences between the groups in the numbers of adverse events."
Sounds to me more like it showed that time-restricted eating IS more effective. p=0.11 isn't bad for a study of just 139 people.
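(As a quick sanity check, the reported p does follow from the reported interval under a normal approximation; here's the arithmetic in Python:)

```python
# Back out the p-value from the reported net difference (-1.8 kg) and
# its 95% CI (-4.0 to 0.4), assuming a normal approximation.
from math import erf, sqrt

lo, hi, diff = -4.0, 0.4, -1.8
se = (hi - lo) / (2 * 1.96)                        # implied standard error
z = diff / se
p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided p-value
print(round(se, 2), round(z, 2), round(p, 2))      # 1.12 -1.6 0.11
```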
The advantages of fasting would be easier compliance, appetite suppression, autophagy and immune downregulation. There might be some anti-insulin resistance magic going on as well but that can be rolled into appetite suppression.
From the abstract, they eliminated the first two through study design and didn't measure outcomes of the latter two, outside of very distant proxies. Unfortunately, scihub doesn't have the article so I can't comment in greater depth.
EDIT: I just realized they tested 16h fasting, from a hormonal standpoint that's regular calorie restriction with extra steps.
The Master and His Emissary (https://www.amazon.com/Master-His-Emissary-Divided-Western/dp/0300188374) has been sitting in my reading queue for a few years, and I'm about ready to start reading it.
It was published in 2009. What has happened since its writing that should affect how I read it? I'm particularly interested in (a) results McGilchrist relies on that have since failed to replicate and (b) well established (replicated) results that postdate the book and should meaningfully affect my reading.
With Google having become far less useful recently, due to a severe bias toward mainstream news and SEO-optimized sites, I'm finding it especially urgent to learn new ways of finding useful and detailed information.
Apart from Google Scholar, what techniques have you discovered for finding useful, detailed information sources *that you didn't know about before*?
Depending on the topic, my first choice is usually "<google-query> site:reddit.com" because it's basically a bunch of topic-specific forums on one domain. That may or may not qualify as "useful", "detailed", or "didn't know about before", but it works for me 80% of the time.
If that fails me: https://searchmysite.net/ and https://search.marginalia.nu/ search a curated list of personal sites. The latter is even better because it has different ranking algos and an option to block sites with JavaScript. If all else fails, there's always https://millionshort.com/.
Lottery of Fascinations... the jump rope expert.
https://www.youtube.com/watch?v=JLC_T1jQ5Lk&ab_channel=WIRED
Wow
Jaw dropped. Thank you.
Everybody else: About halfway through it looks like it's about to stop being interesting. It isn't; stick with it.
Looking for a link from a relatively recent post, about how if people are truly on the fence about making a change in their life ( breaking up, moving, changing jobs, etc. ), there was a study that flipped a coin for them, and found that people who made a change were ultimately happier. Where was that?
On ACX, it was here: https://astralcodexten.substack.com/p/peer-review-request-depression Section 2.1.1.
https://lmgtfy.app/?qtype=search&t=w&q=here+was+a+study+that+flipped+a+coin+for+them%2C+and+found+that+people+who+made+a+change+were+ultimately+happier.&as=0&s=g
Here's a list of all the books reviewed, without subheadings. I don't know if anyone else wants this, but I found it a useful tool for browsing.
A Canticle for Leibowitz by Walter M. Miller Jr
A Connecticut Yankee in King Arthur’s Court by Mark Twain
A Failure of Nerve: Leadership in the Age of the Quick Fix by Edwin H. Friedman
A History of the Ancient Near East
A Secular Age by Charles Taylor
A Supposedly Fun Thing I’ll Never Do Again by David Foster Wallace
A Swim in a Pond in the Rain by George Saunders
1587, A Year of No Significance: The Ming Dynasty in Decline by Ray Huang
Ageless: The New Science of Getting Older Without Getting Old, by Andrew Steele
Albion: In Twelve Books
An Education for Our Time by Josiah Bunting III
An Empirical Introduction to Youth by Joseph Bronski
Anthropic Bias by Nick Bostrom
At the Existentialist Café by Sarah Bakewell
Autumn in the Heavenly Kingdom by Stephen Platt
Bronze Age Mindset
Capital and Ideology by Thomas Piketty
Civilization and Its Discontents by Sigmund Freud
Come and Take It: The Gun Printer's Guide to Thinking Free by Cody Wilson
Consciousness and the Brain by Stanislas Dehaene
Cracks in the Ivory Tower: The Moral Mess of Higher Education
Deep Work by Cal Newport
Development as Freedom by Amartya Sen
Disciplined Minds: A Critical Look at Salaried Professionals and the Soul-Battering System That Shapes Their Lives by Jeff Schmidt
Economic Hierarchies by Gordon Tullock
Exhaustion: A History by Anna Schaffner
Facing the Dragon: Confronting Personal and Spiritual Grandiosity by Robert Moore
Fashion, Faith and Fantasy in the New Physics of the Universe by Roger Penrose
Frankenstein by Mary Shelley
From Paralysis to Fatigue: A History of Psychosomatic Illness in the Modern Era by Edward Shorter
From Third World to First: The Singapore Story: 1965-2000 by Lee Kuan Yew
Future Shock by Alvin Toffler
God Emperor of Dune by Frank Herbert
Golem XIV by Stanisław Lem
Haughey by Gary Murphy
History Has Begun by Bruno Macaes
How Solar Energy Became Cheap: A Model for Low-Carbon Innovation
I See Satan Fall Like Lightning by René Girard
In Search of Canadian Political Culture by Nelson Wiseman
Industrial Society and Its Future by Ted Kaczynski (also known as the Unabomber Manifesto)
Inventing Temperature: Measurement and Scientific Progress by Hasok Chang
Irreversible Damage: The Transgender Craze Seducing Our Daughters by Abigail Shrier
Island by Aldous Huxley
Jamberry by Bruce Degen
Japan at War: An Oral History by Haruko Taya Cook and Theodore Failor Cook
Kora in Hell: Improvisations by William Carlos Williams
Leisure: the Basis of Culture by Josef Pieper
Making Nature: The History of a Scientific Journal by Melinda Baldwin
Making Sense of Tantric Buddhism: History, Semiology, and Transgression in the Indian Traditions by Christian K. Wedemeyer
Memories of My Life by Francis Galton
Mind and Cosmos by Thomas Nagel
More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave by Ruth Schwartz Cowan
MOSCOW-PETUSHKI by Venedikt Yerofeyev
Nobody wants to read your sh*t by Steven Pressfield
Now It Can Be Told: The Story of the Manhattan Project by General Leslie M. Groves
Peak: Secrets from the New Science of Expertise by Anders Ericsson
Pericles by Vincent Aulay
Private Government by Elizabeth Anderson
Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy by Richard Hanania
Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker
Reason and Society in the Middle Ages by Alexander Murray
Robert E. Lee: A Life by Allen C. Guelzo
San Fransicko: Why Progressives Ruin Cities by Michael Shellenberger
Slaughterhouse-Five and Breakfast of Champions, by Kurt Vonnegut
Storm of Steel by Ernst Junger
Surface Detail by Iain M. Banks
Sweet Valley Confidential by Francine Pascal
Termination Shock by Neal Stephenson
Troubled Blood by J.K. Rowling
The Age of the Infovore by Tyler Cowen
The Anti-Politics Machine by James Ferguson
The Axis of Madness
The Beginning of Infinity by David Deutsch
The Book of All Hours series - “Vellum” and “Ink” - by Hal Duncan
The Book of Blam by Aleksander Tišma
The Book of Why by Judea Pearl and Dana Mackenzie
The Brothers Karamazov by Fyodor Dostoevsky
The Castrato by Martha Feldman
The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change
The Dark Forest by Liu Cixin
The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow
The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow
The Deficit Myth by Stephanie Kelton
The Diamond Age or A Young Lady’s Illustrated Primer by Neal Stephenson
The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg
The Ecotechnic Future by John Michael Greer
The Eighteenth Brumaire of Louis Napoleon by Karl Marx
The Evolution of Beauty: How Darwin's Forgotten Theory of Mate Choice Shapes the Animal World - And Us by Richard Prum
The Extended Mind by Annie Murphy Paul
The Fall of Robespierre: 24 Hours in Revolutionary Paris by Colin Jones
The Future of Fusion Energy by Jason Parisi and Justin Ball
The Goal / It’s Not Luck by Eliyahu Goldratt
The High Frontier: Human Colonies in Space by Gerard K. O’Neill
The Hundred-Year Marathon: China’s secret strategy to replace America as the global superpower by Michael Pillsbury
The Internationalists by Oona Hathaway and Scott Shapiro
The Irony of American History - by Reinhold Niebuhr
The Knowledge: How to Rebuild Our World from Scratch by Lewis Dartnell
The Man Who Quit Money by Mark Sundeen
The Mathematics of Poker by Bill Chen and Jerrod Ankenman
The Matter With Things by Iain McGilchrist
The Mirror and the Light by Hilary Mantel
The Motivation Hacker by Nick Winter
The Myth of Mental Illness by Thomas Szasz
The Narrow Road to the Deep North by Matsuo Basho
The New Science of Strong Materials by J. E. Gordon
The One World Schoolhouse by Salman Khan
The Origins of The Second World War by A.J.P. Taylor
The Outlier: The Unfinished Presidency of Jimmy Carter by Kai Bird
The Party: The Secret World of China's Communist Rulers by Richard McGregor
The Reckoning by David Halberstam
The Republic by Plato
The Righteous Mind by Jonathan Haidt
The Righteous Mind by Jonathan Haidt
The Russian Revolution: A New History, by Sean McMeekin
The Society of the Spectacle by Guy Debord
The Tyranny of Metrics by Jerry Z. Muller
The Virus in the Age of Madness by Bernard-Henri Lévy
The Yom Kippur War: The Epic Encounter That Transformed the Middle East by Abraham Rabinovich
Three Years in Tibet by Ekai Kawaguchi
Trans: When Ideology Meets Reality by Helen Joyce
Troubled Blood by J.K. Rowling
Trump: The Art of the Deal by Donald Trump and Tony Schwartz
Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters by Steven Koonin
Unsettled. What Climate Science Tells Us, What It Doesn’t, And Why It Matters by Steven E. Koonin
Very Important People: Status and Beauty in the Global Party Circuit
Viral by Alina Chan and Matt Ridley
War in Human Civilization by Azar Gat
When men behave badly by David Buss
Whiteshift: Populism, Immigration, and the Future of White Majorities
Who Wrote the Bible? by Richard Elliott Friedman
Yąnomamö: The Fierce People by Napoleon Chagnon
That is... enough material for a separate blog with weekly updates.
In addition to a competition for who wrote the best review, maybe we should have a contest for who can read the most entries.
I read a lot of them, skimmed through more, and skipped a few. I'm now waiting for the final result because there is one question I want to ask one of the reviewers of one of the books, and it's not fair to start a discussion about reviews right now.
But honestly, I was delighted and surprised to see what some of the books reviewed were, and although there were several that didn't interest me at all, that's just my own tastes and I'm very glad there is such a spread of topics covered.
For all the occasional kerfuffles and spats, we definitely are each other's people 😁 📖
I’m planning on reading ‘em all; currently about a third of the way through. So far, about half have been quite good, about 10% have been pretty bad, and the rest just ok
how far have you gotten?
Well, I read & ranked all of 'em. 3/4 were good-to-great, 1/4 were "meh" or worse. Only 11 reviews were actually bad. 13 definitely deserve to be finalists, and another 12 arguably deserve to be finalists. That leaves 74 (about 56%) which I enjoyed reading but don't quite make the final cut. An excellent competition so far!
Wow, it must have taken quite a while to review all the reviews. Do you plan to post more about your impressions of the reviews?
I’ve got 16 left to go - will probably finish in a day or two. Tons of stuff that’s well worth reading. Less than 10% are actually bad. Many are fascinating glimpses of topics I know little about - I always enjoy reading stuff like that 🙂
You are not alone.
Scott, you could consider adding to the book review rating form a free-text field for feedback to be shared with the review's author. Feedback is valuable for improvement! :)
Would other readers be able to see the feedback? In that case, comments will sway the ratings.
Seconded, both for good and bad reviews!
Seconded. One review motivated me to read the actual book because I had at least 5 specific criticisms and I thought I could do better.
Hi everyone who came to the Irvine Meetup, just wanted to say I had a great time! Thank you for dropping by, Scott!
So I was reading Medium today as I often do, and.... something inside me snapped after I saw Medium's reading recommendations. And so I felt compelled to write this. https://medium.com/big-picture/im-sick-of-medium-s-russian-propaganda-7fe63eaaa63f
I'm surprised CodePink is still going. By the tone of your article, I presume whoever remains has moved far-far-left?
I'm not sure because this was my first encounter with CODEPINK. I guess they accepted certain elements of Russian propaganda for the same reason the Gravel Institute did, because they like anti-US messages which Russia was eager to provide. The rest of CODEPINK's article wasn't obviously wrong to me (for the most part), perhaps only because I wasn't familiar with a lot of the issues it discussed. (Notably the same article talked about China-Taiwan tension, for which they predictably blamed the U.S.)
I've begun skimming through the reviews. Is it OK to speculate on who wrote what?
Also, what scaling do people use for ratings? I feel I've been rating things a bit too high - giving 5-6 to just-OK reviews.
7-9 for 'really good', 5-7 for 'good but not the greatest' and 2-3 for 'this is poor/terrible/if I knew who you were, I'd be in a slapfight with you right now, reviewer'.
+1 to having a scale guidance.
My guess is no. Scott has explicitly said he wants to keep it anonymous to not sway the voting.
Re the ratings, maybe rate all the ones on your spreadsheet first, so you get an idea of their relative strength, then submit them?
Yeah, I definitely started too high. The quality of some of those reviews (which are actually summaries) is amazing.
I'm real curious to find out who wrote the 'Gossip Trap' review of Dawn of Everything
Is being "intellectually angry" a thing? I don't mean being angry about some culture war issue; it's more "someone is wrong on the internet" but with an added feeling of helplessness that I can't possibly correct them.
The issue is AI Alignment Risk and this website. I love this website. Scott is one of my favorite all-time bloggers. But taking AI Alignment Risk so seriously is obviously so fucking crazy that it makes me "intellectually angry".
I've never understood exactly what the Rationalist community is, never really tried that hard. I mostly like them, yet there has always seemed something slightly *off* about them. Off in the way that Mencius Moldbug is brilliant but also clearly... off.
I realized today what the offness is, for me. It's the difference between Platonic vs. Aristotelean thinking. The term Rationalism has bemused me because I have often thought over the years: "Isn't this just Enlightenment thinking and didn't that start in the 18th century?"
But now I see the difference. The Enlightenment focused on empiricism and was most influenced by Aristotle. Rationalists, in their embrace of Bayesian thinking, seemingly feel free to discard empiricism, and this has led them to believe some crazy, rudderless shit. Such as AI Alignment is a reasonable thing to spend tons of time and money and human intelligence on.
To be clear, our gracious host is also a brilliant, trenchant empiricist -- when he works with empirical data. Unfortunately, he also seems to update -- way too much -- on non-empirical issues while in the company of persuasive friends. AI Alignment being the main one.
I think we need more debate between those who believe AGI is an X risk vs those who don't. All the headline debate on the issue here now seems to be between those who believe AGI is a huge risk and those who believe it is only a very, very major risk.
It makes me intellectually angry, if that's a thing.
> "someone is wrong on the internet" but with an added feeling of helplessness that I can't possibly correct them.
Oh yeah, that feeling is a driving force in my life. Makes me write proposals like this: https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/?commentId=u7hPPF5r993iKYL5e
> Rationalists, in their embrace of Bayesian thinking, seemingly feel free to discard empiricism
As an aspiring rationalist, I hereby denounce any such idiocy you may have encountered.
> Such as AI Alignment is a reasonable thing to spend tons of time and money and human intelligence on.
It depends what you mean by "tons". I consider the near-term risk of AGI-induced global catastrophe to be pretty low... maybe 1% in the next 25 years, or something like that. But that doesn't mean it doesn't deserve billions of dollars of research funding to mitigate. X-risks, Global Catastrophic Risks and s-risks can still have an immense negative expected value even at low probabilities.
OTOH I do kinda think that some other GC risks may be underestimated, e.g. might we be close to reaching Earth's carrying capacity, especially if a big war occurs? I couldn't find any EA research on that. Weird.
I don't see how this is connected to empiricism; I love empiricism, but it doesn't mean that I must assign a probability of zero to any event that has never occurred in human history. To the contrary, history is chock-full of examples of unique events with far-reaching consequences. Doesn't this imply to an empiricist that ze should be concerned with such things? And anyway 0 is not a probability (https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities)
Edit: I would, however, add that I think Yudkowsky overstates the case for this particular x-risk, and I'm not sure if it's deliberate exaggeration or if he and I have genuinely different opinions. (Edit 2: actually I'm pretty sure he sees a higher risk than me, but whether it's a small or large difference is unclear. But I think the risk rises a lot *after* the first AGI is invented, i.e. that the first one isn't going to be the most dangerous.) But to throw it back at you, if you estimate that a certain catastrophe X has a 1% chance of happening, and if it happened, would cause damage somewhere between $1 trillion and <human extinction>, how much do you think it would be worth spending on prevention, assuming maybe you could cut the risk in half or so?
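To make that last question concrete, here's a back-of-envelope version of the calculation I have in mind (just a sketch; the numbers are the illustrative ones from above, not anyone's careful estimate):

```python
# Back-of-envelope expected value of catastrophe prevention.
# All numbers are illustrative assumptions from the comment above.
p_catastrophe = 0.01      # ~1% chance over the period in question
damage_low = 1e12         # $1 trillion, the low end of the damage range
risk_reduction = 0.5      # suppose spending cuts the risk roughly in half

break_even_spend = p_catastrophe * risk_reduction * damage_low
print(f"Worth spending up to ~${break_even_spend:,.0f}")  # ~$5,000,000,000
```

Even using the low end of the damage range, that logic justifies billions; plugging in something closer to <human extinction> justifies far more.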
I don’t really think you can be intellectually angry, but being angry at other people’s stupidity is a very human thing... anger is not rational by design.
It’s a feature, not a bug.
on the topic of artificial intelligence catastrophe, here is a quote from an article at Politico:
(Full article here)
https://www.politico.com/news/2022/04/19/dan-odowd-senate-bid-elon-musk-00026195
O’Dowd said he hoped the campaign — starting with the national ads — would clarify for Americans that his goal is to make computers safe for humanity. He said even more than a nuclear attack, he fears that someone will one day click a computer mouse in Moscow or Pyongyang and “half the country is going to go down.”
“This country could be put back into the 1820s by someone coming in and getting control of our software,” he said.
This man is intent on destroying Tesla at any cost for reasons that are rather obscure to me. Except this quote kind of gave me a peek into it.
What is he afraid of? Is it the computers, or is it the people behind them?
I think this really ties in to some of the intense feelings about AI and its threat to us.
My personal opinion is that what we are really afraid of is ourselves... which, given our box score, is not unreasonable.
Some of us are casting ourselves as the good parents who inadvertently raised a psychopath and couldn’t stop wondering how their child got that way.
I do not understand what would drive an artificial intelligence to want to destroy us unless we somehow convinced it that it was a really, really good idea. You would have to convince me that acting from a deep place of power and desire was somehow a function of intelligence.
The other thing I always wonder is what kind of body we are going to give this intelligence to live in. Because that will make a big difference in its disposition.
What if we came up with an AI that could do all the wonderful things we imagine a Superintelligence could do: take us to other galaxies, find ways of making infinite amounts of food, figure out how we can run all our gadgets with no wires or anything. It could do all this cool stuff, but instead of doing those things from a place of wanting to take over the universe, it was doing it just for fun.
That way they would keep us around as pets and probably get a kick out of us.
I've always felt this way about the obsession with AI risk. My feeling connects to a few things I can identify:
- eschatological religion: the obsession with some impending end of the world that is always nigh, and how the emotional posture of those in the church is a kind of vast smugness that says "listen to us or meet your deserved end". We all have a natural aversion to this kind of thing: we can sense the smugness, the immaturity, the sanctimony, the shape of the ego that would let itself fully subscribe to an idea and turn around and expect others to see that they're right and come to them, and just how much it feels like a posture that serves the ego rather than a legitimate spiritual belief.
- climate-change anxiety: the overcharged opposition between those who "believe science" and deniers. For one, climate-change doomsaying often comes off feeling eschatological. And the deniers end up denying far too much, and regarding AI risk it feels like being skeptical would get you grouped with them, which feels unfair because of the very different standards of proof, so we resent this. Then the climate-change doomsaying takes the form of continual atonement: microscopic acts to address guilt (plastic straws) rather than any true sacrifice (a career in battery technology). It is far too focused on what WE can do rather than on the actual political problems of creating international agreements and enforcing them, and it is all out of proportion (the realistic scenarios expose us to a level of "life affected by environmental forces" that would still be a luxury for people any time in history before a century ago, nothing like a true end of the world).
- and, I did a couple of years of physics grad school before becoming VERY disenchanted with the cutting edge of physics. I recall that the core emotional arc of this, for me, was not a genuine interest in the subject (although I did have one), but a fear of engaging with the messy real world. Instead, I preferred the orderly natural world; it has the feeling of studying the deep magic of reality. At my core I had a fantasy of retreating into obscurity for decades to study arcane magic, eventually to re-emerge with some world-changing accomplishment (like relativity) and receive my reward in adulation and Nobel prizes. Pretty selfish. Very suspicious now that many people get into arcane fields for this reason, and suspicious of any enterprise that never gets close enough to reality. (See also: tech projects that take too long to get in front of users)
For all these reasons I cannot see AI risk talk without feeling like the interest is REALLY an emotional one based in the proponent's need to be on the right side of something that feels huge and important, more than rational calculation (while being heavily dressed up in rational aesthetics to support that core emotional need)
Edit: but, I'll grant, my opposition to AI risk is JUST as emotional: a wariness that the AI-risk-doomsayer has given in to their ego's desire to be important. Climate deniers must start from this too. A properly rational calculation is called for, I'll grant, to get beyond this. But if you rationally calculate that AI risk is a big deal, and you want to doomsay, you'd be well-served to steer far clear of any language that keys into this particular emotional script, and to be wary of giving into it yourself.
I think the only people who think there is literally zero risk from AGI are those who believe AGI is impossible or at least that it will take hundreds to thousands of years to achieve (so zero risk right now).
That leaves open exactly how concerned we should be right now, of course, but presumably, if we're not entirely sure that any possible AGI we might create will be benign, the precautionary principle would apply.
I mean when you limit it to saying 0 risk, of course you won't get any takers here. I'm willing to say it's an extremely low risk, even though I think human level AI is possible.
One of the (many) points of disagreement I see is people conflating reaching human-level AGI with "takeoff" scenarios, which rely on the agent having god-like intelligence and being extraordinarily robust. Neither of those is something that would magically appear upon reaching the human-level threshold.
I actually have a pretty high credence that we'll see human level AI within a few decades, but this doesn't translate to huge X-risk in my view of the world.
Hmm... I don't see takeoff scenarios as all that implausible. I don't think takeoff necessarily depends on the agent having god-like intelligence. Like, if you create one human-level AI, then you effectively have an arbitrary number of them, up to the limit of your computing resources. You may be able to run the AI at high speed, or train the human-level AI to become as good as the top human AI researchers.
It is unlikely to take off in quite the way some may imagine as there will certainly be bottlenecks. Physical research and engineering for example may not be sped up much if at all at this stage.
"Human level" is also doing a fair bit of work. It's not clear to me that human level intelligence is a natural stopping point for AI. If we look at narrow AIs they are often either clearly subhuman or superhuman within their narrow domains.
Yeah basically I think the bottlenecks are much more severe than people imagine. I am sympathetic to the "if we had 10,000 Alec Radfords AI progress would go much faster" point of view, but I think it's missing the degree to which even the most successful AI researchers rely on empirical results. You have to wait long amounts of wall-clock time for experiments to run, even if you're one of the best AI engineers.
I agree that human level isn't some magic point that it's not possible to surpass. I guess I think the time it takes us to get from human level to 120% human level will not be significantly shorter than it takes to get from 80% to 100%.
"I think the time it takes us to get from human level to 120% human level will not be significantly shorter than it takes to get from 80% to 100%"
How long did it take to go from Go playing AI that could barely compete in the children's leagues to one that soundly trounced the best human grandmasters?
The human mind doesn't seem to be anywhere near the peak of potential cognition.
Okay, for what it's worth: the time it took to go from 80% of Lee Sedol's level (the matches vs. Fan Hui, let's say) to beating Lee Sedol, and the time to improve beyond that, were pretty similar. That doesn't support my overall point, though, since both intervals were short, yes.
>I've never understood exactly what the Rationalist community is, never really tried that hard. I mostly like them, yet there has always seemed something slightly off about them. Off in the way that Mencius Moldbug is brilliant but also clearly... off.
>I realized today what the offness is, for me. It's the difference between Platonic vs. Aristotelean thinking. The term Rationalism has bemused me because I have often thought over the years: "Isn't this just Enlightenment thinking and didn't that start in the 18th century?"
>But now I see the difference. The Enlightenment focused on empiricism and was most influenced by Aristotle. Rationalists, in their embrace of Bayesian thinking, seemingly feel free to discard empiricism, and this has led them to believe some crazy, rudderless shit. Such as AI Alignment is a reasonable thing to spend tons of time and money and human intelligence on.
Deciding that the issue with Rationalism is a lack of empiricism, despite not really trying to understand the community, is not a new critique - I'm having flashbacks to Why I Am Not Rene Descartes. You're not making *quite* the same arguments, but it certainly rhymes.
https://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/
On the object level, there's a clear tension among the ideas that empiricism is a virtuous path to truth, that AI Alignment is a non-empirical issue, and that other people's ideas can be "obviously so fucking crazy" despite that lack of clarity. This smells like an issue of overconfidence, but I'm not sure the problem is where you think it is.
Giving lip service to empiricism is as common as crabgrass. What huckster politician *doesn't* claim his nostrums are rooted in objective data? You need a lot more than a stated allegiance to measurement to be credited as a genuine empiricist.
Descartes and Leibniz really do claim they're not using empirical data, but only the objective good reason that God gave them! (And fortunately, God was good enough to pre-establish a harmony between what goes on inside and what goes on outside.)
in particular from the post
<quote>
Meanwhile, the founding document of rationalism (Yudkowsky), the Twelve Virtues of Rationality, states:
The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots?…Do not ask which beliefs to profess, but which experiences to anticipate.
</quote>
It matters how close the alignment group think machine learning is to producing AGI. Because I don't think it ever will on the current trajectory. You can't fake cognition with decision trees and mathematical telemetry via language models. And consciousness is poorly described as Markovian, imo.
In the coming years, I predict the rise of neural-symbolic learning, incorporating some old rule-based methods into the current sub-symbolic approach.
I also get kind of annoyed with the whole AI alignment discussion. It bothers me for a couple of reasons. Firstly, I've not yet been disabused of the impression that the rat community by and large believes similar things about AI safety because that's one of their cultural beliefs rather than rationally arrived at with evidence, as much as they protest this. I do see them produce evidence for their claims, but it appears more like Christians producing evidence of Christianity rather than someone producing evidence of something that's actually true. I don't want to sound overly harsh here, there are a lot of interesting arguments being made and all that, but I do think the community fell prey to collectivism much more than it wanted to in a lot of ways.
The second thing that bothers me is my own inability to articulate the reasons why I think what I'd call the Rationalist AI Alignment Risk thesis is wrong. The best I can try to give is the short version, which sounds something like "Optimization does not work that way," but I've never been able to articulate a deep explanation for this, even though I know I have one. And it bothers me that I have such a hard time with it.
I think the reason the rationalist community has so many people believing the Rationalist AI Alignment Risk thesis is simply because it has so many people who have been exposed to the thesis, and some fraction of them have become convinced. I don't think this is an argument for or against.
I do see this exact discussion semi-regularly, so I'm not even convinced that even a majority of the community is worried about AI risk. There's some people that are seriously worried, some people in the "maybe 1% likely but think of the expected value" camp, and some straightforward skeptics. There's probably a poll somewhere of how many people are in each group.
But since the general population has ended up in the third group due to the argument "Wait, isn't the Terminator fictional?", of course the rationalist community seems unusually credulous here.
For what it's worth, I am convinced AI Alignment is a real problem. But I think you've sort of pinpointed the reason why its most vocal proponents are going about it the wrong way.
People like Yudkowsky seem to believe superintelligent AI will just arise spontaneously and bootstrap itself from nothing to world domination in a matter of hours, and one thing this demonstrates is a complete disregard for the sheer amount of empirical knowledge that achieving world domination would require.
Of course, ultimately, humans will be slowly training AI with exactly this kind of knowledge, in pursuit of personal convenience or some marginal advantage in zero-sum capitalist competition, so at some point the risk will become real. But current AI Risk research disregards this area, presumably because it just doesn't think it could possibly be a problem that a sufficiently intelligent actor would have trouble overcoming. This is doubly harmful, as it both sets them up to become "boys who cry wolf" who see AI dangers where they don't yet exist, and distracts them from pursuing and advocating some really simple risk-mitigation strategies that would probably be, if not sufficient period, then at least sufficient for a while after a demonstrably superhuman AGI actually comes into existence.
Can you elaborate on why you don't expect early AGI to have large amounts of empirical knowledge?
The most general and impressive AIs we currently have are trained by a process which you could reasonably describe as "distilling half the internet into a probability distribution". These language models "know" more things than any human, by a long way.
Is it that you don't expect early AGI to be a successor of these language models, or that you set the "amount of empirical knowledge necessary to take over the world" bar higher than the amount of empirical knowledge currently stored on the internet?
Empirical knowledge is not fungible; being a world-class expert in auto repair is of almost no value if you're tasked with removing an appendix or winning an MMA competition. You need empirical knowledge in the specific field.
And there's a distinct shortage of empirical knowledge in the field of World Conquest. Most of what we do have in that area is predicated on e.g. having command of an army at the outset. Note that armies are generally owned by people who are skillfully paranoid about wrongdoers subverting those armies, and there's also a shortage of empirical knowledge on how to subvert armies.
Also, most of the empirical knowledge that exists, is *tacit* knowledge. It's not written down or digitized *anywhere*, it can't be flawlessly derived a priori, you've either got to get a meatsack to teach you, or learn it the hard way. And the meatsack will probably get suspicious after too many long conversations on the fine details of world domination.
The first AI that wants to Take Over The World as a prelude to paperclip-maximization, is going to have to do a whole lot of trial and error in figuring out how to actually do that. It's going to make mistakes. And there are enough opportunities to make mistakes that it's likely going to stumble on to a fatal one while it's still small enough to be killed by an angry sysadmin with a crowbar.
> there's a distinct shortage of empirical knowledge in the field of World Conquest.
I don't agree. I watched a show about dictators' playbooks a while back, and another show about genocides, so it's certainly a field that has been studied, and genocides & coups have kept happening despite the often severe epistemological failings of the people who cause them. Indeed, many dictators succeed on their first or second coup attempt! So is it really that hard? I don't think so, but I think the necessary combination of psychopathy and ambition is rare.
OTOH, dictators also historically seemed to require luck, to be in the right place at the right time. A superhuman AGI, however, could use empirical data to "make its own luck", e.g. it could observe the background characteristics of everyone who has ever done a coup, then work out how to create the necessary conditions.
An AGI will, however, have a major disadvantage by not having a human body, which means that all strategies that depend on being human won't work. But I don't know how to rule out the possibility that there is a realistic way to get around this limitation.
Edit: come to think of it, a key characteristic of dictators and "genocide-ists" is their ability to inspire and control others, and to delegate responsibilities. Anything you can delegate to others, you don't have to do yourself. And in principle all this inspiration and delegation could be done online. A superhuman AGI on the internet (which, of course, could be psychopathic simply by lacking a reliable system to prevent this) could first convince people it's human, perhaps not just via text messages but also via a photorealistic AI-generated human persona. It could, in principle, find a way to control a supply of money, with or without having direct control over a bank account... all via delegating certain meatspace activities to humans. So I don't see why it couldn't, in principle, follow a dictator's playbook once it gathers control/influence over enough resources.
Agree mostly. I'll take a stab at part of my 'optimization doesn't work like that' explanation.
One thing that I think people who believe there will be rapid takeoff are missing is that they seem to have this sense that all the AI has to do is overcome the humans, and then it's off to the races to conquer the universe. I think there are some pretty fundamental mathematical and physical reasons that things don't work like that. Parts of it, things that point toward it, go by lots of names: P vs NP, undecidability, Wolfram's Principle of Computational Equivalence, Gödel's Incompleteness Theorem, chaos theory, the 2nd law of thermodynamics....
All of these things circle around one central truth, a truth that says something like "actually getting things right, or making something new that works, is a fundamentally hard problem" and I think that no matter how powerful you are cognitively, it's always hard problems all the way up. And you can't solve tomorrow's problems with the knowledge of today, you have to do it the hard way.
True intelligence, I think, is more or less the ability to 'do it the hard way'. And 'doing it the hard way', understanding how to make something new, also requires the kind of general understanding that lets you grasp what someone 'means' by what they say, rather than just what they say.
What we call 'alignment' (at least in terms of not being a paperclip maximizer) and what we call 'general intelligence' are two sides of the same coin.
I’m satisfied with this answer.
Is it ok to rate reviews that I started but then found annoying and ragequit? Or only reviews that I read all the way through?
I've been interpreting the rating as "how happy would I be to see this as an ACX guest post and have attempted reading it", which both means rating ragequits poorly and docking points for things like "just didn't find the subject material that interesting".
If the review isn't good enough to finish, I think that's a pretty strong indictment of its quality.
Probably the former?
I'm reading a bunch of the reviews, and it strikes me how many people have "the makings" to be pretty good writers were they to spend a bit more time doing it.
There's a period of time you go through when you start writing regularly with the intent of publishing where you get rapidly better, and I keep seeing things where it's like "this guy is already pretty good, I wish he'd write ten articles in three months and be great".
>There's a period of time you go through when you start writing regularly with the intent of publishing where you get rapidly better,
Are you sure? Why should that be?
I'm about as sure as you can be about a complex human thing. I'm sure there's exceptions, but when you start writing stuff that people are going to see, it's going to (on average) spur people to a closer level of scrutiny of their own stuff. I've seen multiple people with good educations going through the process, and the pressure seems to help them refine in ways school-writing didn't.
Best example of this I know of right now is Parrhesia's blog (https://parrhesia.substack.com/) which has just been getting better and better every post. But you hear similar things in a lot of fields - some singers will tell you that without an expectation of performance most people top out, etc.
Another thing is that for most people who are starting to write "in front of people" for the first non-academic time, they are trying to figure out what they want to be in terms of voice and what kind of things they want to talk about. They are figuring out stuff like "what's my focus". And these are all things you can improve on, or at least constraints you can optimize for/within.
Anyway, just one man's opinion but I think it's broadly true.
ah, thanks! That touches upon a number of issues I had been thinking about lately.
My main doubt was, or partly still is, that you don't get direct feedback - or if you do, probably more on specific bits of content than on issues of style and such. But practice and reflection can go some way, I guess.
So some of the best feedback you get, or at least the most believable, is when people begin to disassociate you from "being a real person". So occasionally I'll be on a forum or something where they are discussing an article and someone will say they liked X, or hated Y, and it's nice because you know to them you aren't a human being that exists and they are really talking about the writing.
But really the best feedback you are going to get is basically just getting closer and closer to the kind of stuff you want to write. I once read a thing about dealing with hecklers where a stand-up comedian was saying something like "Listen, you believe you are the funny one in the room, that you are funny enough to do this. That should lead you to believe you can take down some drunk rando with your words."
I think that's broadly true. There are some ways that feedback helps, and there's certainly some I listen to and take, but for the most part I think you are looking for forces that make you look at your own work closer, that make you think really hard about how to create the best words you can for people. It's making you give yourself better feedback, basically.
Thanks a lot for the kind answer. I see your point. It also made me wonder whether I love writing so much that I'd want to engage in those thought processes so deeply. At the moment, that feels like work - though I know from other tasks that it can come quite automatically and actually be enjoyable.
I get your point about other people speaking about you somewhere else. At the same time, uhm, I don't know how long it would take until I would read people talking about an article of mine somewhere else. I think I'd be happy if they found my article in the first place.
I had been thinking about writing lately. But then, as you wrote, I'm very aware that my goals change and are not fully clear, and that I haven't really figured out yet what 'kind of stuff' I want to write. Or rather, whether the mixture of texts that come to my mind would make any sense to others.
Thanks for the opportunity to reflect on this some more!
Saying 'lately' was a bit vague, actually - it was just 20 minutes ago, on my way home.
most complex skills have a period like that.
Before the internet, publishers and readers were removing the not-so-great writing from circulation: publishers by not publishing the text (perhaps unless some changes were made), readers by not buying it.
Also, authors couldn't hyperlink their previous articles, which made their individual pieces more self-contained. Which means that if you kept their first and third works, but removed the second one, the result still made sense.
I'm not sure I follow the association between my statement and this, but I'm often a little slow.
Haha, I usually write long comments, and this time I also first did the same, and then I thought "uhm, why not delete the obvious parts?" and tried that approach. Oops.
The idea is that in the pre-internet world, the guy who writes a lot and is already pretty good... would still keep writing a lot, but only 1% of that would be published and remembered... and if you afterwards read only that 1%, you would conclude that he was great. So your wish would kinda be granted, but in a very unintuitive way.
This could still happen today, if e.g. some obsessed stalker collected all writings, and then organized some rating by audience, and selected the best 1% of them. Unless perhaps the internet form of the texts (the fact that author can expect that his audience is familiar with his previous posts, or at least can reference those posts) would make that 1% selection difficult to read, because it would keep referring to things outside the selection.
I've seen Jim Kennedy around thorium reactor groups a lot, and now he's giving his origin story:
> So I met with the Pentagon guy and I laid out this plan, and I said "well here's what [China's] doing and here's how we can counter it, and if we counter it like this, they won't be able to offset our actions, and we'll be successful at building, reestablishing a value chain," and the guy says to me "wow, this is really interesting, you put a lot of thought into this... this is really good," and I said "yeah yeah, thanks, you know, I appreciate it, I'm sure that you've got, you're looking at other things, right?"
> He looks at me, like, "what do you mean?" I said "well, I mean, I just kind of threw this together and, you know, I'm just a private sector guy, and this is the Pentagon and I'm sure you guys have been looking at this and you have like, a real plan, right?"
> He goes "I don't, I don't understand what you're talking about." I said, "this is a national security issue, so I am under the assumption that the Pentagon is on top of this, and there's lots of other good plans, and I'm not the only private sector guy with the solution." And he goes he just he's looked at me like (shrugs) "well no, that's, that's it." And I said "what do you mean?"
> I said, "this is national security. You know, you guys should be developing a plan, it's not my responsibility." I said "what if i didn't show up?" and I swear to god, this is what he says, he goes "well you're here aren't you?"
Evidently this is a guy who became interested in thorium molten salt reactors not to solve global warming and air pollution like many of us, but because the U.S. is letting China control the global supply of rare earth materials that are critical for manufacturing various high-tech goods (notably motors and magnets). Mostly this is a result of laws around thorium. Heavy rare earths are always found together with thorium geologically, and U.S. law says that a company cannot dig up rare earths, extract them, and bury the rest. Why? Because the residual dirt is considered "nuclear waste", or in technical terms "source material".
This is the main reason China controls 90% of the rare earth market. And this is, of course, a national security issue since it means China has huge leverage over *everyone* else in case there is any conflict between China and anyone else. We could simply change the law, of course, but I guess we won't because politics. So Kennedy's solution involves some kind of thorium trust. Rare earth miners will extract the thorium and deliver it to a group in charge of storing it, and this group will in turn sell it to people making reactors that use thorium, such as Thorcon. But, grain of salt, I have a sense that I don't quite understand what he's saying about the problem or solution.
https://www.youtube.com/watch?v=dbbifeLRHIA
Edit: I'm disavowing Kennedy's comments against NATO, though. Because https://twitter.com/jessicabasic2/status/1513836355440111621, plus he asserts "the Russians" are "calculating rational people" and it's become very clear that Putin is calculating, but not so much rational. But all that other political stuff isn't what he usually talks about and isn't why I listen to him.
I didn't know about this law: I had heard it was hard to mine rare earths because of "environmental regulations" but I assumed that was your standard "Don't let your open pit mine leach toxic metals into the water supply" kind of regulation. I had heard of a rare earth metal mining and processing project starting up in the Alaska panhandle recently, I wonder what they're doing with the thorium.
As an aside, it bothers me when people say Putin is not rational. I think he's evil, but I don't think he's irrational. It strikes me that some people have bought too much into Yudkowsky's idea that rationality=winning (or at least the popular misunderstanding of it: it seems to me all he was saying is that if the "rational" route consistently leads you to losing, then it's not really rational). The idea seems to be that if someone does something that turns out badly, they're irrational. But that's not irrationality: believing that if A equals C and C equals B then A does not necessarily equal B is irrational. There are plenty of rational people who act the fool out there, and I can't even say that Putin was acting that foolishly. Almost everyone predicted an easy win over Ukraine, and as Putin predicted it didn't start a war with NATO. Sometimes you make a gamble and it turns out the odds weren't what you thought they were; that doesn't mean you're irrational. Was Nate Silver irrational for giving Trump a 28% chance of winning in 2016? No, he was just wrong (or not even that: after all, 28% chances happen 28% of the time).
Putin going to war with Ukraine was a mistake, and evil, but I don't see how it was irrational unless you equate rationality with never making mistakes and not being evil.
Can you tell me of any specific military analyst who considered the obvious factors (UKR military capabilities, RU military capabilities, UKR public sentiment, geographic/physical strategic aspects) and concluded that Putin was likely to be able to complete his objectives (stable overthrow of Kyiv and at least a couple of other major cities) with 200,000 troops?
I expect such analysts exist, but I don't remember seeing anyone in particular. However, I don't think that such analysts expected the lousy strategy and tactics that Russia actually used. This video explains: https://www.youtube.com/watch?t=1474&v=zXEvbVoDiU0
Edit: I see Scott Alexander mentioned some specific people predicting a Putin win. Not sure if any are professional analysts but Richard Hanania looks notable. https://astralcodexten.substack.com/p/ukraine-warcasting?s=r
So, what I mean is that Putin was irrational in the usual human sense of confirmation bias & positivity bias. He let himself believe that his operation would succeed within a few days, as his yes-men told him (because Putin probably demanded it from them), and he refused to believe in the reality of Ukrainians' feelings toward Russia, nor did he prepare for the possibility that Ukraine was prepared, which of course it was. Vlad Vexler further asserts that Putin really believed Ukrainians would greet Russians as liberators, which I find credible. And to some extent, it seems like his delusion has continued for many weeks, as he's still using the "special military operation" moniker instead of declaring "war" which, apparently, is legally required to mobilize reservists/conscripts. Thus, his forces will probably remain undersized, and certainly underpowered, for months to come.
Major sources contributing to my understanding include the following, all published before the war.
- Adam Something predicts trouble for Putin if he invades (though with weaselly language): https://www.youtube.com/watch?v=-OO3RiNMDB8
- Adam Something backgrounder on Ukraine: https://www.youtube.com/watch?v=obMTYs30E9A
- Vlad Vexler's "The REAL Reason Putin is invading Ukraine" speaks of Putin's psychology & regime: https://www.youtube.com/watch?v=ZwU13-4SakE
- Vlad Vexler's other recent videos are good too, though frustratingly he says nothing about where his opinions come from.
- "Ukraine: Putin’s Unfinished Business" Nov 12, 2021 https://carnegieendowment.org/2021/11/12/ukraine-putin-s-unfinished-business-pub-85771
- Belated edit: I also saw a presentation somewhere...can't remember where... saying that even if Putin took Kyiv and other major cities, Russia would pay a terrible price that would ultimately weaken it.
If you were in Putin's position, with Putin's goals, what do you think the rational thing to do would be?
The thing he would realize, if he were rational, is that (1) he can choose his own goals rather than following the same path he's used in the past, and (2) he was wrong about a bunch of things and ought to have re-evaluated, either by aborting the war before February, or scaling back now (because his encirclement plan is risky and likely to fail), or at least delaying the coming offensive (because it's a complex op that needs a lot of planning).
Previously he's had remarkable success boosting his popularity with military adventures and killing people (which reminds me, have a look at the Apartment Bombings summary here: https://twitter.com/DPiepgrass/status/1507210690427174916), and he's made expanding the Russian empire militarily his goal, but he could just as easily have chosen "expanding Russian influence" as his goal. And trade relations and foreign policy are a better way to do that. China's Belt and Road initiative is the obvious example, though I think that initiative is undermined by Xi making himself dictator for life, while making China into the world's biggest military power, killing off Hong Kong's freedom, and threatening Taiwan, the U.S. and Japan - taken together, this is terrifying, and if nearby nations have any sense, they would compete with China in manufacturing so as not to be so dependent on it. Putin could have improved Russia's domestic manufacturing industries and implemented reforms to reduce kleptocracy (because it's not an efficient system). Putin could have even chosen to join NATO (though that would be difficult now because of his previous annexations).
While his war is still likely to acquire some territory for Russia (probably a temporary win), it will dramatically reduce Russian influence. So I would say that if he wants a "great Russian empire", his recent and current actions are directly opposed to that goal. Edit: also note that Putin has given Xi/China a lot more power over Russia, as Russia now depends heavily on China, but China doesn't depend much on Russia.
Now, there's another strategy Putin could have taken: instead of trying to improve Russia's position, drag down the West. Have a look at what Stoic Finance said just before the war, on Feb 18. He assumed that Putin was rational, so his interpretation of the military buildup was that Putin aimed to "try to destabilize the west" as he had done in the past: https://www.youtube.com/watch?v=d4SENp1IT6o
And indeed, the U.S. had been warning loudly that Russia planned to invade, so NOT invading would be a win for Russia because it would make the U.S. look like the boy who cried wolf.
It still seems to me that you are equating "rational" with "wise" and "irrational" with "foolish." There are many rational people who are fools, and Putin is one of them. But I think we're just going to disagree on what it means to be irrational.
Putin is intelligent, calculating and foolish. What role is "rationality" playing here? What would Putin have to do differently for you to call him "irrational and calculating"?
I read "Thorcon" as "Thorcoin" and thought the crypto guys finally had an interesting idea for a moment. :(
Let's refine all the thorium and turn it into commemorative coins! They're collectible, and come with their own lead-lined carrying case. Very stylish.
Why not give Ukraine T-55s and T-62s?
Are they good enough to be worth deploying against a great power opponent? A tank that's outclassed badly enough by enemy equipment is just an expensive, cumbersome way of getting your tank crews killed.
I don't know enough about T-55s and T-62s to evaluate with confidence if they fall into this category, but a few things incline me towards suspecting this is the case. For one thing, those tank designs are over 60 years old and 20-40 years older than modern designs currently fielded by great powers, and they appear to be 1-2 major generations of capability behind modern main battle tanks. For another, both Russia and Ukraine are former operators of T-55s and T-62s who have many years since scrapped or sold them off to third-world countries: if Ukraine thought those tanks were still any good, I would have expected them to have been kept around for reserve units, or at least mothballed for restoration to service in the event of critical need. Even Russia, which I thought was a notorious pack-rat of old military equipment, seems to have scrapped their mothballed T-62s.
Which countries have them?
@Scott Alexander — The comments section on lorienpsych.com disappeared at some point. How can we provide feedback on the articles?
The book-review form has a space for entering your email address "to prevent spam and accidental double votes"; however, entering something in that space has not been made mandatory, which I think means that several of my votes have become accidental _non_-votes because I am a moron and often forget to do things.
Scott, if you're reading this and it's easy to do (which I _think_ it is), if you are going to ignore submissions in which that space is left blank could you please make it impossible to submit the form with that space left blank? If you're concerned about spammers and somehow making the field mandatory will make their robots fill things in there, you could have another mandatory field labelled "please enter four plus three" or something.
I read this comment before starting reading the reviews, promptly forgot it, and did the same thing as gjm. I second the plea for making the field mandatory or something.
I suppose that it should be safe just to go through all the reviews I read, remember what I thought of them, and resubmit. But somehow this is the sort of thing that is extremely not-motivating to my brain so I can't guarantee that I will.
I have a friend who I think might have Borderline Personality Disorder.
Any resources people would recommend?
The Completion Process by Teal Swan.
Unironically https://en.wikipedia.org/wiki/Personality_disorder
I recommend the tables in the sections "Millon's description" and "versus normal personality"
I'm not familiar with BPD specifically, but in my experience the descriptions of personality disorders will ring true if you have them, and having a clear understanding goes a long way towards relief.
I am not a doctor, and your friend should probably consult a licensed physician for a referral.
However, I did recently read about Borderline Personality Disorder at a website called "Lorien Psych".
https://lorienpsych.com/2021/01/16/borderline/
The recommended, evidence-based treatment is a semi-structured approach called DBT, Dialectical Behavior Therapy. There are lots of books about it and therapists who offer the approach via individual or group treatment. My impression is that the treatment is not particularly helpful for someone who has the full-blown syndrome (severe emotional dysregulation, chaos in their relationships, a habit of offloading distress onto others via manipulative suicide gestures and threats). However, most people seem to think otherwise.
If what you are seeing in the person is far more subtle than what I'm describing, I don't think it's really that helpful to think of them as "having BPD," because BPD isn't an illness in the same sense as pneumonia is -- there's not a lot of mileage to be gained from saying "Aha, it's that!" For people who have more low-key versions of the BPD syndrome, a combo of meds and psychotherapy is the best approach.
I had a friend who had BPD and it was very difficult. I read the book Stop Walking on Egg Shells and I liked it but I don't have any specialized knowledge.
My comment is not administering treatment. It's about how to deal with a person in your personal life when things get tough.
Alas, my shameful pride. Reading book reviews other than mine, I find myself envious of the good and scornful of the bad. I cannot exorcise the implicit comparison and enjoy them in their own rights! I am tainted and untrustworthy as a reader and evaluator, and as such, cannot bring myself to submit ratings of any of the reviews I have read.
Oh hey, if you want a scornful review, extra scorn, don't hold the scorn, and can I have some scorn with that?, then have I got one for you 😀
https://thonyc.wordpress.com/2022/04/13/nil-degrasse-tyson-knows-nothing-about-nothing/
"They are back! Neil deGrasse Tyson is once again spouting total crap about the history of mathematics and has managed to stir the HISTSCI_HULK back into butt kicking action. The offending object that provoked the HISTSCI_HULK’s ire is a Star Talk video on YouTube entitled Neil deGrasse Tyson Explains Zero. The HISTSCI_HULK thinks that the title should read Neil deGrasse Tyson is a Zero!"
And then he really gets going.
Always fun to read someone letting it rip that way!
The best & funniest negative review I've ever read was an essay by Alexander Pope called, I think, "Peri Bathos," panning and parodying all the hack poets of his day. I recommend it highly.
I used to be sort of like that too. Much less burdened by it now. Lots of us who are smart get sort of fixated on the fact of our smartness, and have a terrible time letting go of the hope of being a certified genius. You have to sort of mourn that loss. It helps to realize that being a certified genius actually doesn't make people feel happier or more solid. I knew someone who was chess champion for his state when he was in middle school, and then right after graduating from college won first the US correspondence chess championship then the world championship. He was delighted when he won, but not any happier than I would be if I lucked into a great deal on the car of my dreams. The real locus of happiness is in the doing of something you're deeply interested in and good at.
I mean, couldn't you just set your book review at some value (7 perhaps? You probably think it's reasonably good since you submitted it) and then rate reviews based on how net envious/scornful you feel about them?
Seems like that system would have a decent signal/noise ratio.
Any widely published nonfiction book can be condensed to a 3 page PDF without losing information [textbooks excluded], change my view
Only technically true, as the book did not contain any information to begin with. All language is insufficient to truly communicate, it's merely a wretched cipher used by small minds. True communication is almost an impossibility for us.
If you want to communicate an idea to someone, do it through art. Novels are the most inferior form of this (the actual text does not contain communication, merely its psychic structure), followed in ascending order of "something approaching actual communication" by cooking, architecture (although the art in that has been destroyed by Mara in most of the world), dance, painting and sculpture, interactive and immersive art, and music being the highest and most genuine form.
Well, no, obviously taken at face value that is false, because "information" includes things like "how many times was the letter E used in each sentence, on average?" So "information" strictly defined clearly vanishes when you condense, in the same sense as information vanishes when you use the JPEG algorithm to compress an image.
What you mean to say I think is "useful" or "valuable" information doesn't vanish, which is also a truism, since as long as the definition of "useful" is set appropriately, we can justify any condensation, great or small.
If what you're saying is "most popular nonfiction is a form of intellectual masturbation, where people wallow in clever expositions of themes and ideas with which they already agree, so they can nod along enthusiastically" ("OMG! Look how forcefully he puts it on page 163! I'll have to Tweet that out right away...") -- yes, well, this is the human species. Probably 98% of our communication bandwidth is taken up by group identity signaling.
It depends! If you're talking about coffee-table pop books on science/history/and so forth, then yes.
If you mean something with real information, then I don't think so. Sometimes you have to lay the foundation for what you are going to discuss before presenting useful ideas, otherwise it will be "as you all know" and no, we don't all know.
where would you go for a book with real information?
Do you think you could express 300 pages of Scott Alexander's most popular posts in 3 pages? Or try expressing any one of the six "sub-books" of Rationality: A-Z in 3 pages.
Even if you could include the key information from those posts in 3 pages, the summary couldn't communicate the ideas very well. The reason why is suggested by this post from sub-book one: https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances
"The Art of Electronics"
"Dynamic Aquaria"
"A Pattern Language"
"Tragedy and Hope"
Four books almost picked randomly from my shelf. I could go on... GEB, "Road to Reality"...
Isn't Art of Electronics a textbook, thus excluded?
The 3rd edition is not a textbook. It's 'the bible' of analog electronics (at the time of printing). https://artofelectronics.net/ The first and second editions are more like textbooks. There are cosmology textbooks, and then there is "The Principles of Physical Cosmology" by Peebles, the classic reference work... I guess you'd call that a textbook. IDK. What makes something a textbook?
The narrative can often make the subject more engrossing and easier to remember. By creating an emotional response in the reader, the author heightens the understanding of not just the facts but also the context and meaning of the subject.
I agree with the general premise that most popular non-fiction could be shortened considerably. However, short books can't be published - no one will pay $15.99 for a 20-page paperback. Consider using a service like Blinkist, which does the summaries for you.
This is true for many nonfiction books, but certainly not all. History books are a classic example. Try condensing the entirety of "The Making of the Atomic Bomb" or "Imperial China 900-1800" into 3 pages without loss of information. Now, you might find the information beyond 3 pages boring, but that's not the same as no extra information.
Most academic nonfiction falls into 2 categories. Either it could have been a single paper instead of a book or it feels like every chapter is a separate paper stapled together under a connecting theme. The only book I've written definitely fell into the latter category. You couldn't summarize it in three pages, but that's not always a good thing.
"it feels like every chapter is a separate paper stapled together under a connecting theme"
Well, sometimes they are literally that. It's not uncommon to find "an earlier version of Chapter 5 was published in the Journal of Such-and-such" in the copyright page.
Joke answer: If true, this would violate the Shannon Source Coding Theorem; the Shannon Source Coding Theorem is mathematically provable; BWOC, this is false.
The answer is, of course, that Cajou's use of "information" is technically incorrect. I am speculating, but Cajou probably meant something like "feelings of insight, changes of perspective, or facts I didn't know," which lays bare the subjective nature of the claim. Joke response: What if the book is written in qubits and the number of qubits adds up to two to the power of the number of characters that fit on 3 PDF pages?
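(To push the Shannon joke one step further with actual numbers: the entropy of a text puts a hard floor on lossless compression, and you can compute a toy version in a few lines. A sketch using only the standard library; the example string is arbitrary:)

```python
# Toy illustration of the source-coding bound: the (order-0) entropy of a text
# is a lower bound, in bits per character, on any lossless encoding of it.
from collections import Counter
from math import log2

text = "any widely published nonfiction book can be condensed to 3 pages"
n = len(text)
entropy = -sum(c / n * log2(c / n) for c in Counter(text).values())
print(f"~{entropy:.2f} bits/char -> at least {entropy * n / 8:.0f} bytes lossless")
```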
Sometimes you read a book to gather facts and data. That information takes more than 3 pages to convey. Plus there is nuance, and there are stories involved in history. It takes more than 3 pages to summarize The Rise and Fall of the Third Reich. The whole point is to convey the weird variety of political maneuverings.
You can fit a dictionary into a three page PDF if you use large enough pages or a small enough font.
Oh that's a great book! Very briefly, it is "everything has been going to Hell in a handbasket and here's why", but he covers so much ground and (at least for me) gave so much new information that it's well worth reading. It's also very entertaining with the whole "damn kids these days" vibe 😀
Idle thoughts: Kant proposed that consciousness/sentience/free will is the result of the rational and physical (/animal) being combined. Chomsky proposed* that consciousness is contingent on language. Taking a blended view in light of the latest work coming from AI: could it be that consciousness is the combination of the physical/animal (blind sensory input in real time with no specific "training" set) and emergent language processing? It removes the rationality aspect from Kant, which was always a challenge (making everything constantly consistent), and allows a potential gateway to understanding future interactions with LLMs.
I wonder, what does it "feel" like for a model to be trained vs called**? How much does real-time sensory input impact the nature of sentience? We know drugs are a problem for humans, could an AI fall into a mode of simply feeding itself fake data to "succeed" in its training?
For some reason I tend to imagine e.g. GPT-3 as being akin to a writer in a pure state of flow: divorced from worldly concerns and purely focused on following the train of thought where it leads.
* I have read Kant, but am relying on a single interview of Chomsky's I've listened to. I may be getting him completely wrong.
** I'm not really up to speed on the technical side of the current models, but the basic data science stuff I did with NNs didn't feed the "real" output data back into the training set, so there was no feedback loop to "learn" from calls to predict, unlike in training procedures (I imagine there must be *some* way of doing this in current models in order to keep stories consistent etc.).
I noticed a curious thing recently. Someone asked me a question, but I was mentally busy (reading something?), and then I said "yes", and THEN the answer and most of the question registered in my conscious mind. I then evaluated the question and answer, which turned out to be correct, but sometimes when this happens, the answer is wrong and I have to correct it.
I believe I am describing what some people call "autopilot", a phenomenon that seems to demonstrate that there are various mental subsystems (including the linguistic subsystem) which are physically separate from the conscious mind.
Yudkowsky seems to think that self-reflection is what makes someone conscious. I disagree with both Yudkowsky and Kant/Chomsky as you described; I think that consciousness is physically separate from those things, and that qualia is the meat-and-potatoes of consciousness. If you're being tortured, you are very much conscious, but your language and self-reflection abilities are not an important part of that experience. I further expect that all current AI models are not conscious, but we need a better theory of consciousness to be sure. We can observe, however, that reflexive behavior seems separate from consciousness and in most cases is opaque to consciousness, e.g. we cannot feel nor introspect the inner workings of our spinal cord, or whatever generates dreams, or our linguistic system. Instead, consciousness feels the outputs of such systems, and then, interestingly, can send out signals about what it feels (which in this case takes the form of comments on the internet).
"I wonder, what does it "feel" like for a model to be trained vs called?"
First of all, what is feeling? It's only identifiable from a distance, where multiple systems of reward+feedback, with different time horizons, overlap. Under a microscope, feeling reduces to sensation, which reduces to keeping track of cause and effect.
Calling is cheap, training is expensive. Pure calling obviously feels like nothing, because there's no change to the system. It might feel like a slight drain if power consumption is fed back in as an input.
In people, learning happens in multiple stages. First there's the subconscious, all-consuming 0-to-1 stage, where a rough first draft of the action is pieced together. The "feeling" here is mostly black-box, hidden and used as subconscious feedback. It only bubbles up to conscious awareness when there's an existential threat to the activity.
Then comes the conscious competence stage, where the action is saved as a discrete skill, but still takes a disproportionate amount of attention+concentration. There's more mindspace for the conscious self to operate again, and it starts dealing with the eggs broken in the course of making the first iteration of the omelette.
With practice, the mental share of the skill gets carved away to a flexible minimal representation that operates automatically and can be tweaked.
So how does it feel to learn? Really all depends on how you slice it.
"How much does real-time sensory input impact the nature of sentience? "
Greg Egan explored this concept in his novel Permutation City. If you simulate a conscious brain on a computer, does it matter how fast the operations are performed? What if you pause the program and restart it later or what if you run it really slow? Would the simulated brain even notice? Probably not, so all that matters is the subjective experience of time, and not any objective passage of time.
As for feeding back output data to an NN, this is how GANs work (DALLE-2 is based on a GAN, I think). In a GAN, a discriminating NN tries to differentiate between real data and generated data, and a generating NN tries to maximize the error of the discriminating NN based on the output of the discriminating NN. So the system as a whole is feeding its output back to itself.
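A rough sketch of that loop, if it helps (the toy 1-D "real" data and network sizes are invented purely for illustration):

```python
import torch
import torch.nn as nn

# Toy GAN on 1-D data; everything here is a made-up minimal example.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(32, 1) * 2 + 5   # "real" data: samples from N(5, 2)
noise = torch.randn(32, 4)

# Discriminator step: label real data 1, generated data 0.
fake = G(noise).detach()            # generator output fed to the discriminator
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: trained to make D call its output "real" --
# the system's own output comes back as its training signal.
g_loss = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The feedback is in that last step: the generator's weights are updated using the discriminator's verdict on the generator's own output.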
DALL-E is a diffusion model, not a GAN
I like your vision of “wireheaded AIs on lotus thrones”!
Judging my interest in the book reviews from their titles alone:
* 1587, A Year of No Significance: The Ming Dynasty in Decline by Ray Huang
I'd like to see more Chinese history. A lot of what's out there has barriers to consumability for a Canadian like me. A good review can help sort that out.
* In Search of Canadian Political Culture by Nelson Wiseman
The title caught my interest. So did the reference to Albion's Seed. I'm also starting to think that we may be at the start of a new paradigm shift in Canadian politics, so this is a good time to read a history.
* More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave by Ruth Schwartz Cowan
Recently, Technology Connections released a video on the modern-style can opener. He makes a point in the video about how small household inventions don't seem to catch on anymore. It's something I've been thinking about since, and this book seems to be in the same area.
https://www.youtube.com/watch?v=i_mLxyIXpSY
* Private Government by Elizabeth Anderson
Is this a new Machinery of Freedom? For most of these I think there is a 50% chance I'll read the book if the review is good. For this one, I'll just read the review. Aside: I loved the intro to Scott's review of that book.
* Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy by Richard Hanania
After reading The Dictator's Handbook, a lot of books about politics have become hard to take seriously since they over-attribute everything to the person in charge. This book sounds like it could be different. The specific topic isn't interesting enough to get me to read the book, but the review seems interesting.
* Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker
My only interest in this is the name Steven Pinker.
* The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow
There are two reviews of the same book.
* The Righteous Mind by Jonathan Haidt
I've listened to this book three times now. It is my ideology for understanding ideologies. It is my hammer that makes everything look like a nail. I am very interested to see someone else's review of this book to see what they learned differently from me.
* Yąnomamö: The Fierce People by Napoleon Chagnon
I'll probably just read the review. The subsection title "They’re kinda dicks" really grabbed my attention.
I don't know if it is the same everywhere, but cans that need can openers are almost a rarity now. Off the top of my head, I can only think of the cheapest beans at Aldi and the traditional canned steak and kidney pie (or variant), which is admittedly a tough opening challenge even with a decent can opener. Almost all cans have pull tabs.
I think that ring pulls hit a peak back in the 2000s; then the soup manufacturers realised that they could make more money by selling soup in reheatable bowls, for convenience at lunch at work (the bowls being more expensive, or lower volume, than tins). Then they started removing ring pulls from regular tins of soup to make them less convenient to eat outside of your own kitchen. No evidence to back this up, but it's the only reason I can think of that most soup tins (in Canada at least) no longer have ring pulls while tinned tomatoes etc. do.
Admittedly, I've bounced around the world quite a bit and maybe my series of events is affected more by geography than time.
I've watched that Technology Connections video, and while I enjoyed it (and will likely get the new-style can opener if/when my current one breaks, although I purposefully got a high-quality one that is likely to last a long time), I'm not convinced that it says anything about the likelihood of small household inventions catching on. I had _hated_ that style of can opener for a long time, and the reason was that it was non-intuitive: every time I encountered one in the wild, I was not provided instructions, leading to a frustrating experience.
Basically, I think that they were a special combo of small improvement and non-obvious difference in how to use them (they _look_ pretty similar to standard can openers, so there is no reason to intuit the little push-catch system), which gave them a high likelihood of a bad first impression.
Something that had just _one_ of those two features (either a small improvement with identical operation, a small improvement with a hugely obvious difference in operation, or else a large improvement that justifies investigating how it works) would not have the same problem catching on.
Additionally, one of my parents _loves_ buying various "as seen on TV" kitchen gadgets. In my opinion they have a very low hit rate, but as long as people like them exist, I think that adoption of new small kitchen gadgets will continue.
Is there any way to estimate what it costs Americans to deal with the medical insurance system?
Ideally, it would include everything-- time (including time spent by helping other people), money, difficulties with getting treatment.
Any way? Yes. A way that's possible given the kind of minimal resources I could personally bring to answering the question? Doubtful. I would note that at my clinic we have a team of "Clinical Care Managers" who bill insurance for time spent helping the patient navigate the healthcare system. If you could find that data, you might be able to extrapolate how much time is spent outside of the doctor's office doing things like coordinating visits, transportation, and the like.
A lot. But the tradeoff is that if we hired a professional class (lawyers and clerks and a big federal government agency) to do it instead, the total cost would be even higher, because those people would have far less information on each individual case and be wrongly motivated (i.e. by their own paycheck instead of balancing health/cost/convenience issues with a precise appreciation for their respective values to the individual).
Yes, you also have to be cognizant of political influence. Even if you had competent, selfless, and motivated people working at your federal agency, they'd still be working from legislation and regs that had basically been written by lobbyists. Medicare and Medicaid reimbursement policies are hardly free of meddling from industry groups like the American Hospital Association as it is. This would only get worse if all hospital revenues were derived from government sources.
Thinking about this... some of the things you have listed vary in value between different people. One person might want care close to their home; another might want a certain rating of quality. Time spent driving my mother to a clinic: I might be resentful of that, while my sister feels valued and honored to be of assistance while she is helping Mom.
It's also interesting that you say 'American insurance system' - all of the other (American) systems also have issues with time, difficulty with treatment access, etc.
I think the question is not so much "time spent dealing with doctors to get treated" as "time spent on the phone with insurance agents" and possibly "doctor's visits that exist purely to satisfy bureaucratic requirements"
I think the review for The Age of The Infovore is truncated?
I'm trying to get in touch with a bunch of (mostly) American VCs for several reasons. I have contacts, but some are too tenuous for me to get an introduction through them (I could bring up "btw I know X", but it would be weird to ask for an introduction).
Is cold emailing at all effective? Is LinkedIn at all effective? Any pointers appreciated.
To be clear this is not primarily about a startup that needs funding otherwise I'm sure I could find a funnel on their site or somewhere similar.
Are you cold emailing the individual or the firm? If it's the firm, it will likely be reviewed by an assistant or a very junior associate.
If it's the individual, keep it short (can you communicate it all in the subject?) and make it very very clear what you are asking of them. And if you can, provide some value to them as well. That should increase your chances of a response though I think the odds are pretty low no matter what.
I think it's rough regardless
My old startup kicked my research group out / forced them to spin out (but gave them $20M in the process), and the director of that group was basically George Church's golden boy and has still had a really hard time securing VC funding. I would just recommend trying to be impressive both financially and technically, and being persistent, so that your pitch is showstopping when you do get an in.
Am I the only one who reads through all the banned comments out of sheer morbid curiosity?
No, you're not. It's always interesting to see what got someone banned, and if I think it's worth a ban (to date, yes).
I love it.
I wish there were more!
Gimme time.
See, now *that* would be an actually interesting AI experiment. None of this writing poetry or playing chess humdrum. Write a piece of code that, presumably through an extended process of experimentation with thousands of fake accounts, determines the exact boundary of permissible speech on a web forum and can predict with pinpoint precision the distance of any given comment from that boundary.
If someone could do that, I'd be genuinely impressed, as it would have solved a difficult general intelligence problem, which even human beings get wrong from time to time.
No, you are not :)
No sir. Morbid curiosity is my middle name!
No
I try, but right now Substack is just giving an error message.
Nope, I wish I could hire Scott to mod my family events
Go and see Everything Everywhere All at Once. It was weird and indescribable and riveting. The trailers did not do it justice. It's not merely great; it's an opportunity to see genius showing off.
Might as well be subtitled "Michelle Yeoh and the Multiverse of Madness", and I expect I will rank it ahead of whatever the MCU puts out later this year. It's very, very good, and should be seen unspoiled.
I enjoyed it. But I didn't think it was *that* good.
I wonder if it is because I've seen multi-verse stuff done well already from watching Rick and Morty.
The multiverse stuff was, imo, the delivery vector. The genius I saw in the movie was ...
<SPOILERS AHEAD, OBVIOUSLY>
.
.
.
.
how it found meaning in total opposites. We saw high-octane action, horror, and comedy in the mundanity of a tax office. We saw existential dread in the banality of a bagel. Raw human emotion in snarky demotivational posters of rocks. Touching romance in the fucking hotdog world. Regret and malaise at the movie premiere where she has it all. The kinetic villains were a teenager and a middle-aged woman. The climactic moment of breaking bad cycles happened at a traditional new year's celebration at a laundromat.
On the flip side, we didn't get to watch the action in the post-apocalyptic alphaverse, even though surely it must have been in some objective sense very cool. Because that's too easy, too on the nose. All we got from them was exposition and human relationships.
And it was so well made that they told us the game and we still didn't see it coming. The "do the opposite to unlock your alter-universe ego" *is* the whole shtick. It's really hard to find meaning like that! I struggle to imagine a mediocre version of this movie: only very good and very bad implementations. If it had been 90% as good, I think it would have felt 10% as good. That's why I find it so audacious.
Moreover, I think the movie only works because of that audacity. Because if there can be action in an IRS office, there can be action anywhere - even in your or my life. If they can access their humanity in such a bleak and lifeless place as the rock world, surely anyone can access it anywhere - even you or me. If they can change in their moment of grinding tradition, surely you or I can change and improve right now. The movie felt so universal and personal not because it said any of those things, it would never be so crass as to say them out loud. It just showed them, seemingly effortlessly, while winking at us.
-edit- This comment contains very minor spoilers about the general theme and tone of the movie. If you don't wish to see those kinds of spoilers, skip this comment.
I am a big R&M fan, but I still thought the movie was amazing. Probably because the multiverse stuff was not the _point_; it was the McGuffin used to deliver a family drama.
I thought that the movie was simultaneously the best comedy, drama, and action movie I had seen in a long time (although I'm not a big consumer of either drama or action/kung fu movies on the regular, so my ratings in those categories should be taken with a grain of salt).
If you are viewing it purely as a sci-fi exploration of multiverse shenanigans, then yeah, it was...fine, but not great. The details are almost entirely omitted and ramifications are barely even touched on.
But as a vehicle to deliver a touching story about a family on the rocks with well done fight choreography and (IMO) hysterical physical comedy, it worked really really well.
I'm not sure I enjoyed it, but I agree that it's masterfully made and that you want to go in cold. It's weird, but given the weird concept at its heart it's the best movie it could possibly be.
Very hard agree. It's one of the best movies I've seen in the last ten years, and I have extensive knowledge on the subject and excellent taste.
My advice for anyone interested in seeing it is to go in absolutely dead cold. Don't look at the poster, don't read reviews, and especially don't watch previews. Just go in and see it.
It's by the Daniels. Hey, are those the same guys who made this insanely awesomely stoopit video?
https://www.youtube.com/watch?v=HMUDVMiITOU
"start a conditional prediction market ... if the prediction market is higher than 25% then you can send me an email with a link to the market and argument and I’ll look at it."
Isn't there something distortionary about this? E.g., suppose the market were at 30%, I believe the true chance of it being worth reading is 0%, and I have unlimited money. Ideally, I'd bid the price down to ~0. But then Scott doesn't look at the appeal, the market won't get resolved, and I make no money! Is there ever a reason to drive the price below 25%, regardless of your true belief?
Well, if you think it's zero and you bid it down, the person who wants it to get reviewed needs to bid it back up, so you take that as profit. I think the problem is more tying money up in a conditional market which might not pay out to anyone.
Scott could fix this by randomly choosing prediction markets trading at below 25% to look at, and resolving them. But in practice there aren’t going to be enough markets for this to really work.
Yup - I think this probabilistic instead of deterministic choosing of actions is actually important for futarchy, for this very reason.
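To make the distortion concrete, here is a toy expected-value calculation (all numbers invented; it assumes shares pay $1 on YES, $0 on NO, and that trades simply unwind if the market never resolves):

```python
# You believe P(worth reading) = 0 and the market sits at 30%.
# Scott only looks (and the market only resolves) if the price stays >= 25%.

def expected_profit_per_share(price_you_sell_down_to: float,
                              threshold: float = 0.25) -> float:
    """Profit from buying NO at (1 - price), believing P(YES) = 0."""
    if price_you_sell_down_to < threshold:
        return 0.0  # market never resolves: no payout, capital just locked up
    # Market resolves NO (by your belief): NO pays $1, cost was (1 - price).
    return price_you_sell_down_to

for p in (0.30, 0.26, 0.24):
    print(f"sell down to {p:.0%}: "
          f"expected profit = ${expected_profit_per_share(p):.2f}/share")
```

Selling down to 26% earns $0.26 per share while selling down to 24% earns nothing, so no rational trader pushes the price below the trigger, whatever they believe.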
Found this piece of comedy gold in a freewrite I did a few months ago (because I'm usually a mediocre writer - at least personal-writing-wise), so I'm posting it here now. Feel free to analyze it to oblivion and beyond.
> The moon is not made of cheese, as is commonly thought, but is made of rock. The sun is also not made of cheese, though far fewer believe this, but the sun is made of plasma. If the moon or the sun were in fact made of cheese, I would expect that their sizes would be quite different, because cheese, rock, and plasma have different densities from each other, which means that equal masses of these three substances would take up different amounts of space. Also, if the sun were made of cheese, I think that the gravitational pressure alone would be enough to make it burn and turn it into plasma again.
I'm very loosely reminded of Randall Munroe's mole of moles.
https://what-if.xkcd.com/4/
On this blog we believe the moons are made of quetiapine.
The correct paragraph to write next would first say what type of cheese it is, then calculate what would happen, and then definitively state whether or not a cheese sun would become a sun again. :)
Really enjoying checking out all the book reviews! One of them is mine. I'd love to assist in the review rating process, but want to make sure that's kosher first. Are we assuming that everyone will give their own review a 10, or banning the practice of rating one's own review?
The form for submitting a response asks for your email, partially to prevent fraud. So I'm going to give mine a 10 using the same email I submitted my review from and if that turns out to be the wrong decision it should be really easy for Scott to fix by matching authors to self-reviews (but I don't lose out if other people are doing it and Scott was expecting them to).
As an aside, the quality of entries is absolutely *crazy* - I don't know how many are going to make it through to the finals but I'm up to double-digit numbers which I wouldn't be at all disappointed to see win. It would be great if there was some way to preserve the reviews scoring above some threshold but which don't make it to the final round, because there's a huge amount of excellent content here.
Agreed.
Makes sense; sounds like a plan - and yeah, there are a lot of really good ones!
My daughter got Wordle on her first try. I am not sure what the odds of that are. However it did get me thinking about how millions of people doing Wordle are all focusing on the same thing at the same time. It would be an interesting way to test if there is any collective consciousness that can be shared albeit unconsciously. I was wondering if anyone had ever tried to do any research in this area.
In my case, the odds are zero, because the word I always guess first is not in the list of possible Wordle answers. If you start with a word that _is_ in the list, then you have a 1/2300-ish chance of guessing correctly on the first turn. (In some sense; the actual order of appearance is fixed. But if you don't know that order, yet _do_ somehow know that your word is on the list, and haven't been paying attention to what past words have been and whether any of them is your word, then the correct probability to assign is about 1/2300.)
"Collective consciousness" in the sense of, say, telepathy seems vanishingly unlikely to me. But what might happen is e.g. that some particular topic is in the news and more people than average guess related words, and sometimes one of those words will happen to be Wordle's word of the day. If what you're interested in is Funky Telepathy Stuff, I think it would be very difficult to disentangle from that sort of exogenous correlation.
I saw my niece get it in two yesterday. She only had an "a" from the first one and was just guessing on the second, she said. Getting it in one or two is a fluke; three can be worked out.
With the classic Wordle, there were 2315 possible answers. I think the NYT version trimmed the wordlists a bit.
There are 13000 possible guesses, and the original creator chose 2315 of them to be the target words for the following 2315 days. The criterion for being one of those was apparently just whichever ones were "familiar to [the creator's] partner", creating a strong bias for the target words to be fairly common. cite: https://arstechnica.com/gaming/2022/02/heres-how-the-new-york-times-changed-wordle/
So while the odds of a correct first guess might seem extremely low (one in 13000! wow!) those odds are raised significantly because common words, which will often be the guesses you'd think of first, have a higher chance to be correct. Out of 4 people in my family who Wordle, there have been 3 correct first guesses in 3 months, and while that's higher than baseline, it's not all too crazy given this bias towards common words.
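A quick back-of-envelope supports this (the player count and timeframe come from the comment above; everything else is assumed):

```python
from math import comb

n = 4 * 90            # ~360 first guesses across 4 players in ~3 months
p_uniform = 1 / 2315  # chance per guess if every answer were equally likely

def p_at_least(k: int, n: int, p: float) -> float:
    """Binomial P(at least k successes in n trials)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"uniform answers:        {p_at_least(3, n, p_uniform):.5f}")      # ~0.0005
print(f"10x common-word boost:  {p_at_least(3, n, 10 * p_uniform):.2f}")  # ~0.21
```

Under uniform guessing, three hits in three months would be roughly a 1-in-2000 event; if common words are, say, ten times likelier to come up as both answers and first guesses, it becomes unremarkable.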
I am disappointed that once again nobody has reviewed The Road to Wigan Pier. There is much to delve into in the second half.
I'm on it! Might be a month or two though.
Could you review it sometime? I read it and found it very interesting, but don't have time to write a review on account of a toddler and baby.
I'm reading a biography of Angela Merkel called "The Chancellor" by Kati Marton. Written before the recent war in Ukraine, it was interesting to read about Vladimir Putin's relationship with Merkel and the West in general. In 2007, at a meeting in Munich, he was highly critical of democracy and the nations that support it: "His stated goal had become to reclaim Russia's place as a formidable global player by any means necessary." He also didn't like the criticism coming from a reporter about the war in Chechnya, and somehow she was murdered at her Moscow apartment building on Putin's 54th birthday. Elsewhere the book notes that "His ultimate goal is to weaken the European Union and its ally the United States," and that he considers the Soviet collapse "the greatest geopolitical catastrophe of the twentieth century". Seems like a nice guy though ;). Here are the full quotes from the book:
"On February 10, 2007, the somber prime minister of a resurgent Russia strode onto a stage in Munich to deliver a scorching diatribe against democracy, the West, and everything for which Angela Merkel stands. “Russians are constantly being taught about democracy, when those who teach us do not want to learn themselves,” he rebuked the gathering of transatlantic security specialists and government officials. Gone was the accommodating Putin of just a few years earlier, grateful to be a part of the European family and proud that the German chancellor spoke good Russian. His stated goal had become to reclaim Russia’s place as a formidable global player by any means necessary. Blending lies with threats, he taunted the audience, deflected hard questions, and punctured the West’s moral superiority. “Wars have not diminished,” he charged, in spite of the West’s attempts to broker peace around the globe. “More are dying than before.” Though Putin had not yet thrown his support behind Syrian dictator Bashar al-Assad’s genocidal war against his own people, he scolded Washington for its wars in the Middle East and referred to the Cold War as a “stable” era. Merkel, sitting in the front row, was visibly shaken by the Russian’s venomous performance—and his description of the system that had kept her its prisoner for thirty-five years. Not since Soviet leader Nikita Khrushchev pounded the UN podium with his shoe in 1960 and earlier proclaimed, “We will bury you!” had the world heard such vitriol from a Russian head of state. But Khrushchev thundered at the height of the Cold War; this was 2007. Things were supposed to be different now. Yet for the next decade and a half, Angela Merkel’s relationship with Putin would be her most frustrating and dangerous. It would also be her longest relationship with a fellow head of state, its roots reaching back to November 9, 1989."
"Vladimir Putin, once a proud standard-bearer of the humiliated Soviet Union, had learned a lesson he would not soon forget. Unchecked demonstrations and sudden eruptions of freedom can topple even the world’s most heavily armed empire. His battle to reverse what he considered to be “the greatest geopolitical catastrophe of the twentieth century” (Soviet collapse) would ensnare Angela Merkel, a product of the same failed state. Their convoluted relationship would zigzag between faint hope and despair on her side, and dogged determination on both their parts. She was chancellor of Germany, and he was the modern-era czar of Russia. Divorce was not an option."
"From his perspective, the Cold War did not end in 1989; it merely took a short breather. Since then, Russia’s tactics had evolved. While the Soviets brandished nuclear-tipped missiles, Putin opts for weapons that are less conventional and less visible but ultimately more flexible and effective, such as spreading discord in the West through disinformation and cyber warfare, Putin sees himself, in his own words, as “the last great nationalist.” His ultimate goal is to weaken the European Union and its ally the United States. “The main enemy was NATO,” Putin said of his KGB service in Dresden"
"But he failed to intimidate her. In Dresden, the site of Putin’s deepest humiliation, Merkel even flipped his script. It was she who both diminished and humiliated him. The leaders met in his former town in October 2006, three days after the Moscow murder of Anna Politkovskaya, a reporter and human rights advocate whose coverage of Russia’s savage proxy war in the republic of Chechnya had gotten under the president’s skin. When Politkovskaya was shot dead in the elevator of her Moscow apartment building on Putin’s fifty-fourth birthday, some observers felt the timing of her murder was not a coincidence."
How do you (or the book's author) think Russia should have behaved towards Chechnya?
Sorry I am late with a response. The book hasn't gone into any detail on Chechnya, at least so far.
Well, for a start, the Russians shouldn't have blown up hundreds of their own civilians. If not for that, I assume they would have had little reason to have a war in Chechnya.
- https://twitter.com/DPiepgrass/status/1507210690427174916
- https://en.wikipedia.org/wiki/Russian_apartment_bombings#Attempts_at_an_independent_investigation
I am neither the OP nor the book's author, but I guess Chechnya should have been given independence and then attacked in case they decided to create some sort of Caliphate. But not attacked in the medieval way it actually was.
Generally, the Russian lack of care for civilian lives (or any lives, for that matter) is the biggest issue here: in Chechnya, in dealing with Chechen terrorists in Moscow (and that Russian kindergarten siege, I forget where exactly that was), in the leveling of Grozny, Aleppo, and Mariupol, in their terrorist tactics elsewhere in Ukraine... Without all of this, at least the war in Chechnya could have been seen as reasonably legitimate by the West.
The US had its blunders in Iraq and Afghanistan, but those atrocities were documented by the US press, and the soldiers responsible were actually prosecuted and jailed. Russian soldiers are rewarded by their institutions for even far more ghoulish behaviour.
This sounds interesting. Thanks for bringing it up.