I wrote about self compassion, perfectionism, current trends in the tech industry, and how to cope with the diminishing prestige of the software engineering profession. "self compassion and the disposable engineer" https://dlants.me/self-compassion.html
To anyone who considers themself good at both "speaking" AI and visualizing objects in 3D, I have a favor to ask. Would you put this question, or an improved version of it, to your AI of choice, then guide it around with your special talents, and report back with your findings? I would be most grateful.
"You know how if you take a 3D globe of the Earth and flatten it into a 2D *map* of the Earth, it gets all distorted? Well, assume the periodic table is such a flattened map. What 3D shape was it most likely flattened from?"
I don't know what you want out of this question so I'm not sure if this is helpful, but there are answers to this on the Web already (that might be in the AIs' training sets). At https://www.av8n.com/physics/periodic-table.htm for example.
Well, you know that the periodic table is underpinned by the QM electron wavefunction? The wavefunction is mostly about spherical harmonics and the solutions for different energy and angular momentum states.
My employer introduced a High Deductible Health Plan option this year, but in direct contravention of everything I thought I knew about HDHPs they priced the premium higher than the HMO premium. Specifically, for individuals the HDHP premium is $20 higher per month and the only "perk" is a $250/yr HSA seed. So if you opt for the HDHP you net out $10 ahead for the entire year, at the cost of having to pay ~1000% more out of pocket if you ever consume any healthcare.
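Sanity-checking that arithmetic in code (numbers exactly as stated above: $20/month extra premium, $250/yr HSA seed):

```python
# Net annual effect of choosing the HDHP over the HMO, before any care is consumed.
extra_premium_per_month = 20    # HDHP premium is $20/month higher than the HMO
hsa_seed_per_year = 250         # employer's annual HSA seed

net_per_year = hsa_seed_per_year - extra_premium_per_month * 12
print(net_per_year)  # 10 -> you net out $10 ahead for the entire year
```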
This seems like such an obviously horrible deal that I can't imagine anyone taking it, but if that's the case why did they bother introducing it in the first place? The open enrollment email alludes to vague "tax benefits" associated with having a HSA, but is there any possibility that those are good enough to justify opting for it?
As a young man in perfect health I'd love to have a real HDHP option that cost less in premiums than the HMO, so I'm trying to understand why they would have set things up this way. The email said that they had "heard employee requests for a HDHP" but introducing one structured like this seems like it's worse than just not introducing one at all and continuing to only offer us HMO and PPO. It feels like if the cafeteria said "we heard your requests for ahi tuna, so we've added some cat food to the menu". Does anyone have any insight on a possible explanation other than this being an institutional "fuck you" to the HDHP-wanters?
The HSA (Health Savings Account) is an especially good retirement vehicle. Yes, you are right to be surprised that this is why your health plan option costs more; we have a really strange system.
You can only contribute up to a maximum amount to your HSA every year, and the earnings are tax free, similar to a Roth IRA. But the money you put into it is pre-tax (like a 401k). Any money you withdraw for health expenses is tax free; money you withdraw later for non-health expenses is taxed as income.
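To make that tax treatment concrete, here is an illustrative comparison of routing $1,000 of salary into an HSA versus a taxable brokerage account. The tax rates and growth multiple are assumed for illustration only, not real tax figures:

```python
# Illustrative only: assumed flat tax rates and growth, not tax advice.
income_tax = 0.25   # assumed marginal income tax rate
gains_tax = 0.15    # assumed capital gains tax rate
growth = 3.0        # assumed growth multiple over a few decades

# HSA: contribution is pre-tax, growth is untaxed, qualified withdrawal is untaxed.
hsa_value = 1000 * growth

# Taxable account: contribution is post-tax, and the gains are taxed on the way out.
after_tax = 1000 * (1 - income_tax)
taxable_value = after_tax + after_tax * (growth - 1) * (1 - gains_tax)

print(hsa_value, taxable_value)  # the HSA comes out well ahead
```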
The people who were asking for the HDHP almost certainly wanted it for the HSA access (which by law is only available to those with a HDHP)
“This seems like such an obviously horrible deal that I can't imagine anyone taking it, but if that's the case why did they bother introducing it in the first place? The open enrollment email alludes to vague "tax benefits" associated with having a HSA, but is there any possibility that those are good enough to justify opting for it?”
I assume the HDHP still works out to your advantage. The tax benefit is that the money is tax free, and the HSA offers investment vehicles that you can allocate your money to; you park it there for decades while it grows, and you are hopefully not consuming healthcare.
In addition, if you ever have unreimbursed health care expenses, you can keep your receipts and, potentially decades later, be reimbursed for those expenses, which ends up being tax-free income to you.
Think of it as buying access to another retirement vehicle at the same time you have a health plan.
The "AI as normal technology" series of posts challenges the idea of intelligence being a really large spectrum. The standard metaphor is that the "village idiot - Einstein" gap is a tiny segment in the spectrum of intelligence, akin to visible light compared to the entire electromagnetic spectrum. The authors object, claiming that many important domains naturally limit the power of intelligence.
This led me to think about the limits of intelligence in chess. The gap between an amateur and a grandmaster is similar to the gap between a grandmaster and a modern engine. A GM would beat an amateur ~100% of the time, and an engine would similarly wipe the floor with the GM. Naively, one would assume the chain goes much further.
But is it true? I claim (and would appreciate feedback here) that the chain basically ends there, assuming chess is a theoretical draw. There is likely no super-engine that would decisively beat Stockfish 17. Moreover, it is likely that even the game-theoretical optimal oracle would still draw against SF17 most of the time.
The key idea is that chess is a game of understanding and calculation, but both lose their usefulness beyond a certain level. One needs to understand which positions are more likely to win, based on general features of the position (material, piece activity, long-term weaknesses, etc.). But for any pure (fast) "understand" function, there exist very similar positions that differ only concretely; so one also needs to calculate to distinguish between them. In other words, there is only so much to abstractly "understand" about any given position; hence calculation is used to complement understanding. But calculation also fizzles out in usefulness after a certain depth. Indeed, for calculation to yield new insight at, say, depth 20 rather than 19, the position must remain "forcefully sharp" for at least one player all the way to that depth. Such cases exist, but they are rare, and if the smart (with good understanding) player wants to avoid them, they mostly will.
In other words, superior chess understanding and superior calculation will get you only so far. To secure a draw, one needs to understand and calculate just "well enough." I think modern engines have likely reached this level. They mostly draw against each other. The famous AlphaZero vs. Stockfish 8 result appears to prove the point rather than to contradict it (they mostly drew within a 50–100 Elo performance margin).
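For intuition on what a 50-100 Elo performance margin means, the standard Elo expected-score formula can be evaluated directly (a sketch; draws are folded into the expected score, as usual for Elo):

```python
def expected_score(elo_diff: float) -> float:
    """Standard Elo expected score for the stronger player, given its rating edge."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# A 50-100 Elo edge is modest (~57-64% expected score), consistent with
# "mostly draws, occasionally wins"; a 400 Elo edge is a near-certain win.
for diff in (50, 100, 400):
    print(diff, round(expected_score(diff), 3))
```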
So the game of chess appears to have some "irreducible complexity" baked in; this makes intelligence usefulness diminish, so even a relatively dumb intelligence can be good enough to secure a draw.
Afaik, in chess engine championships, it is already the case that the engines only play from given starting positions that give white an advantage, to avoid every game being a draw in mostly the same position. That would line up with it being functionally solved.
Then again, before the advent of neural networks, classical engines already were stronger than GMs, yet AlphaZero-style engines improved on them in unforeseen ways (not just brute-force calculating lines, but playing more human-like on top). And engines are improving still.
I would probably share the intuition that chess is a theoretical draw, and that there are diminishing returns, but I am less sure we already are seeing levels that could draw even against magically perfect play.
Aren’t you basically saying that chess is functionally solved, even if it hasn’t been yet mathematically? You’re describing what I presume would happen if you put two Game Theory Optimal poker engines playing heads up against each other, and that has been solved.
Yes, basically that! The thing is, I never appreciated the difference between "solved" and "functionally solved". In a theoretical sense, chess is astronomically far from solved. Not only is there no proof it is a draw; there is not even a proof that queen odds is winning (!), though practically we can be really, really 100% sure it is. There is likely no hope for a proof of "queen odds is winning" in our lifetime.
If chess is indeed "functionally solved", there might be some other important domains that are similarly functionally solved, or close to it. "AI as normal technology" suggests election forecasting and human persuasion.
You might find reading about computers playing checkers to be interesting. The best human is/was much closer to the theoretical peak of performance.
And humanity now has a "solution" to checkers.
You might want to start here: file:///Users/mroulo/Downloads/1040-Article%20Text-1037-1-10-20080129.pdf
But the basic idea that once a problem has been "solved" being much smarter than the folks who have the solution won't help you is correct.
I don't expect to lose tic-tac-toe even to people much smarter than I am.
And for practical purposes (e.g. transportation routing) we might have "good enough" algorithms now such that even a perfect solution isn't much of an improvement.
I personally, based on little more than gut hunches and personal experience, feel that general intelligence is also subject to similar (maybe not identical, but similar) diminishing returns above some level. Sure, going from IQ 80 to 100 to 120 has meaning across the board. But 120 to 140 has less meaning, and in fewer areas of life. And I've seen little evidence other than sketchy analogies to suggest that the trend of increasingly narrow and small gains doesn't continue. Effectively, it seems to act logistically: big gains going from subnormal to just above normal, then flattening out.
How self aware do you think the 95%ile most aware human beings are? Let's use a scale where '100% self aware' would be at each moment you recognize the total set of drives and instincts that are active within your brain, and '0%' means you have zero internal awareness - not even of how you feel - and you are just acting.
In other words, how ignorant are the vast majority of people of their own drives, instincts, and motives, and how much does this matter?
Does anyone know of papers investigating the wisdom of crowds effect on a single LLM? That is, if you retry the same prompt 10 times and take the mean or median answer, does that improve the accuracy over single-shot?
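A minimal sketch of the single-model "crowd" setup I mean, where the list of numbers stands in for the answers from ten retries of the same prompt (the `crowd_answer` helper is hypothetical, just to show the aggregation step):

```python
import statistics

def crowd_answer(samples: list[float]) -> float:
    """Aggregate repeated numeric answers to one prompt by taking the median."""
    return statistics.median(samples)

# e.g. ten noisy estimates of the same quantity, one per retry
answers = [12, 14, 13, 90, 12, 13, 15, 12, 14, 13]
print(crowd_answer(answers))  # 13.0 -- the median shrugs off the outlier (90)
```

The median is the usual choice here because a single wildly wrong sample would drag the mean far off.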
I am so grateful for Trump's push for peace in the Middle East. For the first time you have competent negotiators recognizing which players you need involved, who has leverage and who doesn't, and what demands are achievable.
Hopefully within 10 years you have regional integration and a peaceful two-state arrangement: two states for two peoples, each wishing for all of it, but only culturally, not militarily.
In 2020, the prize went to the World Food Programme. It isn't going to a guy who has eviscerated foreign aid. Nor to the guy who bombed Iran nuclear program sites. Nor to a guy who claims and exercises the power to kill suspected drug traffickers on the high seas. Not to mention the "Department of War" stuff.
And, btw, what role did Trump personally play in supposedly resolving any of those conflicts? The 1973 prize went to Kissinger, not Nixon.
Rob Malley's book 'Tomorrow Is Yesterday' is an excellent dissertation on why the two-state solution was never an organic one, given the aspirations of the people involved.
But at some point enough war and third-party interests might just wash all that away.
This is an oil pipeline. You realize that? This is about $$$. I think that's good, and will enhance the stability of the Abraham Accords. Pay people enough, and they can hold their nose and work with Israel.
Duck sex? I read about raising ducks and someone said that all duck sex is rape! The female ducks have somehow evolved to make it very difficult for the male to fertilize an egg.
I can't understand why evolution allows this. Seems like the first evolutionary glitch that started it would have reduced the probability of mating and would not have been passed down. Instead, it seems like the entire species now has females with adaptations that make mating difficult. The article says it gives the female some control, so I get why the female would be happy with that evolution, but what evolutionary advantage does it have such that it continues not just to be passed down, but to get more extreme over time?
Cat penises are barbed. They literally hurt the female cat while fucking them.
Most animal sex is rape; in only a few primates can the female feel pleasure from sex. (With a "breed only when fertile and non-pregnant" philosophy, the most efficient thing for the female seems to be "don't fight too hard" -- it's just once or a few times.)
My guess is: since the female ducks' adaptations are basically defense mechanisms against unwanted fertilization, the males' aggressiveness in sex probably evolved first. For the males, this seems advantageous since they don't depend on the female's decision. Meaning also that the more aggressive the male (and his style of sex), the better his reproduction chances. But females that can choose a better partner, rather than the first one who chose them, should have an advantage too. So for them, defensive mechanisms are advantageous.
As long as the defensive adaptations don't completely hinder reproduction, there isn't really an issue here: If a female duck has sex with say 10 male ducks over a couple of days, it's okay evolution-wise that she got fertilized by the 10th duck rather than the 1st.
I'm thinking of the first duck to evolve this, duck zero. I would expect that duck to have fewer offspring and (I don't know, but) not pass on the trait to every one of them. I would have expected those offspring also to have fewer offspring, and the trait eventually to breed itself out, rather than the males also adapting. Just seems weird.
Such a female duck zero would still be able to select whom to mate with, hence increasing her reproductive success compared to the general population, as long as she does not overshoot by being too defensive. And duck zero was probably only a bit defensive (since the males were not yet that aggressive).
Why would you expect this to be disallowed? To me it seems largely equivalent to any other type of female choosiness, which generally reduces the probability of mating but increases the expected fitness of the offspring.
Sorry to hear! If you're reading this blog, it's likely the case that you've sufficient intelligence to overcome this. I was hospitalized a number of times in late adolescence and am totally fine as an adult. Feel free to message me if you would like some encouragement.
Popular history programming has gotten a bit stale. There's program after program on the vikings, the Romans, the Egyptians, and a few more. It's time for a refresh.
Your mission, should you choose to accept it, is to select parts of popular history that have been overdone, and suggest replacements.
For my part, I'm pressing pause on Rome, and substituting the Hellenistic period, from Alexander the Great to the rise of Rome. There's a good three hundred and fifty years of history there, and the geography stretches from the Mediterranean well into central Asia. And we get to talk about why the New Testament is written in Greek, not Latin or Hebrew.
As everyone here knows I am reluctant to push my own podcast. But you really should try this excellent episode on the Opium War. And lots of other subjects, Roman and non Roman!
Hmm Dan Carlin is doing a series on Alexander the Great. (Though new podcasts are coming out at his typical glacial pace.) I hear that Ken Burns is doing a documentary on the American Revolution. I'm looking forward to that... due out in November
One of the things I find interesting about Carlin is that he often makes podcasts about periods and places I don't normally see, such as the Visigoths or the Munster rebellion of 1534. Untrammeled paths aren't his focus, it's more about times of very intense passion or violence, and he lets that carry him to relatively unexplored times and locations.
In the UK, history programming has become 90% WW2. As interesting as it is, I would love some Roman, Viking, or Egyptian content. The colonial era is the most overlooked, as it makes people uncomfortable.
I'd drop the fall of the Roman Empire and replace it with the fall of the Tang Dynasty in China. It's a fascinating tale that is big in modern Chinese culture but is almost unknown in the West. A good jumping off point is the Battle of Talas, one of the only times a Chinese army fought a Muslim army directly, 20 years after the Battle of Tours.
More like warrior girls; the practice was reportedly that boys and girls fought from adolescence until marriage, after which the young men remained warriors and the young women stayed home to raise the next generation of warriors (with more relevant experience than most mothers-of-warriors have had).
Reports that the girls had to take the head of an enemy in battle before they could marry are almost certainly apocryphal or grossly exaggerated. History and archaeology conspicuously fail to record the requisite mountains of skulls, and the girls would have been light-cavalry skirmishers who aren't in a position to take anyone's head. If there were a pro forma requirement, it would mostly have been met by throwing a javelin at someone fifty meters away, then having five of your besties swear they totally saw you kill that dude, no way he could have survived.
Still, they were in the right place at the right time to have skirmished with Greek settlers north of the Black Sea and, after a game of telephone all the way back to Athens, started all the talk of "Amazons".
This is exactly why I’ve chosen to write my historical fiction novel about the Scythians. The Pontic Steppe deserves far more love than just being the inspiration for the Dothraki.
The Hellenistic period is almost made for TV, with murder, intrigue, incest, and betrayal. Though I'd start a bit earlier than Alexander and instead set the scene with Sparta and Athens before getting into the rest.
As for my suggestion, I'd skip the Vikings - done to death now - and instead turn to the Three Kingdoms Era in China.
It was a violent and unstable time in Ancient China. So you have a collapsing empire, a struggle for power with some big battles, and eventually the emergence of a new empire, all of which makes for good TV. It's a well documented period, but also one which seems less well known in the West.
I wrote a post recently that included an experiment to try to show LLMs have advanced a lot in terms of shallow thinking, but not deep thinking. If true, the gains we'll get from continuing to scale up LLMs will not give us novel insights like curing cancer or whatever.
My vibes-based intuition is that scaling LLMs alone gets very high narrow intelligence - as we've seen already - but doesn't hit AGI. There's a persistent underlying stupidity to LLMs which hasn't improved to anywhere near the extent of the things they're good at. This makes me doubtful that scaling can fix it, and I suspect some supporting paradigms will need to be included to finish the job.
I think we need more adversarial/collaborative architectures, like multiple LLMs engaged in a task.
One generates a task graph and drives execution to completion. Each step of the task graph has generators and validators. The generators try to propose solutions, and the validators try to poke holes in them. That generation/validation process goes on until the validators find only tinier and tinier holes, or until the process converges, and then a third process looks at their output and says, "ok, this is good enough."
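A toy sketch of that generate/validate loop, with `gen` and `val` as hypothetical stand-ins for the separate LLM calls (here plain functions, so the control flow is visible):

```python
from typing import Callable

def refine(generate: Callable[[str], str],
           critique: Callable[[str], list[str]],
           task: str, max_rounds: int = 5) -> str:
    """Generator proposes, validator pokes holes, until the holes run out."""
    draft = generate(task)
    for _ in range(max_rounds):
        holes = critique(draft)
        if not holes:               # the third process: "ok, this is good enough"
            break
        draft = generate(task + " | fix: " + "; ".join(holes))
    return draft

# Toy demo: the generator just echoes its prompt, and the validator
# keeps demanding "tests" and "docs" until the draft mentions both.
def gen(prompt: str) -> str:
    return prompt

def val(draft: str) -> list[str]:
    return [w for w in ("tests", "docs") if w not in draft]

print(refine(gen, val, "write the parser"))
```

In a real system each role would be a separate model call (possibly different models), and `max_rounds` guards against the loop never converging.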
LLMs illustrate that the fundamental nature of intelligence is interconnected neuron density. It's likely that, similar to how brains have had to evolve various structures and scaffolding such as the amygdala, hippocampus, and the various cortices and neocortices, there will need to be similar scaffolding for LLMs in the future.
Does anyone here know of a good deep dive on what it would take to fix the US healthcare system? I'm only ever exposed to the left-wing "corporations (and insurers, specifically) are bad", but something tells me it's probably not that simple...
Probably a good place to start is our very own: https://www.astralcodexten.com/p/book-review-which-country-has-the
I wouldn't call it a deep dive, but I did sketch how a libertarian might fix the US healthcare system, a few years ago. It might serve as a starting point.
https://www.quora.com/As-a-libertarian-how-do-you-think-the-US-should-reform-its-health-care-system/answer/Paul-Brinkley-1
I guess that depends on what part you think is broken.
The only remotely complex part is that corporations buy off US politicians, so it can't be fixed; it's just one of many US governance problems.
I found this a really helpful place to start for figuring out why the US healthcare system is the way that it is - you might find it useful as well as a starting point in determining how to fix it. https://podcasts.apple.com/us/podcast/the-peter-attia-drive/id1400828889?i=1000678878859
If you don't have Apple Podcasts, search "Choices, costs, and challenges in US healthcare" from The Peter Attia Drive on your favorite podcast app.
Do you have something text based? I find it really hard to pay attention to video/audio content.
I wonder what people worried that AI is going to kill us all a few years from now think about Trump's recent push to curtail visas for tech workers?
Like, a disruption to the Silicon Valley innovation ecosystem building doomsday machines seems actively good from their perspective?
Btw, it would not be the first time the priorities of anti-globalist Republicans and AI doomers coincided; see Steve Bannon's successful campaign against the Congressional moratorium on AI regulation: https://www.theverge.com/politics/704424/ai-moratorium-ted-cruz-steve-bannon-trump
The minus is that deconcentrating AI makes government restrictions a lot harder - right now the US or California government can pretty much unilaterally hit a pause button (maybe with a deal with China). If the US loses the tech advantage and AI research gets scattered between a dozen countries it does slow down a bit, but it also becomes much less manageable.
This sounds like a generalized argument along the lines of "we can't ever use our power, because then we would lose it." If it can't ever be used, why keep it in the first place? Like, after "unilaterally hitting a pause button", researchers might also scatter across a dozen different countries?
Doesn't seem like the kind of thing that could help that much. Maybe it slightly slows things down but AI progress will still march forward.
How self aware do you think the 95%ile most aware human beings are? Let's use a scale where '100% self aware' would be at each moment you recognize the total set of drives and instincts that are active within your brain, and '0%' means you have zero internal awareness - not even of how you feel - and you are just acting.
In other words, how ignorant are the vast majority of people of their own drives, instincts, and motives, and how much does this matter?
All men are slaves to the Darkness That Comes Before.
I love those books.
Does anyone know of papers investigating the wisdom of crowds effect on a single LLM? That is, if you retry the same prompt 10 times and take the mean or median answer, does that improve the accuracy over single-shot?
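(This is essentially the "self-consistency" idea from the prompting literature: sample the same prompt several times at nonzero temperature, then aggregate. A minimal sketch of just the aggregation step, with hard-coded lists standing in for the actual LLM samples, which a real experiment would fetch via an API:)

```python
from collections import Counter
from statistics import median

def aggregate_numeric(samples):
    """Median of repeated numeric answers (robust to outlier runs)."""
    return median(samples)

def aggregate_categorical(samples):
    """Majority vote for discrete answers (self-consistency style)."""
    return Counter(samples).most_common(1)[0][0]

# Stand-ins for 10 retries of the same prompt; a real experiment
# would call an LLM API here instead.
numeric_runs = [42, 41, 42, 43, 42, 40, 42, 44, 42, 41]
discrete_runs = ["B", "B", "A", "B", "C", "B", "B", "A", "B", "B"]

print(aggregate_numeric(numeric_runs))       # 42.0
print(aggregate_categorical(discrete_runs))  # B
```

The intuition is that independent samples make independent-ish errors, so the median/mode filters out the occasional bad run.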
Am so grateful for Trump's push for peace in the middle east. For the first time you have competent negotiators recognizing which players you need involved and who has leverage and who doesn't and what demands are achievable.
Hopefully within 10 years you have regional integration and two peaceful states for two peoples, each wishing for all of it but only culturally, not militarily.
He should get the Nobel Peace Prize. What did Obama do to deserve it? A nuclear deal that fell apart when his term was barely over?
Trump resolved Nagorno-Karabakh, helped with India-Pakistan, got the Abraham Accords, Israel-Iran, etc.
He's not going to get it because he's seen as too offensive, but you get my point
In 2020, the prize went to the World Food Programme. It isn't going to a guy who has eviscerated foreign aid. Nor to the guy who bombed Iran nuclear program sites. Nor to a guy who claims and exercises the power to kill suspected drug traffickers on the high seas. Not to mention the "Department of War" stuff.
And, btw, what role did Trump personally play in supposedly resolving any of those conflicts? The 1973 prize went to Kissinger, not Nixon.
>two states
So much for understanding the players involved and what demands are achievable.
Hahahaha indeed.
Rob Malley's book 'Tomorrow Is Yesterday' is an excellent dissertation on why two states was never an organic solution, given the aspirations of the people involved.
But at some point enough war and third-party interests might just wash all that away.
This is an oil pipeline. You realize that? This is about $$$. I think that's good, and will enhance the stability of the Abraham Accords. Pay people enough, and they can hold their nose and work with Israel.
Of course. Regional stability is good for business. The more Saudi encouragement the better. AI and other stuff, too.
Duck sex? I read about raising ducks and someone said that all duck sex is rape! The female ducks have somehow evolved to make it very difficult for the male to fertilize an egg.
https://www.sciencefocus.com/nature/duck-penis-corkscrew.
I can't understand why evolution allows this. Seems like the first evolutionary glitch that started it would have reduced the probability of mating and would not have been passed down. Instead, it seems like the entire species now has females with adaptations that make mating difficult. The article says it gives the female some control, so I get why the female would be happy with that evolution, but what evolutionary advantage does it have such that it continues not just to be passed down, but to get worse over time?
Cat penises are barbed. They literally hurt the female cat while fucking them.
Most animal sex is rape; only in a few primates can the female feel pleasure from sex. (With a "breed only when fertile and non-pregnant" philosophy, the most efficient thing for the female seems to be "don't fight too hard" -- it's just once or a few times.)
My guess is: since the female ducks' adaptations are basically defense mechanisms (only?) against unwanted fertilization, the males' aggressiveness in sex probably evolved first. For the males, this seems advantageous since they don't depend on the female's decision. Meaning also that the more aggressive the male (and his style of sex), the better his reproductive chances. But females that can choose a better partner, rather than the first one who chose them, should have an advantage too. So for them, defensive mechanisms are advantageous.
As long as the defensive adaptations don't completely hinder reproduction, there isn't really an issue here: If a female duck has sex with say 10 male ducks over a couple of days, it's okay evolution-wise that she got fertilized by the 10th duck rather than the 1st.
I'm thinking of the first duck to evolve this, duck zero. I would expect that duck to have fewer offspring and (I don't know, but) not pass the trait on to every one of them. I would have expected those offspring also to have fewer offspring, and the trait to eventually breed itself out, rather than the males also adapting. Just seems weird.
Such a female duck zero would still be able to select whom to mate with, hence increasing her reproductive success compared to the general population. That holds as long as she does not overshoot by being too defensive, and duck zero was probably just a bit defensive (since the males were not yet that aggressive).
Why would you expect this to be disallowed? To me it seems largely equivalent to any other type of female choosiness, which generally reduces the probability of mating but increases the expected fitness of the offspring.
Unfortunately I have been hospitalised for psychotic symptoms after a long and intense interaction. Pray for me, please.
I will. I've been there. We can chat about it if you want.
Sorry to hear! If you're reading this blog, it's likely the case that you've sufficient intelligence to overcome this. I was hospitalized a number of times in late adolescence and am totally fine as an adult. Feel free to message me if you would like some encouragement.
Will do and hoping you get better. St Dymphna, pray for him!
Popular history programming has gotten a bit stale. There's program after program on the vikings, the Romans, the Egyptians, and a few more. It's time for a refresh.
Your mission, should you choose to accept it, is to select parts of popular history that have been overdone, and suggest replacements.
For my part, I'm pressing pause on Rome, and substituting the Hellenistic period, from Alexander the Great to the rise of Rome. There's a good three hundred and fifty years of history there, and the geography stretches from the Mediterranean well into central Asia. And we get to talk about why the New Testament is written in Greek, not Latin or Hebrew.
As everyone here knows I am reluctant to push my own podcast. But you really should try this excellent episode on the Opium War. And lots of other subjects, Roman and non Roman!
https://podcasts.apple.com/gb/podcast/subject-to-change/id1436447503?i=1000705823343
China, from the 1911 revolution to 1949.
Hmm Dan Carlin is doing a series on Alexander the Great. (Though new podcasts are coming out at his typical glacial pace.) I hear that Ken Burns is doing a documentary on the American Revolution. I'm looking forward to that... due out in November
One of the things I find interesting about Carlin is that he often makes podcasts about periods and places I don't normally see, such as the Visigoths or the Munster rebellion of 1534. Untrammeled paths aren't his focus, it's more about times of very intense passion or violence, and he lets that carry him to relatively unexplored times and locations.
In the UK, history programming has become 90% WW2. As interesting as it is, I would love some Roman, Viking, or Egyptian history. The colonial era is the most overlooked, as it makes people uncomfortable.
I want pre-tokugawa Japan. What were things like there? Seems chaotic.
Have you listened to any "History on Fire"? (You have to get past his thick Italian accent.)
The late Sengoku era is probably the single most popular period for Japanese media to cover, and if not, it's up there with the Bakumatsu and WW2.
(earlier could definitely use more treatment in fiction though... Onin War anime WHEN?)
I'd drop the fall of the Roman Empire and replace it with the fall of the Tang Dynasty in China. It's a fascinating tale that is big in modern Chinese culture but is almost unknown in the West. A good jumping off point is the Battle of Talas, one of the only times a Chinese army fought a Muslim army directly, 20 years after the Battle of Tours.
https://knowyourmeme.com/memes/historical-battle-shitposts-decisive-victory
Mm... yes, decisive tang strategic victory.
Sadly, wikipedia has lost its sense of humor, and the wikipage has been "cleansed" of all sense of military humor.
Less colonial period in general, I want to know something (anything) about the post Roman / pre Muslim history of north Africa and Iberia.
I really want to understand the Islamic Golden age. Would love some historical dramas in that time period.
I call for the return of Sophonisba
The last hundred years of the republic is more interesting than the empire. And has lessons for today ( amiright?).
Somebody tell me more about the Sarmatians, I see the name and know nothing about them.
The Sarmatians had one of the few instances of warrior women fighting alongside the men. For their time, they were quite egalitarian!
More like warrior girls; the practice was reportedly that boys and girls fought from adolescence until marriage, after which the young men remained warriors and the young women stayed home to raise the next generation of warriors (with more relevant experience than most mothers-of-warriors have had).
Reports that the girls had to take the head of an enemy in battle before they could marry are almost certainly apocryphal or grossly exaggerated. History and archaeology conspicuously fail to record the requisite mountains of skulls, and the girls would have been light-cavalry skirmishers who aren't in a position to take anyone's head. If there were a pro forma requirement, it would mostly have been met by throwing a javelin at someone fifty meters away, then having five of your besties swear they totally saw you kill that dude, no way he could have survived.
Still, they were in the right place at the right time to have skirmished with Greek settlers north of the Black Sea and, after a game of telephone all the way back to Athens, started all the talk of "Amazons".
+1
Written more eloquently than I could have!
This is exactly why I’ve chosen to write my historical fiction novel about the Scythians. The Pontic Steppe deserves far more love than just being the inspiration for the Dothraki.
500 AD Canada.
The Hellenistic period is almost made for TV, with murder, intrigue, incest, and betrayal. Though I'd start a bit earlier than Alexander and instead set the scene with Sparta and Athens before getting into the rest.
As for my suggestion, I'd skip the Vikings - done to death now - and instead turn to the Three Kingdoms Era in China.
Why specifically Three Kingdoms Era
It was a violent and unstable time in Ancient China. So you have a collapsing empire, a struggle for power with some big battles, and eventually the emergence of a new empire, all of which makes for good TV. It's a well documented period, but also one which seems less well known in the West.
The recent Sutton x Dwarkesh interview sparked a massive debate in the AI "community" over whether LLMs are a dead end.
What are your thoughts?
I wrote a post recently that included an experiment to try to show LLMs have advanced a lot in terms of shallow thinking, but not deep thinking. If true, the gains we'll get from continuing to scale up LLMs will not give us novel insights like curing cancer or whatever.
Here's the post: https://taylorgordonlunt.substack.com/p/llms-suck-at-deep-thinking-part-3
My vibes-based intuition is that scaling LLMs alone gets you very high narrow intelligence, as we've seen already, but doesn't hit AGI. There's a persistent underlying stupidity to LLMs which hasn't improved anywhere near as much as the things they're good at. This makes me doubt that scaling can fix it, and suspect that some supporting paradigms need to be included to finish the job.
I think we need more adversarial/collaborative architectures, like multiple LLMs engaged in a task.
One generates a task graph and drives execution to completion. Each step of the task graph has generators and validators. The generators try to propose solutions; the validators try to poke holes in them. That generation/validation process goes on until the validators find tinier and tinier holes, or until the process converges, and then a third process looks at their output and says, "ok, this is good enough."
I think this gets you further than we are now.
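A toy sketch of that generate/validate/judge loop. The role functions here are deterministic stand-ins (each "revision" closes one of three initial holes); in a real system each role would be backed by an LLM call, and names like `run_task` are made up for illustration:

```python
def run_task(generate, validate, judge, max_rounds=5):
    """Propose/critique loop: the generator drafts, the validator pokes
    holes, and the judge decides when the remaining holes are small
    enough to accept."""
    feedback = []
    candidate = None
    for _ in range(max_rounds):
        candidate = generate(feedback)
        holes = validate(candidate)
        if judge(holes):
            return candidate
        feedback = holes
    return candidate  # best effort after max_rounds

def make_toy_roles():
    """Deterministic stand-ins for the three LLM roles."""
    state = {"round": 0}

    def generate(feedback):
        state["round"] += 1
        return f"draft-v{state['round']}"

    def validate(candidate):
        # Pretend each revision closes one of three initial holes.
        return ["hole"] * max(0, 3 - state["round"])

    def judge(holes):
        return len(holes) == 0

    return generate, validate, judge

gen, val, judge = make_toy_roles()
print(run_task(gen, val, judge))  # draft-v3
```

The convergence criterion here is the crude "no holes left"; a fancier judge could accept once the holes stop shrinking between rounds.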
LLMs illustrate that the fundamental nature of intelligence is interconnected neuron density. It's likely that, similar to how brains had to evolve various structures and scaffolding such as the amygdala, hippocampus, and the various cortexes and neocortexes, similar scaffolding will be needed for LLMs in the future.
Very funny to me. “LLMS” are dead. They channel the dead to us.