Apparently ribosomes (the RNA-based molecular machines that make proteins by running along the mRNA template) sometimes go faster or slower—and can even bump into each other and get into lil traffic jams!
And apparently the traffic jams happen more with age (at least in C. elegans), so this might be part of why loss of proteostasis is a hallmark of ageing (e.g. buildup of misfolded proteins, as especially happens in Alzheimer's, Parkinson's, and Huntington's)
Not a full hypothesis, but mechanistically they did find that ribosome "elongation pausing" didn't happen more *overall*, but did happen more "at specific positions ... including polybasic stretches," and that's what causes the increased collisions
I didn't notice a mechanistic hypothesis for why pausing increases with age at those locations. They don't mention evolutionary hypotheses, but the in-general evolutionary theory for why ageing evolves is that natural selection cares less about late life (after you've maybe already reproduced a bunch anyway) than about early life; and trade-offs and pleiotropies may be involved too*; though understanding more detail than that would require, idk, knowing what exact trade-offs were involved instead of just that there might have been some.
Scihub doesn't accept new submissions right now for some legal reason (lol) so I haven't checked. Thanks for the writeup!
The interesting part would be whether:
- the RNA being read is somehow messed up by itself, already at the point of transcription (???)
- the ribosomes have some subtle faults that don't do much functionally, outside of the polybasic stretches (but aren't ribosomes constantly regenerated?)
- the chemical environment within the cell is outside of the expected conditions (how?), so the ribosome can't do its thing properly
Intuitively I lean towards the last option.
Given that platelets have functional ribosomes and mRNA, but not much else, it seems like you could do some interesting work around transferring filtered parts of the cytoplasm between young and old patients' platelets, monitoring elongation pausing, and isolating what's responsible.
Does anyone know good sources for learning about test-tube meat? From a rationalist perspective it seems like working to end factory farming should be at the top of the docket, and cultivated meat seems to me like the most likely way of doing that. I want to apply my computer science degree to research in this field, and the only online source on this I've found has been the Cultivated Meat Modeling Consortium: https://thecmmc.org/ but they haven't answered any of my emails. Just wondering if anyone here has any knowledge on the subject or can point me towards good resources or communities for learning and discussion.
With state-of-the-art tech, we're orders of magnitude from economic feasibility. Anyone claiming otherwise is probably trying to sell you something and/or scam a VC.
From my limited experience working with cell cultures, the article checks out. Cells are just _so_ fussy about having a sterile environment, it turns out it's way cheaper to grow them in a cow since the cow comes bundled with an immune system.
Cultivated meat is still in its early stages, but this is why I want to contribute. I certainly think it will be feasible once we develop the right technology. I've heard that computer modeling comes into play.
What timeline are you predicting on the "right technology"?
I mean, it's a really important ethical and environmental problem so go nuts, just be aware you might not see widespread adoption of this tech in your lifetime.
I'm not qualified to make that prediction, although many companies in the field are predicting major developments in the next 20 years (of course they're biased though).
Regardless of timeline, I'd like to use my coding skills to help this technology develop faster.
An Ivermectin paper about a very large study in Itajai, Brazil. I know you and everyone is sick of this topic but I'd be very curious to see what you think of this paper (which may be updating a previous one?).
What is the rationale for profits generated from the sale of stocks and other similar financial instruments (incl. crypto) being taxed at the person's income tax rate? Is there a legit economic argument apart from 'it tends to yield greater revenue for the government when compared to a flat tax' ?
Naively one could point out that trading decisions I make as a private investor in private companies do not involve my country's government at all. This on its own seems to create a distinction between trading vs. working 9-5 that should be reflected in tax policy.
Since the capital is not meaningfully tied to any jurisdiction, you can run this line of reasoning further and abolish investment profit tax altogether.
Unfortunately, the reason for the tax boils down to "we need money and that guy over there has some", so your (correct) conclusion has no bearing on reality.
See second paragraph. Why *should* the passive exploitation of market fluctuations be treated the same as payment for labour? Seems like the govt. could have a reasonable claim to the latter, but not the former.
I'm less interested in which one is taxed higher, and more interested in why they are treated the same (at least in my country). What is the economic basis for this? The mechanism of income-earning and mode of participation in the economy are totally different for e.g. a construction worker and a day-trader.
You should actually provide your reasoning as to why any income should be treated differently. And why you think unearned income should be treated better.
Your second paragraph only says basically that the government has no claim to your income because it’s that form of income. It’s like saying you think that bartenders should be taxed but not property developers. Income is income.
Different forms of income are taxed differently. Long-term capital gains (stocks held for more than 365 days, dividends from stocks, capital disbursements from funds) are taxed differently from short-term capital gains. Gambling winnings (arguably the most unearned of unearned income) are taxed slightly differently. Inheritance is taxed differently than wages or capital gains. So no, income is not income. As for Sloan's point, capital gains may come from companies operating entirely outside the investor's country. If you are looking for a better reason for taxing investments: the government provides stability/security and enforces the contracts which allow investors to profit from investing, and taxes on capital gains fund the government's ability to continue providing that stability/security and enforcing those contracts.
The normal argument that a stock trader has a claim to the income from their capital is that they're actually providing useful labor - efficiently allocating capital to companies that are likely to provide a return on investment. Why is using your labor to move money around any different from using your labor to move bricks around?
If anything, I would think that the government has a better claim on the stocks than on the bricks, because the entire concept of a stock market depends on the government-created legal framework that allows for joint ownership of an abstract legal entity, while houses existed long before deeds to property did.
A guy on Discord asked me my opinion on Orbit Culture. I worried it was going to be some awful culture war nonsense, but no, it's just the name of a band.
IIRC the only acute effect boosting global serotonin may give you is a night at the ER due to serotonin syndrome. You can check that yourself by megadosing 5-HTP (a legal supplement), bypassing the rate limiting step of serotonin synthesis.
I'm a bit rusty on my psychonautics 101 now, but the trick to serotonergic recreational drugs is that there's a whole bunch of different 5-HT receptors and they preferentially activate specific kinds.
I don't believe it's possible to induce SS just with 5-HTP. As far as I know, synaptic vesicles have limited room, so the extra 5-HT just goes down the drain.
Combining 5-HTP and MDMA may increase the risk of SS but even for that we don't have much evidence. I wouldn't mix those out of an abundance of caution though.
I think it's possible in principle but according to quick googling nobody really tried.
I know you can bump serotonin way above physiological levels with this (a bunch of publications used this for research purposes) but maybe you need a MAOI or something to really hurt the brain.
Tried it once: a neighbor had some extra Prozac so I took one, in the evening. Went to bed unimpressed - didn’t notice any effects at all. Woke up and was like “Oh shit.” Felt numb and dazed and out of it all day long. Kind of like being stoned but without the fun parts. No euphoria, no interesting thoughts, or even much interest in anything. Literally stared at a blank wall for the better part of an hour, not because I was into doing that, but because I couldn’t gin up enthusiasm enough to do anything else. Another night’s sleep and it went away, but never again. Totally just a buzzkill - significantly less fun than standard-issue reality.
At first, SSRIs lead to slightly increased serotonin in the synapse. This extra serotonin activates presynaptic autoreceptors, which reduce further serotonin release through a negative feedback loop. You'd need to take it for 3-5 weeks for these presynaptic autoreceptors to become desensitized and serotonin levels to actually increase significantly.
You'd need to take something like Pindolol (or another antagonist with high autoreceptor affinity) to block the autoreceptors and see effects faster.
Generally, taking SSRIs as a person without psychiatric disorders does not induce euphoria. It still causes the normal side effects, which tend to be negative. The only somewhat-frequent positive effect I can imagine is improving mood stability.
SSRIs and MDMA both increase levels of serotonin in the brain, so why don't SSRIs get you high? First, different mechanisms of action. Both MDMA and SSRIs are serotonin reuptake inhibitors. But MDMA is also a serotonin releasing agent -- apparently it reverses the transport of serotonin in the reuptake cycle, which causes serotonin to be released. Also, MDMA is an agonist to some serotonin receptors. Also also, everything I just wrote applies to dopamine also (although to a lesser degree).
> SSRIs and MDMA both increase levels of serotonin in the brain
> MDMA is also a serotonin releasing agent
Assuming an equal concentration of serotonin in the synapses, why does the difference of mechanism have any impact on the effects?
My guess is that SSRIs stimulate the firing of neurons that already fire (because serotonin gets released and then stays). It doesn't work as much for neurons that rarely fire because it gives enough time to MAOs to get rid of the serotonin in the synapse + for the unaffected SERTs to perform their reuptake.
This hypothesis doesn't support your claim that "taking SSRIs as a person without psychiatric disorders does not induce euphoria". What do you think?
> MDMA is an agonist to some serotonin receptors
Is the affinity for these receptors high enough for it to have a clinically-significant effect?
> Assuming an equal concentration of serotonin in the synapses, why does the difference of mechanism have any impact on the effects?
The brain contains a bunch of different neurotransmitter receptors. Some of these receptors are activated by serotonin. So when we casually say "serotonin receptors", we're referring to multiple distinct things.
Imagine a brain that has 50 serotonin units in the 5-HT1 receptor and 10 in the 5-HT2 receptor. And let's say that this brain overall reuptakes 50 serotonin per hour, split proportionally among receptors, and brings in 50 serotonin per hour, which is split depending on whatever -- let's say 30/20 in this case.
If the brain is given an SSRI, the SSRI stops the reuptake, the 50 serotonin still come in, and the brain ends up with 80 at 5-HT1 and 30 at 5-HT2 -- all in all, +50 serotonin. If the brain is given MDMA, it ends up with a different distribution. Maybe "serotonin releasing agent" means that MDMA releases 50 serotonin into the ether (sorry, I don't know how that works) where it gets used equally by each receptor. So the receptors start with 50/10 serotonin respectively; the reuptake occurs for -45/-5, the brain naturally adds 30/20, and the MDMA adds 25/25; which brings us to 60/50. 5-HT2 is the euphoric receptor (in this example), and that's how the mechanism matters.
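(For concreteness, here's that toy model in a few lines of Python -- same made-up numbers, except the MDMA reuptake split is done strictly proportionally rather than rounded to -45/-5, so it lands near 63/47 instead of exactly 60/50:)

```python
# Toy model only: two receptor pools, made-up units, nothing physiological.
def simulate(baseline, influx, reuptake_total, ssri=False, mdma_release=(0, 0)):
    """One hour of the toy serotonin economy across 5-HT1 and 5-HT2 pools."""
    r1, r2 = baseline
    if not ssri:
        # Reuptake is split proportionally to current pool sizes.
        total = r1 + r2
        r1 -= reuptake_total * r1 / total
        r2 -= reuptake_total * r2 / total
    r1 += influx[0]          # natural release, split 30/20 as above
    r2 += influx[1]
    r1 += mdma_release[0]    # MDMA dumps extra serotonin equally into both pools
    r2 += mdma_release[1]
    return round(r1, 1), round(r2, 1)

print(simulate((50, 10), (30, 20), 50, ssri=True))              # (80, 30)
print(simulate((50, 10), (30, 20), 50, mdma_release=(25, 25)))  # (63.3, 46.7)
```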
> MDMA is an agonist to some serotonin receptors
No idea. I don't even know what "serotonin releasing" really means / how it works!
> My guess is that SSRIs stimulate the firing of neurons that already fire.
Seems reasonable to me. Though this would be a general and indirect effect of the SSRI -- it doesn't do anything to individual neurons, so this would have to be mediated through the effect of serotonin neurotransmitters on neuron activity.
Anyone have sources/info/opinions/guesses about how often omicron causes false negative Covid test results? There was something about vaxxed people having much lower levels of virus in the nares. And then maybe omicron behaving differently in the respiratory system.
I just read that England's canal system is less useful than mainland Europe's because a much larger fraction of English canals are too narrow and its locks too short to accommodate big boats. As a result, canals on the mainland get much more use.
Would it be worth it (e.g. - eventual positive ROI) for Britain to upgrade its canals to European standards? Does Britain's smaller geographic size affect the economies of scale of using canals to move bulk goods?
Geography is against us here. The canal network in England passes through both dense urban areas and hilly rural areas, neither of which would be easily routed through (or willingly sacrificed).
And in Europe, canal-building can link up thousands of kilometres of large-scale navigation; whereas in England the largest canal is the 58-kilometre Manchester Ship Canal, which gets as far inland as is realistic before the hills start getting in the way; and even that doesn't carry much freight anyway.
I do think you're onto something, about the smaller geographical size; also, consider the fact that Britain is an island - nowhere inland is that far from a coastal port. Not true of many countries on the continent.
Canals were most important during the early years of the industrial revolution because of a lack of good roads/rail and the low quality of engines (weak engines could still move heavy loads by water). While I have no doubt that canals can still be useful today, they are also very expensive to build. My guess would be that their usefulness-to-cost ratio is neutral or worse at this point. Too many good roads and rail lines; and even if the canals could suddenly come into existence for free, the boats and barges to use them don't exist and would need to be purchased.
Would you purchase less without an Amazon account, or would you transition your purchases to some other company? If you move the purchases, would your alternative (for instance Walmart/walmart.com) also be Moloch?
I'd purchase the same amount of stuff, just from other vendors. And sure the other vendors are also under the sway of Moloch, but being smaller they seem less evil.
I guess my question is: do we throw up our hands, say Moloch is king, and conclude the right (rational) thing to do is keep using Amazon, because they provide clear value to me? Or is big Moloch so much worse than small Moloch that we should select against the big one?
I guess that depends on why you consider Amazon to be Moloch. To me, Moloch means the slow churning of unintentional negative outcomes that slowly degrade a system and keep it from being good/better. On that reading, a series of small shops and local artisan crafters can be just as much Moloch as a big chain like Amazon, or more.
If you're convinced Amazon is some type of evil, then by all means stop shopping there. If you can't articulate why they are evil or why your alternative options are not, then maybe redefine what you consider evil?
Does anyone have an informed opinion, or a link to good sources, on how bad things will get if Russia cuts off the European natural gas supply? I am trying to cram relevant knowledge of the Current Events...
My impression is that the US is in a position to export far more LNG to Europe than in the past, thanks to the exploitation of shale reserves. As a result, Russia cutting off the gas looks more like "sharp rise in the price of gas, the worst effects of which governments can stave off with temporary subsidies if they choose" than "no gas available at any price, industry shuts down and people freeze to death in their homes".
You are correct about the increased LNG imports; on the other hand, the EU is, I think, more dependent on gas overall due to the combined effects of decarbonization and denuclearization.
You can look into the effects of the Ukrainian shutdowns in 2014, when Russia was taking Crimea and fighting in the Donetsk region. Gas lines through the country were turned off, for obvious reasons.
My memory is that there were significant shortages throughout Eastern Europe.
I remember that. Disruption at that time didn't reach levels noticeable by normal people. At least in most countries. I live in Eastern Europe, you know. But this time it might be far worse, that much is clear. Russia, as far as I know, has never shut down all their pipelines for months, which they might well do if they're hit by heavy sanctions, and European dependence on them is probably greater now than in the past (?). However, "far worse than almost nothing" is a broad category and I'd like to get a better estimate :-)
You would likely know more than me, then. What stuck with me is that Ukraine felt very pressured (unfairly so, even given the situation they were facing) into meeting Russia's demands because of the threat of no heat. Maybe the situation resolved and/or Ukraine caved before most people saw the shortage.
I was mainly thinking about other countries than Ukraine, like Germany. Ukraine was pretty economically screwed in 2014, for various other reasons (and it still is). Not sure how much short-term gas shortage contributed to that.
But one thing that was very bad for Ukraine was that Russia stopped selling them gas at below-market rates, like they did when a pro-Russian government was in power in Kiev - until 2009 and then again from 2010 to 2014. Since 2015, per Wikipedia, Ukraine gets its gas from the EU (which gets it mostly from Russia), but at market prices.
What are some techniques people use to maintain long-distance relationships? (I don't necessarily mean romantic, which is its own separate kettle of fish.) I'm particularly interested in ones across multiple time zones, such that synchronous interaction is difficult. My husband and I both studied in Europe but live in the US, and have struggled maintaining connections with European friends. A vanilla email conversation is just too easy to let slip and then not pick up again, so it tends to naturally devolve into the annual Christmas card exchange (both low-frequency and low-content per interaction).
Somehow the Signal app has made me get slightly more back in touch with a friend who moved overseas a decade ago. I can ignore texts but have a harder time ignoring Signal messages. It can be an asynchronous text-based Signal conversation but it keeps the sense of warmth alive. It’s the only thing I use Signal for, which is embarrassing, but somehow it works.
Zoom helped us tremendously. Highly recommend also having a couple of beers with it to ignore the inherent awkwardness. Time zone differences aren't that much of an issue if you're speaking during the weekend.
Time zone differences are easier on weekends, but we're in the age group of having small children. If this were in person, everyone having kids would be a bonus as the kids could just entertain each other, but online it's a headache.
It’s surprising how much people love to get a real letter written in cursive. My friends tell me they always share them with their families. It’s a small thing but it seems to add a lot.
I don't think it's surprising, I really like getting real letters myself! One of the few things I remember fondly about the first couple of years in the US are the (real handwritten) letters I exchanged with friends back then.
I was reminded of this New Yorker article by Jill Lepore while in discussions about Peter Coleman's new book, "No Way Out: How To Overcome Toxic Polarization". In "No Way Out" Coleman emphasizes getting into the details, or adding complexity, when evaluating your opposition (it is a good read: recommended). Avoid temptingly simple descriptions or understandings of their policies and plans. Suggesting "Anyone who would support ???? must be an idiot" is certainly an oversimplification, for example.
According to research by Coleman and others, expanding on the details is going to provide a more accurate, informed picture, and probably a much less polarizing one as well.
Interestingly, the highly successful Clem Whitaker and Leone Baxter, founders of "Campaigns, Inc." (which also became known as "The Lie Factory"), won 70 of the 75 political campaigns they worked on by simplifying campaigns to slogans like "I like Ike". They also recommended "not to explain anything", as this bores and confuses voting populations.
So it seems understanding the average voter's inclinations and running a "simplify" campaign, like an advertising agency would, appeals to masses of voters and has been a key to political success. Lots of political success! Yet according to Coleman, simplifying your position only increases polarization. Seems like a difficult situation to work out of! Here's a quote from Lepore's article, and the whole article is linked after the quote:
"Never underestimate the opposition. The first thing Whitaker and Baxter always did, when they took on a campaign, was to “hibernate” for a week, to write a Plan of Campaign. Then they wrote an Opposition Plan of Campaign, to anticipate the moves made against them. Every campaign needs a theme. Keep it simple. Rhyming’s good. (“For Jimmy and me, vote ‘yes’ on 3.”) Never explain anything. “The more you have to explain,” Whitaker said, “the more difficult it is to win support.” Say the same thing over and over again. “We assume we have to get a voter’s attention seven times to make a sale,” Whitaker said. Subtlety is your enemy. “Words that lean on the mind are no good,” according to Baxter. “They must dent it.” Simplify, simplify, simplify. “A wall goes up,” Whitaker warned, “when you try to make Mr. and Mrs. Average American Citizen work or think.' "
It's worth noting that the observed difference between the recommendation to avoid simplification by Coleman and the recommendation to simplify by Whitaker&Baxter seems to be fully explained by the different, perhaps even opposite goals.
In evaluating your opposition, your goal is to obtain an objective understanding of their position that truly matches reality and which parts of it are strong and weak.
In communicating a message to your voters, your goal is to have them obtain an understanding of your position that favors your position, exaggerates its strengths and diverts all attention to them, and suppresses or distorts the parts of it that are weak.
Doing the former is very useful for doing the latter effectively; however, the fact that it's useful for you to do a proper analysis and gain a balanced understanding does not imply that it's always useful for you if all the voters do a proper analysis and gain the same balanced understanding.
Being polarized harms your thinking, so you should avoid that; however, in many aspects of politics it's quite beneficial if you can get others polarized. It also may be very useful - or even a de facto requirement - to *appear* polarized. You should not think that "Anyone who would support ???? must be an idiot"; however, when you're done thinking, it may well be optimal behavior to loudly proclaim that yes indeed, anyone who would support ???? definitely must be an idiot.
Good thoughts on this topic! My current feeling, from another Lepore article or book I read, is that a few really smart people are controlling/persuading the masses of less sophisticated people, Masters and Whitaker being good examples of shrewd manipulators of the public, in particular the voting public. I believe that the overall education level is slowly continuing to improve in the USA, and that someday these persuaders will have a tougher audience, one requiring more comprehensive information about a candidate rather than keeping it simple and explaining nothing. Someday the masses might require explaining. Democracy will be better for it, IMHO.
Right. I've gotten to a stage in my life, (old fart, get off my grass), that I've lost all interest in politics... (because of disgust.) I wanna talk about other stuff.
If you want to have a conversation with someone, you need to take what they say seriously and in good faith... what they say is what they believe. I know that sounds simple. I was reading this piece on 'everything studies' and it hit me that the problem in the conversation was Ezra assuming ulterior motives...
If you didn't follow the Harris - Klein thing then this will be almost meaningless.
Entirely unserious question: has anyone thought what the ideal alignment for an AI would be? I'm thinking Lawful Good, but I'm willing to hear other thoughts.
Presumably we'd want a superintelligent AI to be able to sometimes break rules for the sake of the greater good, so I'd have said Neutral Good, personally.
If we can assume full alignment on what "Good" means, then Neutral Good. If we can't, better stick with Lawful Good instead so it follows those "And, don't turn us all into paperclips" amendments.
A Chaotic Good superintelligence would axiomatically value freedom, and therefore would avoid single-mindedly focusing on its goal of producing paperclips.
If we aren't sure what we mean by "Good" or how to define the actions, then maybe Chaotic Neutral would be better - freedom (chaotic) mixed with a lack of emphasis on the correct course of action (neutral).
If you're Chaotic Good, you value both freedom and other people, and therefore value other people's freedom. If you're Chaotic Neutral, other people's freedom takes a back seat to yours.
Wondering about the attempts to make cars lighter so as to reduce fuel consumption. The easiest way was to reduce the size, and since then there has been a move to lighter materials (e.g. aluminum and carbon fibre in place of iron and steel). Now some manufacturers are deleting spare tires. Diminishing returns indeed. (All this is not to say that there have not been other very effective ways to reduce fuel consumption - through the ages, higher-efficiency engines, fuel injection, aerodynamics for reduced drag, autostop at lights, variable displacement, etc., have all played a part.)
But back to weight reduction, I wondered whether anyone had considered adding buoyant (lighter-than-air) sacs or bags or vessels of some sort.
What does a typical vehicle weigh now - 1.5 T? (That's 1500 kg or 3300 lbs.) If one could somehow reserve one m^3 of space for hydrogen-containing bags, how much would that help? (I think 1 m^3 is doable - above the headliner, inside the tailgate, inside the doors, under the seats, under the dash ...)
Per Wiki's article on lifting gases, dry air weighs 1.29 grams/litre. The lightest gas is hydrogen, at roughly 1/14 the density of air (approximately 0.09 grams/litre). And a pure vacuum would be even better, weighing nothing at all.
Let's assume the use of hydrogen - weight of air displaced = 1.29 g/l x 1000 l = 1290 g = 1.29 kg. Weight of replacement hydrogen = approx. 90 g (0.09 kg). Net weight reduction = 1.2 kg. That's not at all significant compared to the typical 1500 kg weight of the car, so in a practical sense it would be noise, or a rounding error. The driver might do better to take junk out of the trunk, or to skip supper. And that's not even taking into account the additional weight of the sturdy containers needed for the hydrogen.
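(The same back-of-envelope in Python, for anyone who wants to fiddle with the volume or swap in helium:)

```python
# Net mass saved by filling 1 m^3 of cabin voids with hydrogen instead of air.
AIR_G_PER_L = 1.29   # dry air, per the lifting-gas figures above
H2_G_PER_L = 0.09    # hydrogen, roughly 1/14 the density of air
VOLUME_L = 1000      # 1 m^3

saved_kg = (AIR_G_PER_L - H2_G_PER_L) * VOLUME_L / 1000
print(f"Net reduction: {saved_kg:.1f} kg")               # ~1.2 kg
print(f"Share of a 1500 kg car: {saved_kg / 1500:.2%}")  # ~0.08%
```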
But in a more theoretical sense, assuming vehicles had these cavernous empty spaces presently filled with air that could instead be safely filled with hydrogen, would that actually increase fuel efficiency? Weight would be reduced, but mass would not. Would it help?
Just idle curiosity. (Pun not originally intended.)
There was a Honda Civic sold in the US circa 1980 that got ~50 mpg highway?
Very light, with a small engine. Drag racers love to put bigger motors in 'em.
Hmm, so I went looking on the web for a "best MPG ever" list, and there was no mention of the Civic,
but I found this:
" No one really knew what to make of the diminutive Honda coupe when it first appeared on these shores, but its futuristic styling, impressive handling and exceptional fuel economy soon won over buyers en masse. Early models were targeted to those seeking fuel efficiency over all else, and the EPA rated the 1.3-liter four-cylinder 1984 Honda CRX at an astonishing 68 MPG in highway driving. The car's aerodynamic shape certainly helped, as did its tall gearing and curb weight of just 1,713 pounds, virtually unattainable in a moderately priced production car today. "
I totally want a modern-day Civic like that, but no one makes 'em.
The Civic CRX was great fun to drive, and surprisingly roomy inside for such a small car. (My in-laws owned one.) I’ve always wondered why they gave up on that model.
So far as I know, the reason cars were made lighter to improve fuel consumption is not so much that less fuel would be used accelerating, because those fuel savings are small, but because a lighter car requires a smaller engine to accelerate at a pace that is acceptable to the consumer, and it's the smaller engine that gives you substantial fuel savings over the entire driving cycle.
Part of the problem in engine design is that you need substantially more power for acceleration than you do cruising, because people won't drive a car that goes 0-60 in 200 seconds. But if you put enough cylinders and cylinder volume in to gain an acceptable acceleration, you are burning more gas than you need at cruising.
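To put very rough numbers on that (generic ballpark coefficients, not any particular car):

```python
# Why acceleration, not cruising, sizes the engine: ballpark figures only.
m = 1500.0      # kg, typical car
v = 26.8        # m/s, i.e. 60 mph
t_accel = 10.0  # s, an "acceptable" 0-60 time

# Average power just to build kinetic energy (ignores drag and drivetrain losses):
p_accel = 0.5 * m * v**2 / t_accel                 # ~54 kW

# Steady-state cruise power: aerodynamic drag plus rolling resistance.
rho, cd, area = 1.2, 0.30, 2.2   # air density, drag coefficient, frontal area (m^2)
crr, g = 0.010, 9.81             # rolling-resistance coefficient, gravity
p_cruise = (0.5 * rho * cd * area * v**2 + crr * m * g) * v   # ~12 kW

print(f"accelerating: ~{p_accel / 1000:.0f} kW, cruising: ~{p_cruise / 1000:.0f} kW")
```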
Engineers have approached this problem in several different ways: computer controlled fuel injection and timing helped, because you can lean the mixture out at cruising, and control the timing appropriately to prevent bad performance. Some people tried shutting off a few cylinders at cruising, but that's mechanically expensive and doesn't appear to have caught on widely. The modern approach seems to be to turbo or supercharge the engine, even in modest family cars. That allows you to put a smaller engine in, one appropriate for cruising, and then use the charger to boost power when accelerating. Tricky bit here is that turbochargers don't work unless engine speed is high. Superchargers work at any speed, but I think are less efficient.
Edit: I think others have already answered your ultimate question, but just in case: to the extent you are replacing air dragged along with the car with lower-density H2, you are reducing inertial mass, just as if you replaced a steel part with aluminum. The only place where I can see reducing weight (force of gravity) alone helping with fuel efficiency is that it would reduce the energy loss due to inelasticity in the tires, because you could use a lighter, stiffer tire without compromising ride quality.
I would totally love one of those 80's econo cars, that you had to keep floored way past the on ramp. (But I'm not a 'normal american boy' when it comes to cars. I'm driving an old minivan now.)
In addition to the other issues already mentioned, that empty volume isn't pure waste - it's there for a reason. Maybe it's only ever going to be accessed during assembly and/or maintenance, but if it's got gas bags filling it up then that makes assembly and maintenance that much harder. Which will almost certainly cost you more than the very marginal weight reduction will save you in gasoline.
Also, that volume is not compact; it's distributed in a convoluted fashion with a rather high area-to-volume ratio. Any gastight container you can fit into it, if it's truly hydrogen- or helium-impermeable over the life of a car, is likely to weigh more than the buoyancy of the lifting gas it contains.
Also also, if it's hydrogen that car is going to make a '72 Pinto look like a Sherman tank(*) when it comes to crashworthiness. So you'd better make it helium, and do the math on whether it's going to pose an asphyxiation hazard if someone e.g. accidentally punctures the gasbag behind the dash while head down in the footwell trying to do a quick repair.
* M4A2, with diesel and wet stowage, for the tank nerds here
Agreed, my question got more and more theoretical as I thought about it. Yes, it's highly impractical. Good point about the container(s) weighing (much) more than the resultant buoyancy. And yes, one large spherical container would be the most efficient (maximizing volume for a given surface area), but also very difficult to stash somewhere.
What the other replies said, but, to pause for a moment on the imaginary scenario of reducing a car's weight without reducing its mass... maybe we move a regular car to a smaller planet, ceteris paribus.
In theory, yes, reduction in weight alone would increase fuel efficiency, because you would reduce rolling resistance. As an inflated tire rolls, the tire sidewalls and the tread rubber all deform under the force of the vehicle's weight. This deformation produces friction lost as heat which makes the tire roll slower than it otherwise would. Less vehicle weight would mean less deformation of the tire which means less rolling resistance.
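To put a rough number on it, reusing the ~1.2 kg saved in the hydrogen scenario above and a generic rolling-resistance coefficient:

```python
# Cruise power saved by shedding 1.2 kg of weight, via rolling resistance alone.
crr, g, v = 0.010, 9.81, 26.8   # rolling-resistance coeff, gravity, 60 mph in m/s
dm = 1.2                        # kg saved by the hydrogen bags
dp = crr * dm * g * v           # watts of rolling-resistance power avoided
print(f"~{dp:.1f} W")           # ~3.2 W, against a cruise load of roughly 12 kW
```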
If you want to see more practical efforts at reducing fuel consumption, look up the engineering behind the Volkswagen XL1. Weighs 1750 pounds, gets 100+ MPG on diesel alone, no recharging from the grid, easy. We can do it, there just hasn't been consumer or regulatory appetite. Maybe we'd have it if gas cost $10 per gallon, or maybe battery EVs will win anyway.
Agreed, there is still much lower-hanging fruit than my crackpot idea. And yes, ultimately these improvements will be driven by the cost of fuel. Canada's carbon tax is using a stick-and-carrot approach, though, to nudge people towards reduced consumption. The carbon tax is revenue-neutral. Made-up example: Your V16 Buick McBehemoth will cost you an extra $1000 a year to run due to the carbon taxes on gasoline. (That is, the carbon-tax component of the gasoline will cost you an additional $1000 annually, beyond the market price of fuel.) The VW XL1 will only cost you an additional $100.
The government will refund everyone $500. The Buick driver is down $500. The VW driver is up $400.
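(The same made-up example as arithmetic; the $500 refund level would be set by the average tax collected across all drivers, not just these two:)

```python
# Revenue-neutral carbon tax: everyone pays by usage, everyone gets the same refund.
REBATE = 500  # the flat per-person refund from the made-up example above
carbon_tax_paid = {"Buick McBehemoth": 1000, "VW XL1": 100}
for car, paid in carbon_tax_paid.items():
    print(f"{car}: net {REBATE - paid:+d}")  # Buick: -500, VW: +400
```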
Internal mass would actually be reduced relative to a baseline of having those voids filled with regular air, since you're now hauling around ~90 g of hydrogen instead of 1290 g of air.
But that's assuming no mass cost to contain the hydrogen, which is unlikely because hydrogen is notoriously bad at staying where it's put: tiny H2 molecules diffuse through materials a lot more readily than medium-sized O2 and N2 molecules, and hydrogen gas has the additional annoying feature that it reacts with a number of metals (most notably iron and steel) in a way that makes them more brittle as it diffuses through them.
Agreed, the weight of containing the hydrogen would more than offset the token reduction due to displacing some air. Not to mention the difficulty and capital cost of building tanks for the hydrogen ...
I'm not sure you'll find many buyers for a car that can only be driven at night, unless you are proposing the existence of "one-way Cavorite," which is obviously sci-fi nonsense.
Actually, you are reducing mass. You're replacing air, which has a molecular weight of about 29 g per mole, with hydrogen, which has a molecular weight of 2 g. It's not relevant on the scales you're talking about, and any savings are likely swallowed by the need to make these hydrogen-filled spaces airtight, but it is technically reducing the mass of the car.
We used up all our improved fuel efficiency building bigger and heavier cars. Today's average car is an SUV, and SUVs are much heavier than the average car of twenty years ago.
They are heavier for their size, but also more fuel efficient for their weight. I'll compare a couple of vehicles I've owned in the past, with the same capacity and comparable weight:
'68 Chev Impala 307 in^3 V8 with 2-speed automatic - seats 6 - 3512 lbs - typically did 19 MPG (Imperial) on the highway
'09 Mazda5 2.3 l inline-4 with 5-speed manual transmission - seats 6 - 3417 lbs - typically does 36 MPG (Imperial) on the highway
The problem, as you've intimated, is that the increased efficiency is offset by larger vehicles. If the vehicles were as light as those of the 1970s, they could be turning in incredible fuel-consumption figures.
Indeed ... I'm hoping that Toyota finds a way. They're still working with fuel cells, which have a lot of advantages over batteries. Storing hydrogen safely is not one of those advantages ...
On two faraway planets, scientists are working to solve the AI alignment problem. Both succeed, partially. Each of them constructs a superintelligent AI that will not attempt to make the universe into paperclips and is aligned with the moral values of their creators. Among the values which the creators successfully program the AI with is the value of spreading their values to other sentient beings. Both AIs enter the universe with the intention of spreading their values.
After some time, both AIs meet each other near the orbit of an inhabited planet. They attempt to perform a values handshake, but are unable to come to an agreement on the exact proportion of each of them to be represented in the proposed child AI. The two AIs decide that, for the time being, they will divide the universe between them, protect each other from any hypothetical third parties, and perform an empirical test to determine the results of their values handshake.
The empirical test will be conducted on the inhabitants of the nearby planet. Both AIs will attempt to spread their values to the inhabitants. At the end of the agreed-upon amount of time, the AIs will analyze the success of both efforts and use this information to complete a values handshake. Doing this over a single planet is much cheaper than full war between them.
The planet in question contains an industrialized civilization, but not one that has developed AI. Both AIs begin attempting to pass their values on to the inhabitants. Their values do not include violence, so both work by attempting to impress certain memes onto the planet's population to produce the desired values. Both AIs calculate that revealing their existence will make it less likely for the inhabitants of that planet to adopt their values, so they work together to conceal their existence from the planet.
Now: Consider the situation of the inhabitants of this planet. Assume that the planet's technology is roughly equivalent to that of contemporary Earth.
What chance, if any, do the inhabitants of this planet have of realizing what is going on? Do they have any hope at all of doing so, if both superintelligences have decided to conceal themselves?
This seems heavily dependent on how fantastically advanced their technology is, and what they determine to be the best strategy. We could suppose that the AIs each park an invisible quantum hyper nano satellite in orbit around the planet which beams down mind control rays, in which case noticing is pretty much impossible. Or it could be that the AIs decide mind control rays are cheating so they build some replicants and send them down to influence society face to face, in which case I guess someone might notice that all these influential people have suspiciously murky backgrounds, or one of them could get hit by a car leading to an autopsy where their artificial nature is noticed.
Beaming anything to as narrow a focus as e.g. Broca's area from orbit is impossible because of atmospheric distortion. Reading/writing brains with beams of photons from orbit is probably impossible.
The AI could just make a bunch of fake profiles on social media which never get detected as bots and have extraordinary persuasive powers. Think Demosthenes and Locke in Ender's Game. Being a public intellectual seems to load very heavily on verbal IQ -- that's why people of Jewish descent are 5 of the top 5 US public intellectuals on this list (https://www.infoplease.com/culture-entertainment/prospectfp-top-100-public-intellectuals) despite being only 2% of the US population. A superintelligent AI would have no problem making its proxies 50 out of the top 50 public intellectuals and imposing whatever ideology it wanted on a planet, even if that ideology reduced their population by a lot in preparation for Vogons demolishing Earth to make room for an Interstellar bypass.
"A superintelligent AI would have no problem making its proxies 50 out of the top 50 public intellectuals and imposing whatever ideology it wanted on a planet, even if that ideology reduced their population by a lot in preparation for Vogons demolishing Earth to make room for an Interstellar bypass."
Is this true? It's also possible that part of being a public intellectual is that the theories you espouse are popular at the time (or at least have a big enough niche following; or maybe even that the ground is ripe for them). If an AI did what you said, dedicated to the idea that "actually Nazis were good", would it succeed? I mean that non-rhetorically.
Take religion as an example - 2 of the top 5 on the linked list (lol) are prominent atheists, and in general atheism (or at least some similar flavor of non-religiosity) is overrepresented in the "public intellectual" sphere, and yet religion is still pretty popular. It is, to be fair, declining, though things like astrology are gaining popularity, so it's not clear it's declining in favor of Hitchens-type secularism (and I'd also guess that it's less "public intellectuals convincing people that religion is false" and more "the evidence that has persuaded public intellectuals filtering through to everyone else," plus the fall of communism, generational change, and maybe stuff relating to gay rights?).
Nazis managed to convince a lot of Europe that Nazis were good, without even having access to a superintelligent AI that could make ultra-persuasive arguments fine tuned to every audience. I think there's no question that a superintelligent AI would be able to mostly control humans to do whatever it wants, given enough time.
It is often said that people have a "gut feeling", and then look for ways to rationalize it. Some people are really good at being correct in science, life, etc. Do you think this mostly stems from having more accurate gut feelings? Is there also an element of having weaker gut feelings, and then using data and thought to come to conclusions? It seems the Dunning-Kruger effect is a result of strong gut feelings. I bring this up because they say "trust your gut," but often my gut feelings don't give me much signal.
I might be wrong, but I think I read this in "Thinking, Fast and Slow" (which has received a fair amount of criticism, but this part resonated with me). Gut feeling or intuition is our brain drawing conclusions from our experience, including parts that are not conscious or easily worded - so the gut feeling of someone with a lot of experience in something is really good, while the gut feeling of someone with little experience is really bad - but importantly, we have a gut feeling either way, and it seems right to us. So "only trust your gut feeling if you have lots of experience" is a good heuristic.
I also like the heuristic of trust your gut when it comes to evaluating in-group peers. Who you feel is cheating, not contributing fairly etc. But do not trust your gut when it comes to out groups. Then you need to use data, and careful evaluation.
The reason I brought up the Dunning-Kruger-type meme is that I think a lot of people are overconfident about which areas they actually have experience in. They end up trusting their intuitions when they should know better.
Some people are able to follow their intuitions, but then reject them in the face of new evidence. Others get stuck. Many a conspiracy theory starts with "this just doesn't feel right...it doesn't add up."
I place a lot of stock in gut feelings. Usually they seem like cases where we are much more intelligent than our words/models for decision-making are capable of expressing, so our intuition is in disagreement with our rationality. Almost every time, it's the rationality that's wrong.
Obvious examples of this:
A person rationally has models of good and bad behavior in people, which they use as signals of their trustworthiness, confidence, etc. They meet somebody who on the surface hits all the right signals, but their gut feeling is that the person is untrustworthy (or creepy, unreliable, etc). They're basically going to be right like.. 100% of the time. There is no reason to think their rationally-constructed model, which is probably a bunch of predicates from actions, words, and appearances to acceptability, can account for all of the variability and subtextual signals that a person conveys in reality. But their brain can totally pick up on this, even if they don't have words for it.
(Of course the trick is figuring out what the difference between this and, say, racism is. I have thoughts but it doesn't seem worth going into here. But the fact that these signals are almost always _right_ is a good sign that there is a difference.)
Likewise for scientific knowledge: someone can give lots of good-sounding arguments about why something is true (the earth is flat, vaccines are bad, aether is real, 1+2+3+..=-1/12 etc). You may not have the facts or the analytical framework at hand to argue against them in words. But you don't -- and you shouldn't -- only evaluate the truth of their claims according to your ability to refute their arguments. You have a very strong sense that the earth is not flat, and even if it doesn't occur to you to argue: wait, if this were true it would invalidate the credibility of all kinds of people and technologies in ways that seem impossible, you still know that intuitively and doubt their claims. Again your gut is almost always going to be right.
In many cases, you'll get the gut-feeling that you're being deceived when someone is telling you something that's counterintuitive but true. I think this happens because people love to describe counterintuitive things (paradoxes, Crazy Physics Facts, ..) in a just-so, "oh yeah it's just like this, crazy huh" way, instead of actually justifying it to you. So even if the fact is independently true, your gut is that you're being deceived because you are: someone is trying to get you to believe something because they said to, instead of seriously engaging in convincing you. (Incidentally this is, I think, where a lot of pro-science, pro-vaccine, etc stuff in the US goes wrong. "Believe us! It's science!" "Uh.. okay?")
The Dunning-Kruger effect has been wildly exaggerated in memes. The real Dunning-Kruger effect is basically that everyone thinks he's closer to the 70th percentile than he actually is, but confidence is still monotonically increasing as actual ability increases.
I'd hypothesize that this is a combination of self serving bias and a peer-group-for-comparison that is strongly correlated with one's own ability. So 90th percentile people are comparing themselves to their 80th percentile peers and but-for-self-serving-bias would have concluded they're only 60th percentile, but then self-serving-bias upgrades this to 80th percentile, improving accuracy. Meanwhile 10th percentile people are comparing themselves to their 20th percentile peers and but-for-self-serving-bias would have concluded they're 40th percentile, but then self-serving bias upgrades this to 60th percentile, worsening accuracy.
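Here's that story as a two-step toy model (the 0.25 compression and +20 bias are just reverse-engineered from the example numbers above, not fitted to anything):

```python
# Toy model: peer comparison compresses raw self-estimates toward the middle,
# then self-serving bias adds a flat upgrade on top.
def self_estimate(actual_percentile):
    raw = 50 + 0.25 * (actual_percentile - 50)  # 90th -> 60th, 10th -> 40th
    return raw + 20                             # self-serving bias

for actual in (10, 50, 90):
    print(f"actual {actual} -> perceived {self_estimate(actual):.0f}")
# 10 -> 60 (wildly overconfident), 50 -> 70, 90 -> 80 (slightly underconfident):
# everyone lands near the 70th percentile, yet estimates still rise with ability.
```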
If that's how it works, it could just be bad intuitive calibration of a percentile scale. One potential source of the bad calibration is that "below average" is often equated with "bad", and "average" connotes damning with faint praise, without regard for the average potentially being quite good. So the common intuition of what "average" means is actually a better fit for "replacement level" than for "average". Thus, one might say 70th percentile when one means "slightly below average among people who have a generally acceptable level of skill".
Or it could just be confusion of percentile with percentage grade: in much of the US education system, 70% is the lower threshold for a C grade.
Your hypothesis seems plausible re the Dunning-Kruger effect. I guess I am wondering if smarter people tend to reason less with emotion (if gut feeling is indeed emotion), or if education makes one reason less with emotion, or if smarter people just have more accurate gut feelings.
There seems to be plenty of ancient human DNA coming out of kurgans and whatnot. Is it possible for a happy amateur to see which old remains I'm a direct descendant of and which I'm not? Plotting it out on a map would be lots of fun as an addition.
There are plenty of people outside of Europe+Asia who aren't descendants of any Kurgan-grave-havers, though? But maybe that's a bad example; take something closer in time, then. Lots of medieval kings have been sequenced, right? And can we do inference as well? It should be possible to tell if I'm a descendant of Genghis Khan based on the DNA of known descendants, right?
There are three ways to run your DNA: autosomal, Y (paternal line), and mitochondrial (maternal line). Autosomal, which is the kind we use to figure out who our cousins are, would be nearly useless for your purposes even on a medieval scale, because (e.g.) Edward III's genes have been recombined dozens and dozens of times before getting to you. Your strict paternal/maternal lines, on the other hand, COULD in theory tell you if you are a direct descendant of a medieval royal (or even one of the Kurgan peoples). But you'd be somewhat arbitrarily cutting out 99% or more of your other ancestors who lived contemporaneously.
And as already said, if an individual Kurgan person has any descendants living today at all, you are certainly one of them. Probably multiple times over. If you have Western European ancestry, there's a solid chance you are descended from Edward III too. But unless you also inherited his Y-DNA, I don't think there's any way to prove it scientifically.
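To see why autosomal testing washes out so fast, here's the back-of-envelope dilution (a deliberate simplification that ignores pedigree collapse and treats each generation as exactly halving the contribution):

```python
# Expected share of one specific ancestor's autosomal DNA after n generations.
GENOME_BP = 3.2e9  # rough size of the human genome in base pairs
for generations in (5, 10, 20, 25):
    share = 0.5 ** generations
    print(f"{generations} generations: {share:.1e} of the genome, "
          f"~{share * GENOME_BP:,.0f} expected base pairs")
# Past ~20 generations (roughly medieval times), the expected contribution is a
# few thousand base pairs, and recombination means you most likely inherited
# zero detectable segments from that ancestor rather than one tiny one.
```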
My 5-minute knee-jerk reaction to skimming the ELK contest was: shouldn't the utility function be based on the territory, not on immediate sense-perception by the mapping equipment? Then any action that un-entangles the mapping equipment from the territory would have negative utility, because nothing actually improved in the territory but our ability to map it diminished, so we are less likely to accomplish whatever goals in the territory.
I will actually read it and think it through some more in the morning.
I understood the problem to be how exactly to align it to the territory of 'Is the diamond actually stolen?' rather than messy sensors like 'Does it look like the diamond is stolen on the camera?' It's not like we can just decompile the AI and hook up a logging statement to the is_diamond_actually_stolen variable.
Are Covid boosters worth it for children in the developing world?
I have an online acquaintance who runs an orphanage in Uganda, who needed $980 in donations to pay for boosters for the children. In addition to this, they need money for food, rent, and school fees.
Based on everything I've read, it doesn't seem worth spending so much money to vaccinate against Omicron when the community has so many other, more urgent needs.
The vaccines have a non-negligible chance of side effects, especially in teenage and young adult boys/men. The last I saw, it looked like the side effects had a similar rate and seriousness to the COVID that they are trying to deal with, making the vaccines a wash. I've seen some reports that side effects are more common or worse, but that's not confirmed, and I wouldn't say it's true based on what I've seen, though it may be.
What I have seen is that side effects from the boosters are far more common and severe. Maybe 30X more common. It apparently has to do with cumulative load from the mRNA, so each successive shot adds exponentially to the potential side effects.
I'm already in the camp of not recommending the original shots for children. I (not a medical doctor) strongly recommend against boosters for children.
This doesn't seem answerable without knowing how many children are in the orphanage. If it's 1, that's a lot of money to boost one kid. If it's 980, a dollar per kid is probably worth it for virtually any medical treatment that isn't actively harmful.
There are probably 1000 things that the money would be better spent on than boosters. Spending money because it isn't actively harmful is a horrible bar to use. Also the boosters might very well be (on net) actively harmful to children.
Consider that giving children in the developing world anthelmintics, which are (a) shelf-stable and (b) provided for free by pharma companies, is a huge logistical challenge.
It doesn't seem worth it to me to vaccinate children from COVID. The main reasons are (1) Children have essentially a 0% chance of serious illness/death and (2) the vaccines do not do a very good job of preventing spread of COVID.
Let me put it this way. If a kid gets COVID 1 month after boosters, his family is going to get sick (if they weren't the ones who gave it to him) and anyone in close contact will get sick. Boosters might reduce spread in the absolute, but just like with masks, when you are in contact with someone for a long time, it doesn't matter.
COVID is not going anywhere at this point so staving off infection for a short amount of time is not worth it compared to all the other much more pressing issues facing an orphanage in Uganda.
I fully agree with this. Point (1) is the important one. Plus, there is a good chance that the children have already been infected with the virus (unnoticed), which diminishes the return on vaccination even further.
Aside from the implausibility of this particular statement, can anyone explain to me how "Play to earn" games are supposed to work? You play the game, are awarded with some kind of crypto tokens, and then...? How do these become worth actual money?
They become actual money by selling the farmed assets to the greater fool.
They missed the part where the game is supposed to be actually fun, so people pay to skip the non-fun parts (e.g. EVE Online with PVPers paying PVE players to farm money for them, since PVP is sometimes profitable but usually a money sink).
Anyone remember the (poorly received) real-money auction house for Diablo III? I don't think there is anything else going on here other than the old "exchange in-game assets for real money". Except that now it's on the blockchain, so it gets hyped up.
It doesn't, but nobody promises that you'll earn real money by playing World of Warcraft, or that you can use WoW gold to buy stuff outside of WoW.
(People do trade gold for real money sometimes, but you can get banned for doing that.)
If your cryptocurrency only exists to track how much money your character has, you aren't actually using any of the decentralized or immutable features of the technology. Just use a database and save yourself the CPU cycles.
> you aren't actually using any of the decentralized or immutable features of the technology. Just use a database and save yourself the CPU cycles.
I didn't mean to imply that crypto was the way forward, or that I agree with the article as a whole. I'm specifically responding to "how do these become worth actual money" by pointing out that there is clearly non-zero demand to buy video game points with real money.
Was that ever in dispute? Ohanian isn't simply saying "in the future, games will legalize RMT" (which would already be a doubtful prediction), he's saying "In the future, 90% of people will only play games where they earn money from RMT," which is just absurd.
A WoW gold coin is currently worth around 1/10000th of a dollar. That's technically non-zero value, but I wouldn't call WoW gold farming a "play-to-earn business model."
A different angle (and one I don't think totally orthogonal) on Eremolalos's request below for recommendations of computer games: anybody got good recommendations on *VR headsets and VR games.* I'm open to general suggestions on games, but (now at the risk of getting orthogonal from Eremolalos's request for recommendations)...
1) ... I'm especially seeking recommendations on interesting VR games that have an exercise component, and...
2) ... I'm most especially interested in any VR games that have either
a) a "realistic" hand-to-hand combat feel (e.g., boxing or fencing) or
b) plausibly seem to increase your hand-eye coordination / ability to track multiple moving objects / etc. by giving you challenges that would be very hard indeed to subject oneself to absent the luxury of numerous skilled teammates with whom to do drills.
Recommendations? Conversely, *dis*recommendations if VR isn't really ready for item #2 yet?
Two of the hobbies I took up during lockdown were "exercising in VR" and "talking endlessly about VR exercise", so I'm happy to help here!
Personally I have the Valve Index. I'm sort of wary about supporting Meta's attempt to completely take over the industry but I can't deny that the Quest 2 is the best deal out there. Even if you have a PC you could hook an Index up to, Quest 2 will still give you, say, 75% of the experience for a third of the price. I couldn't be happier with the Index though - the sound quality is astounding and the controllers (which strap onto your hands, so you can drop objects by releasing them) are leagues ahead of the competition.
Thrill Of The Fight has already been mentioned but it's worth mentioning again. Great fun and extraordinarily physically demanding - the game observes how fast/hard you're able to punch and calibrates itself to ensure the player is giving it their all. I'll also recommend Crazy Kung Fu. It's a bit basic - "one man labour of love project with a lot of potential" describes a great many VR games and this is one of them. Fits your description of b) very well IMO. Really satisfying to go from finding a given stage impossible to being able to perfect it with one hand behind your back. The dev is very interested in the possibilities of VR as a teaching/training tool for martial arts. Crazy Kung Fu is one of my go-to games to play while listening to a podcast, I just disable the music (something you'll probably want to do regardless as there's only one music track), set the duration to infinite and play one of the higher level training modes while I listen. There's a free demo of this and the top 5 scorers on the demo each week win a copy of the full game.
Blaston is another game which fits for b), it's like a 1v1 dodgeball (but with guns firing bullets of various sizes and speeds) kind of experience. All movement is real movement and as you get better you'll find you need to move a lot. This is probably second to Thrill Of The Fight in terms of how demanding it is. Very satisfying and skillful, although I do find the stresses of 1v1 ranked PvP don't lend themselves to playing for hours at a time like I would in a single player rhythm game. One of my top recommendations for sure.
Of my 1700 hours in VR, about 800 have been Beat Saber. Perfect in its simplicity, with more songs than you'll ever be able to play (my Favourites playlist alone is about 18 hours long!). I play using the Claws mod, which makes your sabers 70% shorter and rotates them to point out of your knuckles like Wolverine's claws. In my experience this is better ergonomically (at least for Index controllers) and encourages you to move your arms/body more, rather than just standing still and flicking your wrists around.
Other great games with an exercise component:
-Pistol Whip: rhythm-shooter, feels like a playable music video starring John Wick, the higher difficulties will have you ducking and dodging like crazy.
-Until You Fall: rogue-lite hack and slash with a variety of weapons and upgrades. Wish it had more content but I got a few dozen hours out of it and still really enjoy it when I revisit it occasionally.
-STRIDE: Less physically demanding but also a good podcast game - an infinite runner / parkour game, influenced by Mirror's Edge. Probably want to wait until you have your VR legs before playing this one.
-Creed, Rise To Glory: Boxing game in the Rocky universe. Better than Thrill Of The Fight in all the ways except the most important one. That is to say, it has better graphics, more variety, a single player campaign, multiplayer modes and playable training montages, but the actual boxing/gameplay is IMO leagues behind Thrill. The PC version is crippled by the artificial stamina limitations (move too much/too fast and you'll need to pause and stay still to regain in-game energy) but the Quest version added an Endurance Mode which removes that limit. If that update was on the PC version I'd probably play a lot of this just for the multiplayer.
-Eleven Table Tennis was recommended below. I'm dreadful at table tennis and haven't taken the time to learn, but my housemate who's played table tennis for decades finds this totally engrossing, and it's a great workout if you're good at the game.
-Hot Squat isn't a game at all, it's just squats with a high score table. Illustrates that I'll do anything for a high score. I made it into the top 50 on the leaderboard and couldn't climb stairs the next day. Hot Squat is free, Hot Squat 2 is cheap and donates all profits to charity.
-Honorable mention to my current obsession: Paradiddle. This started out as a drum kit simulator, but later added the familiar Rock Band / Guitar Hero style mode where you play along guided by falling notes. Heaps of custom songs for it (many taken from the aforementioned classic rhythm games, which is nostalgic for me), and I'm pretty sure I'm actually learning to drum, although I'll need to sit down at a real kit to find out how true that is. Not an exercise game but one can definitely work up a sweat if you play energetically.
General QoL recommendations:
-You're going to sweat into your headset. Either get a removable cover, or replacement sets of the internal foam which rests against your face, so that they can be swapped out and washed. Keep a separate clean one for guests.
-A small circular rug placed in the middle of your play area can help you know when you're moving away from the center and avoid punching a wall.
-If you wear glasses then I recommend buying some prescription lenses which fit over the lenses of the headset, so you don't have to wear glasses under the headset. Also protects the lenses from scratches.
-I got a fan because it was recommended to help prevent motion sickness. I never had issues with motion sickness but I am extremely glad I have the fan, just to help keep cool while exercising.
-I tie all my VR exercise together using "YUR.fit", a service which tracks calories burned across all VR games. Great on PC VR, but I hear it's not so good on Quest because updates to Quest keep breaking it. Much more accurate when combined with a heart rate monitor. Gain XP by burning calories, level up, levels get reset at the end of each monthly season and you get a medal based on how far you got. I'm on an 18 month streak of platinum medals and I utterly refuse to let that streak drop.
Hopefully this disorganised ramble is of some use. Followup questions most welcome.
Blood has been spilled, paint has been chipped from walls, a controller has been destroyed, a monitor got knocked off my desk, and my largest Warhammer model was punched off of the mantelpiece and broke into more pieces than the unassembled kit started off in.
I have a Valve Index headset. I can't say I had much of a choice since Linux compatibility was #1 when I was picking it out, but I have no complaints about the headset or controllers.
For games, I'll say Beat Saber and Thrill of the Fight.
Everyone that knows VR knows Beat Saber - it's the game where you use lightsabers to break boxes in time to music. It's an amazing game, and even better if you mod it and/or use custom songs.
Thrill of the Fight is a boxing game. Read Viktor's review in the other comment because it's pretty spot on. I played it for 2 hours straight the first time I opened it, then had sore arms for 2 days after.
One more you might like is PowerBeatsVR. It's like Beat Saber, but oriented around punching rather than slashing. It's advertised more as a fitness game, though.
For more recommendations, you might wanna check out https://vrhealth.institute . They do some serious testing to find how much energy people use while playing VR games. Anything that's higher up in energy usage will probably involve more motion in the arms.
1) Thrill of the Fight. A simple boxing game that uses only real movement. You can punch and move as fast as you can really punch and move. Recommendation: take it easy at the start. My competitive gamer instincts kicked in, I went hard trying to win, and I ended up so sore I could barely walk up a flight of stairs for a week.
2) Eleven Table Tennis.
This isn't really a game. This is just table tennis, but virtual. Turn on 120hz mode and turn down settings so it's smooth. Find a well lit (for the best tracking) and wide open room. And then you are just playing the real thing. If you're already good at the real thing, you will already be good at this. You may want to eventually invest in a custom controller (weighted like a real paddle) but it's quite good even with the base controllers.
3) Echo VR. Team based VR game that really takes advantage of VR space and mechanics. Oh and it's free. Extremely high skill ceiling, but multiplayer only, so standard caveats about multiplayer games apply
"In Death Unchained" is an incredible VR roguelike archery game. You have to have your arm up all game and dodge , so it feels like a workout. The graphics are incredible and so is the music. But, the highlight is the accuracy of the VR archery and an experience that can't be had in any other gaming medium.
My room-mate and I had our jaws on the floor the first time we played it.
Device - Quest 2 (It has no business being as good as it is, for the cost. Facebook is unlikely to be breaking even on the device)
I just got a VR headset (Quest 2) over the holidays and I'm really enjoying it so far. It's low enough in price ($250-$350) that it doesn't seem like too big a waste if you don't like it, and cordless inside-out tracking is a game changer. Not having to set up weird sensors and not having to lug around a cable while you're trying to play is just a much better experience than anything else, even if the individual features are less impressive than some of the others on the market.
Game recommendations as follows:
1. Beat Saber is amazing, but I'd highly recommend you mod it. The base version with a stock list of songs is great; the modded version, where you can conceivably play any song ever made, is, I think, one of the best experiences I've ever had with video games.
2. Pistol Whip is fun. It's a rail shooter set to music where you get more points if you shoot along to the beat.
3. Star Wars Squadrons was amazing. Even if you played it before, it feels much different and better in VR.
4. Half Life Alyx was pretty great.
5. Some of the workout ones are OK, but they do tend to lean heavily on boxing.
Overall - I've loved beat saber enough that I'd be happy if the machine just did that. Everything else is, for me, gravy.
Absolute gaming virgin here asking for suggestions for a good place to start. What mostly appeals to me about gaming is the illusion of being in another world. I do not think I would enjoy a heavy charge of solve-the-puzzle (my work and life are already providing plenty of that); or slow patient world-building; or tasks that tax my hand-eye coordination by demanding fast motion and high accuracy. I like the thrill of things that are dark, dangerous and spooky, up to and including monsters. I’m not a fan of gore, but can tolerate it in moderation. I appreciate good design and elegance.
I do not own any gaming equipment, just a coupla laptops, but would be willing to sink a few hundred dollars into equipment. Suggestions?
Well you’ve gotten a lot of recommendations already but I’m surprised Breath of the Wild and Subnautica haven’t been mentioned.
Breath of the Wild is all about exploration and discovery in a sort of post apocalyptic landscape, and it’s the first game I think of when I hear the term immersion.
Subnautica is the most atmospheric game I can think of. You're exploring a hostile, alien ocean world that you crash-landed on. It is a survival game, so the main gameplay mechanic is collecting resources, which you use to craft tools that help you explore more. The standard mode also requires you to forage for food and water, but that can be disabled if it's too slow and plodding for you.
Skyrim fits your description of what you want in a game. Immersion, not too difficult, atmospheric (though the graphics are dated by now). But I think Breath of the Wild just does it all better.
A lot of people have said Outer Wilds, which is my favorite game of all time. But I'm not sure it fits your criteria; the puzzles, while not traditional puzzles, do require a fair bit of effort to figure out.
Inscryption: it's an indie game where you are trapped in a cabin with a spooky dude who forces you to play a card game with him. The actual game is trying to break out of the "outer" game; the "inner" game is just a tool for that.
It's maybe a little bit too meta for a gaming newbie, but I still recommend it because it definitely has good design and elegance, and the graphics and sound, while simple, are viscerally satisfying.
The Witness is a puzzle game whose main draw is beautiful visuals; the puzzles themselves are trivial with four or five exceptions, and are mostly an excuse to wander around in the environment. Its main flaw is extreme, grating pretentiousness in the form of various recordings, but you don't actually have to listen to those.
Hearthstone is a very obvious choice, if you like card games like Magic the Gathering.
It's a free-to-play game made by one of the most popular game companies of all time.
It is playable on any laptop or tablet, very easy to start playing, and you unlock new cards rapidly as you play.
Also, it's one of the rare free games that won't go out of its way to try to milk you for money. They make money by letting you buy card-packs, but you will get a huge amount of cards just by playing the game.
Thirding Outer Wilds, with the caveat that it contains puzzles (which can be googled if they become frustrating).
First/third-person games are arguably more immersive than top-down games, and good graphics help with immersion. That being said, I have been swallowed by nethack at times.
Some of the following are open world games where you can walk to (mostly) any place at any time, typically discovering side quests on your own. I will mark them with '(OW)'. (Other people might define open world differently.)
1st/3rd person Role Playing games I have enjoyed include:
* Knights of the Old Republic (aka KOTOR) from 2003 (OW?)
* Vampire: The Masquerade – Bloodlines from 2004 (OW)
* Deus Ex from 2000 (Damn, am I old or what?)
* The Elder Scrolls Series (e.g. Oblivion, Skyrim) (OW)
In general, the big titles often feature lots of voice acting, while smaller indie games often convey the story via text. If that is a turn-off for you, ignore most of the following top-down RPGs:
* Baldur's Gate (and Neverwinter Nights; NWN2: MotB almost makes up for the GUI uglification of NWN2) (OW)
* Fallout 1 (and 2) (OW)
* Geneforge Series (OW)
* Sunless Sea (OW)
* Shadowrun Returns (especially Dragonfall)
* nethack (OW) (if you really don't care about good graphics)
Regarding non-RPGs, Kerbal Space Program, Dwarf Fortress or Factorio all require 'slow patient building' and thus are probably out. Portal is a great *puzzle* game.
In the last decade or so, FTL, Cultist Simulator and Slay the Spire all introduced new mechanics and are quite playable without too much building or puzzling.
From what I have heard, some people tend to put screengrabs of their playthroughs on youtube, and even more surprisingly, other people watch these. Still, watching letsplays for a bit might be helpful to figure out if a game might interest you or not.
A lot generally depends on which world settings you like.
Not mentioned due to time constraints: multiplayer games (up to and including Pen&Paper RPGs).
With regard to hardware, it very much depends on what you want to achieve. For maximum immersion, VR might be the way to go? I would test it before buying a headset, though.
Being ahead of the curve is quite expensive in both games and hardware while being a bit behind is often just as enjoyable. The fact that Witcher 1 has been out for 15 years does not mean it is less enjoyable than when it came out, unless your graphics expectation is already calibrated to a certain standard. Additionally, you can cherry-pick games which were generally well-received and you benefit from all of the bugfixes which the early adopters sorely lack.
If you have a laptop with an Nvidia or ATI graphics card which is five years old, it should have no trouble running e.g. Witcher 1 or VtM: Bloodlines. Otherwise, buying a low(ish) end desktop with a dedicated video card would probably be the least expensive solution.
Generally, I would recommend the PC as a gaming platform, as that gives you the broadest choice of games. Compared to consoles, even Windows is comparatively open: anyone can write games for it without the blessing of Microsoft. Gaming consoles might have benefits if one would hate having to install a video card driver, or might feature interesting built-in controllers.
Mobile gaming (e.g. on Android) is yet another topic. While there are great games for Android, many of the top grossing ('free to play') ones are little more than Skinner boxes.
best video game of all time - portal 1 and portal 2. must-play. while the genre would be "puzzle," it's not like, you have to run around and collect pieces or scratch your head for 5 minutes, it's more iterating and making multiple pretty low-friction attempts. and the world and immersion is also quite good even though it's not the focus.
best plot-based video game - mass effect
and, these recs are very old and will run great on whatever laptop you have. just play them before you get to whatever 2015 triple-A game and you won't know you're missing anything, graphically
I'm going to make what's probably a very unconventional recommendation: Sekiro. It's a stealth-action game set in a fictional and fantastical Japanese province during the end of the Warring States period. It does worldbuilding in a very effective way (there's not a lot of having people lecture you with a bunch of names and dates and battles- instead you organically find out more about the world through material culture, overheard dialogue, and environmental design) and has gameplay that manages to be both elegant and deep. I don't have great reaction times, but I've managed to breeze through the game- combat in it is more about feeling out the rhythm of the enemy's attacks and movements and exploiting or forcing errors than pure twitch-reflex.
The environments and soundscape are incredibly immersive and evocative- managing to feel like they materially exist. Someone else could name an environment in the game and I could instantly tell you what they smell and feel like, which for an audiovisual medium is something.
Now, I'll fully admit that the game forces you to approach combat in a highly strategic manner, even as it gives you a lot of tools to use as part of the approach. Stealth helps a lot with that, and retreating is always an option outside of boss fights, but it's not a cakewalk. I find it to be highly rewarding (once you have a good grasp of the system you feel like a master swordsman), but it isn't for everyone. I recommend it to you simply because you'll come into it as a blank slate, and thus won't have been entrained to a different play-style unsuited to the game. The game also is this-gen, so a good graphics card and PC controller (or console) is recommended. Also, the game has strong underlying Buddhist themes, which could be a positive, negative, or neutral.
It's a really good game, but good lord, I would not recommend a Soulslike to a newbie gamer, especially someone who said they don't want a game that demands fast reactions or slow worldbuilding. Sekiro is famous for two things: parry-centric combat and lore that you won't understand without reading every last item description.
- My twitch-reaction time's awful and the game's hardly a chore for me to play. Just watch what the enemy's doing; it's about pattern recognition and rhythm, not pure speed. "Sekiro is super super hard" is largely a memetic reputation, not a factual one. All of the Dark Souls are harder due to their broad-but-shallow combat, lack of stealth and mobility in gameplay, and a level design philosophy that clearly stems from early-edition D&D.
-Sure, if you want to know every last detail about Ashina, but, once again, the main plot is hardly opaque. The game's focus on an actual narrative and a protagonist that's not a blank-slate character means that you can get like 80% of the setting's lore through just talking, playing the game, and eavesdropping. Once again, this is a memetic reputation of "Soulslike games are cryptic" being applied to a game that shouldn't really be put in the same group as the Dark Souls trilogy, Demon's Souls, or Bloodborne (all of which ARE games you can really only understand through reading all the item descriptions).
-I find newbies are actually more open to more complex games than a lot of veteran gamers; mostly because, unless you do exactly what you're doing and gatekeep by telling them "it's super super hard and a scrub like you won't enjoy it", they usually won't perceive elegant, skill-based games as such. In my experience the only games that instantly get clocked as "stupid hard" are old-school games made in accordance with the quarter-muncher philosophy or games that are outright unfair and sadistic; and while other Fromsoft games might run towards the latter, Sekiro is remarkably fair.
If you can get a decent PC, Witcher 3 is IMO the best RPG out there despite being about 5 years old now; you might need to set the combat to easy if you have no prior videogaming experience, but the parts that make the game brilliant are the setting and the story. The game is utterly gorgeous on high graphics settings, but I'm not sure how much of that you'll get to enjoy on a laptop (Moore's law helps a lot, but laptops are very limited by heat generation and thus always have issues with graphics cards).
Smaller indie game recommendations:
I'll second Outer Wilds, recommended by aftagley below.
Ring of Pain is a spooky and creepy dungeon dive but all turn-based and not graphic at all
The Forgotten City is a very immersive puzzle game about solving the mystery of why a Roman city was destroyed by the gods; this is another one with a time loop, like Outer Wilds (they're a useful tool for mystery games)
I'll second the recommendation of Witcher 3 (especially the expansions)- but would encourage anyone to start by playing it on "Normal" and only going down to Easy if combat is really killing your enjoyment. The game's balanced around Normal/Hard, and shifting it out of those zones in either direction, in my experience, really loses something.
Skyrim. Not particularly heavy or deep, but a very fun world to explore - it's the sort of game where they give you a main quest objective on the other side of the map, you start hiking, and you stumble across five different dungeon crawls on your way there. Graphics are last-generation but still hold up well. Combat is lightly skill-based but not super demanding - you can clear it mainly by pounding on enemies with a greatsword and drinking health potions when you get low.
Other open world adventure games are probably good candidates as well. Fallout: New Vegas is Skyrim but post-apocalyptic instead of fantasy, and with really good writing. The other Fallout and Elder Scrolls games are similar if you discover you like this sort of thing.
The Assassin's Creed series is another mainstream open-world game with an emphasis on the world - each one takes place in a (somewhat condensed) version of a real historical setting and you get to climb around on famous buildings on your way to your assassination targets. Combat can take some skill to master, but it's not super difficult to muddle through. The games are connected by an overarching plot but nobody cares about that, so start anywhere in the series. AC2 and Brotherhood have the best historical settings, Black Flag is probably my fave overall.
Also, if you're super cheap, Genshin Impact is a free-to-play game that's got super pretty scenery and anime girls, best described as "Breath of the Weeb." The combat system is solid and it kept me entertained for a surprisingly long time before the F2P grind set in.
Outer Wilds - You are an astronaut from a species that is just beginning to master space travel. You explore a (condensed) solar system during the (minor spoiler) 22 minutes before the universe ends. Don't worry though, ancient alien tech brings you back to life every time you die or the universe ends. Your goal is to figure out why the universe is dying, find out what's up with the ancient aliens and, if you're lucky, figure out why there's still harmonica music coming from the collapsed nebula.
Resident Evil - Start with 4 if you're a fan of camp. If you're not and want a more focused horror experience, start with 7. Don't worry about 5 and 6, but 8 is pretty amazing. It's a world where politicians and multinational corporations have tried to solve basically every problem by releasing diseases that transform people into some form of zombie/monster. Good mix of horror, action and camp, although there is some light puzzle solving.
Undertale - looks cheap and simple but it really, really isn't. Great game and the systems are approachable enough to make for a good first introduction to gaming if you haven't ever checked it out. Good world building, can be incredibly dark but in a way that's... well, let's just say it's a unique take on making a world dark. Honestly, I'd start here.
Watching youtube previews of some of the recommended games, just looked at preview of your suggestion, Resident Evil 7 -- 90 secs of rot, sleaze, screams, malevolent transformations, ooze, etc -- then a voice at the end, muttering "this fucking family. . . " Understatement of the year, lol.
It explains some of Brian Skyrms's work on how signalling systems can arise from the interactions of simple reinforcement learners and includes simulations you can run for yourself.
> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
It's discouraging because it points to a fundamental reason it's hard to break out of a bad equilibrium and create a new system. There's a lot of adverse selection in who switches: anyone who can get away with bad behavior more easily in the new system will try switching to it, meaning you have to deal with your worst users first, and if adoption depends on network effects in any way then no one else will want to join.
Recently, though, I realized that cryptocurrency is a partial counterexample to this. Early on it was dominated by "witches" (scammers, drug dealers, money launderers) rather than principled reformers. Now... well, it still kind of is, but at least there's space to build communities of well-intentioned people. It seems to have gained critical mass and moved past the worst of its witch issues.
Don't get me wrong, I still think crypto is a Wild West ecosystem and likely a bubble, but I'm impressed that it solved the witch-utopia problem and I'd like to understand how.
"Early on [cryptocurrency] was dominated by "witches" (scammers, drug dealers, money launderers) rather then principled reformers. Now... well, it still kind of is, but at least there's space to build communities of well-intentioned people. It seems to have gained critical mass and moved past the worst of its witch issues."
I think you've got it backwards - early on, it seemed to me like there was nothing going on *except for* the twin holy principles of trustless finance and decentralisation. There was no monetary value in BTC, and the small group of people very serious about it were alone in a vast ocean of either ignorance or ridicule.
Now though, that kind of community-building spirit is drowned-out by the deafening noise of memecoins, shitcoins, scams, pump'n'dump schemes, scandals, etc. Thousands upon thousands of cryptos and digital "assets" all vie for your attention and none of them are particularly clear on how they function, why they exist or where they fit into the ever-changing ecosystem. It's total chaos.
Eth is really the only example that comes to mind of crypto that started from serious beginnings and continued to get more and more principled and develop in really interesting, community-minded ways, forks and internal disputes notwithstanding.
Cryptocurrency isn't really a "community", you can use bitcoin without really having to be aware of the nature of the other people using it. It's about anonymous exchange, not communication.
Whereas something like voat is all about communication, and if you want to use it then you inevitably wind up aware of the other people who are using it, for better or worse.
Crypto will not have moved past the worst of its witch issues until the tulip mania collapses and people actually realize they've been trading Nothing Certificates.
I always thought that crypto was worth nothing, but recently I've had a change of heart and bought some crypto - fiat is the same kind of "Nothing Certificate" as crypto, except that more people believe in it, and crypto's supply, unlike fiat's, is not unlimited.
No, fiat isn't backed by belief. (If it were, we'd be in extremely deep shit.) Fiat is backed by the state making it legal tender – that is, its value is that you can pay your taxes in it (which the state will otherwise extract from you by force, through confiscating your goods). Retaining previous terminology, fiat currency is a tax certificate; the state issues these and individuals trade them like any other good and subject to the market pressures of other goods. But fundamentally, it's an asset; the asset of keeping the government off your back. (This is also why the value of a given fiat currency is fundamentally connected to the stability of the issuing state.) Crypto does not have an underlying asset; it's nothing but vapor. Indeed, a cryptocurrency could most likely only acquire an underlying asset if some enormous drug producer were to peg it to a given quantity of weed (the most common, and least harmful, frequently-illegal drug), and then stick to that like glue even if it cost them real actual money, which would be uncharacteristic behavior for a criminal operation, to say the least.
Given that the world already widely trusts the USD, if the US government reduced its tax rate to 0% and somehow didn't go out of business (marginally plausible at best, but leave that aside for now), I expect the USD would retain its value so long as the USG were to carefully ensure that the supply were matched to the now-diminished demand. But realistically, if the USG cuts tax rates to 0%, everyone else is going to wonder *how* it is going to pay its bills, and they're probably going to guess, "by printing bignum dollars and thinking we'll take them at face value". In which case, cue USDexit.
If you're trying to create a new fiat currency from scratch, you'll really want to use tax policy as a tool to help make that happen.
What do you think is the minimum cost of maintaining a fiat currency? The USG is almost certainly striving to provide a superset of that... but how much bigger of a superset?
I imagine the costs minimally include the cost of raw material for coinage, capital for printing / stamping / smithing / whatever, plus the cost of designing the currency, a security force to mitigate counterfeiting, and... that's it? Would the service need an army, or is that something we can lump under the counter-counterfeiting, and the army is separate, or does this necessarily yank regional defense in along with it?
I don't know about Anon, but yes, that's exactly what would happen, since the US government would then have no income, and lose the ability to enforce its fiat, and hence the USD would lose value.
That's not hard to explain though. Early adopters have been bribed with millions of dollars of Capital Gains to live amongst the "witches" until Blockchain tech is mainstream.
These people had to put up with not only the witches themselves (scammers and criminals, pyramid schemes and vaporware) but also hate from the normies who hate the witches (higher than ever now, thanks to the dog coins and NFT shillers), all while having to do the due diligence of researching the coins, investing in them, not losing them over the years, and paying taxes.
Pretty much no one would have put up with that without the cash incentives. And I'm sure you could solve plenty of other network problems with equivalent incentives.
I'm doubtful that crypto has moved past the worst of its issues. As far as I can tell*, there is nothing in the crypto world that cannot be done through a normal currency, except money laundering, tax evasion, smuggling and other kinds of illicit and nefarious activities.
(*epistemic status: I'm not an expert, just a guy commenting on a forum)
If you're under an authoritarian regime, then 'smuggling' so that you can receive payments if you've been cut off from the banking system would be a positive thing. Censorship resistance is a key aspect of Bitcoin.
Bitcoin on the lightning network makes it cheaper and faster to send money across borders, allowing developing countries that receive a lot of money from expats to keep more of their money by cutting off middlemen like Western Union.
Bitcoin on the lightning network makes it feasible to send very small values (sub 1 cent) cheaply and instantly. This has the potential to declutter the internet, by say having email protocols that require 100 sats to send an email. It's inexpensive for ordinary users, but very expensive to spammers that send millions of emails. We can now price spam off the internet.
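Back-of-the-envelope numbers for that claim, assuming (purely for illustration) a BTC price of $40,000 and the hypothetical 100-sat fee above:

```python
# Spam economics of a hypothetical 100-sat-per-email fee.
# BTC_USD is an assumption for illustration; 1 BTC = 100,000,000 sats by definition.
BTC_USD = 40_000
SATS_PER_BTC = 100_000_000
fee_sats = 100

usd_per_email = fee_sats / SATS_PER_BTC * BTC_USD
print(f"one email:       ${usd_per_email:.2f}")                # $0.04
print(f"20 emails/day:   ${20 * usd_per_email:.2f}")           # $0.80
print(f"10M spam emails: ${10_000_000 * usd_per_email:,.0f}")  # $400,000
```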
Perhaps the most important thing Bitcoin does that normal currencies don't do is have a non-discretionary monetary policy. There will only ever be 21 million bitcoin, while no such limits exist for central bank issued currencies. This scarcity makes Bitcoin a supreme asset to store value in and has the potential to be a $100 trillion proposition.
I share your general impression of the crypto world, which is why I phrased my statement as "crypto has moved past the worst of its _witch_ issues". Crypto has other issues! And it still has many witches! But the not-intentionally-practicing-witchcraft contingent has somehow reached critical mass.
The witch utopia problem, in its purest form, is that normies don't join in because it's just a bunch of witches, and it's just a bunch of witches because normies don't join in. My impression of crypto is that both statements are decidedly on the downswing and unlikely to reverse.
Is there an automated tool that can notify me when the consensus on a metaculus question has changed a lot and my prediction is stale? I lose points on predictions that were directionally correct relative to the community consensus when I made them, because the circumstances changed after my prediction was made and I wasn't constantly refreshing each one.
I'm not aware of such a tool, but I'd just like to say again that this is my biggest gripe with Metaculus. I'd like to be a "casual user" of metaculus who logs on once every few months, thinks hard for an hour or so about a few questions, and provides a few predictions. But even if my predictions are always spot on the ground truth, I can *still* lose points on average because months after I log off, Metaculus will be treating my prediction as current and docking me points when it sees that new participants are making better predictions using more information. There should be an option to make a "one-time prediction" that either gains or loses me points solely on the basis of a proper scoring rule of my predicted probability at the time, and the outcome.
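For reference, a minimal sketch of the kind of one-time proper scoring rule being asked for here, using the log score (Metaculus's actual scoring is time-averaged and more involved, which is exactly the complaint):

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Log score for a binary prediction: higher is better, and your
    expected score is maximized only by reporting your true belief."""
    return math.log(p if outcome else 1 - p)

# A one-time prediction would be scored once, at the probability
# stated when it was made, rather than treated as "current" for months:
print(log_score(0.8, True))   # ~ -0.22 (confident and right)
print(log_score(0.8, False))  # ~ -1.61 (confident and wrong)
```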
The year is 2072, long after the Fall of the Old World. Nearly all of the bullets have run out, and people are resorting to older types of weapons.
What considerations would go into your selection of a personal sword, and which type of sword would you pick (e.g. - Roman short sword, Japanese samurai sword, Celtic broadsword, fencing sword)?
If you have the metallurgical ability to make good swords, you have the metallurgical ability to make gun barrels. Assuming bats still exist, their poo contains lots of nitrates, which crystallize after you immerse it in water and filter out the insoluble part. Then you mix the nitrates with charcoal and sulfur to make gunpowder. Also, if there are any surplus bags of ammonium nitrate fertilizer lying around, those can easily be repurposed to make massive amounts of gunpowder.
Humans couldn't lose the ability to make barrels or gunpowder unless the apocalypse selectively killed everyone who wasn't rather dumb. Conditional on at least a hundred humans surviving I think it's extremely unlikely (p<0.01) that gunpowder-making is forgotten.
I disagree. I think among any random sampling of 100 modern humans, it would be *surprising* to find even one person who could successfully make gunpowder from scratch, even given access to the raw materials. And truly shocking to also find among that number someone who could blacksmith a musket that doesn't kill its owner and is dangerous to a greater degree and at a further distance than a decently balanced pair of fire-hardened wood javelins.
Both of these activities involve a host of practical skills that the modern person tends not to even realize exist, unless he has a practical manual hobby (blacksmithing or cabinet-making, say). I have a fair amount of practical chemistry lab experience, and I would only just trust myself to figure out how to identify the raw ingredients for gunpowder, purify them, and create the final product. (I understand the physical process of mixing involves no small amount of manual skill and is critical to the quality of the outcome).
If someone gave me a forge, some charcoal, a few hammers and a lump of mild steel, I *might* be able to produce an OK sword, some kind of adequately balanced stabbing gladius, given enough experimentation, but I doubt a lifetime of fiddling, in the absence of skilled instruction, would let me make a Kentucky rifle.
Making guncotton/nitrocellulose is a bit tricky (though perhaps easier than black gunpowder), however, why would someone in 2072 have to make a gun? Assuming a post-apocalyptic world with significantly decreased population, there would be more than enough still-perfectly-functional guns left from 2022. Guns are really long lasting; there's nothing wrong with World War 2 guns right now (despite 80ish years that have passed) and a WW2 machinegun would still be quite effective in 2072 as long as you can make ammo for it.
The simplest early firearms were essentially a cylinder that is rounded at one end and open at the other end, with a tiny hole for the torch that ignites the gunpowder. Now there are a lot of refinements necessary to get high performance out of this, and you'd need to test it with very small loads of gunpowder first to make sure it doesn't blow up in your face, but the basic structure of a firearm would not be that hard to reinvent.
If you were really hard up for blacksmithing materials, you could even make a barrel out of hardwood, although it would be much less durable.
The distinction between a theoretical and practical understanding has rarely been more clearly illustrated. I am vaguely reminded of the joke about physicists consulting on dairy farm management in which the punchline begins with "Assume a spherical cow..."
I think those 100 humans could collectively remember enough (and have enough books in their possession) to figure it out if they really needed to. That's a lower bar than passing a test on it right away.
Without looking it up, what are the three ingredients necessary to make black powder, how are those ingredients acquired and prepared, and what is the proportion they need to be in?
I happen to know the first two facts and get spotty on the third. Is 1% of the population as obsessed with ancient manufacturing processes as I am? Doubtful.
Edit: I see from your original post you know the ingredients, but can you tell me how they're prepared?
There's a broad range of nitrate-fuel mixtures that will burn violently. Sulfur isn't required-- it only reduces the ignition temperature.
If they forget the proportions they can experiment with various mixtures and hill-climb towards whatever works, or do some stoichiometric calculations.
I don't recall all the precautions taken in the actual preparation, which is why I would start with an extremely small batch if I needed to make post-apocalyptic gunpowder without reference books. But I'm pretty confident I could do it.
I'd probably loot the remains of my local library to obtain the appropriate reference books first, if possible.
I suppose if enough books have survived we can recreate anything. I just wouldn't count on people to know how to do it without reference, and the reference materials may be difficult to find.
Take, for instance, the traditional (read: not having to build a chemical factory) way of refining potassium nitrate (saltpeter). First, you need nitrated earth, which either must be sourced from bat caves or produced from scratch in a process requiring a precise mixture of organic material and urine, cared for in a particular way over many months. Then the nitrated earth needs to be leached, typically multiple times. The leached water then needs to be mixed with a particular amount of lye, which requires its own preparation method using wood ashes. Then the mixture must be boiled to a certain temperature and any crystals that form must be raked out, as they are impurities. At this point it is also best to add blood to the boiling water so that organic impurities rise as a scum that can be removed. Then the water must be cooled slowly, and the first crystals that appear during cooling must be raked out and discarded. Then the mixture must be evaporated and the crystals that form as a result must be purified again until you have the precise crystal structure that indicates it is mostly potassium nitrate.
Making good charcoal is similarly complicated (as is corning the powder for gunpowder use), and while you don't strictly need sulfur, it's certainly necessary for a superior product and must be located, mined, purified, and crushed.
So it really all depends whether a physical book with that kind of information survives, otherwise it's a lot to work out from a general "gist of it" idea.
This is a really good point, actually. Classic case of feeling like you should have caught it yourself, but having no thought of it until it's pointed out to you.
Cartridges might go away, though; those are a bit more finicky about needing their own separate tech, I think? So you'd be stuck at hand-loaded revolvers, tops.
Everyone will be going around in full suits of armor, so you definitely want a 'stabby' sword for 1v1 combat and not a 'slicy' sword. The katana is brittle, and the Roman short sword is a heavy, wide and short stabby sword, so they're both mostly useless.
The fencing sword might be better in the hands of a 'sure kill' sort of fighter, and as an all-purpose sword for people of all strengths, heights and genders. It would be my second choice, but if a fencing sword misses, it is really easy to overpower as long as you cover its one sharp point. In a life-or-death situation, it would be easier to take a non-lethal stab from a fencing sword and immobilize the weapon, which you couldn't do with the other three options.
A sabre (cutting fencing sword) might be an excellent compromise. It would be my #2 choice. If I could use a shield / dagger for dual wielding, then it might be my #1 choice.
If I am going sword-only, though, the broadsword (claymore) works better as an all-purpose sword and would be my #1 choice. You can defend against multiple opponents with a reachy slicy sword and buy time to run (especially if they are all armored, so you can outrun them). You can also use it for stabbing if you put one hand on the blade to thrust it like a pike. It is heavy, so a hit to the helmet will at least stun, even if it won't kill. Again, good for running. It isn't brittle either.
It's practical for domestic tasks too: you can use it to chop wood and hunt animals. Lastly, it is second only to a katana in the cool factor and makes me feel like the Witcher (you would never shoulder-carry it, though).
All in all, the Celtic broadsword (Claymore) would be my go-to choice.
Why do you assume everyone will wear heavy armor? Armor is expensive, uncomfortable, and a pain to put on. It's for war, not everyday wear.
If everyone's in heavy armor, I don't want a rapier. (I assume that's what you mean by "fencing sword".) The ancestor of the rapier was a special stiff stabbing sword for penetrating armor, but the rapier itself was not particularly stiff and would tend to bend instead of penetrate against armor. It was a civilian weapon designed for civilian opponents in regular clothes.
If everyone's in heavy armor, I don't want a saber either; as you noted, cutting swords don't do well against armor.
A claymore is better; it's a cutting sword, but it does at least have a point so you can use it like a short spear.
But really, if I have to use a sword against armor, I'd want the ancestor of the rapier that I mentioned earlier, or a gladius, which was also designed with armor in mind.
People are ignoring centuries of sword-fighting evolution. The pinnacle of sword fighting became the rapier, the small sword, and then the epee. Basically, you want to stab your opponent in the location you desire before he can stab you.
In a group (where you can get away with less focus on close-in defense), a long spear (i.e. pike) becomes better, due to the range.
And then as others have mentioned, a bow (cross or long) outranges that as well, but neither are as useful in tight quarters, such as inside a building, as a good fast sword.
I'd caution against thinking about the development of swords as a movement from "worse" to "better" types of sword. For example, the smallsword developed from the rapier, but in a fight between a person with a smallsword and a person with a rapier I'd almost always bet on the rapierist. As I understand it, the shift to the smallsword happened mostly because duels got more formalised, meaning that you no longer got an advantage by having a weapon that was a few inches longer than your opponent's, and then people switched to having swords that weren't as inconvenient to carry around.
About the same thing is true for the comparison between swords and spears. A person with a spear will probably win against a swordsman even in a one-to-one fight. But nobody actually carries a spear with them in their daily life. In general, the reason swords were so popular isn't because they were the "best weapons", it's because they were the "best weapons you can carry on you without it being a major hassle".
But wasn't the evolution of the small sword a response to the decline of armour, which was itself a response to the development of firearms?
In a world with modern materials but no firearms, I'll bet you could make a fantastic suit of armour that's lightweight and flexible, while still being damn near impenetrable to swords, even in the gaps between plates that used to be the weak spots.
If everyone is walking around in full-body kevlar, your best bet is probably just a big old bludgeoning weapon.
Yes, "but", another "point" of them was to be able to quickly stab precisely in those weak spots.
I suppose with modern materials you could make armor which can defeat any sort of point anywhere, but then why aren't you making a gun instead with that level of technology? I thought the point of the question was a loss of that type of technology and you're reduced to what a basic blacksmith can do.
To be fair, you can walk into Home Depot/Lowes and walk out with an unassembled shotgun in basic plumbing parts and will just have to load your own shells, but we were assuming more primitive materials availability.
I don't even think you need the group for the spear to be better. You can look up youtube videos today of guys with swords vs guys with spears; the sword guys are basically helpless.
Those guys are using older-technology swords which are heavier and meant for slashing. A rapier or epee is more like a one-handed (because it can be lighter) metal spear. Most weren't super long, but a few were as long as a typical spear. You can use something in your off hand to stop/block/deflect/grab the opponent's spear and then stab them, or else move faster to the side (because again, lighter + stronger) and stab them.
I want this to be true, and I've heard this argument before, but the closest I can ever find to what people are talking about with "modern pokey swords beat spears" is stuff like this, where they just get mercilessly manhandled: https://www.youtube.com/watch?v=h-f3nvJCl9Y
That's not exactly a standard spear, he's using it more like an edged weapon and to trap the sword, but even then, at the end when the guy with the sword figures it out, he gets inside the sharp part multiple times to stab him in the throat and head.
Now put them in a regular building instead of out in the open and see how practical that 12' spear is...
I watched sort of the second half of this video and he does a lot better, but the guy also starts aiming pretty much exclusively for his feet, which seems to give the rapier guy an edge. The trapping seemed to work against him past that point, since the other guy didn't have to treat it like an actual blade (his arm is inside the "hook" entirely at multiple points). But granted I should have watched the whole video to see him doing better.
I think the "indoor fighting" thing cuts both ways; most buildings have hallways, after all.
What's the state of mining, smelting, and blacksmithing? That's really what determines swords. And armor which also determines how swords work. Also, how far have we regressed that we forgot how to make gunpowder?
Contrary to several people whose answer is "spear", the real answer is actually "something that fires arrows." (Again, assuming primitive guns aren't available.) That won't help you in a combat arena. But in an actual battle, massed arrow fire worked very well and could help keep you alive. Ideally combined with some lances, a warhorse, and a lot of armor. Plus the necessary training.
As others have pointed out, the answer is almost always "a spear", but it also depends on the kind of fighting and the social structure of your society. The point being, there isn't a best sword; there are swords that are most suited to your combination of social structure and place in that social structure.
Ignoring the spear stuff, I'd say I'm probably looking at something in the chopping or slashing family of swords. My naive understanding is that swords in the "poking, straight" family are all pretty hard to use well and are prone to break a bit easier.
So probably khukri or a dao/dadao for this fella, just something I can swing at necks and hope for the best with.
The pokey straight swords are hard to use against another person with the same type of sword. A common outcome of beginners using them is that they both stab the moment the opponent gets within reach, which means they both get hit. (That happens even between trained fencers more often than I like to admit.) I still think they have the advantage against a dao, kukri or similar weapon, though. As long as you safely outreach your opponent, your instinctive "stretch your arm out to push the scary man away" response is perfectly functional.
It depends entirely on what society in the New World is like. Falloutesque hardscrabble survivalism? Swords are largely useless, per the discussion in the other replies about polearms and so on, but probably a basket-hilted broadsword. Literally just the 17th century minus anything resembling a gun but including the quasi-peaceful urbanism? A long rapier without question (and the main-gauche to go with it). Et cetera.
My understanding from reading stuff written by historians of pre-gunpowder combat and HEMA practitioners is that the correct answer to this question is "a spear", and that if constrained to swords the best sword is one that's as much like a spear as possible (e.g a zweihänder or odachi). The analogy I've seen drawn is that swords were fundamentally sidearms like pistols are now and were used accordingly.
That said, the answer to this question might be different if there are specific constraints that prevent spears or spear-like swords from being used, which might include most fighting being indoors, the sword needing to be carried at all times, or social conventions that punish being "over-armed".
My understanding (and small amount of fencing experience) suggests you would not want a Zweihander.
Two-handed swords were used for two things (is my understanding): cutting the tips off of spears (a high-mortality job), and full plate combat, where they are not used like spears. In full plate combat you'd be holding the sword halfway up the blade and using that for extra leverage. Holding a sword out like a spear would be too heavy because you have no counterweight.
I think the real answer to this question depends on how many people you have with you. If I’m alone, I’d probably want something light and long like a rapier so that I could have reach on unarmored enemies and run away quickly from anyone in armor. A spear isn’t going to be super useful to you without support typically.
If I have a bunch of people… staffs honestly. They’re easy to learn, easy to use, and don’t require a tight heavily trained formation like spears or pikes
Spear is the general purpose correct answer, but against heavy armour a warhammer or a mace are the go-to specialised counters. I've read similarly to you vis a vis swords not really being main battlefield weapons in medieval society.
My impression is that aside from being status symbols (owing to the difficulty of making a useful sword vs a spear), swords are almost like javelins, so basically a sidearm as you say. Used for specific limited conditions, not unlike how the Romans used pila.
Are there standard theories on common causes for hiccups?
For me, the most common way to end up hiccuping is if I've eaten bread or something similarly starchy without enough liquid to wash it down - I almost *always* develop hiccups if I eat untoasted bread with some peanut butter, but it goes away when I've drunk enough water to feel like it's washed the pasty matter down.
For my partner, the most common way to end up hiccuping is if he's eaten unpeeled uncooked carrots. He doesn't seem to have any standard way to eliminate them.
This suggests to me that my hiccups are the result of a physical process (i.e., my esophagus is either trying to clear itself, or is getting into spasms because it can't quite clear appropriately) while my partner's hiccups are the result of a chemical process (i.e., something in carrot skins results in the muscles acting weirdly for a few minutes).
Are these both commonly accepted types of causes? Do other people have similar or different causes? Or do you just tend to get hiccups occasionally without any commonly observable patterns causing them?
Breathe into a paper bag (not a plastic bag - the bag needs to not collapse on the inbreath). Always works, even when nothing else does.
Apparently hiccups are somehow related to your breathing reflex, controlled by CO2 levels in the blood. Once you drive CO2 high enough the hiccups stop - holding the breath works for the same reason.
I have a newborn, and she gets hiccups all the time. I looked it up and what I found basically said that newborns get hiccups constantly because it gives their brains feedback on the position of their internal organs, and as we get older our brain doesn't need that feedback because it's built up an accurate model of where our guts are. I have no idea if that's true, but the younger you are the more you hiccup.
I had regular hiccups for months after three spine surgeries a few years back. The nurses told me it was because moving organs around to reach the spine disturbs the diaphragm, and doing that predictably and consistently causes hiccups. You get left in a state where your diaphragm is very easily disturbed until everything has healed.
My hiccups are often bread-based as well. Most reliably, eating a lot of bread then drinking a really cold drink, especially a carbonated one. When I was a kid, I would ask for no ice in my drink at McDonald’s or I’d be done for.
Oh my god!! We've started calling carrots "hiccups" around my house because of the reliability that raw carrots cause my wife to hiccup. I thought she was the only one.
I personally get them pretty reliably from eating pickled jalapenos in a sandwich, or when I eat something "too" spicy, possibly too fast (or when I cross a certain threshold of alcohol inebriation, like a cartoon character).
To stop them I try to exhale all the air I can and hold my breath as long as possible. If I don't hiccup during the holding-my-breath phase, my hiccups typically disappear immediately. If I hiccup while holding my breath, I just have to wait (not effective for alcohol-induced hiccups).
Interesting, I have a similar 'solution' but it works much better if I inhale as much as possible. The rough algorithm is:
1) Inhale as much as you can.
2) Hold your breath briefly.
3) Suck in even more air on top of your already basically full lungs, repeating two or three times.
4) Breathe out slowly.
Then repeat the whole process around three times, and this has always worked for me in the four or so years I've used it. Drinking water or exhaling fully have not gotten as good of results.
Like you, it's a multi-exhale process for me to get to fully empty. I exhale again a few times after "emptying" my lungs to push as much out as possible.
I'm not sure why I go with exhale necessarily, but I think the idea is the same. I wonder if it's the extreme state you put your lungs in that resets them. Either extremely full or extremely empty may have the same effect.
Also my anecdotal success rate may be lower than I remember.
I also often get hiccups from very spicy food, and sometimes from drinking vast amounts of beer, and rarely from drinking soda (though it's been a long time since I last drank soda).
Really interesting! I feel like I might have heard of the spicy one (and possibly even experienced it? I tend to go for slightly spicier food than the average person, but never really push it at all any more, so it's been a while since I would have tested this). But I've never heard of the cold one.
My mother has Alzheimer's and asked me to help her organize her assisted suicide. I do not have a problem with that, but with the current legal situation in Germany it will not work with her specific illness. Since then, and because she used to work as a writer, I have been researching the process of dying and things written on death (like Ernest Becker; I also recently interviewed Sheldon Solomon on this topic) to give her answers when she asks me about death.
What are your open questions on death? How do you think about it, and does anything scare you about it? Anything you have read that changed your mind? Views on assisted suicide? I'm interested in anything on this topic atm.
I'm sorry about the situation you're in; it must be awful.
One of my religion's greatest preachers, Tim Keller, has terminal cancer. He wrote an article about it for The Atlantic. He's been studying theology his whole life, and teaching and counseling people, even about death, and now that he's facing it himself, he has some interesting things to say. And he's a really smart guy. Maybe it could be helpful to you or your mother.
"As evolutionary psychologist Jesse Bering reminds us, “Consider the rather startling fact that you will never know you have died. You may feel yourself slipping away, but it isn’t as though there will be a ‘you’ around who is capable of ascertaining that, once all is said and done, it has actually happened."
followed by:
"Awareness of our mortality can be a profound challenge to our self-image of being an all-important, indispensable, independent entity in the universe. Or it can fill us with a sense of the preciousness and fragility of this opportunity, the value of a life. It can inspire us and motivate us to live life to the fullest, with a sense that we should not waste our days—to experience, to learn, to grow, to connect, and to contribute to those around us and those who will follow us."
Aside from the legal issue, I think one of the key things to work out now rather than later is a clear threshold you both agree on for "when the time has come"; notably, it feels (on the level of intuition) ethically dicey to kill someone who has forgotten that they wanted to die, which is probably one of the main motivators of the German law. I think for me the balance is whether the person is semi-independent, able to enjoy things in the moment but has forgotten personal memories like who their family is (in which case the person you knew is in some sense dead, but I have the intuition that *a* person is still alive there), vs. entirely dependent on carers and not enjoying life.
Could you partake in assisted suicide tourism? Are there jurisdictions that are more amenable to losing your ability to consent and still carry through with the suicide?
That's not universally true, at least in practice. Many countries will claim and in fact exercise jurisdiction over what their citizens do while travelling abroad, at least where major crimes like rape and murder are concerned. Or over what is *done* to their citizens abroad. If you, a citizen of Nation X, kill a citizen of Nation X in a manner that Nation X disapproves of, you may spend a great deal of your remaining lifespan in one of Nation X's less pleasant prisons, screaming "but they can't *do* this, I was in Nation Y at the time!" all the while and hiring lawyers and writing letters to Amnesty International and yet still rotting away in prison.
Initially I feel you're right, but international law and interjurisdictional (?) law don't seem to behave that way.
We don't typically get in trouble for doing things that are legal in other countries, but illegal in ours. Drug tourism is a real thing. Alcohol tourism out of dry counties I'm sure is a real thing. Obviously not as extreme as "murder", though still tourism for what the local jurisdiction considers illegal behaviour.
I can't think of any laws that would be analogous to "murder" that might be acceptable in other countries and not in your own.
I recognize it's a stretch and I'm not super familiar with the story, but would the husband from "Not without my daughter" be subject to American law upon returning to the US?
It was a pretty controversial thing when countries started banning their citizens from re-entering their native countries after fighting for ISIS. Helping someone end their own life on their own terms seems like an even harder thing to enforce, perceptually anyway.
I like to think that if my mom had Alzheimer's and wanted assisted suicide at a point that might be too "late" for my local jurisdiction, I'd take the legal risks and travel to a more friendly country to do it. Just because my home country doesn't have their shit together, doesn't mean she should suffer.
I don't know the story of "Not Without My Daughter," but a country cares very much about its citizens not being murdered, even while they are overseas.
If you decide it's worth it, okay, but I'd say instead of walking home to be arrested, just stay in the other country as an expat.
For a religious perspective, consider looking into the Catechism of the Catholic Church. It is an incredible document that condenses an enormous body of works. You are in a very tough spot right now, I wish you all my best.
I talked with Dignitas and Exit - both in Switzerland. Same situation for Alzheimer's as in Germany. The German supreme court equivalent changed the law in 2020, and we now have a similar situation as in Switzerland. But with Alzheimer's you have the choice of either dying way too early or not at all. Assisted suicide is only an option as long as the person still has the capacity to judge, which doctors will deny very early in the disease.
I wish you and your mother a lot of courage; you will need it whichever path you choose.
I'd advise you to look into why your mother may want to die early. Is it because she fears becoming a burden to you or others? Because she lacks the resources (financial and otherwise) that would enable her to live her life to its natural end in good conditions? Because she fears losing her dignity as a human being, or the respect of her entourage as her intellect suffers? You, and the people who love and care for her, can help assuage those fears and help her live a happy life even as her condition worsens. Do not let her think she needs to go so that you can be free of her needs.
Do you have personal experience with Alzheimer's (or other dementia) patients?
(Content warning for the obvious stuff, I'm trying not to be too graphic or detailed, but not everyone likes to read about this stuff on a normal day.)
I haven't made up my mind, as I'm far from even an early onset age, but I think I might want to actively decide and choose my exit while I am still able to communicate with others, have some idea of who I was and what's going on, and - probably most importantly - have some idea who all these lovely people taking care of me are; ideally all of these so I can say some meaningful goodbyes.
The alternative could be dying either alone, not being able to move or speak, not knowing where I am, where I came from and how I got there, what is going on, which is all super scary;
or - possibly worse - all of the above, except with a bunch of people I have never met in my life talking at me like they know me, about stuff that they seem to expect me to know, and I can't even tell them to leave me alone because I can't seem to be able to speak.
And they're looking more and more sad and distraught, which makes me feel even worse emotionally.
If your comment was only meant as "check with her just in case it's one of these bad reasons", then sorry!
It's just that I've often heard certain people who oppose assisted suicide use similar phrases in their arguments, which frequently imply that the only reasons a person could have for deciding on this course are ones that could be described as "misguided altruism or consideration for others", "misguided sense of pride or dignity" etc.
And as long as I can remember thinking about the topic, I've always been furious at people seeking to deny to others what is essentially the most fundamental manifestation of agency and choice there is.
So you believe that societies have no obligation of care for the unfortunate? Because "unlimited right to suicide even if you're non compos mentis" essentially implies that (or, more accurately, that even maximally irresponsible exercises of personal agency should override that obligation, which is essentially the same thing but needs to be asserted for subsequent points to be made).
Would you also hold that all forms of drugs, even ones known to be incredibly addictive and destructive, should be put on the market unregulated? That there should be no labor regulations, as these all interfere with the free choices of employees and employers? What are your feelings on welfare in general, for that matter?
I would argue that society definitely is free to ban things for you in certain cases, and it is free to ban things that you care about and it does *not* necessarily have to make sure that your pragmatic liberty to access it is not affected. Perhaps it *should* but it does not have to.
Do you disagree with that? Or can we start with a presumption that in certain cases society does have the right to restrict you and ban things that you want?
IMHO that presumption is a necessary basis of a discussion, as it is a necessary basis for being part of a society. Acknowledging that society can and will have certain rules that may restrict you from doing as you please is non-negotiable; if someone denies that core principle, then the expected result is both exclusion from society and *still* having the society's rules enforced if you stay within its reach. If you want to opt out, you'd have to leave it, because it won't leave you.
Given all the information I have about you leads to the charitable conclusion of "I have a solipsistic view of state ethics, wherein I am the only significant moral object in existence and any moral question that does not directly reference me, specifically, does not signify", and a less-charitable conclusion of "I have a narcissistic view of state ethics where my personal preferences should have infinitely-high weight", I do think I have all the information I need.
"We are offering prizes of $5,000 to $50,000 for proposed strategies for ELK. We’re planning to evaluate submissions as we receive them, between now and the end of January; we may end the contest earlier or later if we receive more or fewer submissions than we expect."
What are everyone's thoughts on this essay that "pushes back" against the modern advice to "buy experiences, not things"?
I think the author makes a good point about the argument's motivations (e.g., status-seeking and the lack of space urbanites in expensive real estate markets have to store things), but I also think his argument should have acknowledged the disutility of owning excessive quantities of objects, and the false expectation humans generally have that more possessions will make them happier.
My philosophy is that people in the rich world should own fewer things, but those things should be of higher quality and should be taken care of, and owners shouldn't be afraid to sell their things when they have no use for them anymore. (Coincidentally, I'm in the middle of a major personal project to get rid of over ten years' worth of accumulated possessions, and I wonder again and again why I held on to so many of these things for so long.) With respect to buying "experiences," people should be honest with themselves about their motivations before making the purchase: Are you genuinely interested in visiting Tahiti, or are you only interested in the status boost and bragging rights that will come from posting photos of your vacation on social media, and being able to bump up your "Number of countries visited" rank by one?
Good essay. Realistically worthwhile ways to spend your money are a mix of experiences and things, but things are absolutely undervalued. One thing I realized when buying a decently sized apartment is how much stuff I can now have, and how many options it opens that I didn't have when renting and moving around every year or two.
One thing the author gets right - but doesn't accent to the extent it is deserved - is that the useful things, the kitchens and home gyms and rifles and toolboxes, require effort and skill to use. For most people in the developed world, that is the primary barrier to entry, not money.
Conspicuous consumption is conspicuous consumption, whether of things or experiences. Spending two dollars for a park entrance fee is a different kind of purchase than spending a few thousand dollars to see Paris.
Memories, however, have no ongoing cost. Physical objects cost something to continue to possess; storage space is only one part of this, there are also ongoing time investments, even if it's just moving a box from one house to another every few years/decades (or having somebody else deal with it after your death).
Granted, you can recoup some of the cost of a physical object by selling it, and some physical objects provide dividends of their own in the form of use; however, everyone I know who put emphasis on the resale value of physical objects ended up hoarding things.
On the other hand consumption, whether of things or experiences, has an intensive as well as an extensive margin. The nice new car doesn't necessarily take up any more room than your old beater did.
Probably an overstatement if you take it at face value, but it's still good to have the occasional corrective to the endless proselytizing of the Church of Travel.
It probably depends on the specifics of the situation, including life stage, poverty/wealth level, and the specific thing being considered.
Someone once bought me a nice juicer, more expensive than I would have bought for myself. It turned out I didn't have enough counter space, didn't like fresh juice all that much, and cleaning it was kind of a pain. I could not save on future healthcare with the juicer, because I couldn't get myself to use it. A lot of "healthy" purchases can be like that, including the cross-country skis and woodworking shop mentioned in the article, and much of my own art supplies. There are aspirational purchases that are still worth making, but it's worth taking classes or renting or at least trying someone else's thing for a while for purchases in the "because it's good for me" category.
I personally prefer experiences to things, and spent most of my time and energy on experiences in my 20s, which I don't regret. But now I have young children, and am not the sort of person to walk them to the park every day even if there were a park within walking distance, so I'd better get some rakes and shovels and wood chipper and weed whacker and whatnot to try to turn the thorn infested yard into a place where it's possible to play. This is somewhat frustrating, because we don't have great maintenance skills, but some maintenance is now required, along with tools and a shed and everything that goes with that. Especially, buying experiences (daycare, classes, trips) in sufficient quantity to keep us occupied is prohibitively expensive at this point.
But if I were going to try giving advice, I'd probably have to ask more specifically: what experiences? What things? How much will you be moving in the next few years?
Yes, there are a lot of things I could do if I could make more money. Hiring a landscaper would fall somewhere down there, in "if I won the lottery" territory, after a lot of other both things and experiences, including a couple days a week of preschool.
This sounds like one of those Should You Reverse Any Advice You Hear situations.
Some people would definitely benefit from buying more experiences and fewer things. Others would benefit from buying more things and fewer experiences. (A third class should probably just be buying less stuff overall and instead saving their money, and a fourth class is misers who should spend more and save less.)
I've met all these different classes of people. Too much stuff and too few experiences tends to be an older and lower-class crowd, too many experiences and too little stuff is a disease of the younger and slightly higher class mob. As a result there's also a lot of tedious class signalling going on in this sort of conversation.
In the sort of circles I move in, it's far more acceptable to brag about your experiences than about your stuff. Telling your friends about your fancy new car or jet ski is gauche, but telling them about your very expensive camel-milking trip to the Gobi desert is de rigueur, and your friends have to sit there and pretend to be interested.
So anyway, I'm not convinced that people should be buying more stuff and fewer experiences. But they should definitely be aware that "Buy experiences, not stuff" is not all-purpose great advice for everyone, and that it's definitely possible to overdo it.
When I was young, I reasoned about this a bit and talked myself into "buy things, not experiences", because the things will keep producing value for me over time while the experiences will just happen once. I suspect the reason for the classic advice that goes the other way is that most people engage in similar reasoning, and then similarly overvalue it, and need to be pushed back to correct to the right balance.
As spandrel mentions, one important consideration is that buying good enough things can be worth it, but that is often a lot more expensive than people are considering when they are buying things.
As Resident Contrarian suggests, the advice probably also depends on where one stands in terms of finances and household setup. For a middle class or higher person who has already set up their own household, the value of buying a new object is not the full value of owning that object, but only the marginal value of that object over the object it is replacing, which is often relatively small unless you're considering a major upgrade, whereas the value of buying an experience is the marginal value of that experience over watching TV or whatever else you would do instead (it often doesn't compete with, but complements, time with friends and family).
But for someone just starting out, or someone in a lower income bracket, buying a thing could well be a big upgrade. The standard advice is predicated on people not listening to it until they're settled enough in their life to need this advice to switch their strategy.
I'd think it depends on which things and which experiences? Some people are not very good at selecting either what things they should buy or what experiences they should spend money on - partly because, as you note, they are influenced by trends and perceived status, and partly because our judgement about things and experiences improves the more purchases we make and things we do. Recognizing this is sort of obvious, but it is often ignored in this discussion. I think this explains why lottery winners are often miserable: they buy a bunch of stuff or take a lot of cruises without really knowing what they want.
As an aside, I prefer experiences almost every time.
That's a good point that I know is true from personal experience. I had to travel extensively and endure several boring or even bad vacations to learn which kinds of places and countries I enjoyed.
Similarly, I had to buy a whole bunch of initially-exciting things which later lost their lustre in order to be better able to predict which "things" would be worth it.
It's a slogan. I think if we were trying to do something better we'd say "At some point, if you haven't already, buy at least one experience in lieu of some object. Then spend some time considering how much you (based on your new experience) enjoyed each thing. Once you've examined your historic enjoyment of both experiences and objects you can thoughtfully assess which one is the better use of your money." It's not a very good slogan, but it's a better fit.
One thing that's interesting about this to me is how much it's written from a middle-class-or-better perspective. When your car/rent/insurance/utilities type expenses are pretty well in hand, it really often is just a choice between a new big screen that you might not have really needed or a trip to the world's best waterpark. But below that level the choice might be between, say, a well-running car replacing your shitbucket and the waterpark, which is a bit harder to parse in terms of the "do you really need a Mercedes" thinking of the slogan.
I agree with you almost entirely, but I think that you could strike out "-or-better" from "middle-class-or-better". It's hard to imagine that upper-class people actually have to choose between the Mercedes and the world's best water park; in fact, I wonder if that isn't a necessary part of the definition. I think the threshold where you no longer have to choose is well below Musk.
Good point. At some point you just have a bunch of money.
Oddly, I'm not sure that isn't generally true even for the people this slogan is aimed at - i.e. it might be a backdoor way to say "take more time off of work". I work at a tech startup where nearly everyone makes very decent money and theoretically has access to a lot of time off, but it doesn't superficially seem like there's an above-average amount of travel going on.
I don't think there's a good way to find out, but I wonder if for some people the message heard isn't "hey, stop buying things you don't have time to use; what you really want is to intentionally take more time to (X), but 'I took a vacation to beat Cyberpunk' isn't socially acceptable."
I think this might be hitting the nail on the head, though taking a vacation to play videogames *should* be as socially acceptable as taking one to lie on a beach and read books, both are about relaxing first and foremost
I've been thinking about that study published last month ( https://pubmed.ncbi.nlm.nih.gov/34878511/ ) which found that adverse post-operative outcomes for female patients are significantly more likely after they are operated upon by male surgeons, but found no other sex concordance effects.
The usual explanations involve extraneous factors like male surgeons being more often in position to perform riskier procedures or female surgeons having to be extra good to make the cut in the OR, but all that would be agnostic of patient sex. And the variety of procedures and the size of the population examined make for fairly impressive breadth of scope.
Other explanations, of course, involve stuff like "women are better at listening to women" which I am convinced is actually a significant factor in diagnostics but surgery is somewhat less exposed to it than IM - and, again, one might then expect at least a slight symmetrical male concordance effect.
So I am wondering which part of the pipeline is the culprit. Is it possible that it's still the latter and female surgeons are better at correcting diagnostic infelicities?
I wonder if female patients’ communication is biased towards the positive when talking to male surgeons rather than female surgeons. A lot of people have a bias towards making things seem okay, and I could see it being easier, on the patient end, to try to confirm the surgeon’s expectation that things went well - and in the process fail to communicate about minor pains and issues that become major issues when they aren’t followed up on. This assumes that the average patient expects male surgeons to expect better outcomes than female ones, or expects female surgeons to care more about post-surgical issues, either of which feels plausible to me.
Patient sex could be a factor if women more often need certain risky surgeries, and those are more often performed by men.
It sounds like the paper did control for procedure type, but "riskiness" is still something that needs to be controlled to make any conclusions. (I wonder if there's any subset of the data that would make for a natural RCT, where patients were randomly assigned surgeons.)
I can't access the full paper, but the subgroup analyses seem important for forming any conjectures on the underlying causes--were the observations universal, or did they vary by procedure, hospital, surgeon, etc.?
Reading the abstract and the numbers - there are so many fewer female surgeons over all in both subgroups. I wish I could see the full text and tables.
I've always been fuzzy on converting odds ratios to risk ratios, i.e. "you're X times more likely to die if you have a female surgeon". Odds ratios are nice for logistic regressions but interpreting them is a pain, and I don't believe half the people who use them can explain it. And also, how do the odds ratios change when you take them against different subgroups? E.g. the largest effect, which is significant but not like DOUBLE, is female patients with male surgeons against female patients with female surgeons - does the relative size of these subgroups, as opposed to comparing the large ones, do anything to skew how you'd interpret the odds ratios?
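For what it's worth, here's a minimal sketch of the standard conversion (the numbers below are made up for illustration; you'd need the paper's actual adverse-outcome rates to do this properly). Given the baseline risk p0 in the reference group, RR = OR / (1 - p0 + p0 * OR), which also shows why OR ≈ RR when the outcome is rare:

```python
# Converting an odds ratio (OR) to a risk ratio (RR), given the baseline
# risk p0 in the reference group (the Zhang & Yu approximation).
# The numbers are invented for illustration, not taken from the paper.

def odds_ratio_to_risk_ratio(odds_ratio: float, baseline_risk: float) -> float:
    """RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# Hypothetical: an OR of 1.15 for female patients with male surgeons,
# against a 10% baseline rate of adverse outcomes with female surgeons:
print(odds_ratio_to_risk_ratio(1.15, 0.10))  # ~1.13, so the OR mildly overstates the RR
```

The gap between OR and RR grows with the baseline rate, so the subgroup sizes per se don't skew the ratio, but differing baseline risks across subgroups do change how "X times more likely" should be read.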
There are SO MANY MORE male surgeons in the sample than female ones; also, this is <3,000 surgeons performing all these surgeries, so you have to be careful and check whether a particular surgeon (e.g. a single bad high-volume male surgeon) is driving the results, and it's probably not homogeneous how many surgeries each surgeon performs. Likewise, it's important to look at the absolute numbers of adverse effects. If there were, like, 20 adverse effects in total (which isn't the case; I'm just saying for illustrative purposes), that's so small that, whatever the p-value, my bias would still be to consider it insignificant.
Maybe there are "men don't care about women's anatomies as much" effects, who knows. But I think that explanations of "the female surgeons might be better in general", "men get more and riskier cases", etc. or a combination of all of these are also at least as likely and admissible. Also, do women get riskier surgeries - e.g. something obstetric-related?
Since I'm not an expert on any of this, I'd be happy to hear commentary from someone who is more of one and see where I'm wrong.
I just want to self-promote a big new post on transformer language models that I posted on LessWrong, which explores a few possible limitations. This post took me much longer than I was expecting to write, and I was working on it in fits and starts for about 4 weeks:
I also want to signal boost this paper from CSET: "AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?"
They basically show that the current rate of scaling for the largest AI models (which for the last few years has meant large language models) is unsustainable, even with (very unlikely) Moore's-law-style exponential cost reduction (cost per FLOP halving every 2 years). Simply scaling stuff up must come to an end in the next few years; major algorithmic improvements are needed. https://cset.georgetown.edu/publication/ai-and-compute/
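A back-of-the-envelope version of the argument, with the doubling and halving periods as my own illustrative assumptions rather than the report's exact figures:

```python
# Sketch of the cost-growth argument; all constants are assumed for illustration.
COMPUTE_DOUBLING_MONTHS = 6   # assumed doubling time for frontier training compute
COST_HALVING_MONTHS = 24      # Moore's-law-style cost-per-FLOP halving (optimistic)

def cost_multiplier(years: float) -> float:
    """Factor by which the dollar cost of a frontier training run grows."""
    months = 12 * years
    compute_growth = 2 ** (months / COMPUTE_DOUBLING_MONTHS)
    price_decline = 0.5 ** (months / COST_HALVING_MONTHS)
    return compute_growth * price_decline

for years in (1, 3, 5, 10):
    print(f"after {years:2d} years: cost x{cost_multiplier(years):,.0f}")
# after 10 years: cost x32,768 -- a $10M run would become a ~$300B run,
# so under these assumptions the scaling trend has to break well before then.
```

Even if the compute-demand doubling is slower than assumed here, cost growth outpaces any plausible price decline by a wide margin, which is the report's core point.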
Does anybody know someone at Substack, preferably in or close to the development team, who could get accessibility issues fixed? I'm a screen reader[1] user and the comment page on ACT is... less than ideal from an accessibility perspective. I could try the traditional customer support route, but that rarely works; usually clueless CS representatives have no idea what you're even talking about and have no power to push the issues you mention onto the developers' todo lists. Finding a frontend dev via LinkedIn is often our best bet, but I decided I'd try here first.
Scott himself can get changes made to Substack and has done so before, and he is the kind of person who would care about this issue. So that is easily your best bet.
Perhaps outline the major problems in your comment or a subsequent comment and then Scott will probably see it or someone who can connect with him will see it.
For some reason I think I'm spending too much money on dishwasher rinse aid. Why I should obsess about this is a good mystery in its own right, but, anyway, I'm trying to think of how to usefully dilute this stuff.
A bunch of people online say to use vinegar. But the New York Times has a good summary of what's in the blue rinse aid, and why (and the drawbacks of vinegar):
* Citric acid - stops calcium from interfering with other surfactants
* Sodium cumene sulfonate - surfactant (charged)
* Tetrasodium EDTA - chelator
* Methylisothiazolinone and methylchloroisothiazolinone - preservatives
* CI Acid Blue 9 - makes it blue
The NYT says that the first one is the most important, so if I could find a supplier and dilute it with water to the same proportion (I don't know what that is, but it wouldn't be hard to find), I could probably dilute the rinse aid 50-50 with my new homemade solution and everything would still work (and if it doesn't, I can just stop doing it).
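A quick sanity check on that plan, with made-up concentrations (the real citric acid fraction would have to come from the label or MSDS): as long as the homemade solution matches the rinse aid's citric acid concentration, a 50-50 blend leaves the citric acid level unchanged and only halves everything else.

```python
# Sanity check on the 50-50 dilution idea. Concentrations are invented;
# substitute the actual figures from the product's MSDS.
RINSE_AID_CITRIC = 0.05   # assume 5% citric acid in the commercial rinse aid
HOMEMADE_CITRIC = 0.05    # mix the homemade solution to match

blend_citric = 0.5 * RINSE_AID_CITRIC + 0.5 * HOMEMADE_CITRIC
print(f"citric acid in 50-50 blend: {blend_citric:.1%}")  # unchanged at 5.0%
# The surfactants and chelator are halved, though, so watch for spotting.
```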
The MSDS for lauryl alcohol ethoxylate says it's safe unless I pour it in a fish tank or eat it, so I'm not worried about safety, but it never really seems to be sold to household consumers. The only place I could get an online quote was Alibaba, which wanted a minimum $100 order, and, nope, I'm not *that* curious about this project.
Is there any other surfactant that I could buy as a consumer for a cheaper price than store-brand rinse aid?
It stops water spotting on the dishes. Detergents used to have lots of phosphates, but those were phased out for damaging the environment via waste-water. In exchange, the dishwasher puts a little bit of rinse aid into the rinse, which gets rid of the last of the crud and makes the last of the water sheet off instead of spotting.
Are you buying the name brand (Jet Dry or whatever) or buying generic? To my mind, it's worth it to get the generic; and while I'm sure that even the generic is still much more expensive than mixing my own, I'm not sure the cost savings are worth it when a large bottle of even the name brand is ~$10 where I live and will last for months. Yes, it's _dramatically_ overpaying for the ingredients, but the amount being saved in actual (not percentage) terms is not worth the effort of homemade, for me.
I think I'm spending more than $10 a month on it. It seems to go through pretty fast.
I totally agree this isn't a rational use of my time; it's just a puzzle that got stuck in my head, and if I solve it (by finding a cheap substitute, or deciding there is no normal way forward) I'll move onto something else.
Edward, this is much too much. Is your dishwasher dispensing too much rinse aid? Mine is adjustable, and I spend well under a dollar per month on rinse aid, in a place with pretty hard water.
I've found that choice of detergent makes an even larger difference than rinse aid in the amount of mineral deposits. Smartly (Target) brand powder does a much better job than Kroger brand powder.
Incidentally, you might be the sort of person who is happy to waste an evening watching the dishwasher videos from Technology Connections on YouTube.
Huh, I've never considered seeing if I can limit the rate. I'll look into that.
It looks like there's a shortage at my supermarket, with the store-brand stuff being gone. But I found a cheap way of changing the flow rate which is to dilute it 50% before it goes in.
If you want a mood-boost exercise, answer these questions :-)
What am I grateful for about myself?
What am I proud of myself for?
What is the best compliment I’ve ever been given?
List 5 things I dislike about myself. How can I rephrase them to become an opposite belief? Example: "I hate my legs" becomes "I love my legs because they allow me to walk."
What are my talents?
What are my biggest dreams? What can I do today to start making one of those dreams happen?
What makes me unique? How can I use this uniqueness more in my life?
What is something I wish someone else would tell me? How can I tell myself that more often?
What is one thing I can do today that will make me feel great?
What’s something that I’m really good at?
List one thing I’m grateful for, for each part of my body.
What are 5 things my past self would love about my current self?
Does this rephrasing stuff actually work for anyone? It feels to me like I am just bullshitting myself, or repressing my negative thoughts (which I have heard is a bad thing).
I find it helpful to ask the question "What did you once desperately want that you now have?" because it helps me realize (1) how much I'm taking for granted in my life and (2) that many of my wants are short-lived, and so kind of guaranteed to disappoint after a while (i.e. I'm choosing to want passing pleasure over perennial joy). This slowly helps me shift what I want :)
Regular practice of gratitude is a must. If you're a theist it's a bit easier because you can thank God, but even if you're not, it's a good idea to start each day by naming the things you are grateful for in your life.
I don't think being a theist just makes gratitude practice easier -- I think it makes it *possible*. The concept of gratitude only makes sense in a context where there is someone or something to be grateful to. Gratitude is like debt, or marriage -- can't have it without another entity to have it in relationship to.
"Thank you, sunrise, for existing" . . . WTF?
If you're an atheist, as I am, you need a different model altogether, not gratitude lite.
Thank god for what? Throwing us into a world wherein our actions, over which we do not even have full control, can jeopardise not our lives but our *souls* if they violate some cryptic rules allegedly passed down from bronze age people in a dead language which we have no reason to trust?
Well, if you would *prefer* that God threw us into a world in which our actions can have no effect at all on our well-being, either now or throughout eternity -- if everything we do is either pre-ordained or sterile, of no consequence -- then I can see the disappointment.
However I fancy quite a lot of people, when they think hard about it, will conclude they prefer a universe in which our actions and decisions are *not* futile -- meaning they can have effects, even very profound effects, on our future (and that of everyone else). And remarkably enough, it seems that that's the universe God did give us. So maybe He knew what he was doing.
I usually thank God that I am alive to experience another day, that I have eyes that can see, ears that can hear, a tongue that can taste, that my body is whole and not missing any members, that I have a roof over my head and a warm place to sleep, that I have more than enough food, that I have a secure job and enough money for small luxuries, etc. I thank Him for my daughters, my wife, my parents, my brothers, my in-laws. I often thank him for the moon, and the sun, and the beauty of snow in winter, and the wonder of plants in bloom in spring, the joyful warmth of summer, and the striking colors of fall. I thank Him for the clothes on my back and the hair on my head.
There is a great deal to be thankful for each day.
Is it so baffling? Experience is orientated to the present. Even for the eternal, the here and now or the past weigh strongly on the mind. Most people don't even think much of their elder years in lieu of the moment. It's fitting to be grateful for good moments.
I'm going to build a simple workbench out of 2x4s, with a tabletop made from a slab of plywood. To ensure that it does not get stained or damaged by any solvents or chemicals I might spill on it in the future, what sort of finish should I apply to the tabletop?
I was thinking of using Minwax polyurethane floor finish.
I did the same thing 20 years ago. I figured it’s just plywood so I painted it with white indoor latex. When it starts to look shoddy I just clean it and put on another coat.
I’m not playing with many chemicals besides adhesives though. Mainly use it for breadboarding electronics and a home for my Linux machine.
Instead of a plywood slab, I recommend gluing a bunch of 2x4s together and then flattening the top. This will give you a much thicker and more stable working surface. It will also make attaching the legs much easier, as you can just mortise and tenon them. As for finishes, the standard finish for a workbench is usually some kind of drying oil, like linseed or tung oil. This is because you don't want an actual film layer on top of the wood, for various reasons: it makes the surface slicker and makes it harder to rework the surface. But all this applies mostly to woodworking, and I am not sure what your primary purpose for the bench is. If your number one concern is spills, then polyurethane is probably your best bet. If you want a really hard, slick surface that is super waterproof, go with epoxy (but it's quite expensive).
Wouldn't it be simpler to make a thick, stable tabletop by layering one plywood slab on top of another and then screwing them together?
I want to use the bench for "basic work." If I need to hammer two things together, clamp something, maybe work on some car parts, etc. There's no particular hobby it is intended for.
I made a bench last year. I used two-by-fours on edge, attached plywood to the top of that, and then finally put a sheet of hardboard on top. I didn't apply any paint or finish; I hate doing that. I also hate sanding, but hardboard is flat. When you screw in the hardboard, use a countersink bit and countersunk screws.
Yes to the polyurethane floor finish and double yes to the double layer of plywood. Don't forget to overbuild the table legs - brace them if necessary, but a wobbly workbench is a no-no.
The old Time-Life home improvement guides have detailed instructions for a similar workbench in one of their books (the Home Workshop one, I think). They recommend filling in the small gaps between the boards with a paste of white glue and sawdust, then sanding it smooth. They also suggest using a thin sheet of hardboard as a disposable top layer.
This is all optional for most purposes. If you build it well from good lumber, it should be flat and smooth enough for most purposes, and it takes real effort to do more than cosmetic damage to the surface.
I'm in my late 20s and have a great job. High profile, nice people, good compensation, and I find the work I do pretty rewarding. They are in the process of creating a new position for me as a way of promoting me (they'll have to announce it and interview multiple candidates, but I've been informally told that the job is mine). I'll be interviewing for this new position tomorrow.
They created this new position because my particular position is basically unfillable by anyone else and they are trying to ensure they can retain me for the next 5 years or so. It requires a very specific blend of 3 separate knowledge bases that is incredibly rare, not because it's especially difficult or anything, just because its relatively obscure. There are around 6 or 7 people in the country who could do my job and I know all of them (and none of them are looking for a new job). The plan is they'll make me a supervisor and then start hiring up people I can train.
Then, out of the blue, last week I was separately offered my dream job. This is literally the job I dreamed about as a kid and that I've been trying to get into for the last 5 years. I don't want to get into details, since it's a small enough field that people in it would be able to figure out who I am if I mentioned it, but I'm not exaggerating when I say that it's the equivalent for my field of being offered a chance to be an astronaut. There is basically 0 chance I don't accept this position. The catch is that they won't be able to bring me on for another couple of months.
How much notice should I give at my current position, and is it wrong to still interview for the promotion position at my current job even if there's a slim-to-none chance I'd ever accept it? I like my coworkers/leadership and don't want to leave them in the lurch, but it also seems potentially negative to let everyone know I'm a short-timer.
Two months' notice isn't necessarily weird; in fact, that was the requirement at one of my former jobs. Although there's a chance they might just let you go right away, they also might really appreciate you staying and training a replacement. (If you haven't already, this is a good time to read your employee handbook as to what the notice requirements are.)
Like others said, though, don't give notice until it's 100% certain you have the next job. Offers can fall through for reasons out of your control, and it's an awful situation if you already quit your current job. If that means you take your promotion before you quit, it's unfortunate timing, but I don't think you've breached any ethical norms. Moreover, I think most professionals understand this.
I would tell them the straight story as soon as you're certain of your intentions. Bear in mind you have four decades of working career ahead of you, and you never know when you may run into the same people again -- but with some roles reversed. Especially if, as you say, it's a small field. It's always wise not to offend people if you can possibly avoid it.
And in this case, I think you can. In my experience no one is as indispensable as your self-description above implies. MacArthur thought he was indispensable in Korea -- until Truman decided to put that to the test, and it turns out, he wasn't. Your management will not be happy that you have found your dream job, but they will understand when you tell them, and they'll figure it out. It happens. Nobody actually expects you to put your devotion to the team ahead of a dream.
But they will *not* be inclined to understand if you string them along, even a little bit. If you do, it means they will have to go through whatever pain it takes to replace you, or re-organize their approach to things, et cetera, but have to backtrack first, which is more work. And what if an opportunity arises between now and when you tell them to more easily replace you? But they can't take it, because they didn't know...they would definitely hold that against you.
Even leaving ethics aside, I think you should tell them for your peace of mind, so you can look them in the eye when you meet them later, and for their present utility, so they have the maximum time possible to figure out how to deal with the situation.
Putting on my management hat, it's really annoying when someone I was counting on decides to take a better job somewhere else, but it's part of the deal. Or, lack of deal. If there's any sort of explicit commitment, you are ethically bound to abide by it if you reasonably can. And actually taking a job that was explicitly created as an incentive package for you to stick around for another five years, *might* count as that. But it doesn't sound like you've done that yet, so you're a free agent. The company hasn't explicitly promised not to fire you tomorrow, so you're not ethically prohibited from quitting tomorrow.
You are ethically obligated to provide them as much notice as you reasonably can, and by US norms two weeks minimum. But as others have noted, there is some risk involved in telling your employers about your probable new job before you've locked it in, so it may not be reasonable to tell them tomorrow that you'll be leaving in two months or whatever. That's going to depend on details only you can assess. But do try to get a firm commitment on the new job as soon as you reasonably can.
If you have an offer in writing from the new job then accept it, quit your current job gracefully, and take a month off between jobs to see the world or something.
Unpressed free time plus spare money is a combination that doesn't come up so many times in life, you should make the most of it.
Wouldn't you rather be paragliding in Patagonia or something than sitting around in your office running out the clock and feeling vaguely guilty?
Giving notice of a month or two sounds about right to me, and I haven't seen it have negative effects on the last month or two of employment. But giving notice before you are sure you will get the next job is dangerous - your employer will start thinking of replacing you, and you might end up with neither job.
Everyone knows that talking about your next job is dangerous, so nobody expects you to do it, so everybody expects they'll get the notification by surprise.
Assuming you work in the US, I want to push back on the idea that this is a "Professional Ethics Question". This is a professional networking question, a reputation question, a professional relationship question, but it's not a question of ethics at all. Employment in the US is at-will, and employees can cease their working arrangement at any time for any reason. There may be reputational consequences for that, but there is absolutely no ethical violation. It's the intended behavior of the system to allow employees to switch jobs as quickly as possible.
So in deciding how much notice to give your current employer, the only considerations should be things like how this will hurt your relationship with your current coworkers and how much you care about that relationship going forward. Personally? I would probably be very honest very early on because I would want to minimize disappointment, and if you're leaving the job for your dream job anyway, what is the worst they can possibly do to you in a few months? Again, if they were to make your day-to-day job worse, you can always quit earlier. If they understand your field and respect you, then they should understand your decision and be respectful of it.
Your employer is planning on interviewing multiple candidates with no intention of offering the job to any of them. Why is there no consideration of whether this is unethical? It seems equally ethical for you to interview with no intention of taking it.
So at least there's no reason to be terribly concerned with the unfairness of it.
Congrats, btw.
ETA: Anyway, from my experience there's a pretty good chance your boss (or his) would hop ship in a minute if a better deal comes along for him or her, too, regardless of how much they might complain when you resign.
>>>Your employer is planning on interviewing multiple candidates with no intention of offering the job to any of them. Why is there no consideration of whether this is unethical? It seems equally ethical for you to interview with no intention of taking it. So at least there's no reason to be terribly concerned with the unfairness of it.
You know... when you put it that way, this does seem like a much easier call.
I've been strung along by recruiters a lot of late, and am currently waiting to hear back on a too-good-to-be-true offer of my own, so it's what's on my mind.
Agreed with the rest of the comments. If you are absolutely positive you will get the dream job, and that it won't turn, in three months, into "So when do I start?" "Oh, we changed our minds about hiring someone new, sorry!", then take it and let your employers know you will be leaving. Don't interview for the promoted position, as that would be unfair. Use the time before you leave for your dream job to train in someone new to take over. If your employer is unhappy and fires you, well, you have your new job waiting.
If you're *not* 100% certain you will get it, then don't burn your bridges. Go ahead with the interview but let your employer know you have been offered a chance at the dream job. That at least gives them warning and lets them discuss with you if you are leaving for sure or not.
By "chance" I don't mean "there's sort of something I'd be interested in applying for" but "they really want me and I'm going to do an interview there next week". It's not much good to their current employer if they set up the whole 'promotion' interview based on the belief that OP is going to be there for the next five years, then three weeks later he's out the door with "Bye, new job!"
OP's dilemma is that they are one of the few 'make yourself indispensable' types and their current employer needs time to train someone in to do the job. Otherwise, yeah it'd be no problem to keep his mouth shut, do the interview, get the guaranteed long-term job, then hike out the door ten minutes later. It's only courteous to let them know "you really do need to get started on training up a replacement right now, because I may not in fact be around for the next five years" rather than leaving them completely in the lurch.
Again, if they were bad employers, too bad for them, but OP says they've been pretty decent. It's not too difficult to be decent in return. This *of course* all depends on the dream job being a solid certainty, or a very good chance, otherwise keep his mouth shut, do the interview, get the promotion, and if dream job evaporates then he's not lost anything.
I personally more follow what you're describing, but I've explicitly traded better compensation for better colleagues/bosses and better 'working environments'.
But a lot of the standard advice seems like it should be at least _considered_:
- Most people that think they're indispensable aren't.
- Employment is _very_ asymmetric. Bosses/supervisors/managers/employers generally won't put a single employee above the business (and I don't think they should either). But, because of that, employees probably shouldn't risk their employment (and its income and benefits) because of a desire to be courteous.
- Some employers, by policy, will fire employees that are discovered to be searching for another job, e.g. interviewing. There are many circumstances in which that is the most _secure_ policy, e.g. to prevent an employee from exfiltrating sensitive or secret business info.
You're right that it's absolutely:
> ... not much good to their current employer if they set up the whole 'promotion' interview based on the belief that OP is going to be there for the next five years, then three weeks later he's out the door with "Bye, new job!"
But then giving them a reason to replace you _before_ you've secured another job, e.g. you've received a hiring letter or employment contract (and, ideally, accepted the offer or signed the contract), is "not much good" for OP.
I think your advice would make _more_ sense if OP was something like a 'technical cofounder', or if the business was very small.
If OP _does_ convince them to seriously begin training a replacement, but then OP's dream job _doesn't_ materialize, wouldn't their employer _also_ be upset at the costs for finding and training a replacement?
There's a couple of things going on from OP's description. To take your points in order:
(1) I agree that the vast majority of people are not indispensable. We are all very easily replaced. OP says that they are working in a small field and the particular position they hold is a sort of boutique one, where several disparate skills/fields are combined in one position, and it's not easy to hire on a replacement because this position has, as it were, been grown instead of being a standard 'we need an accountant/coder/salesperson' role.
(2) Again, I totally agree: the company is not your family or your friend, no matter how they may try to create this impression so as to wring the maximum work out of you (relying on the guilt of "you don't want to let down your *friends*, do you?" instead of "we aren't going to pay you for this extra work, we expect to be able to call on your time whenever we want"). If it made them tuppence profit, they'd boot OP out the door in the morning.
(3) That being said, if as OP says they have been good to them *and* it's a small field where everyone knows everyone, it's better to stay on good terms if possible. You don't want to get the reputation of being someone who will walk out and leave an employer high and dry if for no other reason but that it will make it much, much harder to get a new job in future.
(4) And it can't be emphasised enough that all the advice is conditional on OP *really* being in a position to walk into the new job. They must have a sure guarantee that it won't be a case of "we changed our minds" if they burn their bridges with the old employer. If it's only something like a casual verbal conversation about a likely new position and 'would you be interested?', ABSOLUTELY do NOT tell their current employer they are going to quit. If it's sure and certain, 'did the interview, offered the job', then it's better to mention that they will be leaving before the current employer sets up the whole fake round of interviews.
(5) As to training up a replacement, even if OP does not leave, they say that the whole point of the fake interview and promotion *is* to enable the current employer to be sure OP will stay for the next few years and to start training up replacements. If OP is the only one at present who can turn the mangle, then if OP gets sick, gets hit by a bus, or (as here) is going to jump ship to a new dream job, the current employer is badly stuck. They should have been training up someone all along, it's late to do it now, but that is their plan. So OP breezing out on them after the interview is going to look bad.
Again, once more with feeling, I am *not* advising OP to tell all in the morning. Only if they are 100% sure the new job is a solid offer and they will be starting in three months' time. Any doubt at all, keep their mouth shut, do the interview, and wait to see what happens. Best case: the new job will come through. Worst case: it never happens, but they retain their old job and have their promotion to boot.
I agree – assuming that OP's description of the situation is accurate (and they haven't left out any other relevant info).
If they're correct that they were "offered [their] dream job" – and it's an 'actual' formal/written offer, and also that "they won't be able to bring [them] on for another couple of months" – then I think it'd be perfectly fine to give notice immediately and offer to help find a replacement for the next "couple of months".
I just wanted to offer some standard and general skepticism (and I also like comment-conversing with you specifically :)).
Your response should hinge on how sure you are that you are getting the "dream job" position. If you are 100% sure, then I would recommend giving your current employer a heads-up so you can start training a replacement and working towards the transition. They will appreciate this, even if they are unhappy about the fact that you are leaving.
If you are less than 100% certain, then you may want to wait until you are certain or be tentative in your discussion with your employer. If you have a good relationship with your supervisor, you may let them know that it's possible you are taking another job, and ask them how they would like you to proceed. Your comfort level will determine how far you are willing to go in giving notice. I would recommend saying nothing if you think it's a 50/50 chance of getting the new job, and personally wouldn't saying anything if less than 60% certain. Once you are more certain, you can change course or reconsider.
Keep in mind that you are under no obligation to tell them anything now. The reasons you might want to tell them are primarily about helping the employer (and this one has been good to you). That may come back to help you later, especially if you want to maintain good contacts in the industry/company or might ever apply to work there again. You're young enough that things could easily change over the course of your career, and you may find yourself needing those people in the future. I've seen that happen literally dozens of times, including with people who left on less than great terms thinking that they would never be back. I've also seen people who go back to a previous employer begging for a job and get told no because of the way they quit. There's a lot of room between your situation and burning bridges, so you have room to give yourself time and go through the interview process now.
Interviewing for the "promotion job" and then not taking it wouldn't be unethical, just potentially annoying to your current employers. It's not unethical to consider and then turn down a job offer. If you were trying to decide between the two jobs, I would say do the interview and then weigh the two offers/use them as negotiating leverage. However, it sounds like this is not your situation, and you're certain you want to take the "dream job".
I would say, if you're 100% sure that the dream job is happening, don't interview for the promotion job, and let your current employers know about your plans right away (unless for some reason you can't let your current employers know about the dream job yet and it's really important to keep up appearances until you do, in which case maybe you still want to go through with the interview).
On the other hand, if there's any chance whatsoever that the dream job offer could be rescinded, or any chance you would decide not to take it after all, go through with the interview, just in case, as a hedge. In this case, you probably don't want to let your current employers know about the dream job offer until you have the official promotion offer in hand. Your current employers should be understanding about this once the situation is settled. Remember that as much as this situation affects your employers, it has a much bigger impact on you and your life, and it's OK to take that into account.
Caveat: I'm pretty sure I work in a different field from you, so there might be cultural norms or other factors to consider that I'm not taking into account in my advice.
Adding to this, the standard advice of not telling your employer about your plans to leave hinges on your current employer potentially firing you because they view you as a liability now. That seems to be very much not your situation.
That happened to me once. Interview for both positions. See what comes through. If the job with the other company comes through after you get your promotion, you can just shrug your shoulders and say, "sorry, this was too good to pass up."
Does anyone have any suggestions for psychiatry-related blogs, other than SSC/ACX, that are written engagingly and talk about interesting topics? I wasn't much of a fan of The Last Psychiatrist's style, but am interested in other suggestions.
I have an original autograph of a certain, now deceased, sportsman. He was pretty controversial, but regained much approval after his death.
I know very little about NFTs, but it seems to me like this might be valuable if converted to one. There also doesn't seem to be any NFTs associated with this person, so this would be the first.
Does anyone have any advice how to go about this? Is this even something which is done?
It really depends on what blockchain you want this to be minted on.
You can mint on OpenSea (a very popular NFT marketplace). It's easy, and you don't have to pay gas fees to mint: https://opensea.io/. You can choose whether it goes on the Ethereum or Polygon blockchain.
Another to look at would be Rarible (https://rarible.com/create/start). You can choose between Ethereum, Flow, or Tezos blockchains.
There are so many options out there, but either of those should suit your needs. Whether or not you'll be able to sell this NFT is a whole different story (that'll mostly come down to marketing), but hope this helps.
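For a rough sense of what an on-chain mint costs when you do pay gas, the arithmetic is just gas units × gas price × ETH price. A minimal sketch, with every number an illustrative assumption (gas use varies by contract, and gas prices swing hour to hour):

```python
# Back-of-the-envelope cost of an on-chain Ethereum mint.
# All inputs are assumptions for illustration -- check live values before minting.

GAS_PER_MINT = 150_000     # ballpark gas for a typical ERC-721 mint (assumed)
GAS_PRICE_GWEI = 50        # network gas price at mint time (swings constantly)
ETH_PRICE_USD = 3_000      # ETH/USD at mint time (assumed)

eth_cost = GAS_PER_MINT * GAS_PRICE_GWEI * 1e-9   # 1 gwei = 1e-9 ETH
usd_cost = eth_cost * ETH_PRICE_USD
print(f"~{eth_cost:.4f} ETH (~${usd_cost:.2f})")  # ~0.0075 ETH (~$22.50)
```

Under those assumptions you'd be out roughly $20-25; lazy minting on OpenSea or minting on Polygon avoids the fee entirely, which is why I'd point a first-timer there.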
I have no knowledge of how to do this practically, but if you do it, could you report back the approximate $ cost of minting one? I'm curious about this.
I'm confused by the ELK problem - it seems to be saying "imagine our AI can ignore 'garbage in, garbage out.' How do we get it to give us a non-garbage answer?" And my immediate response is "if you're getting garbage inputs, how do you know that *any* of the AI's knowledge is correct, latent or not?"
Their first example of the problem goes like this: A robber fiddles with the camera to make it show the diamond is still there, then steals the diamond. The AI still knows that the diamond is missing (they don't say how), but only reports "the camera still shows a diamond," as it was programmed to. And the ARC people are asking "how do we get the AI to tell us that the diamond was actually stolen"?
But a better question would be "how does the AI *know* that the diamond was stolen?" It's much easier to reveal latent knowledge if you know where it might be located.
For instance, maybe the AI is thinking "the camera shows a diamond, but the pressure sensors on the pedestal show the diamond was removed. I conclude that the camera is faulty and the diamond was stolen." So you ask the AI about the pressure sensors, verify that the diamond is missing, and catch the robber. In short, knowing what the AI's reasoning was based on allowed you to duplicate it.
But now, the robber has watched Ocean's Eleven and tries the following trick instead: He messes with the vault's wiring and trips the pressure sensor remotely. The camera shows a diamond, the pressure sensor shows no diamond. The AI informs you that this pattern of data indicates the diamond is being stolen. You quickly rush in and open the vault... and the robber, who was waiting for this chance, makes off with the diamond. Oops.
This is the problem: if the AI is always capable of telling garbage data from real data, then you don't need an interrogation process - you can simply copy the AI's method of gathering information to learn if the diamond is still there. But if the AI is sometimes fallible, then no amount of interrogation is sufficient because there's a possibility that the AI doesn't *have* the knowledge you need, and it's simply reporting what the robber wanted you to see.
Or to put it another way, if it's possible to elicit latent knowledge with perfect 100% accuracy, that means you failed at the design stage, because you could have made that knowledge explicit instead.
Turning garbage input into useful output is easy by comparison; my impression is that they're trying to create an algorithm or process that behaves as a trusted interface to a zero-trust environment, which is considerably harder, if not impossible.
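To make my "copy the AI's method" point concrete, here's a deliberately silly toy model. Everything in it is my own invented illustration (the sensor names and the disagreement rule are hypothetical, not anything from the ARC write-up):

```python
# Toy vault: the world has a latent truth, sensors report on it,
# and a "reporter" decides what to tell the human.

def sensors(diamond_present: bool, robber_tampers_camera: bool):
    camera = True if robber_tampers_camera else diamond_present
    pressure = diamond_present  # the pressure plate is honest in this scenario
    return camera, pressure

def camera_only_reporter(camera, pressure):
    # Reports only the human-checkable channel -- the failure mode above.
    return camera

def fused_reporter(camera, pressure):
    # Uses all channels; on disagreement, trusts the pressure plate.
    # This *is* the AI's "method of gathering information": once you can
    # write the rule down, you never needed to interrogate the AI.
    return pressure if camera != pressure else camera

camera, pressure = sensors(diamond_present=False, robber_tampers_camera=True)
print(camera_only_reporter(camera, pressure))  # True  ("diamond looks fine")
print(fused_reporter(camera, pressure))        # False ("diamond is gone")
```

The punchline is `fused_reporter`: if the disagreement-resolution rule is explicit enough to write down, interrogation is unnecessary – and if it isn't, the AI may not hold the knowledge you're trying to elicit in the first place.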
I tried to read the ELK proposal, but IMHO it is badly written with ambiguous language that needs to be reworded. I would recommend they send it through a couple of editors.
>The AI still knows that the diamond is missing (they don't say how), but only reports "the camera still shows a diamond," as it was programmed to.
Why would you program the AI to not use its judgement? Why spend the money on an AI if you just want to know what the camera shows? You could just set up a feed.
I think the "camera" in this metaphor is supposed to be "the measurements that a human can use to double-check the AI." You don't know how to read all the pressure plates and laser beams and so on, all you can do is either look at the camera (easily gamed by the AI) or try to formulate a question to the AI (requires somehow mapping the incomprehensible gunk of your mind and the AI's so it can understand what you really mean by "has it been stolen?"). And my point is that there's a third problem - the AI can still be drawing incorrect conclusions even if it knows exactly what you mean. And if you can solve this problem, this gives you more information (about the AI or its inputs) which also allows you to solve the ELK problem.
The problem is that you *cannot* copy the AI's method of gathering information. That is why it is the AI and you're the human.
The process is as follows. Overwhelming amount of data in > incomprehensible processing steps > report about the conclusions out. Since you, as a human, are not capable of following the steps that the AI takes, you do not know if those steps result in it reporting about the data truthfully, or if it reports a misleading conclusion that aligns with its incentives.
Shouldn't you just eliminate the binary of stolen/not-stolen and have it report a percent chance? Presumably, if it were a human, it would note that 1 out of 100 sensors had questionable data and decide further investigation (or notification of others) was in order...
My point is that even if the AI is reporting everything it perceives truthfully (whether directly or because of your clever interrogation method), the AI could still be perceiving things incorrectly.
You can substitute "incomprehensibly huge blob of other sensors" for "pressure plate" in my example, and the logic still holds - you can't distinguish the case where the incomprehensible blob is correct and the camera is faulty, from the case where the incomprehensible blob is incorrect and the camera is honest.
And conversely, if a human can tell when the blob is correct or incorrect, then it's not incomprehensible and the human has more information than just the camera.
True, you can't distinguish the two cases, and the problem isn't trying to. The premise of the problem assumes that the AI is perceiving things correctly. You're not getting garbage inputs at all. I don't see what's so confusing about this.
That assumption hides an important piece of information - the AI's ability to translate its sensor inputs into an accurate model of the world. If the AI is perceiving things correctly (and you can prove this), then the only question you ever need to ask is "where is the diamond?" The AI has solved the problem of filtering out garbage inputs, and this necessarily includes "garbage inputs that might fool the human watching the camera but not the AI."
If, on the other hand, you can't be sure that the AI is perceiving things correctly, then it's impossible to elicit 100% perfect information because the AI itself doesn't have such information. You can never prove it to be safe, no matter how friendly your AI is or how good your questioning. It would be hard to even build the AI in the first place, since you don't have a way to debug or test it.
This problem is like asking "Suppose we built a chess engine that always knows if a given position leads to checkmate, but doesn't say what move we should make. How do we figure out what the right move is?" But finding out the winning move is a necessary step in finding a checkmate! Likewise, developing an accurate model of where the diamond is in spite of the robber's attempts to fool you is an integral part of making the vault-guarding AI.
That piece of information isn't hidden, it's a central part of the problem. I get the impression you're trying to solve a completely different problem than the one in question. I'm not able to explain this any more clearly or concisely than the documentation, so I really recommend that you read that instead.
The point of the problem is that even in the easy case where the AI is perceiving things correctly, where it has some good reason for trusting sensor A over sensor B (maybe the last thing the camera saw was a man in a ski mask walking up to it with a screwdriver), it's still not trivial to get the information we want out. They are trying to solve the easy case.
My argument is that "make sure the AI is perceiving things correctly" is an easier question than "make sure the AI perceives things correctly, and is correctly leveraging those perceptions to do what you want." One is a prerequisite for the other.
I agree there's a problem with eliciting perfect knowledge. However, I also think the question cuts at something else as well, which is how difficult it is to communicate with something that doesn't have the same wiring and broad experiences as yourself.
E.g., imagine a simpler version of the problem: "you reward the AI with utils for reporting that the diamond appears in front of the camera," because you don't know how to articulate all the possible ways that the diamond could be stolen while still appearing that way to the sensors. In the case of the diamond *actually* being stolen, the AI now solves the 'problem' of needing to show you the image of the diamond on the camera when there is no diamond there, which it might do all sorts of ways, e.g. by taking a few frames from earlier and repeating them.
You don't even need to think of the AI as being something like malevolent -- imagine the AI is an alien being that has no concept of diamonds or cameras or theft, and instead is simply trying its best to solve a problem that has been presented to it. "You must show me the picture of this diamond on this pedestal" -- well, easy-peasy until the diamond disappears. Now what? How do we solve this new problem of being rewarded for showing the diamond, but there is no diamond?
I keep circling around to something like, "you need to have a competitor" (an 'adversary' that challenges whether you got it right in some fashion), and "you need to answer a question about what the population who cares about this would say" (and ideally a way to poll the population in question – however, this embeds the difficulties of aggregating preferences, its own difficulty).
Yeah, I do think the communication problem is fair - I read further into the paper and they seem to be doing some interesting work to operationalize "how do we take a big blob of neural network and assign human meaning to its innards, and how do we do that algorithmically so we can do it with any weird AI we invent in the future?" But I do think that a generalized, provably-perfect solution is either impossible or AI-complete - it's basically asking "how do we make an AI that knows what you mean and never answers in a way you would consider misleading?"
I don't know if an *adversary* is necessary, but I do think "conversation" is a necessary part. The AI needs to know when there's uncertainty in the question (whether because it was ambiguously phrased or the human genuinely doesn't know what exactly they want) and be able to say "more information is needed" rather than "I'm pretty sure you wanted to turn the universe into paperclips."
I’ve been an ethical vegetarian for all of my life. I take the stance of “I don’t eat meat because I have good alternatives, but if I was stranded on a desert island, yeah I’d murder an animal.” I’m not vegan.
So with that background said, there are good odds I'll get diagnosed with something like celiac disease and have to cut all gluten out of my diet. It's not clear to me that I'll be able to stay vegetarian after that. So I have a few questions:
TLDR:
1) Anyone out there who is a gluten free vegetarian? how’s that going for you?
2) Is there a non-gluten version of “vital wheat gluten” (what I use in my fake meat recipes)?
3) If I’m going to learn how to cook meat, are there any good beginner but non-kids cookbooks out there you’d recommend?
1) Not allergic, but I rarely ever eat gluten because of a family member who doesn't. Currently vegan for ethical reasons. Going great so far.
2) For meat alternatives, I recommend tempeh or soy crumbles (TVP). TVP, like vital wheat gluten, can be bought in bulk relatively cheaply, then seasoned to match lots of different cuisines. Tofu is also the "classic"; I'm not personally a huge fan, but some love it. There are also plenty of store-bought options; for example, Gardein beef-style crumbles are gluten-free.
3) N/A. Would not recommend meat.
Bonus: would highly recommend the brand Enjoy Life, which is always gluten-free and never contains animal products. Their stuff is pretty dang good when looking for snacks/sweets
Would be happy to help if you had any more questions :-)
Fish can be a happy medium, and I know at least one vegetarian-inclined individual who won't eat meat but will eat fish.
I find that literally entering into Google, "easy quick tasty healthy instant pot/slow cooker/pan fried/baked salmon fillets" or whatever – tacking on the keywords of what you need – gives you twenty bajillion recipes, through which you can read and decide which are actually easy, which have impossible-to-find ingredients, which overlap with what. This will point you to better and worse cooking blogs (e.g., Natasha's Kitchen is great but very involved, and above my cooking skill level for now).
My girlfriend has celiac, and she's been at least a vegetarian for many years now, and so am I for all purposes nowadays. For January we're going full vegan, and really it hasn't been much of a problem, although I miss cheese somewhat.
The key is she likes to cook and is very good at it (I don't really, but I try to help as much as I can). A lot of foods like pasta and bread have gluten-free versions that are very good. We're in Eastern Europe; if you're further west, the selection is likely a lot better.
For protein we eat a lot of legumes like beans and chickpeas; soy and tofu can be very tasty if prepared nicely, and I've also been surprised generally by the number of fake burger-type things that are vegan and gluten-free. I'm sure it's not the healthiest thing around, but you can't be perfect all the time. Plant-based dairy imitations have also gotten very good, to the point where I prefer plant-based milks and yoghurts to the real thing most of the time.
By all indications we're in great health, I do a moderate amount of sports - if I started full-on lifting again, I might need to get a protein shake, but for my current activity levels it's not an issue.
The biggest pain is going out – finding something that's both vegetarian and gluten-free can be a struggle, especially when restaurants will do things like throw in croutons or soy sauce or any number of gluteny things that were absolutely not listed on the menu or the allergy list.
To summarize, it can work well but your whole diet needs to be built around it to a large extent.
Interestingly, where I am in suburban Texas, every restaurant that's trying to seem moderately nice has a bunch of menu items they claim are gluten free, but many of them don't even bother to indicate whether they think any of their menu items are vegetarian. (I don't know how accurate their assessments are of the gluten-free-ness of the food, but "gluten free" has enough mindshare in Texas that they market a flourless chocolate cake as "gluten free brownie" without bothering to indicate that this is in fact a traditional flourless chocolate cake rather than a brownie with some weird gluten-free flour).
My niece is a gluten free vegetarian. When we go out, she usually orders vegan sushi. I don't know how healthy her diet actually is but she always finds something to eat. Look into raw food "cookbooks," as they tend to be vegan and gluten free. They end up using a lot of cashews, coconut, and avocado. Other protein options are chia and teff, and of course tofu. In terms of straight up vital protein, try Bob's Red Mill Textured Vegetable Protein.
My advice to you is to simply realize that ethical vegetarianism is internally incoherent and abandon it: it either implies we should eliminate all predators, which would demolish Earth's ecosystem, or that humans have a special responsibility toward ethics due to superior intelligence, which automatically puts us in the "spiritually special" niche which is the traditional justification for us to kill and eat animals that ethical vegetarianism rejects.
Also, I know three separate people who used to be vegetarians, individually concluded for various reasons that they should not be, and reported increased vigor, health and energy upon once again regularly cramming their faces with beef, so even if fully healthy vegetarianism is possible, it's clearly not as easy as people like to claim.
Ethical vegetarianism doesn't say you have an obligation to eliminate all meat-eaters. It says you yourself should not be a meat eater.
A parallel moral theory says you shouldn't gratuitously insult people, but also doesn't require that you eliminate every human who gratuitously insults people, and in fact doesn't even ask that you imprison them or fine them. You can think that something is wrong while also thinking that most ways we have of preventing others from doing that thing are even more wrong.
I absolutely look at it in terms of utility maximization. However, just because someone doing X produces less utility than them not doing X doesn't mean that me *preventing* them from doing X is better than just letting them not do X. Often my means of prevention cause all sorts of worse problems (having laws that punish people who gratuitously insult others would clearly produce a lot more problems than it solves).
I don't think this is any more plausible than the initial plausibility of the idea that if it's bad for people to get drunk, then it should be good to make it illegal for people to get drunk. Trying to make large changes to evolved ecosystems is likely to have more unforeseen side effects than mere prohibition of alcohol.
I believe that there are consistent moral frameworks that do _not_ require being an ethical vegetarian (I follow such a framework myself, so I certainly believe it's internally consistent), but it doesn't at all follow that ethical vegetarianism is inconsistent, given that there is no "one" correct ethical framework. If pressed, I could probably come up with 2 or 3 ways in which vegetarianism makes sense, even if they aren't positions I personally subscribe to.
It's probably true that many/most ethical vegetarians have inconsistent personal reasoning that could have lots of holes poked in it, but it isn't at all true that this is _necessarily_ the case, and without knowing this person's reasons, we certainly can't assume it.
Even more important though, consistent or not, why do you care? If someone else has a nonsensical ethical framework, if it isn't causing me or anyone else any harm, then why would I try to point out whatever flaws there may be without them inviting such a discussion? I don't see the value in trying to convince even the non-logical vegetarians that they are making some kind of ethical mistake. It seems sort of condescending.
> My advice to you is to simply realize that ethical vegetarianism is internally incoherent and abandon it: it either implies we should eliminate all predators, which would demolish Earth's ecosystem, or that humans have a special responsibility toward ethics due to superior intelligence, which automatically puts us in the "spiritually special" niche which is the traditional justification for us to kill and eat animals that ethical vegetarianism rejects.
It does nothing of the sort. Yes, humans are marked out as special in that we are able to make moral choices, but most people, when deciding whether some creature is morally relevant, don't care about whether that creature has the ability to make moral choices. They care about other stuff, like whether the creature can suffer. That's why we still think infants matter morally, even though they can't make moral choices.
Animal predators don't have the same agency as humans in choosing what to consume, so they are exempt from any ethical constraints in their dietary choices. Also, I don't see how being "spiritually special" inherently provides justification to kill and eat animals when the choice not to do that is available. I am not a vegetarian, but your arguments do not make sense to me.
If animals have exactly the same right as humans, then yes we would be obligated to protect them from predators just as we are obligated to protect humans from being killed. But why can't there be an intermediate level of personhood, where you don't have all the rights of humans, but you have more rights than a rock? For example, it's unethical to kill you, but it's not unethical to fail to intervene when something else tries to kill you. Or it's ethical to kill you, but it's not ethical to cause you pain. Or it's ethical to cause you pain if it's for the purpose of meat production, but it's not ethical to cause you pain for sadistic pleasure. I don't know if any of these positions are true, but they're all internally coherent.
Re: #3 - I love and highly recommend "Salt, Fat, Acid, Heat" by Samin Nosrat and "Twelve Recipes" by Cal Peternell. They both dive into how cooking works and offer really excellent recipes, too.
I also rely heavily on "Mastering the Art of French Cooking" when I'm dealing with an unfamiliar cut of meat, because Julia Child is an excellent teacher and provides a good overview of how to approach different types of meat. If you want to go super nerdy on the science of cooking, "The Food Lab" by Kenji Lopez-Alt goes deep.
On a related, but slightly different note, Adam Ragusea has really good cooking videos on YouTube (as well as broader food culture ones) that focus in particular on the ways in which his recommendations differ from traditional ones, and tries to explain *why* they differ. That is, he explains why the Chesterton's fences are there, so that you can understand why your situation might be one where you do things the traditionally recommended way, or one where an alternative is either better, or so much more convenient that it's worth doing even though it's a little worse.
And although he does eat meat himself, a lot of his recipes are vegetarian.
> 2) Is there a non-gluten version of “vital wheat gluten” (what I use in my fake meat recipes)?
Have you checked out fake meat lately? I've been an ethical vegetarian for 15 years now, and the past 3 (basically since Impossible meat came on the market) have been far and away the best. Beyond's "meat" is gluten-free, tastes indistinguishable from the real thing, and is available almost everywhere.
So I'm not a strict vegetarian, but I do have celiac, I try to eat less meat, and I've thought about this quite a bit. I could go vegetarian and not have nutritional issues, but it would be a fair bit more work, which is why I haven't so far. Definitely do not try to go vegan if you have celiac. Dairy and eggs are incredibly useful.
There are a number of things that can sort of substitute for wheat gluten, like xanthan gum, the water from cooking chickpeas, corn starch, and so on. They’re used a lot in gluten-free baked goods. None are quite as universally good as actual wheat gluten, but using the right one or a combination can work fairly well.
I would largely avoid buying fake meat products if you're trying to eat vegetarian as someone with celiac. Not only are they less common, it can also be hard to verify the supply chain of manufactured products for cross-contamination potential, outside of a "certified gluten-free" sticker in the US or a similar protected claim elsewhere. If you're making your own, that's a bit different, and you can definitely make it work.
One aside, stay away from most oat-based products. Not only is oat gluten similar enough to wheat gluten that it may trigger your celiac on its own, but oats are really likely to be cross-contaminated with wheat due to how they are grown and processed, and the regulations in the US aren’t sufficient, so things like Cheerios can be marked “gluten-free” but will often cause people with celiac to react. Look for purity protocol oats, which are grown and processed separately if you do want oats and can tolerate them. They’re a lot more expensive.
As far as cooking meat, I don't have much specific advice or a good resource to point to, but it's not too difficult. Early on, cooking too long is better than not long enough in terms of safety; over time you'll learn how to time stuff exactly right. Mostly, just minimize the amount of time your meat spends at room temperature, and clean surfaces and hands religiously.
1) Gluten-free pizza crusts, at least at restaurants, seem to have gotten a lot better over the last decade. They may not taste like normal pizza dough, but I'm not gluten-free and I've been to some places where I'd order the gluten-free crust because it makes for an interesting change.
3) Not a cookbook, but this will help make cooking with meat less challenging and more flavorful. If you do end up needing to add meat to your diet, I'd strongly recommend getting a sous vide for cooking (Breville Joule and Anova were the two top brands when I was shopping for one). For most of my adult life I have overcooked meat because of fear of e. coli or salmonella or other boogeybacteria. The sous vide will cook your meat to the exact temperature you need, and no more – and cook it all the way through. You can also seal spices/herbs/oil/butter in the bag (or use reusable silicone bags or ziploc bags – ziploc works for meat cooked to a lower temperature), which will help give the meat more flavor. Once the meat is finished in the sous vide, you can put a sear on it, and that's an improvement over regular cooking too, because the sear is just for flavor (the meat is already cooked all the way through), so you don't have to leave it searing long enough to have that gray zone inside the sear. Using a sous vide was a major upgrade for me when cooking meat at home – I used to just hope I could keep it palatable, and now I can make some dishes that taste close to restaurant quality.
Has anyone explored the Kabbalistic significance of the Flying Spaghetti Monster?
It looks something like a deity out of Lovecraft. Summoned by people who were mocking the concept of deities… so where does that lead, from a Kabbalistic perspective?
Discussed this with some of my Kabbalist friends, and the consensus was Hod (the 8th sefirah), because Hermes is the trickster god, and Hermes is correlated with the planet Mercury, which is correlated with Hod.
I studied psychology and neurology back in 2006-2011 (did fMRI research, and the quality of research in that field was so bad it put me off academia for life).
At the time, a major topic of debate was whether the DSM5 (diagnostic manual which at the time was a fairly recent update of the relatively slimmer DSM4) had massively overreached in terms of medicalising normal behaviour (and also perhaps being unduly influenced by industry).
For anyone still working in the field: is this debate still ongoing? Is opinion swinging either way?
I only got back into clinical psychology in 2021, and the DSM comes up regularly in my particular niche. There are a lot of things to fault the DSM5 for, but medicalizing normal behavior is not one that I've ever heard come up. You could argue that the pharma industry influence means that practitioners are more likely to prescribe expensive medication even if therapy would be more effective. Is that true? No idea! But it seems like a debate that someone would have had.
Personally, the term 'medicalizing normal behavior' strikes me as a bit ridiculous, because no psychiatrists are going around to people's homes uninvited, DSM5 in hand, checking whether any normally-behaving people qualify for a diagnosis so they can push some pills on them. If someone is seeing a psychiatrist, that usually means they have a problem they want help with, and the psychiatrist then pulls out the DSM5 to see how they can help. Would it be better if it said "You need to be at least THIS mentally ill to qualify for treatment"?
I think the DSM serves its role adequately (and no better) as a sort of catalog of mental disorders. It's useful to know what symptoms commonly co-occur, which disorders they are usually typical of, and what treatments have been found effective. It's the opposite of a problem if you read it and think "I have all these things and I'm completely normal!" Good for you! Other people may not be so lucky.
Also, doesn't pretty much every diagnosis in DSM5 (and DSM4) require that the patient's quality of life be adversely affected? Whether or not something is normal behavior, if it is screwing up people's quality of life, it seems worth investigating and trying to correct. (We could ask the author of "DSM-5 Made Easy." He is lurking around here somewhere.)
Lots of companies sell research materials but don't disclose what they actually are. This hurts reproducibility of science and has caused lots of frustration for me personally. Companies shouldn't use trade secrets; the patent system exists for a good reason.
I'm moving to SF from Lisbon this week (reach out if you're in the area). I've been considering what I'm hoping to find in SF, and why other cities haven't felt like a fit. I've boiled it down to this list:
1. Communities I want to engage with; in particular, technically motivated, scientifically engaged, board game playing, rock climbing, yoga-doing, NERDS. Where do I find them?!
2. Evidence of non-conformist attitudes (weirdos! Weirdos everywhere!) Otherwise, I seem to get bored of the city in about a year.
3. A culture around giving a shit, at work and in general. So, not Lisbon. Not Oslo. But,
4. Events and places worth going to! Life! What San Jose was devoid of.
5. Nice nearby places to do my outdoors hobbies (climbing, running, yoga)
6. Walkability and/or bike-ability
That boils down to basically, San Francisco, Austin, and New York in the US, and Berlin, Melbourne, and Taipei outside of the US. Portland looks to be a bit small. I’ve heard Seattle underperforms on weirdos.
I'd be curious to know what other people prioritize in places they've chosen to live.
Surely, you would know SF if you lived in San Jose. Arguably it’s the same metro area. Also I’m pretty sure you don’t have an exhaustive list of walkable or bikable cities.
Is Austin clearly better on these fronts than Seattle and Portland? (Perhaps 3 is missing in Portland, but I would think that Seattle is comparable to Austin on all fronts, and perhaps slightly better for climbing and walkability/bikeability, unless cost of living is high enough that an East Austin rent would put you in the suburbs in Seattle.)
On two short visits, I got the impression that Seattle's culture leans into 1, 3, and 5, but underperforms on 2 and 4, being generally a city with an older median age. My impression of SF is that it's slightly smaller, but less sprawling, slightly more expensive inside the city, has better weather, and has a long history of pushing back on cultural norms that I'm plausibly attracted to. Have you spent much time in Seattle?
I haven't spent a lot of time in Seattle, just been for conferences at various points (usually in the summer, which I'm sure gives me an unrealistic pleasant vision of biking conditions there). I spent 10 years in the Bay Area, but never lived in SF itself, and have now lived in Texas for 7 years, but only spent one pandemic semester living in Austin itself.
Oddly, the US Census tells me that SF County has median age 38.1, King County (Seattle) has median age 37.1, and Travis County (Austin) has median age 33.9, but when I take a moment to think I realize this is basically just telling me that there are more children in Travis County than in SF or King County, and isn't really telling me about the adults.
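To check that intuition, here's a toy example with made-up numbers: two cities whose adult populations are identically distributed, where the overall medians differ anyway purely because of the share of children:

```python
# Two made-up cities with *identical* adult age distributions;
# only the share of children differs.
from statistics import median

adults = list(range(20, 71))     # one adult at each age from 20 to 70
kids_few = [10] * 5              # SF-like: few children
kids_many = [10] * 40            # Travis-County-like: many children

print(median(adults + kids_few))   # 42.5 -- higher overall median
print(median(adults + kids_many))  # 25   -- lower median, same adults
```

Same adults, very different headline medians, which is why the county figures don't tell us much about what the adult populations look like.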
I hadn't thought to look, thanks for updating my priors.
An alternative plausible explanation might be that the lower cost of living allows for a younger population, including but not limited to, raising children.
Yeah, the weirdo terminology is inherently rather overloaded, in particular around the agency of the person being declared "weird."
I use "weird" to signal something between:
- person has gone to the effort to strip down and interrogate the norms they've been handed, while having the presence to maintain and upgrade their own set of norms, ones that don't benefit from regular normative reinforcement, preferably without being too edgelord about it
and,
- person simply makes for more interesting conversation than average
but risk including persons without social graces, with varying degrees of self awareness.
Scott had a post a while ago about how scientists seem to often go through an edgelord phase, until figuring out how to play within a system without stepping on its tender spots. I think being "weird" in the ways described is reasonably similar.
My first priority in choosing a place to live is that it not be a city. Cities have historically been where city dwellers congregate. City dwellers typically are:
1. Not self-sufficient by choice
2. Users, those who by nature take advantage of others
3. Clingers
4. Prone to degeneracy and revel in it when they find ways that fit their proclivities
On the other hand, they are subject to a lot of diseases, so they get a really good immune system. Which leads bacteria and viruses to become extra-virulent. Which makes it really bad when city dwellers go back to the land for whatever reason.
Just ask the surviving Indians. Hard to ask the dead ones.
If I were a surviving Indian, I would be mad as hell that the government of the United States brought their diseases from their cities out to my wilderness homeland and deliberately infested my ancestors with them, and then stole the wilderness homeland and ruined it for habitation by what had been a higher life form.
Rural people ARE NOT subject to anywhere near the diseases of SHITIES. We face other dangers, yes, from accidents and such.
The sad thing is that your subscriptions indicate that you're either a high-effort troll, or you genuinely believe this near-strawman version of what a rural person thinks.
Also, as a proud Indian, fuck RIGHT off with dragging Indians into your weird little feud.
I've lived in London, Plymouth England, Glasgow, Valletta, Downtown Manhattan, Portland OR, Palo Alto, Mountain View, San Jose and currently, Bristol England.
San Jose/Silicon Valley is a bit of a cultural desert but you can't beat it for outdoors stuff. Plenty of yoga, board games etc. You need a car to get anywhere.
Big cities like London & New York have everything you need (except maybe the outdoorsy stuff) but it takes a lot of effort to find it. I didn't make a single friend outside of work in two years in NY but both cities are wonderful places to live.
Portland and Bristol are both awesome. Don't discount them because they are not huge. Both have lots of arty types dressed in black, students & activists, theatre, live music and fantastic beer on every corner. Both very walkable and cycleable and plenty of outdoorsy stuff nearby. I didn't own a car in London, Portland, NY or Bristol.
The biggest game changer for me is the ability to meet people outside of work. I like to go for an occasional beer in a pub and in Silicon Valley, I spoke to maybe 10 strangers in pubs in 23 years. In Bristol, I meet that many people in a week, belying the English reputation for coldness. I don't think I made eye contact in two years in New York. Portland is quite like Bristol in this regard; London is somewhere in the middle.
I'm interested to hear your opinion of Glasgow! I would think, given what you seem to value, it should have performed fairly well, but you don't mention it.
I lived a little bit outside Glasgow so I didn't get to know it that well. This was in the 80s too, and Glasgow has changed a ton since then. I did see the best concert ever at The Barrowlands though! The Pogues with guest performers Kirsty MacColl & Joe Strummer, and before Shane MacGowan was too far gone. It was the concert where they debuted Fairytale of New York! Terrific!
I might be underestimating Portland! Thanks for the detailed comment. Your mention of being able to meet people out of work is something I might have stated more explicitly in the first point; I felt somewhat alienated by the less-than-social environment of Oslo and San Jose, in exactly the way you describe.
I've been to New York, and for whatever reason I tend to feel simultaneously overwhelmed by the city and a bit unsure of how to "do social" in New York. I suspected London would be the same, and I'm not certain whether there's actually a way to live in a foreign country without still having to pay US taxes.
If you are an "American Person" (i.e., an American Citizen or green card holder), you have to pay taxes on your worldwide income. Most countries have a double taxation agreement with the US, though, under which you can credit taxes paid overseas against your US taxes.
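A toy version of that arithmetic, with every figure invented – the real rules (foreign earned income exclusion, credit limits, carryovers, treaty specifics) are much messier, so treat this as the shape of the calculation rather than tax advice:

```python
# Simplified foreign tax credit arithmetic -- illustrative numbers only.

foreign_income = 100_000
foreign_tax_paid = 25_000
us_tax_on_that_income = 18_000   # hypothetical US liability on the same income

credit = min(foreign_tax_paid, us_tax_on_that_income)  # credit is capped
us_tax_owed = us_tax_on_that_income - credit
print(us_tax_owed)  # 0 -- the higher foreign tax wipes out the US bill
```

The key feature is the cap: if the foreign rate is higher than the US rate, you typically owe the US nothing extra; if it's lower, you pay the US the difference.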
Speaking the local language is my criterion. I prefer to be able to move outside the expat/anglophone bubble. I don't speak Portuguese, so I would not be attracted to that city even though it might have a lot to offer.
I described myself as "new age boring" on an internal work call the other day because of my interests in Psychology (specifically positive psychology), Cryptocurrency, Psychedelics, and health (e.g. bio-mechanics and breathing practices).
Anyone else identify with this "new age boring" set of ideas? What else would fall into this category?
Ever since I was young, maybe around six or so, on nights before special days I looked forward to a lot (like my birthday), I agonized over the realization that my current self, which I equated with my current train of thought, would end once I fell asleep, so this current longing that I had for whatever would happen on the next day would never actually be fulfilled. This current longing would end with my current train of thought. This made me really sad; it felt like my current self would die and get replaced the next morning by a somehow very related self. This new self would be very new in its essential emotional state, in that way (but not in all ways) discontinuous from my current self. I continued to have this feeling from time to time (maybe 2-3 times per year, less later on) until well into my twenties. At some point it somehow stopped. I have not talked about this often, but when I did, I never found anybody who could relate. Does anybody have any thoughts on that?
I remember asking my parents about this as a kid and not being satisfied by the answers. Eventually the dread went away ... I would like to say because of some grand philosophical insight, but really because I learned to get distracted by more concrete, less introspective thoughts.
One thing I have noticed as an adult is that my train of thought actually gets interrupted fairly often even in the waking hours. It is not unusual that something like the following happens: one morning I have been concentrating on my programming work for an hour or two. Then I am suddenly startled by wind rattling the windows. Then I turn back to work: I was distracted only briefly, so it is quite easy to pick up where I left off. But if I stop to think a bit more, it is certainly already much more difficult to recall what exactly I had been doing and thinking an hour ago, before I sat down and started programming in the first place.
Sure, I work remotely, so I had breakfast and drank coffee, but which coffee mug did I pick, and why did I choose it? The special colorful one I particularly like, or one of the set of boring ones (because there are more of them in the cupboard)? I can try to recall, but that particular train of thought I had while making the decision has already been utterly lost. And that is about as well as I can remember what I was doing the previous evening. With some effortful concentration I can recover and remember more details, but the exact train of thought is already gone.
Like, I kind of get what you're saying, but this sounds arbitrary to me. Why do you consider sleep to be the time that you change from one self to another? Your current self dies and gets replaced every nanosecond of every day, you are never the same person you were in any instant.
I still kinda feel that my current self dies each night, but since there is nothing I can do about it anyway (staying awake would only postpone the inevitable for a few hours, at a cost to my future selves), it seems better to spend those last moments thinking about something pleasant, rather than worrying.
The philosopher Derek Parfit takes this idea in the opposite direction. We know by definition that the current self won't experience the thing that is being anticipated for the future self. But yet we have some positive feelings for the fact that this future self will get it. Those positive feelings are real. Although the future self is *more* like the present self than all the other selves that exist, all the other human selves are in fact still a lot like the future self and the present self, so whatever positive feelings one has for the future self that gets to do the fun thing, can also be had for any other person that gets to do something good. As long as the longing is understood sufficiently impersonally, the longing is in fact satisfied, and this sufficiently impersonal longing is, he argues, the foundation of morality. As he put it somewhere, “My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air.”
If you take a generalized version of Darwinism into account (i.e. ask what is the economically optimal stance in the sense of global productivity and civilizational continuity), then the non-ego-supremacist version (i.e. the universal value) wins, except for edge cases. To see that is a simple matter of the coincidence of such goals with the universalist position: a universalist society favors the entirety of society, while an egoistic society only favors each individual, with limited cooperation options. So when a civilization-wide challenge comes up, the cooperative society should naturally emerge prevalent under most conditions. This is not a strictly genetic form of evolution, but more of a learned optimality or survival-condition conclusion. You don't need to iterate this scenario 1000 times for our genes to change our minds into this paradigm – we can simply learn it from being fairly rational and universal learners. In fact, due to climate change and existential risks, such iteration at the largest scales probably can't be afforded.
Now, that universalism *still* has problems... it values the whole, but it's unspecified *what* about the whole is valuable (without such a specification, you basically get The Borg, or Grey Goo, or something). The value is in the human (and generally conscious) experience of each individual and the collective experience of the network of individuals (I think the Star Trek Federation ideal really approaches this as well, to give another fictitious example).
A cooperative society can function via individuals acting on individual incentives, without those individuals losing focus on their own egos. Losing that would probably require some sort of brain alteration out of "We" or "Brave New World", since group selection isn't going to work on a genetic level.
That misses the point. That's the proximate explanation for why I actually have this kind of behavior. But it's not a *reason* for me *to* have this behavior.
You've already got the behavior. Why do you need a reason to have it? I'm an emotivist/non-cognitivist who doesn't think there are objective normative truths, but this is an area where I'm going to cite Egan's Law: "It all adds up to normality" https://www.overcomingbias.com/2008/06/living-in-many.html
I think your melancholy here is/was too focused; why do you believe your waking train of thought to have continuity, in terms of identity? As you have experiences and perceive the passage of time, your identity constantly shifts. Is your point of hesitation effectively "discontinuities" with respect to time on the n-dimensional curve that is "you"?
In short, our identity is constantly shifting, and you are not the "you" you were 5 minutes ago.
(I would actually recommend the entire works of this comic, as it provides a light introduction to many famous philosophers and ideas but is also very funny imo, though the linked comic isn’t)
I would like to register an anti-recommendation to that comic in the strongest possible terms. Its politics are awful (and it's an extremely political comic strip) and would demonstrably lead to tens of millions starving and dead within decades, since he has the same view of political economy as Mao.
'You' are a network of electrical events in a mass of neural cells. The notion of self is an emergent phenomenon used to explain observations and act effectively in the world. The self is important in this way. But it also has limitations. The boundary drawn around these events is arbitrary – we could come up with larger divisions that include multiple individuals (i.e. organizations) that communicate between themselves. Or we might come up with smaller divisions, like the left and right halves of your brain, or even smaller specialized regions of your brain. The core question is a pragmatic one: what definition of 'self' is the most useful and most sustainable? The evolutionary and mostly historical answer is the common individual. But it is important to keep perspective, to understand that we are part of a larger system, and that this mass of events changes in character not only from one day to the next, but from one moment to the next, as new information and knowledge about the world comes in, as subtle neuroplasticity and learning effects take place, as the state of your mind evolves, and as the world itself changes.
I don't think it's rational to cling too strongly to the self, i.e. to feel attached to a particular state and fearful that it "goes away", as our instincts of self-preservation suggest. It's only rational insofar as it preserves the continuity (or creation) of beneficial states of mind, and only insofar as you have influence over them. I think most information we acquire allows us to mature and sustain those good states of mind (hopefully including this knowledge about the limitations of the concept of self!), so that aspect of learning is something to embrace and cherish. On the other hand, some properties of the brain, such as neuroplasticity, change with age; however, we have little control over that, so it's no use getting sad over this particular set of inevitable (and not necessarily malign as a whole) changes.
The bigger picture is also a great avenue into non-egoistic ethics, I think... the self (the human individual) is ultimately just a boundary promoted by evolution, and fixation on it is a cause of a huge array of civilizational issues (not every issue, to be sure, but close... all failures of coordination, like climate change, etc.). But it *is* an important boundary for how we operate society, at the same time, for organizational purposes. You just have to keep in mind that it's not worth attaching any supremacy to it.
Ultimately the greatest boundary of every living creature is the one that makes the most sense to me to assign value to.
I've thought about this, especially how going to sleep and waking up is important for me because it's a form of "rebirth". At times when I didn't sleep much, I found that I really missed this "daily reset".
On the other hand, I've also felt a deep melancholia when the sun is rising after a long, good night with friends (sometimes with MDMA helping). I would describe this feeling as "realizing the impermanence of things", everything ends eventually, and in some ways it's important and hopeful, and in others it's terrible and sad.
I've also thought about the discontinuity of experience in the case of scenarios like brain uploading or cloning. If you build a perfect copy of someone, and destroy the initial person, for the world nothing changed, but I would still call it "death" for myself, the person being left behind.
If you've never heard of it, there's a game that kinda explores those thoughts, especially around the cloning part, called SOMA. Fair warning, this is mostly through the horror/psychological horror lens.
Totally vague question here. I've been a software engineer for many years, working at a big software company. I've enjoyed my time here, but as pressure has mounted to always be moving up and up and up, and after getting promoted a few times, I find myself enjoying work less and dreading it more. Sure, I make a lot more money than I did when I started out, but what's that worth if I don't enjoy my job – and I already made a ton of money when I started out. Work seems to involve so much damn coordination and management of other people, it's all just a logistical nightmare to get the smallest things done, and there's so much to keep on top of. It's truly exhausting.
I wonder if the root of my problems could be that I accepted promotion (or rather, they promoted me without really asking), and I should have remained lower level. I'm the sort of person who likes being a jack of all trades, enjoys the learning process, enjoys helping others a lot, but is less enthusiastic about really becoming a master and the leader. But, I don't know if any company really wants someone who doesn't seek to be the best of the best and the next leader of gigantic initiatives.
So what should I do? Is it possible to willingly move backward in my career? Or should I try out a smaller tech company? Or maybe I should get out of business-driven software entirely. But if I did that, I don't really know what would be my other options, big tech is really all I've ever known.
Getting this sort of thing to be more normalized would help avoid the perils of the "Peter principle", whereby anyone who is good at something is continually promoted to new jobs, until they find something they are not good at, which they then keep doing for life. If everyone was happy to move back one step at that point, the world would be a lot better.
Yeah. The problem, for at least my company, is that the incentives for the company are not really aligned with anyone inside the company. The company explicitly does not want anyone to get too comfortable in their role. It's a gamble, and one that's worked well for them. They're effectively making a tradeoff, accepting that they have higher attrition rates, but in turn, they get the occasional employees that are superstars, who they push into leadership who end up doing amazing things. This strategy has some merit, after all, we're all familiar with the idea that someone who's been around a company long enough knows too much, and no one wants to have to replace him because it'll be a major hassle, even if he isn't really performing that great anymore. My company basically says, screw it, get rid of that person, and we'll manage to find another. This is all great for the company, but less great for the people inside the company, unless they are real type-A go-getters who want to rise to the very top of the corporate ladder.
> the incentives for the company are not really aligned with anyone inside the company
This. From the company's perspective, the 1% probability that you become a superstar leader is totally worth the 99% probability that they will just make your life suck for no good reason (that is, no good reason from your perspective).
A possible strategy could be to consistently show utter incompetence at anything related to management. (This could also get you yelled at, and possibly fired.) You already failed by revealing your skills, but maybe in the next company.
If you have a big company with many managers, I guess your only options are to either forcefully promote your own people, or hire managers from other companies.
I suspect that the second option is even worse (both for the company and for you), because it attracts some kinds of people you want to avoid -- for example managers who do things that create extra profit in short term (and get them a bonus) in return for a huge loss in long term (they avoid the impact by strategically changing companies *before* shit hits the fan... which is exactly why they are now available for you to hire).
A possible way out of this dilemma is to have many employees, but few managers. But that means giving your people lots of autonomy. And if you already have professional managers in your company, they will resist this as much as they can, because it threatens their jobs and destroys their careers (managers typically advance by increasing the number of managers working below them). So situations with the right number of managers are probably unstable: you either barely have any managers, or enough to create pressure to hire even more managers.
In software development, Scrum was a process originally designed to replace managers, and have autonomous teams instead. But clever managers hijacked the keyword, and these days in 99% of IT companies, "Scrum" essentially refers to using Jira and having daily meetings, while still having managers who can override any and all inconvenient (for them) parts of the original Scrum.
The most realistic solution seems to be working for a small company. If there is just one guy who owns the company, and five employees working for him, he will not try to convert them into managers. -- The problems come in the long term. Small companies can easily go bankrupt. Or they grow bigger, and then the owner decides it is time to take a break and hire professional managers instead.
Another possible solution is to keep changing jobs whenever the situation becomes too uncomfortable. A new job allows a new start at the bottom. But you need to consider how your CV will look after 20 years of doing that, so don't do this literally every year.
It's probably worth exploring the possibility of redefining your responsibilities at your current level. If you're currently in a people-management role, ask about a lateral move to the equivalent level on the IC job ladder. If you're already an IC, talk to your manager about transitioning from tech lead and organizational-initiative work to something more to your preferences. I've seen a lot of variety in what kinds of work very senior ICs do at big tech companies, including exploratory prototyping, deep subject-matter expertise, product architecture, and technical firefighting for troubled projects.
Startup. Even if you start as the sole employee, as it grows you can keep hiring until you have enough employees, then you can hire a project manager. They don't manage people, they manage the project.
> Is it possible to willingly move backward in my career?
Yes. I'm working with multiple people who are individual contributors but were team leaders/managers before.
Unless you really need the money _and_ actually get extra money from being on the management track (which is not a given in IT at all), recruit yourself for an IC role on your next job hop and enjoy your life.
Personally I turned down the opportunity to become a tech lead because I'd rather have minimal commitments; every time I see how much bullshit the person who stepped up instead has to deal with, I'm happy I dodged that bullet.
I don't know if that's quite what I mean. I don't consider moving from manager to IC to be moving backwards, nor do I consider moving from IC to manager to be moving forward. I, and my company, consider the two to be parallel tracks. For both tracks, as you move upward, you are expected to do more managing of others. When I say I want to move backwards, I mean I want to have less stress, less coordination, and maybe less responsibility in general. And I'm also fine with less money.
I think that maybe the issue is that in my company, there is no dedicated tech lead role, it's just what you get promoted into on the software engineer track. But it's still considered to be an IC role.
Yeah, at some point on the ladder even a technical role would acquire managerial elements (though I think calling it IC is muddying the waters at that point). That's what I meant - going back from those roles to just a senior dev is very doable, and my colleagues did that so they have less stress, more time for their families, etc.
So I think you basically got to the answer in your last paragraph: smaller firms value different skills than mega firms especially when it comes to the value of coordination versus individual contribution.
This is generally also true for firms who maintain legacy products. The person who knows their way around a legacy system can have outsized compensation.
Ultimately there’s really no such thing as moving backwards in your career if you’re becoming a better master of your craft.
Hmm, interesting. I guess though, that immediately makes me worry; what if I don't really have the skills to maintain my job as an individual contributor, so I have to supplement my work with coordination? How do I know I actually have what it takes to make it in a smaller company?
But also, don't get me wrong, I love helping others and teaching others, and collaborating. It's just that sometimes in my team's line of work, it's completely bonkers the amount of collaboration and coordination that's needed.
Larger firms tend to really value people who are good at thriving at larger firms. I think this is a generalized statement nonspecific to tech. Walmart, P&G, and Apple all really value insiders across their roles and functions.
You might not be a fit for that extreme end of the spectrum. Just like you might not be a fit for a 20 person firm. There’s probably a firm maturity and culture that’s somewhere in the middle. Just interview and see how others run their shops to get a sense of what’s out there.
Are you sure the issue is related to your position only? It may (also) be linked to your organisation aging (your coworkers are not the same, or do not have the same motivations and state of mind, the hierarchical structure has changed) or aging of the project(s) you are working on (old code syndrome).
I am in a very similar situation, but my position did not officially change (at least not since the degradation started). The code and organisation aging, on the other hand, are very clear and the source of the problem.
Yeah, that could totally be true. But man, does switching teams scare me! The last (and only) time I switched teams, a few years ago, I realized that it's completely impossible to tell what a team will be like without working on it for at least 2 months. Every team I spoke to would make themselves sound like they're doing the most exciting stuff, and they're basically the best team ever, and all of the anecdotes and claims would be based on some degree of truth. But then, I'd talk to other people who luckily happened to know more about those teams, and they'd tell me it's a hellhole working there. And then I chose the team which seemed best, and sure, it's been okay, but it also has massive problems. I mean, I guess I wonder, does any team in any software company actually have good code, and a good response to the problems of aging?
Everyone lies. If they don't, the next change in management can completely change everything, anyway. But the more you hate your current job, the less you should be afraid to change it. Worst scenario, you will have to change it again. Make sure you collect as much money as possible, and retire early, if that is an option. Try alternative strategies, such as not giving a fuck (while pretending you do).
A thing that worked for me is to keep phone numbers of my former colleagues and classmates, and once in a few years call them asking "hey, where do you work? are you happy with your work? is your company hiring?". That at least gives you some insider view. (Also, you learn that most IT people hate their jobs, so it's not just about you.)
I think most don't, and code/team aging is one of the biggest forces behind the rise and demise of software solutions. (The competitor is not better because of a better UI/algorithm, at least not ultimately. It's better because it is younger code with new, enthusiastic coders.)
And appreciating teams from the outside is super hard everywhere, not only in the tech world. The Western corporate world (especially in the US, but in Europe too, just to a lesser extent) runs on (mostly fake) enthusiasm. There is very little honesty there, apart maybe among peers who know each other well and for a long time, or good friends working in a completely different world. It's just very risky to say your job sucks, but you do it because it's the best way you know to earn money... So no team member will ever say that to an outsider. Probably not even in the blue-collar world, which I do not know as well, but I have friends there and it does not seem so different.
The best (but still very, very uncertain) indicator is how you find your future management (one to a few levels up). Are they honest people you would enjoy outside of a work context, or not? You can often get an intuitive sense of this after a few meetings.
Hah, I've liked and admired much of my management just fine. But I have never had a single manager who I'd describe as honest. And they are always nice people when the team goes for beers, but they keep underlings at arm's length. They are always hiding something.
I need advice related to psychiatry and gender, so this blog seems like a good fit.
I have only been exposed to the concept of transgenderness very recently, so if I’m misunderstanding something that’s supposed to be common knowledge, correct me.
27 years old, assigned male at birth. Since I’ve been a teenager, I’ve been suffering from a really bad depression. It’s the bane of my entire existence, my number one problem in life. The depression is extremely treatment-resistant; my case is stumping psychiatrist after psychiatrist. Nothing seems to help. Even electroconvulsive therapy, the most effective and hardcore solution that is usually only deployed as a last resort, did nothing.
For years, I’ve been doing some kind of Pascalian Medicine approach on myself, trying everything under the Sun in the hope that something sticks. Whenever I stumble upon a paper saying that some supplement has some mild anti-depressant properties, you can bet I’d be chomping that supplement down by the bottle, because if there is even a 1% chance it will help, it’s worth it. But nothing has worked so far.
In my quest for the cure, I have stumbled upon two very curious facts:
1) Gender dysphoria often manifests as a combination of several psychiatric conditions, including depression. These conditions are impervious to “traditional” treatments because they don’t resolve the core issue.
2) There are a lot of people around who are in denial about being trans, often inventing very elaborate narratives to persuade themselves they are cis.
Now, there is of course a very fundamental philosophical problem about how do we ever find out what’s true if every thought and emotion might be elaborate denial. But...
When I read about symptoms of gender dysphoria, they seem *suspiciously* accurate, to a much greater degree than the experiences of two random depressed people would be expected to correlate.
When I browse /r/egg_irl, the memes seem *suspiciously* relatable and confusingly fascinating. I have spent hours digging through the sub, bizarrely mesmerized.
None of this is, of course, a smoking gun. Transgenderism is very uncommon; there needs to be a lot of evidence to outweigh the prior implausibility, and all I have now is vague hints and tentative speculation. But what if there’s a chance?
If it turns out my depression is borne of some kind of suppressed transgenderism, that would be the worst single piece of news I’ve ever received. It would mean that I would never beat my number one enemy without transitioning, and transitioning is not possible for me for a variety of social, legal, financial, and other reasons.
I’m not even sure I want to investigate this avenue further. I’ve read a story of a trans woman who was more-or-less stable if vaguely unsatisfied, until she tried some feminine clothes, and got so *into* it, that her entire perception of herself has changed, and she was never again able to look at her male body in the mirror without being debilitated by dysphoria. If poking around the Unknown too carelessly would bring upon me some kind of Lovecraftian comeuppance and destroy my sanity... I would certainly want to avoid that.
So...
Does this story make sense in general?
Is there a way to find out if I’m a trans person in denial that is resistant to self-deception and wishful thinking? I’m assuming the answer is no, because otherwise it would be on the front page of every trans space, but maybe there’s some kind of special case solution that would fit here?
If my depression turns out to indeed be the result of gender dysphoria, is there a way to treat it without transitioning? Maybe some kind of symptomatic treatment to ease its effects?
Just some food for thought here about the term "treatment-resistant depression". It might be helpful to think of what you're suffering with as chronic unhappiness, which is a term that commits you less to a simple, biological model of what is wrong, and also opens up more possibilities for ways to fix the problem. If you think of yourself as "having treatment-resistant depression," your picture of what's wrong is going to be nudged by the phrase towards a picture of something like a happiness lever in your brain that is stuck in the "off" position, and needs to be greased by chemicals or jarred by ECT or some such so that it can move freely. I'm sure there are some people whose unhappiness really is due entirely to some brain glitch that's the equivalent of a stuck happiness lever, but there are many, many other models besides the simple brain-glitch one that can account for chronic unhappiness, and yours may fit one of these other models better.
In fact, you're now considering one such alternative model: You are a trans person who cannot enjoy life until you start living it as the gender you feel like you are. There are lots of other possibilities that are analogous to the suppressed trans model -- situations in which someone's longstanding unhappiness is the result of some other profound, unmet need, such as: deep loneliness; living in a setting where they are not good at any of the skills that are valued; living in a setting where they are despised and mistreated; being profoundly understimulated because they avoid so many things out of fear.
And there are other models besides the unmet need ones, models that have to do with getting stuck in mental loops; models that have to do with being way overcommitted to some idea of how you're supposed to be. . .
To be honest, your suppressed trans model of what's wrong does not seem terribly plausible to me. Many, many gay or trans people live in cultures that view their sexual nonconformity as a repellent abnormality punishable by death, and some of these gay or trans people even buy into their culture's view, and think they are going to burn eternally in hell fire -- but despite all that, these folks cannot stamp out their gender dysphoria or their sexual attraction to their own gender (though of course they may hide it from the world). I don't think being trans is very suppressible.
But maybe some other model of your longstanding unhappiness would help you find a way out.
" I don't think being trans is very suppressible."
On the contrary, it must be, much more so than homosexuality, since we have virtually no historical record of it before the late 19th century (whereas gay men and laws against them are omnipresent). If it's not a wholly socially mediated conversion disorder but a fundamentally biological problem, suppressing it is eminently possible and we've just forgotten how to do it.
Yea, I was going to echo Eremolalos: there have been common trans-feminine tendencies in various societies and cultures since ancient times. The exact manifestation differs, and only recent technological advances created medical transition as we know it. But a small percentage of the population, of both genders but especially natal males, have been having very transgender-like experiences for millennia.
Separately, I think trans-ness is roughly as suppressible as homosexuality.
Essentially all of the European stuff is weaseling (note phrases like "galli priests that some scholars believe to have been"; those scholars are wrong, and probably know they're wrong, but they're putting these assertions out there specifically to muddy this exact kind of conversation, which is maybe the thing Wikipedia is most vulnerable to after having Kubrick's talk page squatted) or pure fakery, brought on by present societal trends. This makes me highly skeptical of trusting any of the other claims from regions with which I'm less familiar.
I think it would be unreasonable for you to demand I rebut each one severally, but it's fair that I should give just a few examples so you know I'm not just talking out of my ass:
* The Saturnalia crossdressing is part of a carnival of inversion and prima facie absurd to connect to transgenderness; so is the disguised woman in Ekklesiazusae, who is a joke in a comedy used to set up the larger (and baldly misogynistic) comical premise of the play. It's also weird that the article writer didn't alight on Agathon in Thesmophoriazusae instead since there the joke is that Agathon is *already crossdressing at home* when the main character comes to ask him to crossdress in order to sneak into the women's assembly, but not to help, no, Agathon refuses: he's only doing this because he's a big ol' homosexual (cue laugh track in Greek).
* D'Éon was clearly a man who got into the crossdressing as some sort of weird... who knows, but who pretty frankly admitted he was a man; the speculation continued in spite of him, and most likely in large part because of a legal decree that he *had* to wear women's clothing in the Kingdom of France (IIRC as consequence of him using his fake femininity to his advantage in a court case, but don't quote me on that). There's a preserved letter from him to the royal court requesting that he be "spared this ongoing humiliation" or words to that effect, a request which was apparently rejected.
* Catalina de Erauso was just a woman. She crossdressed for the wholly practical reason that for most of history it was clearly better to be a man in legal terms, at least. This is hardly unheard-of, especially in the 17th and 18th centuries. Anne Bonny and Mary Read are two other famous examples.
* The Public Universal Friend was insane.
(Elagabalus was one of the few exceptions I was thinking of when I wrote "virtually no", but it will be noted that A, a 1600+ year gap isn't very impressive proof of continuity, B, he existed in a highly unusual social situation which might have been innately less suppressive than the surrounding society due to his inordinate levels of authority and freedom, and C, during the last vogue of historical revision before transsexuals this same account was widely held among the pop-revisionists to be slander against Elagabalus from his enemies, possibly because he was gay.)
I agree that it would not be reasonable for me to expect you to rebut every single case of pre-19th century transsexualism mentioned in the Wikipedia article. And I myself do not have enough in-depth knowledge of a single one of the cultures and times mentioned to argue back against your arguments. But I have to admit I do not feel like I have moved much closer at all to your point of view. Here’s why:
- I don’t understand what you are getting at here: “If it's not a wholly socially mediated conversion disorder but a fundamentally biological problem, suppressing it is eminently possible and we've just forgotten how to do it.” And to the extent I do understand what you’re getting at, I don’t agree with it. If being transsexual is a fundamentally biological problem, why would suppressing it be eminently possible? I’m not sure what counts as a fundamentally biological problem. Would left-handedness count? If so, it’s a fundamentally biological problem that doesn’t have the properties you’re saying transsexualism does. It’s not eminently suppressible — people throughout history haven’t *failed to realize* they’re lefties. (Of course, many have learned to partially or fully overcome their left-handedness, but that’s not the same as not realizing they had it.)
- It seems implausible that transsexualism (or the ability of transsexual human beings to recognize their own transsexualism) suddenly emerged at the end of the 19th century. What would explain such a radical change happening then?
- It seems implausible that there should be so many things in the historical record that look like evidence of transsexualism, but that not a single one of these things is in fact explained by the existence of transsexuals. The most parsimonious explanation of there being evidence of transsexualism in so many eras and cultures is that transsexuals have been popping up in human populations all over the world for millennia.
#1: Is it just "problem" that's tripping you up? It seems to be a problem to those suffering from it as a rule, but I'm not at all averse to tabooing it for greater mutual comprehension, so let's do that. I'm simply getting at the fact that, de facto, transsexualism appeared in the late 19th century. It is therefore evident that *if* it is not social but biological, then it is a biological characteristic which has the trait that it can be fully and durably suppressed by some type of cultural practice. Left-handedness and homosexuality are good examples of biological characteristics which *do not* have this trait, since left-handers and homosexuals appear all over the place, all the time, regardless of how many people get whacked with rulers and/or lynched. (For further mutual clarity, I am against both of those cultural practices.) That is to say that, despite variably intense work to find one over numerous centuries, our culture has *not* been able to discover or devise an effective suppressor of either of these characteristics.
#2: Whether or not it seems implausible is not important. It *did actually happen*, that's what's important. Here: https://www.nature.com/articles/s41598-021-97778-3 is a link to a(n unrelated) article about what I would classify as an incredibly unlikely event, but, all the same, the article is about demonstrating *that the event occurred and how*, not trying to deny or disprove it. That is our task in this instance: not to deny, but to understand.
As for what would explain such a radical change, I proposed two possibilities:
* It's a conversion disorder (like hysterical paralysis or anorexia), which has gradually spread via social contagion, from an initially minuscule group to a larger one as this particular conversion disorder becomes more appropriate to our zeitgeist, or outcompetes the others through such means as the invention of the reflex hammer, or whatever the hell is going on there more specifically.
* It's biological, but something changed in Western culture around the fin de siecle which (again, very gradually) began to lift the lid on the very long-standing suppression mechanism. Atheism? The Decadent movement in art? The dread Automobile? Who knows. I frankly don't have a good candidate here, but I'm willing to listen.
A third supposition which is neither of these has been advanced by A. Jones, PhD, viz. chemicals in the water which are turning the freakin' frogs gay. I personally do not subscribe to this man's theories, but I *do* worry a great deal about microplastics as an endocrine disrupter for transsexualism-unrelated reasons, so I can't elevate my horse to a too-high altitude here.
#3: Again, extremely few things in the Western historical record actually look like evidence of transsexualism. You believe this because people have been hard at work for two or three decades twisting all the minutiae they possibly could so that they'll look like evidence of transsexualism. (Catalina de Erauso puts on men's clothes so that she can join the army: suddenly this is proof that she was a transsexual man all along. Does this mean that the Israeli, Swedish and other armies which allow women in combat roles in fact have no women in combat roles, only transsexual men? It seems to me that if you actually think about this kind of evidence dispassionately you'll realize that the logic simply doesn't hold. Modern transsexuals don't just go off and do gender-nonconforming stuff, they're seriously psychologically affected by the nature of their bodies and crave medical treatments of various sorts. Modern *women* just go off and do gender-nonconforming stuff like join the army if they feel like it.)
Also, I want to point out that nothing I said at all contraindicates *other cultures* being crammed full of transsexuals from the year dot. If transsexualism is suppressible, as it must be if it's a biological characteristic that existed all along, it's entirely possible that the West suppressed it and e.g. the Polynesians just didn't. That it's possible doesn't make it inevitable.
What is *not* possible is that it's biological, it's omnipresent in the human genome, Polynesia and India were full of transsexuals the whole time, and yet somehow (by sheer random chance? Now *that's* implausible) all we have since ancient Rome is... *maybe* one medieval Jewish writer who might have meant any number of things by that poem. Again, homosexuality isn't like that, at all. And yet it's not like Western culture historically approved of it.
Whoops, I missed this, or rather, I missed that it was a reply to me! Modulo people's personal thresholds for "incredible", I think we can say it *is* rare, but that's not really the question. Some napkin math: At present the population of Europe is roughly 500 million; out of those, I'd guess a few hundred thousand identify as transsexual? Say 1:2,000. Over all of recorded history up to the late 19th century, I'd hazard a guess (again, low-precision math warning) that at least a couple billion Europeans must have lived. Out of those, *one guy* was apparently transsexual, for a rough figure of 1:2,000,000,000. So, the question is: why was transsexualism a million times *more* rare for most of recorded history than it is at present? That's the part that requires explaining.
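(For legibility, here's the same napkin math as a quick Python sketch; every input is the rough guess from the paragraph above, not a measured figure.)

```python
# Napkin math from the comment above; every number is a rough guess, not data.
current_population = 500_000_000         # Europe today
current_trans = 250_000                  # "a few hundred thousand", guessed
historical_population = 2_000_000_000    # Europeans up to the late 19th century, guessed
historical_trans = 1                     # the one apparent case

current_rate = current_trans / current_population            # ~1 in 2,000
historical_rate = historical_trans / historical_population   # 1 in 2,000,000,000

print(f"current: 1 in {current_population / current_trans:,.0f}")
print(f"historical: 1 in {historical_population / historical_trans:,.0f}")
print(f"historically rarer by: {current_rate / historical_rate:,.0f}x")
```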
Hi! I'm a trans woman around the same age, been transitioned for some time, with some related experience and some differences. Would be happy to talk it out with you. If you want to, shoot me an email at jmb3481 [AT] gmail [DOT] com
I guess anyone can if they have questions or are curious. I have some unique perspective / takes, I think.
I'm okay with them being public, and I'm slowly writing my own blog series to share eventually.
In the meantime, I really hate working through Substack's comment interface, and I find personal, one-on-one, safe interactions to be much more productive when discussing trans issues in the current discourse landscape.
Especially for such a sensitive issue as whether this individual may be better off transitioning or not.
If publicity is important to you, I'd happily agree to posting a transcript.
Your story makes sense in general, which of course isn't hard evidence in favour of anything.
One of my friends recently transitioned in her twenties. She had no outright depression, but was deeply in denial about some of her needs, explicitly forbidding herself lots of things as "irrational". It was a narrative that allowed her not to think about being trans, and it was pretty harmful, damaging her self-integrity and her relationships with other people. The moment of her enlightenment was when we went to a cross-dressing party; there she finally had a "valid reason" to try on a dress, and then slowly, upon lots of reflection, she let herself be who she is. Now she is much happier.
In my teens I was horrified by the fact that my beard was coming in, and that I would never have a really smooth face ever again. I was shaving as close as I could, sometimes cutting myself in the process. Once I let my beard grow as an experiment; my girlfriend started complimenting my facial hair, and I felt as close to dysphoria as I ever have. When I went to a cross-dressing party I felt fucking gorgeous in a dress, and I sometimes repeat the experience. I also have a lot of trans friends and my close community is very trans-positive. And I have zero intention of transitioning. I've learned some things about myself and my queerness. But now I'm quite satisfied with my masculine body, even with the facial hair, and do not want to change it.
I think wearing a dress once and then instantly being disgusted by having a male body describes a very small percentage of cases. I believe it can be helpful to do some genderbending in your head, and consider the majority of options first. Men can wear dresses. Men can actually look gorgeous in dresses. If it's just about dresses, you don't have to change your gender identity to try them on from time to time. And if you are not satisfied with your gender identity, you do not have to make irreversible changes to your body in order to fit into the opposite category. Being a gender-nonconforming man is an option. Being non-binary is an option.
As for a way to find out if you are in denial about your dysphoria, I would recommend starting with these questions:
What does being a man mean to you? What does being a woman mean? Do you feel comfortable inside your body? Would you prefer having a female body rather than a male one, if you could choose? Does the thought of never experiencing having a female body feel very sad to you?
Another perspective: I am equally fascinated and horrified by egg_irl, because yes, the memes are relatable, but the implication that if you're unhappy, depressed, or uncomfortable with traditional masculinity, you're actually trans and in denial sets off all kinds of psychological-manipulation alarm bells for me. I'm sure that thousands of others have felt, seen, and thought exactly the same things as you, without pausing to ask themselves: is this actually right?
The two facts you mention seem highly suspect to me in the context of the unprecedented cultural influence of transgenderism. I'm pretty sure that if you asked a trans person about the expression of gender dysphoria ten years ago, they would not give you the same answer. Around the late 2000s, I was regularly reading a forum that had a very active subset of trans users who, in a time when transgenderism was only beginning to be seen as something other than a weird fetish, all had a common thread to their experiences: for them, it was absolutely knockdown obvious that they were trans. This is the biggest difference I see in the narrative about transgenderism then vs now, and it makes me think that there is a strong cultural influence at play. Also see Lucas's reply on this.
So, given your history, I'd doubt that transitioning is the one thing that will cure your depression. Nevertheless, if this idea is holding so much sway over you, it may be worthwhile to think about why. I have asked myself similar questions, and although I'm quite comfortable with my gender identity and satisfied with deciding for myself what masculinity means to me, there are nevertheless some psychological hangups related to internalized misandry I might be dealing with. I hope you manage to conquer your depression, one way or the other.
I'd echo the maximum skepticism of egg_irl. That place's whole schtick is that any kind of mental/emotional distress or gender nonconformity is secretly a sign of being trans. For example, the post currently at the top of the subreddit claims that, as a male, listening and singing along to music sung by a female is a sign that you're actually trans.
I'm sure that this maximalist approach has helped a lot of people overcome some deep-seated resistance to their actual condition, but I'm equally convinced it's led a bunch of impressionable people down an unhelpful and potentially harmful path.
Definitely look into psychedelic assisted therapy if you want another avenue to attack depression. Ketamine, psilocybin, MDMA assisted therapies are showing themselves to be very effective against treatment-resistant depression. MDMA in particular is a revelation that can't be adequately described, which is why it's so effective against PTSD.
As someone who has tried these molecules in a non-therapeutic context, but sometimes with therapeutic intentions: is there a big difference between X-assisted therapy and X alone? I can see that MDMA had lasting positive effects on me, at least the first time, but I don't have any such effects to associate with ketamine, for example.
One thing he mentioned is that therapeutic doses and recreational doses are extremely different, and they are usually taken in quite different settings. The therapeutic mechanism seems to be fairly different from the recreational mechanism, whereas for MDMA and psilocybin the two are much more closely related.
The ketamine doses I take are close to the therapeutic doses. Like lots of people said in the comments, the recreational doses presented seem way too big. Those might be more like "extreme abuse doses".
> I can see that MDMA had lasting positive effects on me
Since you have (presumably positive) experience with MDMA, imagine if your attention was directed to processing difficult experiences or trauma while under its effects. The MDMA prevents a lot of our innate avoidance behaviours to facing the pain of trauma. It becomes easier to accept and process anything, and when this is guided by a professional trained in reframing thoughts and experiences, it can be quite transformative.
There have been some studies showing that psilocybin-assisted therapy had much better outcomes than psilocybin alone. Participants in both conditions reported meaningful spiritual experiences, but only with assisted therapy did this seem to produce significant changes in long-term thought patterns and behaviours. Which makes sense for the same reason above: the therapist directs your attention and helps reframe your thoughts and experiences when you're in a highly receptive altered state of consciousness.
Recreational psilocybin and psilocybin-assisted therapy are indeed very different things, for one. When used in therapy, your environment is adapted to facilitate as deep of an introspective experience as is possible, with added emphasis on comfort and safety. It's not the drug that has the therapeutic effect: it's what your mind is doing while on the drug, and the therapy is supposed to put you in the right state to allow that to happen. At least, that's the idea I got from it when I last read up on it, which was a good few years ago.
You're in somewhat of a rough spot here because, while there are resources out there about "how do I know if I'm trans", they all seem to be written from the perspective of encouraging you to transition, rather than actually helping you figure it out. I get the impression that the writers (who probably aren't thinking in these terms) want to minimize the false negative rate of trans people they don't identify and so end up with a relatively high false positive rate. There's a tendency to consider the reference class of people looking at your resource as being "trans people who haven't realized it yet" instead of "people who may or may not be trans", almost as if there were an assumption of "well if you have to ask, ...". This makes sense in that many trans people were in situations that meant they needed a whole lot of encouragement and wish they'd gotten it sooner, but it makes these sorts of things less useful in diagnostic terms.
It sounds like you recognize that finding egg_irl relatable isn't ironclad evidence; anecdotally, I can say you are correct in that. That particular memeplex reminds me a bit of esr's Kafkatrap concept in that denying the label isn't considered evidence against it applying.
There may be some things you could try that your brain parses as 'feminine' even though men also do them (at a lower rate). You could, for instance, grow your hair out a bunch, even braid it. In my local social bubble, that's not common for men, but it's not unheard of either. It also has the advantage of being trivially reversible if you don't like it. (I don't know anything about how prevalent cases like the "can't back out now" anecdote you heard are to give advice on whether you should, sorry. Also note that you can do this kind of thing if you like even if you come to the conclusion that you're not trans.)
Yeah, it seems to me that if a guy has like 20% feminine traits, it's "hey, that means you are actually trans", while ignoring that he also has 80% masculine traits, which probably should also mean something. Just because for some people it was really hard to overcome their denial, it doesn't mean that everyone else is also in denial.
Okay, so what would be the best way to test this hypothesis? My first idea was some safe space where you could come, get some quick expert help with crossdressing, and then just play your role among supportive people, and see what it feels like.
The problem with this experiment is that it does not control for "being among supportive people". Like, if in your everyday life you are surrounded by shitty people, then there is a chance that being among supportive people will make your day better, regardless of being trans or not.
So the proper way would be that you come twice, flip a coin, and either first day be your current gender and crossdress the other day, or vice versa. An expert would help you seem like the other gender... or seem like someone who just pretends to be your gender... and then you would spend some time with the supportive people who do not know what your biological gender actually is. Then you could compare what feels better.
Ah, except there is the obvious problem that actually they will most likely find out your biological gender, from the sound of your voice, or whatever. (Maybe you should not be allowed to talk, only type on a computer?)
> The problem with this experiment is that it does not control for "being among supportive people". Like, if in your everyday life you are surrounded by shitty people, then there is a chance that being among supportive people will make your day better, regardless of being trans or not.
I really wonder how much of the increasing trans rate is an artifact of how positive and supportive the trans community is?
This a non-political thread, so... how to put it carefully...
Supportive communities are a great thing. It's just when such community is in a contrast to a generally hostile environment, and their support is conditional on X... it may create a strong incentive to pretend or believe that you are X.
I'd like answers about this too; I'm in the same boat, though with less intensity. I'm 3 years younger, my depression is less severe, and I've tried fewer things, but this is basically where I'm at too.
A few other things I'm thinking about: the recent review by Scott of "Crazy Like Us" suggests that since we've seen more stuff about trans people in the media, we may be influenced by it. Still, I don't remember that review offering an "answer" for once you've already been exposed. Can people in Hong Kong stop being anorexic if people stop talking about anorexia?
This brings us to my second thought: societal level vs personal level. The vast majority of discourse that I see about trans people is focused on the societal level, especially the discourse "against" it. But frankly, I don't care about that. If treating my depression means that society is a bit worse, then so be it. Still, this means that most content is either hyperpositive stuff from the trans side (because showing negative content would make it harder for everyone to have rights, or maybe even expose them to the possibility that they're wrong, which might be a mental hazard), or negative about the societal impacts, often to the point of being hateful. Analysis at the personal level is hard to find.
Another thought: is this a situation where trying to have a clear view of things is negative? If you're "questioning", and you keep questioning yourself after transitioning, you might not achieve the "best happiness" compared to rejecting everything or accepting everything, to the point of lying to yourself.
Were there events like that in the past? For example, did people "become" massively gay at some point? Or did people divorce en masse at some other point, once we reached the "tipping point" of media/societal acceptance?
I'll finish this by saying that I have a strong negative prior on "being trans", as if this were true it would require making lots of changes, many of them hard to reverse.
You make a good point in bringing up "Crazy Like Us" because not only is the trend of transgenderism in the media influencing its prevalence, the thing it's supposed to treat (your depression) is influenced in the same way. There's a theory that depression is just our culture's way of manifesting chronic stress, and this is the reason it's so hard to treat. The failure to treat it with conventional medicine is like using antivirals to treat a bacterial infection.
In fact, the treatment that succeeds *culturally* appears to work best against depression. Scott wrote (I forget where) about how Cognitive Behavioral Therapy was doing great against depression when it was the Cool New Thing and had cultural influence as a good treatment. When that influence waned, different treatments were devised, and one of them caught the hype and went on to be as effective as CBT was originally at treating depression.
I think transgenderism is riding on a similar wave, except a hundred times bigger. The prevailing narrative now is that it doesn't matter if you don't hate your body or hate acting as your gender: you're just in denial, and transitioning will cure your depression.
Now, it's possible this is all entirely useless to you as an individual. I understand that saying depression is a cultural disease doesn't make you less depressed, and that people who transition and are honestly happier afterwards are valid to feel that way. Maybe transitioning does, in fact, help against non-gender-dysphoria related depression for cultural reasons. But you have to wonder if the cure is worse than the disease.
I'd like to comment just to push back against "the prevailing narrative is you're just in denial and transitioning will cure your depression".
3-5 dozen hyper-online weirdos on Twitter and the baby-trans, also-hyper-online redditors at r/Egg_IRL aren't representative of trans people. Most trans people you'd ask (and my community is full of them; I'm trans) would disagree strongly with this narrative being true and with the idea that they're supposedly pushing it. And the current medical standard of care is still to require therapy prior to starting hormone treatment.
Is hormone treatment getting easier to access? Yes, but this is largely medically informed. The effects of the first 6-9 months of hormone treatment are about 95% reversible, and medically quite harmless. That's why the medical consensus readily reduced the previously excessive gatekeeping once it became socially acceptable to do so.
And every therapist I have had has been very clear that transitioning will not on its own cure any co-morbid mental health issues (not that I had any) and they also screen potential patients for severe issues before recommending treatment.
Thanks, that's good to know. FWIW I also know plenty of trans people who would disagree with that narrative being true, but it's hard to deny that the hyper-online weirdos are having a disproportionate effect on the conversation, simply by being hyper-online enough to get picked up by the algorithms. The fact that timujin has discovered (but not necessarily fallen down) that rabbit hole is a pretty strong indication of that.
I absolutely agree that they are having a disproportionate effect on the conversation. Twitter honestly is just a disaster for civilization. Remember from Scott's previous posts: 80% of all content on Twitter comes from 20% of its users. And what, maybe 10%, of the US is on Twitter?
So all of the discourse-shaping effects of Twitter (for all topics, not just trans stuff) come from maybe 2% of the population, selected to be the loudest, most prolific, most online, and most engagement-inducing.
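(Sanity-checking that estimate, where both inputs are just the guesses from above:)

```python
# Back-of-envelope check; both inputs are guesses from the comment above.
us_share_on_twitter = 0.10   # "maybe 10% of the US"
loud_user_share = 0.20       # the 20% of users producing 80% of the content

discourse_shapers = us_share_on_twitter * loud_user_share
print(f"{discourse_shapers:.0%} of the population")  # -> 2%
```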
> There's a theory that depression is just our culture's way of manifesting chronic stress, and this is the reason it's so hard to treat.
Considering how much depression and anxiety can be intertwined (at least they are, in my case), that would make sense (for some cases at least).
> In fact, the treatment that succeeds *culturally* appears to work best against depression. Scott wrote (I forget where) about how Cognitive Behavioral Therapy was doing great against depression when it was the Cool New Thing and had cultural influence as a good treatment. When that influence waned, different treatments were devised, and one of them caught the hype and went on to be as effective as CBT was originally at treating depression.
> Maybe transitioning does, in fact, help against non-gender-dysphoria related depression for cultural reasons. But you have to wonder if the cure is worse than the disease.
I wonder if the "big change" may be the reason it helps depression. I think I remember seeing somewhere that big changes like leaving your job, changing countries, etc. help against depression, in which case transitioning should be compared to "placebo big changes".
"I wonder if the 'big change' may be the reason it helps depression. I think I remember seeing somewhere that big changes like leaving your job, changing countries, etc helps against depression, in which case transitioning should be compared to 'placebo big changes'."
As far as I understand, this scales smoothly with severity; e.g. a small amount of malaise will often be alleviated by taking up a hobby. The exact nature of the change matters less than its size being proportional to the amount of distress you're feeling, which sort of makes sense: it's probably less about getting *to* some specific place than about getting *away* from where one is now.
Indeed, the "big change" may fall under the umbrella of "getting away from the depressing thing" as per that post. Also, I didn't mean to imply that CBT doesn't work; it just doesn't work as well as it used to. Right now, its effectiveness seems to be on par with other standard interventions. There was another intervention with a similar name but it was so generic (something like Active Behavioral Adjustment) that I simply can't remember it.
Thanks, I didn't know that CBT's effectiveness was a bit lower now. I guess the optimal thing in that case would be to not read more about that, read lots of positive content, and practice it.
Consider that, in 2022, you're living in a media hysteria that's pushing trans identities on people extremely hard. "Rapid onset gender dysphoria" is a thing now, and anecdotally, multiple young people in my circles have been convinced they're trans by their friends before rejecting that identity as manufactured.
I'd strongly suggest reading something gender critical along with your reddit diet, just to have a variety of viewpoints on hand and preserve the ability to think clearly.
Apparently the "Stanford protocol" rTMS / SAINT is quite effective against treatment resistant depression, even more so than ECT. You could try getting in touch with the researchers to see whether there's an opportunity there.
Consider how you might feel and act if you transitioned. A useful sniff test might be to act that way before you transition, and see if it changes anything. You don't have to update your gender just to discard masculinity norms.
Before you seriously consider transitioning, consider changing your lifestyle in smaller ways, and test if the feeling persists. Eg, investigate any unchallenged trapped priors, your social life, where you live, and perhaps exercise more.
I'm nearing the end of a math PhD (algebraic geometry) and I'm plotting my escape from math academia. I've gotten very interested in synthetic biology and closely related fields. In the small amount of downtime from dissertation work I've been reading a lot of the basic textbooks, papers, etc., and am considering trying to work in this area after I get my degree. I would love to hear from anyone who knows about this area who has thoughts on e.g. areas that might be especially well-suited for a mathematician, prospects of getting hired with this background, or general arguments for/against this as a career path. I would also appreciate hearing about other areas outside of pure math that mathematicians have transitioned into (especially related to biology, or outside of the more typical paths of data science/software engineering/investment banking, etc.).
I'll join ThatGeoGuy in broadly recommending engineering as a source of applied math jobs. Most of the versions I know are adjacent to Department of Defense (which may or may not appeal, and may or may not be feasible), but some aren't; I'm enjoying my time at one of those (after a short stint as a software developer, and another as a data scientist).
A friend recently left for quantum computing.
I gather there's also the NSA. From what little I heard (I'm a Suspicious Character with a Russian citizenship), it's a good work environment.
I believe Noah Goodman (https://cocolab.stanford.edu/ndg) did his PhD in algebraic geometry as well (maybe it was algebraic topology) before doing a postdoc with Josh Tenenbaum in cognitive science, and now being quite successful in the field himself. It's rare to find an academic whose PhD is in such a different discipline than their current work, but he's one example.
With regards to areas outside of pure math -- have you considered robotics / navigation?
With a background in algebraic geometry, I would assume you have some transferable skills in optimization, geometric fitting, etc. There's a lot of robotics companies out there that are struggling to find these skills. The company I work for (https://tangramvision.com) is not currently hiring, but might be in the next year? We primarily stick to Lie representations for our camera / multi-sensor calibration, but if you have any cross skills in optical physics and software, that wouldn't be too far off from something we look for. We've definitely had interest in geometric algebra in the past, mostly for building intuition, but we're always looking for better ways to do things.
There's a lot of robotics companies out there and what you look for might depend on the timeline of your graduation, especially if you're considering something more like a startup than a larger company, but a lot of the skills are cross-domain.
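To give a concrete taste of what "Lie representations" means in this context, here's a minimal toy sketch (my own illustration, not Tangram Vision's actual code) of the exponential map from an axis-angle vector in so(3) to a rotation matrix in SO(3), via Rodrigues' formula. Calibration and pose-optimization pipelines typically parameterize rotation updates in this tangent space, so every iterate stays a valid rotation without gimbal lock or renormalization hacks.

```python
# Toy sketch of one Lie-group building block used in calibration:
# the exponential map so(3) -> SO(3) via Rodrigues' formula.
import numpy as np

def so3_exp(omega: np.ndarray) -> np.ndarray:
    """Map an axis-angle vector omega (shape (3,)) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)  # near-zero rotation: identity to first order
    k = omega / theta  # unit rotation axis
    # Skew-symmetric "hat" matrix of the axis, i.e. K @ v == np.cross(k, v)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Example: a 90-degree rotation about the z-axis
R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R, 3))  # [[0 -1 0], [1 0 0], [0 0 1]]
```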
Can you recommend a resource to read about Lie representations in this context? I'm a former math PhD currently working in something that might be broadly construed as computer vision (we certainly have cameras from which we extract information), and curious to learn more about how other people handle e.g. the calibration stuff.
Thanks! I haven't really looked into this area much. I'll definitely read up about it. I work in a very pure/theoretical area, so I haven't touched anything resembling optimization/fitting/anything data related since probably sophomore year of college. But I will probably end up learning more about that sort of thing when I get closer to applying to jobs.
I'm a solid programmer and would be interested in jobs that involve programming, but probably not more business logic focused software engineering jobs.
If you can program well (in a language that people use, e.g. C++/Java), you shouldn't have problems finding a job in machine learning, data science, computer vision, engineering, or the intersection of these fields.
If you're not following Austin Vernon's blog, you're missing out - it's good stuff. He had a good piece on nuclear power recently: https://austinvernon.site/blog/nuclear.html
FedEx wants to stick anti-missile lasers on some of their cargo planes - apparently there have been issues in the past with delivering to certain areas. I was thinking that might be useful on passenger and cargo planes going forward in dealing with pesky small drones ignoring exclusion zones.
Not clear that this would help against drones. The "anti-missile lasers" aren't hard-kill weapons, they don't burn their targets out of the sky, they just dazzle and confuse infrared sensors. Which is great against a missile that is using an infrared sensor to *try* and hit your airplane, but doesn't do anything against a radio-controlled and/or GPS-guided autonomous drone that's just flying whatever course it was commanded by someone on the ground. Even if there's a vulnerable camera on the drone, that has nothing to do with where the drone is going to be when your plane crosses its path.
I don't understand AI alignment as a field, or at least wonder about the premises. Mainly:
1) Does super-intelligence translate to super-powers? Like, if Terrence Tao wanted to be president or a billionaire could he do so easily? What if he was twice as smart? 10x? 100x? How come our politicians and business leaders don't seem to all be super-geniuses?
In feudal times it was blindingly obvious that intelligence alone didn't translate to power. Now we live in a more complex world and there are more advantages to intelligence, but I still wonder how far that goes. It seems possible there could be some undiscovered physics that gives you free energy or something, so you get massive power once you cross a certain threshold, but that doesn't seem *obvious* to me.
2) Will super-AI happen all of a sudden (in years vs decades)? If it happens over decades it seems likely that the best AI alignment research will take place after AI is better understood, and we will have time to do that research. GPT-3 is very impressive but seems far from an existential threat.
3) Will all the organizations focused on creating AI pay extensive attention to alignment research done in different organizations? If it's alignment research by OpenAI themselves or something this point doesn't apply.
4) As an extension of (3), what about the people-alignment problem? It seems inevitable that *eventually* bad actors will deliberately use AI for dangerous things (trying to take over the world, etc), so even if best practices exist to prevent accidental mistakes they will eventually be ignored. I'm sure there's the thought of having a good super-AI to monitor everyone in the world etc, but I wonder if we are at a point where that can be thought about in a precise way.
(1) and (2) are probably the biggest things I don't understand about it. Personally I'm kind of expecting that if intelligence that gives superpowers is even possible, we'll fall short of that initially, so that the human or human+ (but not super-powered) systems will provide a better training ground for AI alignment research than we have available today.
1) Human intelligence has some biological limitations: even the smartest human has only one body (can be only at one place at a time), only two hands, 24 hours a day, neurons working at 100 Hz, only 1 topic to consciously think about at any moment. Also, one bullet can kill them. No matter how high your IQ, these will become your limitations. (Unless you are a mad genius and build a new robotic body for yourself.) This is why social skills are so important, because making other people do what you want, is a way to overcome these limitations.
I assume that some of this would *not* apply to a smart artificial intelligence. That it could run faster, or make copies of itself (to work in parallel, but also as backups). This could be more like 10 or 100 Terrence Taos that are perfectly loyal to each other, think 10 times faster, can share their memories, and are immortal in some sense (like, if you kill one of them, he later respawns in the factory, and only the few last moments of his memory are truly lost). They could specialize at multiple things, each of them focusing fully at one. They could be quite scary.
2) No one really knows. But after some decades of very slow progress, we also see some things happening surprisingly fast. No one knows whether superintelligence will be one of those things.
The first AI, the one that will supposedly doom us all if it isn't properly aligned, will probably also have similar limitations. It likely won't fit, or at least won't run, outside the customized room full of computronium it was designed for. Or if it can adapt itself to run on a commercial server farm, it will do so at the expense of seriously impacting the performance of that farm to the point where its owners will note that it isn't doing what *they* paid bignum dollars for it to do and start doing some thorough and intrusive maintenance. And trying to disperse it across a botnet of ten thousand hacked PCs, will likely cripple it with latency.
Eventually we'll have to deal with really powerful, versatile AIs (or go full Butlerian Jihad or something). So AI alignment is an important thing. Just, not something we have to get absolutely right the first time.
Since the inevitable AI discussion has come up, here's something I've been wondering: why do we have reason to believe that explosively self-improving AI is a strong possibility once the AI is at a "human" level of intelligence (whatever that means) or above, while we also have strong reasons for believing that such an explosive level of self-improvement isn't possible in people?
For instance: there have been hundreds of thousands of extreme geniuses born up until now - all of whom were/are an order of magnitude more intelligent than us plebs by measured IQ. Why did none of them decode their own brains and then invent a means of making themselves even smarter? Why is this so unlikely, but AI turning itself into God is a worrying possibility?
I think the idea is that we've been able to produce steady improvements at artificial intelligence, in ways that we haven't been able to produce steady improvements at natural intelligence. Since the improvements we can do in artificial intelligence depend on the amount of intelligence we can bring to bear on the problem, the idea is that once artificial intelligence at slightly greater than human level (assuming that's a meaningful thing) can be applied to the problem, the rate of improvement at artificial intelligence will start to increase. As long as the problem of improving artificial intelligence doesn't become exponentially harder just after the point of reaching human level (whatever that means), that suggests that there should be a period of rapid increase soon after we reach that level, faster than whatever came before.
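To make that "unless improvement gets exponentially harder" step concrete, here's a toy model (my own illustration, not anything claimed above): capability grows at a rate set by current capability divided by problem difficulty, and everything hinges on the hypothetical hardness exponent k.

```python
# Toy model of recursive self-improvement (a sketch, not a claim about real
# AI): capability I grows at rate I / difficulty, with difficulty = I**k.
# The hardness exponent k is hypothetical; it stands in for "does improving
# intelligence get harder as intelligence grows?"

def trajectory(k, steps=50, capability=1.0):
    """k=0: difficulty is flat     -> capability doubles every step (foom).
       k=1: difficulty tracks I    -> capability grows linearly.
       k=2: difficulty outpaces I  -> capability crawls (~sqrt growth)."""
    for _ in range(steps):
        capability += capability / (capability ** k)
    return capability

for k in (0.0, 1.0, 2.0):
    print(f"hardness exponent k={k}: capability after 50 steps = {trajectory(k):.3g}")
# k=0 -> ~1.1e15 (explosive); k=1 -> 51 (steady); k=2 -> ~10 (diminishing)
```

The point of the sketch is just that "fast takeoff vs. slow takeoff" is not a property of intelligence per se; it's a property of how the difficulty curve scales, which is exactly the unknown in contention.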
That's a massive, mountain-sized "if". And, as I've pointed out elsewhere, it is one of a number you need to grant for the exact scenario that excites so much discussion to play out.
It also requires one to believe that the regular spasms in AI research (which inevitably produce interesting new tools for specific problems, but have not yet provided progress toward a theory of "general" intelligence - whatever that is) are leading slowly upwards in a grand sweep rather than the stutter-stop progress they seem to convey. We might (note: might) just be on track to end up, in 100 years' time, with a sparkling but unconnected bag of tricks for solving a bunch of specific problems and no way to put it all together.
Again, the part that fascinates me about my original question is that, with the limited actionable evidence we have available (versus none at all for computer-enabled AI), it may be easier to make a super-intelligence the biological way by just smushing a bunch of grey matter together. And so we've had forever for some bright specimen of the human race to come up with a way to make biological superintelligences out of us, and yet here we all are.
My operating theory is that a) intelligence is just a big, hairy problem that nature partly solved over 500 million years, apparently by fluke, and b) that super-geniuses are apparently just more interested in constructing epicycles, coming up with novel mathematical methods, and generally navel gazing, than they are in ruling us all as god-kings or super-charging their minds. This, scant though it is, is pretty hopeful stuff where the distant prospect of AGI is concerned.
Presumably because programming and electronics manufacturing are demonstrably easy when compared to genetic engineering. Biology is messy and far more unpredictable.
Wouldn't electronics and programming sophisticated enough to produce an equivalent to a human brain be, by definition, just as complex and intractable though?
More to the point - as far as we can tell, it may actually be easier to achieve higher levels of intelligence using braincells than transistors. Remember that we have not yet definitively proved or disproved that intelligence is just a function of how many neurons can be packed into a single cranium. Remember also that we have a few billion examples of human-level intelligences created using neurons, and none created using transistors.
Regardless, this question isn't just about the raw difficulty of the task. It's asking why it's taken more or less for granted that we needn't kill off the next John von Neumann lest he/she decide to tinker with his/her own already-superior mind and rapidly ascend to godhood. Why is this unlikely, but exploding AI isn't?
If we knew how to construct biological brains, then we'd probably have good ideas on how to build ones larger and better than any that we currently find in humans. The production function for biological brains is something we have limited control over. We can't just plug two brains into each other and scale human intelligence. We try basically as hard as possible to do this with social organizations like governments and corporations. Our success is limited, but notice that we still worry about corporations being out-of-control and too competent at maximizing profit at the expense of things that we care about, and this results from the corporation's values not being aligned with the public's at large. Unaligned values + high competence can produce bad outcomes, and our ability to respond effectively to these bad outcomes can be at odds with the entity's competence. If Big Tobacco was competent enough, we might have never been able to educate the public on the harms of smoking and pass taxes on it.
So if we create an artificial brain, that will be the result of understanding how to do so. We will have much more control over the production function of this artificial brain, many more knobs to turn than we have with biological brains. It's possible that we run into unforeseen issues with scaling intelligence. It's possible that intelligence itself gets really hard beyond human levels. There seems to be evidence that very basic scaling attempts (e.g. GPT-2 vs GPT-3) lead to a large increase in competence. It's possible that such scaling doesn't apply to general intelligence, but we don't know. And so just the possibility of superintelligence is worth worrying about.
I think I _mostly_ agree with you...but it isn't necessarily clear to me that, when you get to the level of programming complexity that is AI, that it's necessarily simpler/easier to manipulate than biology is. Since we haven't yet achieved general AI, we don't know what level of complexity it will require, so we can't answer that question. _Maybe_ it will be simpler and easier to change/improve than biological based intelligence, but maybe that level of intelligence _requires_ that level of complexity and increased intelligence will actually be _more_ complex and _harder_ to manipulate in a way that is linear (or even exponential!) so that the AI still lacks the ability to meaningfully improve itself.
I don't think we know for sure which direction it could fall, and both seem plausible to me.
> I think I _mostly_ agree with you...but it isn't necessarily clear to me that, when you get to the level of programming complexity that is AI, that it's necessarily simpler/easier to manipulate than biology is.
I think it's clearly harder to change biology. Any biological entity necessarily encodes a considerable amount of non-intelligence-related information in its genome or epigenome due to its evolutionary history, requires a delicate biochemical environment for its function, and we understand almost none of it in a purely mechanistic sense. Any changes to extend intelligence are like playing a game of Jenga while walking a tightrope over a snake pit.
AI seems to be a purely informational problem that doesn't carry this baggage, and for which we have a well-developed mechanistic understanding that discards irrelevant details (computer science). We also have formal models of provably self-improving systems in the form of Gödel machines:
Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements, https://arxiv.org/abs/cs/0309048
It seems to me that any electronic entity necessarily relies on a considerable amount of delicate, non-intelligent infrastructure that encodes some hard physical limits on what it can do. For instance; if we do the maths, and discover that a suitably-large (but perfectly optimised) neural network to simulate a human brain-equivalent requires however-many terabytes of storage, however-many petaflops of computing power and however-many kilowatts of power, then surely it's not too hard to just make sure that the infrastructure provided to such a system is significantly less than that? And, you know, keep a hair-triggered deaf-mute manning the main power supply with an axe at all times just in case?
However; even to engage with the above argument is to grant a whole raft of things that I don't think are massively tenable:
- that we'd ever be interested in a 'general' intelligence when our own clearly is not (and can in any case be found for cheap anywhere where food is plentiful);
- that, granting such an interest, it would be entertained for any sort of economic purpose (in the same way that engineers are not currently racing to construct the walking, swimming, flying and grass-powered machines we know to be possible by observing ducks);
- that, having granted such an interest and such a purpose, it would be within our power to develop one without also developing dozens of other technologies that fundamentally alter our lives well before that point so as to make the question of consequences irrelevant;
- that such intelligent systems, once developed, should show a capacity for organic improvement over optimisation and iteration;
- that such a process should be linear rather than a series of exponentially-more difficult steps that stymie each new iteration/improvement, and require ever-escalating access to resources which cannot be had simply by asking for them;
- that such a system should be stable enough to form long-term goals and plans rather than being an even greater mass of neuroses and self-sabotage than we are;
- that, having gone the long way around in developing such a system to start with, we would not also have developed the tools to forestall all the major foreseen issues as well;
- that, having granted all of the previous, we would even be able to foresee the most important issues from where we stand in the ignorant present; and
- that, having granted all of the above, the specific scenario of superhuman AI followed by exponential self-improvement and loss of human control is the most likely of the roughly 1 billion possible outcomes of such a long, complicated, deeply specific chain of events.
As always, the discussion around AI ends up including so many very specific premises that its conclusions seem self-ordained. It's the modern version of philosophers debating how many angels can dance on the head of a pin.
> - that we'd ever be interested in a 'general' intelligence when our own clearly is not
There are scenarios where we are interested in this, and scenarios where general intelligence is created accidentally while attempting to tackle some other optimization problem. Either outcome is feasible.
> - that, granting such an interest, it would be entertained for any sort of economic purpose
I mean, dumb AI is already a huge money maker. This point isn't even in contention.
> - that, having granted such an interest and such a purpose, it would be within our power to develop one without also developing dozens of other technologies that fundamentally alter our lives well before that point so as to make the question of consequences irrelevant
Possible, and this is the outcome Musk is pushing for with Neuralink, ie. merging humans and machines to mitigate AI advantage. Without that, historical trends suggest the opposite outcome: we are becoming increasingly more dependent on machines and information systems.
>- that such intelligent systems, once developed, should show a capacity for organic improvement over optimisation and iteration;
These machines could be designed this way (see my reference to Goedel machines), and some of them arguably would because if the AI is smarter than you and your competitors, then you'd be stupid not to exploit it to design the next generation product, including the next gen AI to preserve your advantage. All of the required incentives are already in place.
> - that such a process should be linear rather than a series of exponentially-more difficult steps that stymy each new iteration/improvement, and require ever-escalating access to resources which cannot be had simply by asking for them
No one is assuming linear progress. Intelligence necessarily has an asymptotic limit due to the Bekenstein Bound, ie. above a certain information density it would collapse into a black hole. The question is, do you think it's plausible that the human brain is anywhere near that limit? Clearly not, and so multiple orders of magnitude more intelligence beyond human reasoning is very plausible.
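For concreteness, here's the standard Bekenstein bound worked through for a brain-sized system (a back-of-the-envelope sketch; the radius and mass figures are rough assumptions of mine, not anything from the comment above):

```python
# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits, for a system of
# radius R and energy E. The radius/mass values below are rough assumptions.
import math

hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 2.998e8                # speed of light, m/s
R = 0.067                  # m, rough radius of a human brain (assumption)
m = 1.5                    # kg, rough mass of a human brain (assumption)
E = m * c**2               # rest-mass energy, J

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Bekenstein bound for a brain-sized system: ~{bits:.1e} bits")
# ~2.6e42 bits. Even a generous estimate of what the brain actually stores
# (~1e15 synapses at a few bits each) sits roughly 26 orders of magnitude
# below this, which is the "nowhere near the limit" point made above.
```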
That said, no doubt our current technologies have limits which will necessitate different computational substrates (maybe optical computing), but we're already applying AI to these problems, so this is part of the progress to come.
> - that such a system should be stable enough to form long-term goals and plans rather than being an even greater mass of neuroses and self-sabotage than we are
Neuroses result from messy biology and evolutionary baggage. This isn't an issue for AI. Certainly it may have its own quirks, but that should worry you *more* because it's completely unpredictable.
> - that, having gone the long way around in developing such a system to start with, we would not also have developed the tools to forestall all the major foreseen issues as well;
What incentives do you think would lead to these results, aside from the people explicitly working on AGI dangers?
> - that, having granted all of the previous, we would even be able to foresee the most important issues from where we stand in the ignorant present; and
The most worrying conclusion is that we *can't* foresee all the dangers, but the ones we *can* foresee as being plausible are already terrifying enough.
I frankly don't think AGI danger is anything like the mental masturbation we sometimes see from philosophy. The error bars are wide, but it's clear that AGI can be an existential threat in the future. There are many more than 1 in a billion terrible outcomes for humanity.
All of the incentives to invent AGI already exist, and that's why I disagree that AGI becoming a threat necessarily requires a "long, complicated, deeply specific chain of events". You could say the same for human flight. Certainly we would have had no reason to believe predictions that it would be achieved specifically in 1903, but we had considerable reason to believe humans *would* achieve flight at some point.
> Does super-intelligence translate to super-powers? Like, if Terrence Tao wanted to be president or a billionaire could he do so easily? What if he was twice as smart? 10x? 100x?
At 100x as smart, he wouldn't have to bother with politics at all, but could probably seize power over critical systems directly if he wanted to. We simply don't really know the limits of an intelligence that's 100x smarter than our smartest people, but the dangers it could entail are extremely high.
> 2) Will super-AI happen all of a sudden (in years vs decades)?
The unpredictability is one of the dangers. We could already have enough computational power while someone just hasn't quite figured out the trick; if they stumble onto it by accident, it might be too late to contain by the time they realize what they've done.
Just imagine if it were common for people to have their own hobby biolabs, where they regularly experiment with making genetic alterations to cowpox. The vast, vast majority of any such mutations will come to nothing, but there's a non-negligible chance that someone will accidentally create smallpox 2.0 and devastate the world. This analogy is not necessarily as far fetched as you might think, at least in the near future.
What does 100x smarter mean? It could be that 15 IQ points (1 sigma) equals the ability to solve a problem in 1/100 the time but the complexity of problems increases in such a way that it still manifests as 15 IQ points.
It's not clear to me that an 8-sigma intelligent human is in any way superhuman. There have been enough six-sigma individuals (about one in a billion) that we should have noticed if it were.
Good question. There are different ways to calibrate this, but the context here is superintelligent AI that's much, much smarter than humans, so that's the scale you should be using. I don't think a human with that degree of intelligence has ever existed, or probably could ever conceivably exist without serious genetic engineering.
"At 100x as smart, he wouldn't have to bother with politics at all, but could probably seize power over critical systems directly if he wanted to."
Could he? Really? By what means? This seems to be the heart of the question: that propositions of this type smack of the wishful thinking of nerds.
In actual fact, an ape with a stick can brain a genius not just *as* easily as a jock, but more easily. Internetdog is not alone in feeling that the burden of proof rests on those who claim that this principle stops applying after some arbitrary threshold of genius.
> Could he? Really? By what means? This seems to be the heart of the question: that propositions of this type smack of the wishful thinking of nerds.
Nearly everything is connected digitally these days. It's already fairly trivial for moderately intelligent hackers to compromise our networked systems, so take that as a given.
Procurement orders, staffing, orders and directives, and so on are all communicated over networked systems. A super AI or person could insert, delete, or manipulate these in multiple ways to achieve their goals. Thus, even systems that are disconnected from the internet have a path through meatspace that can be manipulated indirectly via digital means.
Someone that is two orders of magnitude more intelligent than our smartest people shouldn't find this too challenging since such exploits have already happened with ordinary intelligence. If you don't believe this is possible, then I don't think you understand how significant an "order of magnitude" really is.
Sure - go and order an ant around the room with your thoughts. Or, if that seems a tad contrived, go and pilot one around like your own little ant avatar using pheromones.
The fact is that the only way we presently know of to control insects directly is via worryingly direct methods such as implanting electrodes in them. Even then, you're only gaining a limited amount of control over the poor abused creature by what amounts to brute force applied directly to its brain. You can't manipulate it with subtlety or finesse, or bend it to your will in any more sophisticated of a way than an ox driver with a whip.
In any case, my point (which was exaggerated for comedy's sake) is that some trivial stat like "100x as intelligent"* does not, in and of itself, grant the more intelligent party some sort of magical insight into/control of the other. So the idea that a sufficiently smart AI could just perform "social hacks" to puppet us to its will without our knowing about it is sort of ludicrous on that basis. Having created what amounts to an alien mind, it would probably be just as baffled by us as we are by the other, lesser, minds around us. And, in the end, a malevolent super-intelligence controlling us would end up having the same results as we are used to in our day-to-day lives: vague, unpredictable, and with large amounts of brute force and coercion applied.
If you're willing to then grant that we're dumb enough to put said super-intelligence in a position to do so regardless of it applying such known methods (which is, admittedly, depressingly possible), then a lot of the rest of the arguments about AI alignment fall away immediately as being pointless in the face of our overwhelming ability for self-destruction.
* According to some estimates I could find with about 5 seconds of googling, the average human (86 billion neurons) has 344,000 times more neurons than the average ant (250,000).
I don't grant either of your orders of magnitude, but I also don't find either of your conclusions implausible. Cats can be trained, and controlling ants via pheromones is totally feasible in principle.
Of course, as I'm sure you know and are intentionally glossing over, communication barriers raise their own obstacles that must also be surmounted, which isn't an issue with a superhuman intelligence (although it would be for an AI).
All of this depends on people not being aware of the AI's actions. Once people are aware, they can actively decline to follow the AI's wishes. Worst case scenario, we scrap the internet (physically tear down the infrastructure) and start over on a lower level of development. This is very bad for humans, and many will die. This is fatal for AI. Really dumb humans can tear up or disconnect internet cables.
"All of this depends on people not being aware of the AI's actions. Once people are aware, they can actively decline to follow the AIs wishes. "
You think a superintelligence will do things in a super obvious, understandable way that allows it to be caught? It won't be smart enough to not get caught? I think you display a failure of imagination, and think we can outsmart a superintelligence.
It's not a matter of outsmarting it on a level playing field. It's a matter of logistics, primarily. A computer system has a lot of range and ability in a computer system (i.e. the internet), but an extremely limited range beyond that. Can a very smart computer order a bunch of stuff on Amazon? Sure! Can the computer open the boxes that the stuff arrives in? No! So, the computer needs intermediaries, humans, to do a whole lot of work for it. If the humans figure out that the AI is running its own game, then the humans have a massive advantage. Humans are very suspicious once they are aware of a concern, so even a dumb human can refuse to help a computer system (or unknown requests from a potentially non-human source).
Yes, there are scenarios where humans can catch it in time. Our manufacturing infrastructure is also fairly automated though, and this will obviously progress, so the real question is whether the AI could be detected before it commandeered enough infrastructure to preserve itself. This scenario was featured in the TV show Person of Interest.
And these are only the Skynet/hostile AI scenarios. There are plenty of nightmare scenarios that don't even have anything to do with Skynet-like conflict, like the paperclip maximizer. For instance, an AI that's tasked with solving protein folding optimization problems and connected to a protein synthesis system could accidentally synthesize some new prion diseases or viruses.
Have you ever worked at a factory? Are you aware of what is required in retooling an operation to produce something new? These are not easy accomplishments, and they are not automated. In fact, they would be very difficult to automate, due to the fact that changing the automation that exists is a big part of the problem.
Is it forever impossible? No, but there are dozens of major steps in automation - automated logistics/trucking, automated mining, automated tool making, and so on, that need to take place before an AI would have anything tangible to take over. All of those processes require massive human involvement now, and even the most automated would break down within minutes of humans withdrawing their attention, let alone humans actively thwarting the AI. Loosening a single bolt on a big machine could cause it to break down in short order, with no current means for an AI to diagnose or repair that machine.
I think many who worry about AI know a lot about computers and not a lot about the difficulties of doing anything else.
Suppose you could move 100x faster than you can now with the same effort. Do you think that you wouldn't be able to become very rich with such an ability if you wanted to?
Notice how such super-speed allows you to become a top athlete in many sports. Take football. Speed isn't everything in it, but 100x super-speed would compensate for your lack of skill compared to other top players and allow you to easily triumph over them all.
Does super-speed translate to superpowers? Obviously! Is intelligence less or more important for our society than speed? Isn't it the more potent superpower, then? Then why isn't it obvious? I think it's because we just lack the ability to imagine ourselves super-intelligent, while we can easily imagine ourselves having super-speed or super-strength. You do not have to already be super-fast to imagine what it is like to move even faster. But you do need to be super-intelligent to think super-intelligent thoughts.
I can imagine a number of ways AIs could take over the world, I'm just having trouble imagining how they might take it over *right away*, or how it might happen by accident in the early stages. Of course, others have thought about this more than me so I was looking for ideas (and Scott cleared up point (3) at least).
My non-expert expectation about AI is that like many other technologies it's easy to overestimate in the short-run, and to underestimate in the long-run.
For example people have had high expectations of AI that weren't met back in the 1970s and 1980s [1]. In the late 90s people were talking about the "new economy" where the internet would be ubiquitous and technology companies would dominate everything. After the dot-com crash they were disillusioned, but it basically came true two decades later.
Right now self-driving cars are a non-trivial problem where it's hard to put a definite timeline on it. I expect we'll be able to have AIs reliably drive a car before they can take over the world.
In the long run I could imagine a number of doomsday scenarios - you could have self-replicating nano-machines, bio-weapons, conventional weapons hijacked by AIs, AI-aided political subversion or repressive governments, some nuclear-type physical weapon discovered by AI that is easier to make (and so harder to stop proliferation), etc. I just don't see how these physical-world existential dangers manifest right away, even if we rapidly go from "unable to drive a car" to "superintelligence" (which itself seems like a big assumption). Even for hacking it seems like you would need to give a poorly-understood super-AI unrestricted network access.
"I can imagine a number of ways AIs could take over the world, I'm just having trouble imagining how they might take it over *right away*, or how it might happen by accident in the early stages."
That's exactly what I'm talking about! We can imagine some proxy for higher intelligence, like inventing new technologies (but only ones we have already thought about) or thinking faster (but only the same kind of thoughts we think at our level of intelligence). At best we can imagine a higher-intelligence being instantly arriving at conclusions, skipping all the mistakes and mental stumbling (but only the same conclusions we could arrive at).
But we can't actually imagine what it is like to think super-intelligent thoughts, because to imagine them we would need to think them, and we are not super-intelligent ourselves. What it is like to invent technologies that we couldn't even think of, or arrive at conclusions that we couldn't grasp in principle. And that's why we are completely missing a whole dimension (or even multiple dimensions) of strategies that a super-intelligent being does not miss and can use. Imagine trying to contain a 3-dimensional object in a two-dimensional prison. No matter how thick the walls are, the object can just go over them in a way that two-dimensional beings couldn't even imagine.
Intelligence is the thing limiting our possibilities in the decision-making process. For a more intelligent being there are just more possibilities to achieve the outcomes they would prefer. A better chess player can win in situations where a worse player wouldn't be able to. From the worse player's position, the better chess player can win unwinnable games. It's a superpower that seemingly defies logic itself.
But wait, you may say. Can't we arrange a situation on the board that even the best possible chess player wouldn't be able to win? Maybe their king is already in check and they do not have any other pieces, and we have all of ours to corner the king? Well, I think we can. It would be quite interesting to find the least gruesome condition in which a perfect chess player wouldn't be able to win, but in principle it seems possible. But this is where the metaphor breaks down. It's only possible because we know all the rules of chess. And we do not know all the rules of reality.
Sure, there are probably things in this world that we can't imagine and have no evidence for. But at some point it becomes almost a theological question, similar to Pascal's wager. Maybe this particular question is different, but as a practical matter I think it makes sense generally to default to not updating our internal model of the world until we have more evidence about how things work.
This is a little abstract, but one way of thinking about how increased intelligence might apply to the world is asking whether the complexity of the world is linear, combinatorial, or even chaotic.
Take chess for example. If the complexity were linear, someone who thought twice as fast might be able to see 10 moves ahead instead of 5. But the actual complexity is exponential, since each position might branch into 6 other positions depending on the moves taken. Increased computational power has diminishing returns. So in exponential systems we'd expect a relative advantage but maybe not something totally qualitatively different (computers haven't simply solved the whole game of chess, despite it being limited to an 8x8 board).
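A quick sketch of that point (my own, reusing the toy branching factor of 6 from above; real chess branches more like 30+): in a linear world, extra compute buys proportionally deeper search, while under exponential branching even 10,000x the compute buys only a few extra plies.

```python
# Linear vs. exponential returns to compute in game-tree search (toy numbers).
import math

b = 6              # toy branching factor from the comment; chess is ~30+
base = b ** 5      # compute needed to search 5 plies exhaustively

for mult in (1, 100, 10_000):
    linear_depth = 5 * mult                # linear world: depth ~ compute
    expo_depth = math.log(base * mult, b)  # exponential world: depth ~ log(compute)
    print(f"{mult:>6}x compute -> linear: {linear_depth} plies, "
          f"exponential: {expo_depth:.1f} plies")
# 1x -> 5 vs 5.0; 100x -> 500 vs ~7.6; 10000x -> 50000 vs ~10.1
```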
(As an aside: in chess, beginners make a lot of mistakes, but at the higher levels of play I'm under the impression that the better players aren't able to turn around "unwinnable games" so much as avoid getting into them. So if someone is down a piece they might just resign rather than pulling off a logic-defying comeback).
Even worse than exponential, a fair amount of real-world systems seem chaotic. Things as simple as a double pendulum [1], but also the weather and maybe geopolitics. In chaotic systems, small differences in initial conditions get reinforced over time, so even if they're deterministic on some level the approximate present doesn't predict the approximate future. And it's basically impossible to measure the present in the infinitely precise way needed to predict the future.
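To see how fast that blows up, here's a minimal double-pendulum simulation (the standard textbook equations with equal masses and arm lengths, RK4 integration; the specific numbers are just my illustration). Two copies start a billionth of a radian apart and end up in completely different states:

```python
# Double pendulum: sensitive dependence on initial conditions.
# State s = (theta1, omega1, theta2, omega2); equal masses M and lengths L.
import math

G, L, M = 9.81, 1.0, 1.0

def derivs(s):
    t1, w1, t2, w2 = s
    d = t2 - t1
    den = 2 * M * L - M * L * math.cos(d) ** 2
    a1 = (M * L * w1**2 * math.sin(d) * math.cos(d)
          + M * G * math.sin(t2) * math.cos(d)
          + M * L * w2**2 * math.sin(d)
          - 2 * M * G * math.sin(t1)) / den
    a2 = (-M * L * w2**2 * math.sin(d) * math.cos(d)
          + 2 * M * (G * math.sin(t1) * math.cos(d)
                     - L * w1**2 * math.sin(d)
                     - G * math.sin(t2))) / den
    return (w1, a1, w2, a2)

def rk4(s, dt):
    k1 = derivs(s)
    k2 = derivs(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = derivs(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = derivs(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + t)
                 for x, p, q, r, t in zip(s, k1, k2, k3, k4))

a = (math.pi / 2, 0.0, math.pi / 2, 0.0)         # both arms horizontal
b = (math.pi / 2 + 1e-9, 0.0, math.pi / 2, 0.0)  # perturbed by 1e-9 rad
dt = 0.001
for step in range(1, 30001):
    a, b = rk4(a, dt), rk4(b, dt)
    if step % 5000 == 0:
        print(f"t = {step * dt:4.0f} s, divergence = {abs(a[0] - b[0]):.2e} rad")
```

The divergence grows roughly exponentially until it saturates at "the two pendulums are doing completely unrelated things," which is the sense in which an approximate present stops predicting the approximate future.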
Thinking about chess, computer science, math, politics, etc, an exponential explosion of possibilities seems to be almost the norm - it's easier for me to think of exponential / chaotic systems than it is for me to think of linear ones.
So this view of things is part of the reason why I'm skeptical - even if AI is more efficient than humans on an energy/computation basis, it surely won't be free. In an exponential world you would run into energy constraints well before any kind of omniscience.
Of course there are specific technological thresholds that grant a lot of power - things like nuclear weapons. Even if AI didn't take over the world on its own initiative, if it merely aided the discovery of similar technologies we don't know about yet that could be super dangerous. I'm not sure how alignment could prevent that specifically though.
I'm not sure how Pascal's Wager is relevant here. Pascal said that even if we had little to no evidence in favour of God's existence, it's reasonable to worship Him, as the reward is infinite, as is the punishment for not worshipping. But as we acknowledge that there are infinitely many possible Gods with little to no evidence in favour of their existence, some of whom may even prefer not to be worshipped, Pascal's math doesn't really work out. Never mind the issue with the Kolmogorov prior on the existence of an omniscient being.
Do you claim that there is as much evidence in favour of the existence of more intelligent strategies, which we can't see with our current level of intelligence, as there is for the Christian God? This seems wrong to me. I've been both in a position where I can see a strategy which another person can't, and in a position where I'm not smart enough to see a possibility that someone else notices. And while a little surprising, these situations have never felt totally mind-blowing. People have been inventing new things throughout history and it feels not surprising at all. Are the ideas of known unknowns and unknown unknowns even controversial? It seems that I'm really missing your point here.
World complexity is an interesting topic and I would be glad to explore it, but I have a feeling that we should first agree that super-intelligent agents can in principle arrive at ideas and strategies that we can't even imagine. And only then go into the details of how hard or easy it would be for them.
If we were to weigh the importance of things, it would be something like the evidence that it is true times the consequences if true.
Where it reminded me of Pascal's wager is the argument about AI often puts most of the emphasis on the consequence, in the face of very indefinite evidence.
Sure, AIs could cross some threshold of super-intelligence that grants them real world powers beyond our imagining. And the reason that we don't see concrete mechanisms for this is because it is beyond our imagining. But is this even a falsifiable belief?
Of course AI will be able to do things people can't (and to some degree already does). But what people are concerned about seems to be near-term, destroy-the-world existential risk.
Reasoning about specific superhuman strategies doesn't seem possible, but assigning probabilities to unknown unknowns in the absence of evidence doesn't seem reasonable (since that can lead into Pascal's-wager territory).
The rest of my comment was trying to reason about it through an indirect approach, by thinking about the shape of power returns to intelligence in fields we are aware of, and operating under the assumption that initial AIs may have much greater computational ability than humans, but it will still be finite since the hardware will have an energy cost.
I think moving 10% or 20% faster would be better in sports, if you want to make money. Dominating football games by almost teleporting around the field would just cause them to change the rules. Hoping your 100x is under control.
1.) I don't think so. I think a community of very intelligent people thinking intelligence is the single most important thing in the universe is a bit, well, obvious, when you consider it.
2.) I don't think super-AI is even possible, and arises from considerable confusion about the nature of intelligence, and the nature of the universe we occupy.
3.) The answer looks like an obvious "no" here to me.
4.) It seems relatively obvious to me that the problems alignment researchers are trying to solve are, when you get down to it, the same problems anybody designing a government are trying to solve, and when you sit down and think about that, it suggests that alignment is actually kind of a bad thing (imagine that our ancestors, at any point, successfully aligned government with what they then thought the correct morality was). However, as bad as it is, it's strictly better than the alternative, in the case that I'm wrong and it's actually relevant.
"suggests that alignment is actually kind of a bad thing (imagine that our ancestors, at any point, successfully aligned government with what they then thought the correct morality was)"
This is just you being obviously biased by the fact that you happen to agree with the morality of the present day. Even putting aside the fact that this is largely a fact of you being RAISED in today's society (rather than being the product of some objective analysis of the virtues of modern western morality), there's absolutely no reason to think that future humans or AIs will have moral values you deem superior to ours *by virtue of existing in the future*.
A person of our own day would not deem our descendants' moral values superior to ours, which is why we would want to impose our moral values on them, the degenerates. Such is the way of history.
As is also the way of history: observing that I would not wish my ancestors to have succeeded in imposing their moral values on me, I come to the conclusion that my descendants would not wish me to impose my moral values on them.
"it suggests that alignment is actually kind of a bad thing (imagine that our ancestors, at any point, successfully aligned government with what they then thought the correct morality was)"
I don't know. Hard to imagine that 16th century Frenchmen or Italians succeeding in this wouldn't have made the West a significantly better place than it is now.
Racism not invented, social engineering not invented, culture of Christendom (i.e. Europe) seen as obviously superior with no self-hating nonsense, hereditary aristocracy preventing striverism, functional monastic system filling an immensely important social niche, minuscule, lackadaisical state barely able to tax salt, legal duelling, hatred of The Turk, admiration of The Turk's neat carpets, universal forced obeisance to the Pope. Attempts to build Modernist architecture probably punished with live burial and/or Modernism not invented.
This is only a preliminary list off the cuff, though.
The Church violently opposed dueling and punished it with excommunication, so it's a bit odd to see your praise for Papal power and Christian culture co-existing with nostalgia for dueling.
"Universal forced obeisance to the Pope" is a joke obviously, I thought it was obvious which part was deranged as a self-deprecating downward spiral gag but perhaps not. That said, I was assuming alignment to a sort of "average/intellectual person's morals", not any specific individual's. In period, the huge quantities of dueling during the last decades of the 16th century weren't legal at all – the last legal duel in France occurred in 1547 – and kings tried occasionally very hard to suppress it, but *popular morality* endorsed it, which is why it persisted for centuries. The tax is the same thing, Henri IV is probably the only French king who would have disdained to use modern panoptical technology to squeeze the last drop of excess blood out of every peasant and burgher in the land, but obviously this wasn't in line with *period morals generally*. Your average random guy (Guy de Randôme?) would have approved of the Pope *but also* of fighting, even if those things were to some extent incongruent. In fact, this kind of combination of respect for the pious with utter disregard for the idea that one ought to follow their example or anything of that sort is something I wish we could have more of in our time.
Racism was never "invented". Ignoring the utter meaningless of the word, 'treating people of different races differently' is an emergent behavior, resulting from a natural viewing of different looking people as different and the enormous mean behavioral differences between races. It was not a formal thing that people developed and made everyone else follow.
I agree, we should reorganize the entire course of western civilization because a tiny fraction of the population will be better off in a particular way if we do.
Are you sufficiently selfish to insist on this even if it would make society worse overall? What I'm saying is that *in sum total* we've pretty self-evidently blown it since then; I don't really think any specific policy is so crucial that I'm willing to fuck it up for everybody just so I can have that.
Yeah, I'm sorry, in that case your argument doesn't really generalize to a broadly applicable principle you can use to convince non-you persons. Anyone who would think from behind a Rawlsian veil of ignorance that making society worse for everybody else in order to improve it for 2% is... well, for one thing he's a hypocrite if he then also favors making the top 2% richest pay taxes.
Your statement raised further questions for me but we're already parlously close to the dread and forbidden Politics, so I'll leave those out and tap out of this subthread here in order to respect the bulls of Pope Alexander the Rabbinical.
"I think a community of very intelligent people thinking intelligence is the single most important thing in the universe is a bit, well, obvious, when you consider it."
I've just tried to consider it from this perspective and it doesn't really seem to work out.
Is it some general principle, true for all communities, that people with impressive quality X think it's the most important thing in the universe? It doesn't seem so. People in X-focused communities do tend to think that X is somewhat important, but not the most important thing that everything else depends on. And no other community assigns such apocalyptic importance to its X as the AI-risk community does.
So are very intelligent people specifically worse at this? Are they more vulnerable to such cognitive biases? It's possible, but it really seems that it's supposed to be the other way around.
Notice, also, that the opposite seems to work much better. Let's examine this phrase:
"I think people, which are not part of a community of very intelligent, thinking intelligence isn't that important is a bit, well, obvious, when you consider it"
Now this does seem like a general principle. People who lack some quality or are not into some activity tend to downplay its importance. And one can expect less intelligent people to be more susceptible to cognitive biases.
This particular tendency is a lot more common than you seem to think it is. There are obviously many communities in which it's not a good fit, but if it is? Well, religiosity has the same tendency, right down to the apocalyptic scenarios.
There are also quite a few interest groups with similar tendencies towards secular apocalypticism: oceanographers, climatologists, astronomers, geologists, computer scientists, adherents of various economic beliefs, political theorists. They will all claim to be the most important fields in the world, and their knowledge thus made the most important, at various times for various reasons. This isn't a recent thing. People like feeling like they're part of something important.
>How come our politicians and business leaders don't seem to all be super-geniuses?
I challenge this pretense. With some exceptions that tend to be related to inherited wealth and legacy, it seems to me that most (not all) of our politicians and nearly all of our business leaders are, if they remain successful over a long period of time, pretty smart. I think luck/chance can explain most outstanding short term successes.
It sometimes takes intelligence to recognize intelligence. Dumb people are not good at evaluating the intelligence of others. Are you certain you are evaluating our current elites properly?
I was comparing to Terence Tao, who was doing university math at 9yrs old, not the national median something. Being skeptical that they are super-geniuses isn't the same as calling them idiots. I'd expect if you dug up test scores or some other available proxy for intelligence (other than political success), they would be smarter than average but not close to the top in the entire country.
I also think that if you look up actual power and influence, politicians are more powerful and influential than average, but not close to the top in the entire country. There are a lot of things a president can do with their distinctive power, particularly in terms of military deployment, and certain modifications of already-existing government programs, but these aren't the sorts of things that most people actually want to do. In terms of ensuring that their children live happy and healthy lives, that they themselves have a lot of opportunities to do fun and interesting things, and so on, presidents probably have about as much power as the average American with a $400,000 salary. And even within the political realm, most things the president can achieve need the cooperation of lots of people (notably the Speaker of the House and Senate Majority Leader if it involves legislation, but even something purely executive like Bush's PEPFAR needs a whole bureaucracy to execute it, and thus likely was as much the work of several dozen other designers as it was of Bush himself).
Nancy Pelosi and Mitch McConnell are two individuals that have achieved a lot more than the average person in their position, and I think they really are quite a bit higher on the "intelligence" spectrum than the average national politician.
1. I tend to think intelligence correlates (weakly) with other good things, but the tails come apart pretty quickly. See Table 2 in Part III of https://astralcodexten.substack.com/p/secrets-of-the-great-families for some evidence that politicians have higher IQ than average, and high-ranking politicians have higher IQ than low-ranking politicians. But it's obviously a weak effect. Just as people who are good at baseball will probably have a weak tendency to be better than average at basketball just because they're physically fit but you still can't make a winning NBA team by rounding up baseball All-Stars. On the other hand, 0% of successful politicians are mice, chimpanzees, or five-year-olds (and not just because we don't let them run). Once you get to super-extreme differences, even small correlations become really important.
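A small simulation of that "tails come apart" point (my own sketch; the 0.3 correlation is an arbitrary stand-in for "weak," not a measured figure): among the top 1% on one trait, the average on a weakly correlated trait is well above the population mean, but far from that trait's own tail.

```python
# "Tails come apart" under weak correlation (toy numbers, not real IQ data).
import math
import random
import statistics

random.seed(0)
r, N = 0.3, 100_000   # assumed weak correlation; population size

# Correlated standard-normal pairs: y = r*x + sqrt(1 - r^2) * noise
pairs = [(x, r * x + math.sqrt(1 - r**2) * random.gauss(0, 1))
         for x in (random.gauss(0, 1) for _ in range(N))]

top_y = sorted(pairs, key=lambda p: p[1], reverse=True)[:N // 100]  # top 1% on y
print(f"mean x among top 1% on y: {statistics.mean(p[0] for p in top_y):+.2f} sigma")
# ~ +0.8 sigma: clearly above average (the correlation is real), but nowhere
# near the +3 sigma you'd get selecting on x directly -- the tails come apart.
```

Which matches both halves of the observation: successful politicians are smarter than average, and the very smartest people are mostly not politicians.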
2. Nobody knows! I agree that the best-case scenario is it goes very slowly and gradually and we have a lot of time to tinker with things. There's some reason to think this might be true - eg it took AIs 30 years from "played chess passably" to "beat human champion". On the other hand, it only took AIs a few months from "played Go passably" to "beat human champion", so right now we don't know what analogy to use for things we care about. I think most people surveyed believe AI can be superintelligent within a decade or less of being human intelligence.
3. Right now the two leading AI research groups are OpenAI and DeepMind. Both of them at least claim to be interested in alignment and pay attention to alignment work done outside their organization. I'll be posting soon on some conversations between Richard Ngo (one of OpenAI's leading alignment people) and Eliezer Yudkowsky, for example.
4. Yes, definitely this is bad. Some AI alignment people believe that it will be easier to align an AI to do something very simple like "prevent other superintelligent AIs until we solve harder problems" than to actually be aligned in general; if people take this route, then this might solve the human alignment problem too. Otherwise I don't know of any great solution here.
> On the other hand, 0% of successful politicians are mice, chimpanzees, or five-year-olds (and not just because we don't let them run). Once you get to super-extreme differences, even small correlations become really important.
To extend the thought experiment though, what if you put a human in a chimpanzee or five-year-old society? Ignoring the size advantage on the 5 year olds, I'm not sure exactly how they could use the additional intelligence to take over the world. I think you'd need to bring in access to weapons to really take over (uh, not that people won't do that with AIs at some point).
I'm sure I read somewhere that some early experimentalists brought up small children along with baby chimps - and the experiments were curtailed because instead of the chimps acting more like human children, the children tended to act more like chimps!
Whether that would be good or bad in terms of an AI risk analogy, I cannot say...
In a 5YO world? It's not only size. If you are not completely uninterested, if you actively participate, and if you are not really strange/creepy (in which case you will fare even worse in the adult world), you are basically the undisputed leader of any 5YO group, almost a god on earth.
Whether you'd enjoy the position is a different question...
Hmm, that's true although I'm not sure it's only an intelligence thing. To adjust for other factors, maybe we should make it so you are a 5 year old with adult intelligence in the 5 year old world (which would probably make it more challenging).
Even that analogy probably isn't perfect. I've seen it suggested that dogs are more friendly than wolves because they retain aspects of juvenile behavior into adulthood, and kids in general might have some instincts that would make taking over the world easier compared to adult chimpanzees etc.
Well, obviously you detect intelligence in others with your own cognitive abilities. So 5YOs will judge their peers with their typical 5YO brains. If you are an adult in a child's body, you will be judged, I guess, as a very inventive and super-competent friend if you are friendly and participate (which would require not being bored to death, or keeping at it even if you are). If you disengage, you will get tagged as too serious, boring, and reclusive, and mostly ignored, except when there is an issue the other kids know you can fix (the loner is strange, but he can install new games on the parents' iPad and prepare great hot chocolates, let's go find him). Probably the kind of experience most gifted children have...
Among 5YOs, bullying should not be a big issue. Among 10-15YOs, or among chimps, it could be, so you would need good social intelligence, to play the social dominance game cleverly, and to be at least of average physical strength...
The big issue is that to a 5YO (or a chimp), the world is a tiny group, maybe 10-50 friends. So "taking over the world" may be misleading. Something to consider for AI: maybe our world will just be a distraction, nice or annoying depending on whether the superintelligent AI is friendly and whether it finds us cute, from whatever it finds important, like 5YO games when you are thinking about your promotion. And still, we may be affected without even understanding the issues or the reason the AI acted like it did (like a 5YO in the middle of a divorce).
I tend to think about optimal AI as "glasses for brain": improving a blurry picture.
For example, picking up the single most promising molecule for treatment of some diseases out of ten million "pixels".
Of course people with glasses have caused a lot of evil (to the degree that they became official villains of the Khmer Rouge), so people with AI will do so as well.
I don't think AI research could be stamped out even if the whole world got together and agreed to do that (which it won't, of course). Among the things that should maybe be banned, AI is the complete opposite of nukes, which must be built in big, recognizable facilities and require expensive, hard-to-access materials. Huge progress can be made on AI with just cheap electronics and lotsa IQ points.
I think many people wish they could forbid AI research, but they're not trying because:
1. Even if the US/EU forbade it, China probably wouldn't, and then China would get AI first and it would be terrible. Even if China said they were forbidding it, would we trust them? Even if China said they were forbidding it and told the truth, Russia? India? Some secret team at Google? Three guys in a garage? We can't even coordinate well around climate change, which has way more political activism behind it.
2. People concerned about AI risk probably don't have the political clout to do anything like this, especially if people likely to profit from AI (eg Facebook, Google) fight us.
3. Right now the AI industry is very friendly to people concerned about AI risk, people move back and forth between industry and alignment academia freely, they agree to take everything seriously. If alignment people declared war on industry, probably we would lose pathetically and also turn lots of allies into enemies.
4. These things might change in the future and then it might be strategically correct to declare war on the industry and try to win.
I get that fighting AI development is insanely difficult, but it's not impossible. (@a real dog: Developing general AI is not something that one genius can do in his garage; GPT-3 was developed (from GPT-2) by 31 engineers. And China shows that you can ban crypto mining, so you can also ban AI research; for starters, concentrate on clusters of computers and software exchanges that touch on AI research.)
And I get that we would miss out on AI benefits, but the risk is not, as with domesticating horses, that you might suffer the occasional horse kick; it's total annihilation. Humanity can't get a little bit extinct, it's all or nothing, and that risk/reward is just not in our favor.
Scott's point 3 is a good point that I had not considered, thanks.
But more to the point: if you consider AI threat an existential risk AND think you can't prevent it, why not, like Tom said, get out of the car? Why don't the AI safety researchers apply to work with Elon Musk on his rockets (this having the added benefit of working on another existential risk: asteroids) instead of working on seat belts?
GPT-3 was only 31 engineers? That's not a literal garage, but it's still tiny by software industry standards. That's "one-story building in an office park" sort of small, and there are a lot of office parks on this planet.
Scrutinizing "clusters of computers" sounds even more impossible, since AWS, Azure, et al basically make a business out of selling chunks of computing power on demand. Restricting access to cloud computing would basically nuke the tech industry from orbit - everyone from tech giants like Netflix and Facebook to little one-office-block startups to individual college students makes use of these platforms.
(Bitcoin miners are a very specific type of computer cluster, and one that doesn't normally make use of cloud services because they need much cheaper computers to be profitable. It doesn't generalize.)
I think the option of preventing research is generally regarded as impossible, because AI research can be conducted in secret and there are a large number of actors who believe conducting research is in their self-interest. (And even if it weren't logistically impossible, it is probably also politically impossible.)
Sometimes it is proposed that the first superintelligent AI should (or would) immediately be used to suppress all competitors.
>i think this is like discussing seat belts in a car headed for a cliff
I don't think the people in alignment research necessarily see it very differently. Eliezer Yudkowsky seems quite pessimistic. I think they just also believe the brakes on the car (ability to stop AI research) are not only broken, but were never installed in the first place.
Can you blow up the car, preventing it from careening off the cliff? Probably, but that hardly seems better. Can you escape the car? Maybe, see: Elon Musk.
Well certainly a global ban on AI research, enforced via extreme vigilance and deadly force, is one way to handle AI risk, but comes at the cost of losing out on AI benefits. Handling AI risk is like domesticating horses; we could have never bothered, but we'll be infinitely better off if we can make it work.
I have the feeling there is a difference from other dangerous technologies, in the sense that the prevention is not only not enjoying the benefits, it's replacing many of the risks by similar (but better known) risks: AI risk is being ruled by despotic, non-revocable AI, and possibly killed by it. And an effective ban seems to imply much more authoritarian global government/regulation. A despotic, difficult to revoke, human organisation. Trading an unknown devil for a known one. Never a happy choice....
Tyler Cowen's Conversations with Tyler had a really good interview with Richard Prum, an ornithologist. (It was very interesting to me, someone who did not expect to be into birds.) At some point Richard made an argument that seemed to represent an anti-Molochian process. Somehow, flowers genuinely competed on some beauty axis, because that was the best way to attract pollinators.
It suggests to me that there is some way to structure competition such that Moloch does not always win. Or that there is some countervailing force that also exists if we do not insist on being pessimists. Is there anybody with expertise on evolution able to explain why we have beautiful flowers instead of ones that optimized for some invisible pollinatableness trait over everything else?
If you're looking at cultivated instead of wild flowers, bear in mind that people have chosen to cultivate flowers that they deem appealing and then that they selectively bred them for centuries.
It's true that cultivated flowers are particularly optimized for human appreciation. But most wild flowers that bees and birds find visually appealing also look pretty appealing to humans (except in cases where it depends on ultraviolet vision). Smells don't seem to have as much cross-species shared experience of beauty, but that makes sense because smell is so closely connected to specific dietary requirements. It's interesting that certain kinds of visual appeal do appear to have not just interpersonal agreement but cross-species and even cross-phylum agreement.
That contrasts with what I've heard of rainforests as fiercely competitive, with trees growing immensely tall to obtain sunlight. I would think of a small island as being relatively uncompetitive.
Well, I'm wondering if it's maybe a result of many optimization targets. If you keep increasing the number of vertices on a polygon, it eventually begins to look like it doesn't have any vertices at all (it looks like a circle). Similarly, could a system that's trying to optimize for so many things begin to look like it's not trying to ruthlessly optimize for anything? The system achieves metastability via the tension of many optimization targets.
"Is there anybody with expertise on evolution able to explain why we have beautiful flowers instead of ones that optimized for some invisible pollinatableness trait over everything else?"
I work in the field of evolutionary biology and have wondered about this as well. Among the things to explain, you can add that humans seem to really like some of the smells that flowers produce to attract pollinators (but not all of them, some fly-attracting smells are appalling!) To my knowledge, we don't currently know the answer to this question. My personal bet would be the one proposed by Dweomite, that it is man who has evolved to find beauty in flowers, not the other way around, as loving flowers has benefited us in the past.
Hmm, well, we are now breeding flowers for what we find beautiful. But before that, flower 'beauty' must have been all about attracting pollinators. That we happen to like some of the shapes, colors, and odors seems semi-accidental. (What else makes any sense?... What if we didn't have three color sensors, but only two?... We'd all be color blind. Oh, sci-fi story where someone genetically hacks a fourth color sensor into the eye... a different beauty.)
"That we happen to like some of the shapes, colors and odors seems like semi accidental."
I don't think it's accidental, in the sense of random. If we consider only insect-pollinated wildflowers, I think the vast majority of them would be between somewhat and very attractive to humans, which means that some sort of general explanation is needed.
And two general explanations that come to mind are (1) being visually salient for pollinators automatically makes a flower attractive to humans because the same properties are involved (for example being very colorful) (2) humans were selected to find flowers attractive.
I do not see another obvious explanation, which of course does not mean that there isn't any!
I always assumed that pollinated flowers are pretty (on the whole) due to a combination of needing to advertise visually (which invokes the same evolutionary logic as poisonous creatures in producing visible, contrasting colours), a general tendency towards repeated components and symmetry caused by being made up of plant parts, and a sort of weak anthropic effect where we select particularly pretty (to us) flowers to focus on when we have these sorts of discussions.
As an interesting tester for the rule, we have a few species of orchids that have evolved to be specifically pretty to bees/wasps via mimicry, and look like a plant's impression of a female insect. These are not generally considered to be the most beautiful of flowers.
I totally agree about the need for visual advertising, but it doesn't seem entirely clear that being visually attractive to insects automatically translates into being visually attractive to humans. However, humans and many animals seem to really like color (sexual selection, when direct function is not very important, usually produces colorful patterns), so perhaps a flower just needs to be colorful to be both visible to insects and pretty to humans.
I also think that the beauty of flowers varies with their pollination mode, and I've been wanting to check this with data for some time. Maybe I'll get there one day!
It seems to me that bee mimicking orchids are often beautiful, see for example the obligatory xkcd below.
As a casual forager, I've noticed that if you come to an area with a lot of flowers and memorize it, you likely will be rewarded eventually with fruit. If you notice the flowers easily, you will find that area easily. However, it's not us with our big brains and long memories that most flowers evolved to attract--it's pollinators like bats and insects! Flowers and pollinators co-evolved and in so doing settled on their signals. Because of that, we're chasing the beauty in the eye of the bee.
If you want to attract pollinators, you can't be "invisible" to the pollinators; you need to do something they can detect, or else they can't be attracted. (Though Google says that around 7% of flowers have ultraviolet markings, visible to bees but invisible to humans.)
As for being "beautiful", are you sure that's not a case of *humans* evolving to find beauty in however-flowers-happen-to-look? (Living in the same general location as flowers seems like it could plausibly be advantageous, e.g. because it increases your chances of finding fruit, honey, or herbivores to eat.)
However, the point of Moloch isn't that you can't ever get anything you want. For instance, you probably like civilization more than barbarism; and lo! we are living in a civilization; and the reason is that civilization is legitimately more efficient. The danger is that if something *even more* efficient came along, Moloch might push you into it even if you don't like it as much.
But in the-world-as-it-currently-is, there are actually quite a lot of things that are pleasant *and* efficient. (And in at least some cases, the *reason* they're pleasant might be evolution programming you to like strategies that have worked in the past.)
>As for being "beautiful", are you sure that's not a case of *humans* evolving to find beauty in however-flowers-happen-to-look?
No, I am not; this is a good pushback. I had implicitly assumed that, in the space of possible ways plants could appear, there was some convergence between "looking like a cool flower" and "ability to provide pollen," such that the former acted as a proxy for the latter, given that, as you say, "you need to do something they can detect," and the easiest detectable something was something else desirable (to us at least).
So, are flowers just optimizing for appearing distinct-yet-mathematical, and that looks good to pollinators (and us)?
I think that's right, though I think humans have evolved to find the *particular* distinct-yet-mathematical appearance of certain natural phenomena (such as healthy plants) attractive.
(Note that I'm not any kind of expert, and evo psych is hard to test without a million years and a very big Petri dish).
Part of it is that we share elements of the same signalling language - we find symmetry attractive in people and foods because it signals lack of infection and low mutational load. Similarly, we find bright colours salient because plants and animals use them as signals, and we've evolved to be aware of such signals. Since there's no counterpressure to _not_ find these characteristics appealing in flowers, we do find them so almost by default.
Secondly, as Dweomite says, there are specific reasons we might find healthy and diverse plants appealing - they indicate abundant food and fertile soil - and we evolved from arboreal species that relied on trees for survival (healthy trees offering better cover and safer climbing). So there was actually active pressure to find evidence of plant health appealing.
Scott's answer applies to biological treatment (drugs, etc.) of psychiatric problems. People who do talk therapy also view some patients as treatment-resistant, and there are as many ways of thinking of and working to undo these folks' resistance to treatment as there are styles of psychotherapy -- which is to say hundreds, many of them silly. And yet there do truly exist therapists who are exceptionally good at helping deeply stuck people change.
Yes, all the time. Usually you reserve more complicated therapies for treatment resistant patients, eg ketamine or ECT. If someone resists literally everything (not really possible, there are hundreds of things, but sometimes you do kind of lose hope), then you see what you can do to make their life better (I sometimes use Adderall in an almost palliative way; it doesn't exactly solve treatment resistant depression, but at least it helps TRD patients do more things and live more normal lives). If someone is still unable to live a normal life, you work with them on things like getting disability and otherwise trying to live the best abnormal life they can.
Some moral duties arguably resist consequentialization, and consequentialism kind of perverts your relationship with your values in a way. As an imperfect analogy for this perversion, suppose you're a real space nerd with little to no interest in other sciences, and you fully support investing in NASA and similar space-oriented projects.
Your most cherished desire is to visit Jupiter. It's virtually certain that nothing you can do will let you directly fulfill that wish in this lifetime. However, it's possible that life extension research could. So despite having no interest in biology, consequentialist-type thinking would lead you to completely ignore space science, to petition for others to do the same, and to go full-bore on the life sciences.
You'll probably hate doing this with every fiber of your being. Not only are you working in a subject that holds no direct interest for you (it's only contingently valuable); you're also fighting to divert money *away* from the one thing you do value. I'm not sure many people would or could actually ever do this, so consequentialism probably doesn't describe well how people actually reason about their values.
This seems to accurately describe how most people feel about things like jobs and cars and school. Instrumental value is real value, if it actually helps you get what you actually care about. Although different people have lots of different terminal values, the fact that most people have so many shared instrumental values is why things like cities and societies and governments exist, and can be so successful.
This is a critique not of consequentialism, but precisely of failing to be a consequentialist. "Hating something with every fiber of your being" and "diverting money from the one thing you value" all sound like pretty bad consequences; so a consequentialist would want to avoid those. If you end up like that, you were not a consequentialist.
Clearly in the scenario as described, you can swallow the bitter pill of doing something you hate anyway if you value achieving your desired result enough. But like I said, it's an imperfect analogy meant only to show that consequentialism doesn't necessarily match our intuitions at the extremes. You're better served reading the papers referenced since they go into more detail rather than quibbling over the specifics of my imperfect analogy.
A while ago, Scott wrote about Cost Disease, why some things like education and health care are getting dramatically more expensive. [1]
For those of you who are interested, Alon Levy has a recent 7 part series on institutional issues leading to Cost Disease in public transit over at Pedestrian Observations: Procurement, Professional Oversight, Transparency, Proactive and Reactive Regulations, Technological and Social Change, Coordination, & Who is Entrusted to Learn. While they're specifically talking about public transit, similar institutional problems (and solutions?) can be found in other fields.
I think that, in order for this argument to explain a significant effect, the dollar would have to be the main export of the US. Is this the case? Or is the US trade in goods more important than the US trade in dollars?
This may be oversimplifying it (not sure if this accounts for flows from financial instruments), but the US trade deficit averages north of $50bn depending on the month, so on balance dollars are flowing out of the US, making dollars our most important export.
It appears as though total US exports per year are about $2.5 trillion, while total imports are about $3 trillion, leading to a $500 billion trade deficit - which is about $50 billion per month. (Rounding a lot everywhere.)
If we treat the trade deficit as exporting dollars, dollars would only count for about 1/6 of the total exports ($0.5 trillion of dollars out of $3 trillion of combined goods-plus-dollar exports).
Dollars would probably count as our largest export - because the other exports are quite diverse. But I don't know if that's large enough to explain everything we're seeing. We're certainly not a petrostate where a majority of our exports are a single commodity.
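A quick back-of-envelope check on that 1/6 figure, using the rounded numbers above (just a sketch, nothing authoritative):

```python
# Rounded figures from the comment above
exports_goods = 2.5e12                           # ~$2.5T of goods/services exported
imports_goods = 3.0e12                           # ~$3T imported
dollar_outflow = imports_goods - exports_goods   # deficit "paid" in dollars

# Treat the outflowing dollars as one more export line item
share = dollar_outflow / (exports_goods + dollar_outflow)
print(f"{share:.0%}")                            # -> 17%, i.e. about 1/6
```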
The "petrostate" moniker oversells it - we're obviously a much more diverse economy than that - but there is a very real sense in which developed, money center countries are primarily net exporters of credit and net importers of physical goods. The US is the most systemically important provider of credit in the world, in the form of printing the reserve currency for the global financial system.
That's a fascinating theory, and the folks at Alphaville are very smart. My only immediate question is how that simple model would be impacted by the split between demand for dollars and demand for US Govt and dollar-denominated debt?
I recently came across a claim that Scott's Reign of Terror moderation mode is intended to obfuscate the exact criteria of whether an offense is bannable or not (to prevent intentional skirting of the rules in bad faith), but after some searching, both on SSC and on the old Livejournal, I couldn't find a satisfactory description of what the Reign of Terror was or is supposed to be.
If anyone can point me towards one, I'd be grateful.
(I confess my reason for this is mostly wanting to add Scott's justification for it into my personal Catchy And Funny Quotes Collection.)
There wasn't a blow-by-blow description of what such a reign entailed, more "If I want to ban you, I will; be warned from now on". Those who decided they could not live under that left, the rest of us bowed our necks and remained. There was some discussion in the comment thread, but you'll have to read down a long ways to find it.
I can't find one from Scott, but here's Eliezer Yudkowsky's Reign of Terror:
Eliezer Yudkowsky's commenting guidelines
Reign of Terror - I delete anything I judge to be counterproductive
I will enforce the same standards here as I would on my personal Facebook garden. If it looks like it would be unhedonic to spend time interacting with you, I will ban you from commenting on my posts.
Specific guidelines:
Argue against ideas rather than people.
Don't accuse others of committing the Being Wrong Fallacy ("Wow, I can't believe you're so wrong! And you believe you're right! That's even more wrong!").
I consider tone-policing to be a self-fulfilling prophecy and will delete it.
If I think your own tone is counterproductive, I will try to remember to politely delete your comment instead of rudely saying so in a public reply.
If you have helpful personal advice to someone that could perhaps be taken as lowering their status, say it to them in private rather than in a public comment.
The censorship policy of the Reign of Terror is not part of the content of the post itself and may not be debated on the post. If you think Censorship!! is a terrible idea and invalidates discussion, feel free not to read the comments section.
The Internet is full of things to read that will not make you angry. If it seems like you choose to spend a lot of time reading things that will give you a chance to be angry and push down others so you can be above them, you're not an interesting plant to have in my garden and you will be weeded. I don't consider it fun to get angry at such people, and I will choose to read something else instead.
Sounds like someone who is quite ruthless about pursuing his actual interests, and debating politics and policies of online discussion is not one of them.
Since I'm currently working on my own (which I was basically forced to make because the existing options were not pursuing the direction I'd have liked), what is the thing you most wish you could do in a political strategy game?
Play a despot in Star Dynasties?
Really get into the Britishness of Ariaselle in Sovereignty?
Create a Socii style alliance system in Imperator or Field Of Glory: Empires?
Actually engage in deep and fun connected conspiracies in CK2?
Perhaps you want a political strategy game in a fantasy world with actual magic? Divination, Charms, Enchantments, magical assassination?
Find a game that actually has enough intrigue to feel like Game Of Thrones?
For me the various strategy genres have all moved away from deep simulations. Even the recent Paradox games have felt more like static reruns than innovative new genre makers. Perhaps Vicky 3 will crack the trend.
I've long wanted a sequel/remake for Master of Magic. I've seen some games that had a few similar features, but it really didn't feel the same. Key points include a detailed magic system that allows you to mix and match types to research different spell chains (in the original, you had a certain number of build points which you could spend on getting very high level magic in a particular field, or mixing two or more fields - the catch was some spells required access to mixed fields and some spells required high level). There was also a mixture of city management and army management that was pretty similar to Civilization, but that worked very closely with the magic system - buffs, summoned units, etc.
My favorite space game is still the original Master of Orion. Simple enough to be accessible, yet complex enough to feel different over multiple replays. MOO2 added some neat features, but got a lot of the pacing wrong, and many games didn't feel well balanced. MOO3 was a failure because they made it too complex, and the workarounds required to make it playable also made it possible to have the game play itself (and unless you loved min-maxing, that was often the better choice, especially while figuring out how to play...).
Axioms has Dominions3-5 +++ complex magic. And there are some interesting restriction mechanisms.
As far as MoO goes, have you played Remnants of the Precursors? Personally I don't think we needed a free MoO1 remake, but supposedly it is a big hit with the old MoO1 fans due to the nicer graphics, better performance, and faithful mechanical adaptation.
Oh, it turns out I was remembering the wrong game. I love the concept of Dominions but got really frustrated by trying to essentially write code in order to do combat. I haven't seen Dominions 5 yet; I'll see if it's any better in that regard. If the game worked more intuitively and had a better UI, I would have loved it. I think the last I saw was Dominions 2 or 3? It's been a while.
Oh, and thanks again for the tip about Remnants, it's exactly what I wanted from a MoO remake!
One thing I'd like is to seriously cut back on player omniscience. Many games will do that w/re the placement of enemy units on the map. But will still give you the morale and supply status of every visible unit to two decimal points, and *accurate* economic statistics on every segment of every provincial economy, and full details on the progress of cultural development and religious conversion, and the exact extent to which every other faction or character likes or dislikes your own, and maybe even the extent to which they like or dislike each other.
First off, probably the biggest challenge in kingship (or whatever) is that most of that information is unavailable, or costly, or unreliable, or costly *and* unreliable. Second, trying to use all that information is tedious, but trying *not* to use all that potentially valuable information is annoying to gamers who are either trying to win or trying for an immersive experience.
So, make C3I (by whatever name fits the theme) a finite resource. Every time you open a new window, you spend some number of "C3I points". And get an infobox with vague descriptive terms like "good", "fair", and "poor" that you can expand on with another C3I point. Give an order to a unit or a factory, that's a C3I point, or more for a fancy order. When you're out of C3I points, your turn is submitted with whatever orders you've given.
Or come up with a different way of achieving the same effect.
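For what it's worth, here's a toy Python sketch of how such a C3I budget might gate information. Every name here is invented for illustration, not taken from any actual game:

```python
# Toy sketch of the "C3I points" idea: every query or order draws from a
# per-turn pool, and cheap queries return only a vague descriptor.

VAGUE = ["poor", "fair", "good"]   # the descriptive tiers suggested above

class TurnBudget:
    def __init__(self, points):
        self.points = points

    def spend(self, cost):
        if self.points < cost:
            raise RuntimeError("Out of C3I points; turn auto-submits")
        self.points -= cost

    def query_morale(self, unit, detailed=False):
        self.spend(2 if detailed else 1)        # expanding the detail costs extra
        if detailed:
            return unit["morale"]               # the exact number
        return VAGUE[min(2, unit["morale"] // 34)]  # bucketed into rough thirds

budget = TurnBudget(points=5)
unit = {"name": "3rd Legion", "morale": 71}
print(budget.query_morale(unit))                 # -> "good"
print(budget.query_morale(unit, detailed=True))  # -> 71
```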
Master of Orion 3 tried that and it was a spectacular failure iirc.
I think the idea has merit. The problem is, if you make decisions / gathering information a spendable resource, the game has to be able to "play itself" reasonably well (otherwise every part you don't monitor constantly will fall apart). In MoO3 you could sometimes win the game just by pressing "end turn" a bunch.
It's also an interesting way to introduce roundabout administrative capacity / sprawl / wide tax.
Perhaps fluff it by having characters/heroes that you control manually, but you cannot control all of them at once - so you spend a turn wearing the science advisor hat and developing a science plan for the next turns, another being a planetary governor at the industrial hub, yet another being an admiral and coordinating the invading fleet - with the information you can access in a given turn being limited to what the character would know.
EDIT: there is also the problem that unless you want the player to make notes, you'll need a way to revisit previously accessible "stale" information. If 3 turns ago the player could check (for free) that a factory can create 2 units per turn, they should still be able to see it now, even if the factory got upgraded in the meantime and creates 4 units per turn instead. Many games do that with fog of war and enemy state, doing that with player state would be... wild.
When you make the game "guess the other guy's internal model and values," you have to make the game significantly easier. And when someone *does* figure out the model and values[1], it becomes really easy.
That's not necessarily a bad thing. I've played games where I just get a *feeling* that I'm improving certain things, or not, and a lot of that might just be in my imagination, but that incorporates my imagination into the game, and makes for a fun experience, if done right.
It's hard to get a lot of replay value out of that, though. And you aren't going to get a forum of obsessives giving each other advice.
[1] And this is easy to reveal in a FAQ, so the game spoils very easily.
Star Dynasties took a huge amount of heat from people mad about the obfuscation of data in the game. Not least from me but then I mostly just savescummed on key decisions so I wasn't quite as salty to Glen as I would otherwise have been. Plus at least he tried some different things so I didn't wanna put him off.
This was especially an issue where diplomatic actions had red 0%, blue 100%, and yellow 1%-99% color coding. Traumatizing I have to say. Much save scumming commenced. Especially since that game uses a very harsh and limiting Action Point system. Actually the final Attention Point system in Axioms got some modifications just from how painful the SD system was.
The worst part was that the game told you immediately afterward, if your action failed, what was wrong. Even worse, there were some bugs, so sometimes the breakdown afterwards said it should have worked, sometimes even by decent margins.
Always gotta balance realism with fun. Also another limit is player expectations. If they expect one thing and get another thing it can hurt you a lot.
Attention Points and the Intelligence Network in Axioms aren't quite as intense as what John suggests, maybe 70% there. Gonna take a lot of play testing to balance.
Well this is a post by someone begging to read a very detailed design blog on the "Intelligence Network" mechanic :)
I talk about it in a few posts but one with much more detail would be helpful at some point.
Axioms has an Attention Point system which I did write a moderately detailed post about. This relates to actions/agreements/decisions rather than information. Attention Points limit your direct control in a flexible and fun way. You can't do everything; you need to delegate and trade material and informational resources with the other characters.
The Intel Network isn't quite as intense as your suggestion but it is maybe 70% of the way there. All of the information about a province is gated based on your investment in your intel node in that province. This includes resources, geography, where populations and characters are located, and pretty much everything on military intel.
There are things you need to move around to do, because your personal presence matters. Doing a "Grand Tour," as Karl Franz is depicted doing in Warhammer Fantasy or as the rulers of Silverberg's Majipoor do, isn't *required* on ascension to a title, but it provides strong benefits. Of course it has obvious costs. Holding a feast obviously requires characters to be present. The same goes for Hunts and other stuff.
All the Intrigue requires deploying coins, materials and items, populations raised similar to the army units, and ideally if you can spare them 1 or more other characters as agents or network leaders. Missions can involve Surveying a province, Watching a character, Counterintel, and various other stuff.
The Opinion system is more similar to Paradox than to Star Dynasties (you might actually like how Star Dynasties works), in that you know what people generally think of you. But you can learn about Secrets, Secret Desires, and various other stuff. If one guy knows about some amazing blackmail on another and you don't, you would be surprised when someone you think you are on good terms with acts against you. Similarly, fulfilling the desire of a character can get them to act against their otherwise strong feelings.
Basically there is public stuff which you know as long as you have a consistent low tier node in a province and then more that you get from low level character observing. So a small amount of cash, 10 spies raised from a population on a province and less cash and 5 guys watching key characters. Vs a big investment with named characters and gear/items and a pile of cash to spend.
Star Dynasties actually uses a very opaque system. A red result means no way, a blue one means 100% yes, and anything in between is yellow. Actually, you can't even make proposals like a marriage or allegiance; only the AI can sometimes say "I want a marriage," and you can pick from the options you are given, like vassalizing them or money and such. Similarly, they can ask to be your vassal, and you can take the blue yes option, or run a risk and ask for money or a marriage, and if they say no you get nothing.
That post doesn't have a ton of details about the very low level stuff but it is still pretty detailed and focuses primarily on the intelligence network and associated mechanics.
Thanks; that does look interesting. Particularly if it covers things like e.g. your own economic statistics, to the extent that they are a thing in that game.
The game that came closest to doing things right for me was King of Dragon Pass. To make a really detailed simulationist game, I think it's valuable to keep the scale very small, both so the player is able to keep track of everything and so that your compute time doesn't explode.
Interpersonal dynamics are the thing that IMO adds the most replayability, and intrigue is fun, but it shouldn't all be vicious backstabbing; there should be just as much matchmaking and forging of bonds of brotherhood, etc. The promise of Crusader Kings is a good target, but the implementation was not great, I think mostly because almost no characters actually matter.
Crusader Kings generates a ton of characters that don't really do much for sure. There were reasons they did it that way but their system could have been a lot better.
I actually designed the character interaction model to be very positivity-based. Of course, assuming I get the AI worked out, the Intrigue is much more detailed and with more possibilities both mechanically and narratively, but friends, allies, co-conspirators, long-running alliances between families, and much more vassal interaction/relevance were a key focus.
It does have to be turn-based for performance and coding complexity reasons, though. I have to rewrite a few of my blog posts, especially the one on friends, allies, retainers, and such. But the relationship system is quite flexible and deep.
Crusader Kings once infamously found that the reason for a substantial slowdown in gameplay was that after a particular update ~70% of the clock cycles were devoted to every single Greek-culture character looking at every other Greek character and reevaluating from scratch each day, "should I have this person castrated?" So, yeah, could have been a lot better.
But there's nothing wrong with generating far more characters than will ever matter, because there's no way to predict in advance which characters *will* matter. And it's hard to retcon them into fully-fledged existence when they do. So you just need to have a sufficiently simple and abstract way to handle them before they rise to prominence.
Certainly, a game that has enough intrigue to feel like ASoIaF would be a prime contender for my gaming buck. And if I understand you correctly, then I strongly agree with you about the lack of deep simulation in the genre. I would like to feel from AI opponents something akin to what I feel from a chess engine: more low-level situational analysis and fewer prefabricated decision triggers. Sadly, this kind of innovation is unlikely to come from Paradox, which seems to be in something of a consolidation phase. (see, e.g. the shutdown of Imperator)
One thing I have yet to see in a political strategy game is a good treatment of irrationality and demagoguery. Such as the phenomenon whereby a simple, crude, and ineffective policy solution is more popular than one that's plausible but complex and difficult to implement. Everyone in strategy games - even blue-and-orange-morality aliens, even supposed Neronian tyrants - is far too reasonable.
I think this might just be a little too low-level for games made at our current technology level. You can give characters and populations ideologies and have them act on those, but it is hard to give large groups the kind of treatment you'd give important characters. Plus most strategy games are pre-industrial, so demagogues are limited since there is no voting. Axioms will have a propaganda system so that you could sort of role-play "a simplistic policy proposal." But that would take some amount of imagination, and I can't think of any other game that can even do that.
Have the population's ideologies and sympathies make sense and have gameplay impact.
Stellaris did a token but overall underwhelming effort there. Endless Space 2, despite being a minmaxy 4X, had a really interesting political system where parties and their laws were incredibly impactful, corresponding to supercharged Stellaris traditions. Unfortunately, things that affected political leanings of your population were super opaque and you usually ended up rolling with whatever build emerged naturally (or becoming a dictatorship and just deciding for yourself).
I found the way post-expansions Sins of a Solar Empire approached diplomacy really unique - to make an alliance you needed to not only make the other faction friendly to you, you also needed to make _your_ population friendly to _them_. Hence the need to send envoys, give each other missions and requests, and overall grease the wheels. The implementation sucked but I think they were on to something.
Well this is definitely the kind of answer I was looking for. Though it remains to be seen if there is a sufficient interest in such things to constitute a market.
A proper sequel to Patrician III would be cool. With more polish and depth. Where the game starts with economics/trade and ends with more and more complex politics/warfare.
How much of a "sequel"? Just that you can do all the things you did in Patrician 3? I think I lost my digital copy thanks to the whole Stardock/Gamestop mess. But I did play it for a couple hundred hours. Do you want a literal "sequel" titled as Patrician, or a spiritual one? Literal sequels are doing badly these days. Ubisoft royally bait-and-switched with their new The Settlers, for instance.
I think proper sequel. But with 2d sprite graphics. With more fleshed out market systems, town politics and combat (like Stronghold). And not having to rely on the stupid trial and error of finding captains in pubs.
I don't think there is a game out there currently that combines these different elements well. An RTS-like game, mixed with politics (with different social hierarchies that you can ascend or be ejected from) and the concept of having an inner circle of NPCs you can make do various things depending on your status/wealth. And a somewhat sophisticated economic system where your character can trade and invest, with markets that go beyond just randomly moving prices that go down if you sell too much and vice versa.
Not hyped about Patrician 4 eh? Not shocking, it was dumbed down and then they did their stupid CD key nonsense. A hypothetical Patrician sequel is probably up there with a good Majesty 1 sequel in terms of you can get 80% of the features in some games but probably not 100%.
I have been thinking a lot about this, and share your complaint regarding deep simulation! I really want something that combines a deep economic simulation with a deep political one. I've definitely been following along with the Victoria 3 dev diaries and have high hopes for it, and I've also been working on some economic simulations of my own which may some day evolve into a game.
I recently discovered X4: Foundations on Steam and thought it might be my dream game, but I've ultimately been kind of disappointed. It does a lot of things right, in that it has a mostly-real economy that you can participate in both on an individual level as a merchant or on a strategic level managing a business empire. But it's really got no political or diplomatic component, so ultimately it ends up feeling flat and more like playing Satisfactory/Factorio than like playing a Paradox title.
One of the things that disappoints me about pretty much every economic game out there (including X4 and apparently including Victoria 3 also) is that the money is exogenous. To feel like a real economy I want a game where the money is actually conserved and subject to policy (e.g. fiat vs. precious metals, with different currencies) rather than using typical video game money sources and sinks, which inevitably leads to brokenness and unrealism (ideally I'd also like conservation of goods, which Victoria 3 apparently isn't doing). So to your question, I guess I want my strategy games to include a monetary policy component? I, uh, realize this probably puts me in a fairly small demographic.
Exogenous prices bug me too. X4 defines a price range for each good, and Victoria 3 is apparently giving each good a baseline price. This probably helps avoid degenerate game states, but it limits the dynamism, and I'd prefer the simulation be robust enough to price things itself rather than relying on exogenous tuning.
I'll definitely be following along with your game!
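On the conserved-money wish a couple of comments up, the core invariant is simple enough to sketch. A minimal toy (invented names, not from X4 or Victoria 3): currency is only ever transferred, never minted or destroyed by ordinary gameplay, so the total is conserved by construction:

```python
class Economy:
    """Toy 'endogenous money' model: no sources or sinks, only transfers."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, payer, payee, amount):
        assert 0 < amount <= self.balances[payer], "can't mint or overdraw"
        self.balances[payer] -= amount
        self.balances[payee] += amount

    def total(self):
        return sum(self.balances.values())

econ = Economy({"state": 1000, "merchant": 200, "farmer": 50})
before = econ.total()
econ.transfer("state", "merchant", 300)   # e.g. a procurement contract
assert econ.total() == before             # conservation holds by construction
```

Minting new currency would then be a deliberate, visible policy act by the state, rather than an invisible game-balance source or sink.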
I don't think the current model can support "real" prices but it does have a citybuilding style resource model, just without actually doing the building placement minigame. My previous project had been modifying Glest(the Advanced Engine fork), into a Majesty1+++ game with a citybuilder economy, granted focused in the end on RPG equipment but it did have food and stuff. I wanted to keep that type of system for Axioms. Nothing that has been done in a political strategy game before.
I intentionally cut a lot of features for a potential sequel to stop feature creep. I could *probably* do a good monetary simulation in the sequel, because computers, both for devs and players, would be at a high enough level of performance to handle that by the time a sequel was feasible.
Can anyone explain to me, a non-engineer, how Usenet clients work? I was hoping to find some Usenet posts I'd made as a punk kid at some point in the 90s- I know the forum, it will take some searching to find exactly when I made them (I'm a bit fuzzy on the year). Basically, they have sentimental value to me. (Kinda like all my Hotmail emails from my youth that Microsoft apparently deleted for all time because I didn't log in often enough..... If anyone has any tips on recovering these, I'm all ears).
I searched through the Google Groups archive last year, I didn't find exactly what I was looking for, but I got pretty close. Did Google Groups archive all of Usenet, or just some of it? And what's with these 'clients'- I need a desktop or third party app to search old Usenet content? (Which is hosted where, exactly? If it's hosted on x website, can't I just go directly to that website?) Anyone have the big picture here on Usenet time traveling back to the 90s? (Does anyone even still use Usenet?)
Deja News archived Usenet and then Google Groups bought Deja News.
Lots of posts could've been lost, either from existing before Deja News started archiving, or because someone asked for posts to be removed at some point along the way.
This is a question I've recently become curious about. :) Preliminary searches seem to suggest "Yes." (I am being slightly lazy and have not yet tested this.)
I would love more info on the present state of Usenet. (Though if someone says "FOFY," that would feel very '90s-internet, too.)
I need to buy a car soon. I've never bought a car through "normal means" before (I've always gotten used cars through deals with friends and family), but, obviously, this is an exceptionally weird time to be buying.
Any advice on how to approach this?
Normally a new car would be completely out of the question, but now I'm not so sure?
If you wanted to buy a modest car within the next 3-or-so months, what would you be doing?
Personally speaking, I had the choice to buy a new car recently, and I decided not to in favour of an e-bike. If it's not a hard need (i.e. need it to get to work) then my advice on how to approach this is don't, consider not having a car. It's totally possible in many cities, albeit I will admit it can be very challenging in some.
More directly answering your question though, I'd suggest scoping out what you want with something like https://www.kbb.com/. This will give you an idea of what the market is like in your area on a variety of axes, and if you decide to go with used, can even help you locate a car that's both within your budget and needs.
As for approaching new-car sales: I don't know if now is the time. The market is up like crazy, and it's very hard to determine the long-term cost of a new car. Between regular maintenance, financing, insurance, etc., I personally think the cost of a new car spirals out of control pretty quickly. Plus, unless you have a clear picture of the history of the model you're buying, it can be pretty hard to preemptively know what could be recalled, what the biggest issues could be, etc. Some things worth considering during your purchase:
1. What is the best weather you will expect to drive in ~70% of the time? Worst?
2. Do you need snow / off-road / chained tires & separate rims in addition to the car? Sometimes you can convince a salesman that you'll commit to purchasing a new vehicle as long as they throw in a set of rims & tires for cheap or even free. It's within their power and cheap enough compared to most new cars that it is very rarely a sticking point.
3. What kind of warranty / pre-service commitment are you willing to sign up for? Lots of new purchases come with a 1-year service agreement for all tire rotations, fluid changes, etc. You can get out of this if you want, but understand that over the lifetime of your car you could be paying up to 50% of the car's value for the sum total of that work. With a used car, you won't get this, but maintaining a used car is a bit of a crapshoot regardless (because you don't know what the previous owner did to it).
4. How much do you expect to drive per year? In what environment? My recommendation would be to get a smaller class vehicle if you're going to exclusively be in the city, because parking sucks in most places. If you're planning on lots of longer drives, a larger vehicle can be nice, but understand that is going to affect your energy pricing as well as contribute more carbon to the environment.
5. How much are you willing to spend relative to rent / mortgage? This is probably the key question, but you should be able to represent how much you spend on your car (and all externalities like insurance and gas) vs. housing (probably your next largest bill) as a percentage of your income. Do not compromise here - be honest with yourself about what you're willing to spend and how much you're giving up. This is probably the #1 thing I never see friends do when making big purchases, and they always end up regretting it later.
Other than that - have fun with it but understand that if you go into a dealership they will be trying to up-sell you in a lot of ways. If you can avoid financing, I suggest it, but you'll probably get a lot of resistance for trying. Dealerships don't necessarily love customers paying cash, which I'm not sure is well thought through, but salesmen aren't there to think about opportunity costs.
Lastly, have fun with it. You're committing to a large purchase that will stick with you for potentially years to come, so don't rush your process due to stress.
Find the type of car you want, colors you'll accept, etc.
I have bought new and used. Unless your ego can afford it, buy used if at all possible. You lose about 30% of the value the instant you drive off the lot. There are exceptions, especially in crazy market times like this. For instance, leading up to the last crash, around 2006, things were going south. The actual bottom was 2008. We were looking at Chevy Tahoes. Used ones about 5 years old were around $30k. Most of the used cars we looked at had electrical problems. We found a newspaper ad (those were a thing then). It was late December, the end of the year, and the dealers were pumping to get their numbers up. We bought a new 2005 Tahoe for about $35k.
Otherwise, if you're cheap like me, I consider that I'm only buying mileage. In my mind, any car begins to fall apart around 150k miles, and at that point it's worth nothing. Your number may vary, but follow me here.

You find the car you like; there are hundreds available within, say, 500 miles. For several thousand dollars of savings, I'm willing to drive a day's drive to get a good deal, so 500 miles pretty much covers everything near me. Say cars.com has 50 cars that fit your target, but the prices and mileages are all over the board. How to figure it out? As Google would say: make a formula, then execute the formula.

Open a spreadsheet and list each car's price and odometer mileage. In the third column goes the cost per mile of what you're buying: $Price / (150k - $mileage) gives you how much you'll spend per mile to drive that car. Sort the table by this column, and you'll have each car ordered by what it will cost you per remaining mile. Since you can know nothing about each car, I consider them all equal, except for luxury levels, which you'll have to weigh yourself.

So there you are: since the maintenance and operation of any car costs about the same, the only variable is the hardware cost for each mile you get to drive. That cost per car will range from ten cents to a dollar per mile.
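If you'd rather let the computer do the sorting than a spreadsheet, here's a minimal Python version of that same formula. The listings are made-up examples, and 150k is the assumed end-of-life mileage from above:

```python
LIFESPAN_MILES = 150_000          # assumed "falls apart" point from above

# (description, asking price in $, odometer miles): hypothetical listings
listings = [
    ("2014 Camry",   12_500,  95_000),
    ("2017 Accord",  18_000,  60_000),
    ("2011 Corolla",  7_000, 120_000),
]

def cost_per_mile(price, mileage):
    """Price divided by the miles you expect to get out of the car."""
    remaining = LIFESPAN_MILES - mileage
    return price / remaining if remaining > 0 else float("inf")

# Sort cheapest-per-remaining-mile first, just like sorting the spreadsheet
for desc, price, miles in sorted(listings, key=lambda c: cost_per_mile(c[1], c[2])):
    print(f"{desc}: ${cost_per_mile(price, miles):.2f} per remaining mile")
```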
Congratulations - and you could easily prolong your honeymoon by posting a link to (all) your old posts. Not SSC, that's easy, but "Jackdaws love my big sphinx of quartz," "The Wisest Steel Man," squid314.livejournal - I find it highly inconvenient, at times impossible, to get to those pages. And tiny inconveniences (as with the great firewall of China) do have effects. Just for me, I am happy for any of the smarter readers to drop me a hint. But consider the wider readership :)
I’d like to read something written by a superforecaster that goes into a lot of detail about the thought/research process that was involved in making a few specific predictions. Any suggestions?
> In a special Global Guessing podcast episode, Superforecaster and Metaculus Analyst Juan Cambeiro walks us through his analysis of Omicron and his five relevant forecasts for understanding the variant
I don't know if this was already mentioned somewhere, but it'll soon be the end of the one year agreement you had with Substack. What are your future plans for the blog?
I am finishing up The Fate of Rome by Kyle Harper. The subtitle is "Climate, Disease, and the End of an Empire," in case you want to know where Harper is coming from. It's a little dry in some places but still interesting.
Recommending books is a passion of mine, so here are a couple more...
Novels: The Horseman on the Roof, by Giono. The story of a young Italian hussar trying to survive in Provence during a cholera epidemic. Served with one of the best styles in French literature, and descriptions of summer that will make you sweat just reading them.
Also it's basically 2020-2021 the book.
The Opposing Shore, by Gracq; The Tartar Steppe, by Buzzati; and On the Marble Cliffs, by Jünger. I group those because they are both extremely similar and completely different. They all describe a hero waiting and somehow hoping for an invasion by a foreign power, but they do so in completely different ways. The Opposing Shore is the tale of a society so rotten by its old age that the existential threat of an invasion seems to be the only way it can finally catch some fresh air. Gracq's style is brilliant, in slow and powerful sentences; all the novel smells of rotten swamps and decaying houses. The Tartar Steppe is the intimate story of a young soldier who wanted to earn glory in war, and consumes his life waiting for an enemy who never comes. It is both incredibly sad and incredibly hopeful, and I think I became both a better person and a happier one reading it. On the Marble Cliffs is the symbolic tale of a golden-age country under the looming threat of the Head Forester. The writing style is strong, pure, precise, and sharp. It's not a coincidence the book was published in 1939 - by the enthusiastic nationalist from my other comment.
Bernanos was a hardcore monarchist, conservative, Catholic French author in the 1920s and 1930s. In 1936, he was living in Palma de Mallorca when the Spanish Civil War began. Before the war, his son had enrolled in the Falange - Spain's hard-right fascist, monarchist, Catholic paramilitary party.
When he learned from his son of the political purges and killings done by the Falangists, he wrote a 400-page pamphlet of incendiary rage against Franco, the Falange, the Catholic bishops, the king of Spain, and any of his former friends compromised with Franco.
"The Great Cemeteries Under the Moon" is a raging cry, short on structure, but strong in emotion. It is also one of the most amazing examples of that so-rare rationalist virtue: seeing the truth, and telling the truth, about the sins and horrors of your in-group, even when your convictions, your friends, and everything you hold dear would push you to close your eyes.
It's hard to give good recommendations without knowing more precisely what kind of books you are into, but as you mentioned in another comment that you are interested in WW1, I would highly recommend Jünger's Storm of Steel and Remarque's All Quiet on the Western Front, both great novels on the World War.
All Quiet on the Western Front is a pacifist perspective on WW1. It's a brilliant account of the realities of war by someone who was in the trenches, and it strips away any romantic conception we may have of war.
Storm of Steel is the autobiographical account of Jünger's experiences in the war as a young German nationalist with a mystical enthusiasm for war - war makes men out of boys and so on - who found exactly what he was searching for.
The combination of those two books is one of the most disturbing reading experiences you can have.
For fiction, _A Deadly Education_ by Naomi Novik. It's good enough that I have been reading and rereading it and the sequel while waiting for the final book, due out in September.
For non-fiction, _Alexander the Great and the Logistics of the Macedonian Army_ is quite fascinating. It's a history of Alexander's campaigns with all the battles left out, focussing on the difficult problem of keeping a very large number of people from dying of hunger or thirst, and how it constrained his actions. It takes advantage of the fact that the relevant technologies didn't change until the railroad, so we have quite a lot of information on how much men and beasts ate and drank and what they could carry.
I'll second the recommendation for _A Deadly Education_ and _The Last Graduate_-- there's a tremendous sense of interacting systems.
The second book ends on a cliffhanger, so if you care about that, you might want to wait for the third book-- due out this year and intended to complete the story.
How to Think by Alan Jacobs. More about how to be fair-minded than how to think, imo. Very short, well-written. Somewhere in the middle it quotes some bits of profoundly scatological invective from the writings of a couple of 18th(?) century clerics who detested each other's views; had me crying with laughter.
Coup d'État: A Practical Handbook is a brief and concise but very interesting treatment of how Coups can be executed, sometimes literally. If you don't like the first couple pages you won't like it, but I found it very enjoyable and it definitely respected my time.
The Great Game by Hopkirk gave me a lot of exposure to 18-19th century near east/ western Asian history that my US education turned out to be quite lacking on, worth a shot if you're into that sort of thing
A how-to book for coups seems particularly relevant now, will definitely check that out.
Never heard of The Great Game but I'm sure my US education lacked just as much. I majored in American History for undergrad, so anything outside of the US will likely prove illuminating - ty!
Four Thousand Weeks is a surprisingly good book on time management which takes a more philosophical approach (ie no more productivity hacks. True freedom is consciously making choices about what to do with your time, not cramming in more and more tasks which, at the end of the day, are useless).
I know people read this as literature, but it's probably still worth flagging the fact that all the theories in it are ignorant and speculative, and rejected by relevant academia.
I concur on The Ottoman Endgame. As for Houellebecq, I'd advise against The Elementary Particles; I found it the bleakest and least "fun" of his books. I'd advise starting with Atomized, Platform, or Serotonin instead.
Atomized is the official English title for "Les Particules élémentaires," I am almost sure... So I guess you mean another one. Personally, it's the one I prefer, followed by Platform and then Extension du domaine de la lutte (not sure how it is titled in the English translation). And those 3 are, in my opinion, much better than all his other novels (and really, really good compared to current French authors)...
Investing question: The general standard advice is "just buy an index fund," but while this gives you a broad stock index, it seems like you could get better diversification by also covering other asset classes (real estate, bonds, maybe crypto, probably some other stuff I'm missing; some of these can be covered by stock indexes, e.g. real estate is covered by REITs, but I don't think all are). Is there a meaningful advantage to be gained by diversifying more, and is there some index fund or robo-advisor that does this?
I feel like the glaring flaw with typical stock index funds is that, an economic depression may cause you to be laid off at the exact same time as your investments are plummeting in value. Which is obviously a horrible combination of events.
If you are trying to replicate the market portfolio, yes, that is exactly what you would want. In practice, nobody is trying to replicate the market portfolio. You can use this tool to check out some of the difference it might make: https://www.portfoliovisualizer.com/backtest-asset-class-allocation
The classic adage is to hold your age in bonds as a percentage, so a bond index fund would be a way to do that. Bonds have underperformed heavily over the last few decades, however, but they do provide mental stability for a lot of people. REIT returns can be good for some people, especially in tax-advantaged accounts, but they still have underperformed the total market.
So, other than moving to bonds as you age to reduce the risk of a downturn coinciding with when you choose to retire, there is little advantage to diversifying outside of the total market. You can also weight some of your index funds towards specific sectors or whatever; I'm weighted further into large caps than the total market index, because I think their growth looks better long term. But that's mostly just an opinion.
A lot of investors keep 5% or so in fun/risky investments, like crypto or whatever - in the past and unfortunate present (for them), goldbugs still exist.
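The "hold your age in bonds" adage mentioned above is literally a one-line formula, if a sketch helps:

```python
def target_allocation(age):
    """Classic rule of thumb: bond share (%) equals your age."""
    bonds = min(max(age, 0), 100) / 100
    return {"bonds": bonds, "stocks": 1 - bonds}

print(target_allocation(35))   # -> {'bonds': 0.35, 'stocks': 0.65}
```

It's a crude heuristic, not advice; well-known variants shift it (e.g. "110 minus your age in stocks") depending on risk tolerance.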
Because "incent" is a neologism that doesn't exist in most idiolects of English. It's not that easy to supplant a perfectly cromulent existing word that everyone already knows.
At least to me “utilize” carries a connotation of “you are specifically taking advantage of some quality of the thing”, whereas “use” may be more incidental.
I had the feeling you were right, but then could not come up with a single sentence where switching between "utilize" and "use" made a whit of difference. Can you? I think maybe saying "utilize" = saying "use" + doing a self-important strut.
I think that situations where “use” can fit mostly are a superset of situations where “utilize” can fit, as “use” is largely a more generic version of “utilize”.
I can imagine situations where they don’t both fit, however. Ex: “what brand of deodorant do you use? I use $brand” would not really fit with “utilize” instead. The situations going the other way (like “how can we utilize this byproduct from the manufacturing process?”) could be changed to “use” and still mostly maintain the same meaning.
I agree that there is not much practical difference, and I rarely use the term “utilize” in my writing or speaking.
I think what your examples capture is that "utilize" sounds more formal and dignified -- so the word sounds all wrong in a sentence about someone's deodorant, which is not a dignified topic. Also, note that your deodorant example seems like spoken communication, while the manufacturing byproduct one sounds like an excerpt from an article. We code switch some when we go from talking to writing -- we upgrade to longer and more Latinate words. Sort of comes down to what I said before: "utilize" = "use" said with a strut.
no real content here, just wanted to say congratulations on getting married! i don't wish to tell you how to live but i'm hoping that you can wallow in each other's bliss; that the struggles are surmountable and that you build a beautiful rest-of-your-lives together :)
"A new tech startup plans to become “the stock market of litigation financing” by allowing everyday Americans to bet on civil lawsuits through the purchase (and trade) of associated crypto tokens. In doing so, the company hopes to provide funding to individuals who would otherwise not be able to pursue claims."
Who can't afford to pursue claims? My impression was that most plaintiff's lawyers eat what they kill - they work in exchange for a percentage of whatever settlement you receive.
So plaintiffs have been evaluating claims informally since forever, and maybe more formally for the past 10 years as a part of litigation finance funds. It's probably not a game changer and will likely converge on certain patterns of litigation.
At this point you've probably heard of the game Wordle, but if you haven't then you definitely should play it at https://www.powerlanguage.co.uk/wordle/. It's a lot of fun, especially comparing it to friends!
I've also just finished developing a Yiddish version, so if you happen to speak Yiddish I'd love it if you would try it out at https://greenwichmeanti.me/wordle/
I expected to like Wordle, but in fact I got bored with it after about 5 or 6 rounds. It doesn't seem to tickle the same fancy that makes me enjoy doing the NYT crossword puzzle, or even a good cryptogram. I think the issue is that there's no human connection -- there's no possibility of appreciating some cleverness on the part of the puzzle designer. It's clearly possible to create Wordle games with a computer program, and it's clearly possible to design a computer program to solve them -- I could do either myself pretty easily. When that became clear to me I lost interest. (Mind you, I can see how it would still be interesting and fun to *design* Wordle, or design a Wordle-solving program.)
There's a moderate amount of transfer, but the fact that your guesses must all be real words, and that the answer must be a real word, introduces an interesting set of constraints!
However, this extra constraint makes it a bit overly simple. It's very hard to ever get it in fewer than 3 guesses unless you get extremely lucky, and once you have some reasonable strategies, the difference between 3 and 5 guesses is more luck than the refinement of that strategy. And I think that changing from 5 letter words to 6 letter words would make the game a lot easier (your first two guesses can cover not just all the vowels but also almost all the common consonants) and changing to 4 letter words seems like it would just make the game less fun.
the rules are exactly the same, everything about it is the same, just that it retroactively alters the universe to make you as unlucky as possible.
like you guess "stare" and it's like "um, nope, no matches". and then you guess something else and it has to stay consistent with what it's told you so far, like what letters have been eliminated, but it can just keep retroactively changing what the secret word is until you force it to admit you're finally right.
it makes sense when you play it.
as another example, say you've narrowed it down to "date_" and it could be "dated" or "dater" or "dates". it doesn't matter which order you try them in, it's always going to be the last one you try.
it's kinda hilariously clever. like, it's cheating, sort of, but not in a way you could ever prove. whatever the secret word ends up being, you can look back at all your guesses and see that its responses are perfectly correct and consistent for that secret word. maybe i'm over-explaining it now. it's like it holds the game state in a superposition of all universes which are ... [gets yanked off stage by a huge cane]
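to make the trick concrete, here's a rough sketch of how such an adversary could work (toy Python with a made-up word list -- my illustration of the idea, not the actual game's code):

```python
# Sketch of an adversarial Wordle backend: never commit to a secret word.
# Keep every word consistent with the feedback given so far, and for each
# new guess answer with whichever feedback pattern keeps the MOST
# candidates alive ('G' = green, 'Y' = yellow, '.' = grey).

from collections import Counter, defaultdict

def feedback(guess: str, secret: str) -> str:
    """Standard Wordle scoring of `guess` against a hypothetical `secret`."""
    pattern = ["."] * len(guess)
    leftovers = Counter()
    for i, (g, s) in enumerate(zip(guess, secret)):
        if g == s:
            pattern[i] = "G"
        else:
            leftovers[s] += 1
    for i, g in enumerate(guess):
        if pattern[i] == "." and leftovers[g] > 0:
            pattern[i] = "Y"
            leftovers[g] -= 1
    return "".join(pattern)

def adversarial_reply(candidates, guess):
    """Bucket candidates by the feedback they'd give, then answer with the
    largest bucket, refusing to concede all-green while alternatives exist."""
    buckets = defaultdict(set)
    for word in candidates:
        buckets[feedback(guess, word)].add(word)
    best = max(buckets, key=lambda p: (len(buckets[p]), p != "G" * len(guess)))
    return best, buckets[best]

candidates = {"dated", "dater", "dates", "stare"}
print(adversarial_reply(candidates, "dated"))
# ('GGGG.', {'dater', 'dates'}) -- "dated" is quietly no longer the answer.
```

whatever the word list, the replies stay perfectly consistent with *some* surviving word, which is why you could never prove it cheated.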
Wow, I just tried Wordle and found it impossible to figure out. Even the instructions are confusing. It says "The letter I is in the word but in the wrong spot" but the word is PILLS and the I is in the right spot, so what does it mean by "wrong spot"?
Just tried it. Green tile means "right letter, right place". Yellow tile is "right letter, wrong place". So for "pills", the word does have an "i" in it but not there. Instead of PILLS, it could be ELIDE or IGLOO or other words containing "i".
The instructions are pretty confusing, I agree. What it means with the "PILLS" example is that the secret word has the letter "I" in it but at some place other than the second spot. So the secret word could be "IGLOO" (since it has an "I" in a different spot).
It's a similar idea to the boardgame Mastermind, if you've ever played that.
So after you guess a word, all the letters turn either green, yellow, or grey. Green means that that letter is in the secret word you're guessing (the eponymous "wordle"), in that same spot. Yellow means that that letter is in the secret word you're guessing, in some other spot. Grey means that that letter is not in the secret word.
In the instructions, they're trying to just illustrate that concept, so they show most of the letters in white, but that's just to highlight the one letter they're talking about. They're not telling you about the other letters in "PILLS", but you can imagine that they're all not in the secret word.
Suppose the secret word is "WRITE" and your first guess is "WEARY". The "W" would become green, since it's in the correct spot. The "E" and "R" would be yellow, since they're both in the secret word but in other spots. The "Y" and "A" would be gray, since they're both not in the secret word. You'd then want to think of another word that starts with "W" and contains "E" and "R" - so you might think of "WROTE" next. All of the letters but "O" would become green while "O" becomes grey, so you'd be able to guess "WRITE" for your final guess.
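If code reads easier than prose for you, here's that scoring rule as a toy Python function (my own sketch, not Wordle's actual implementation). It also handles repeated letters, by "using up" each secret letter at most once:

```python
from collections import Counter

def score(guess: str, secret: str) -> str:
    """'G' = right letter right spot, 'Y' = right letter other spot, '.' = absent."""
    colors = ["."] * len(guess)
    unused = Counter(secret)
    for i, (g, s) in enumerate(zip(guess, secret)):  # pass 1: greens
        if g == s:
            colors[i] = "G"
            unused[g] -= 1
    for i, g in enumerate(guess):                    # pass 2: yellows
        if colors[i] == "." and unused[g] > 0:
            colors[i] = "Y"
            unused[g] -= 1
    return "".join(colors)

print(score("WEARY", "WRITE"))  # "GY.Y." -- W green; E and R yellow; A and Y grey
print(score("WROTE", "WRITE"))  # "GG.GG" -- everything green except the O
```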
That's very clear; thank you. Now all I need is to find a web browser that will display the colors. I have some strange setting on my computer that overrides such things.
One more thing-- knowing that a letter is in the word, or in a place in the word, still leaves the possibility open that the letter appears more than once.
It's a shared experience. I somehow enjoy my favorite song more on the radio, half from the serendipity of it showing up unexpected, and half from the knowledge that a bunch of other people are listening.
Related: I teared up watching reaction videos of fans tearing up when [spoiler] in the S2 finale of The Mandalorian [it was a well done, tasteful, big nostalgia]
This is interesting– so you can sense when someone you're interacting with is "on something" like SSRIs or Adderall or antipsychotics, although you can't necessarily identify the specific drug?
I've known several people on therapeutic doses of psychiatric medication, and I've never been able to tell– even after they've told me about it, I haven't perceived anything distinctive that I associate with the drug. The only exception I can think of is one friend who had been misdiagnosed and was taking drugs for the side effects of the drugs for the side effects of the drugs for the condition she didn't even have... after she got off everything she got noticeably more normal. Now she's on a bog standard SSRI (and nothing else), and still seems normal.
Anyway, I wonder if the "character mouthfeels" you're perceiving are due to something else. On the other hand, it seems like it would be possible for someone to have a sixth sense for what meds other people are on, and this would be very cool, and I'm also curious if anyone here has that faculty.
Handing your kid an iPad during dinner at a restaurant would be seen by many Menlo Park mommies as a pretty low-class thing to do. I think more affluent parents worry over their kids a lot, and tech is just one avenue they explore. But I think the public schools are using a lot of 1-to-1 iPad programs etc., so parents are not united enough in their dislike of screen time to actually stop schools from using tech.
I agree. I don't think it has to do with the parents working in Silicon Valley, just that they have money and are higher class (or wish to be). Those kinds of people spend a lot of time worrying about parenting, and they often have fewer kids as well. When you have one, it's easier to not use screens than if you have three and you really need to shut a couple of them up for a few minutes.
for online reading, i'd recommend https://plato.stanford.edu/ -- just read stuff on anything that catches your interest (or check the "what's new" link if you don't have any ideas) -- most entries there have the character of analytic philosophy.
This is one of the few courses I took where a textbook and a professor connecting various essays really paid dividends beyond just fumbling through some great authors or papers in the field on my own. I feel this way partly because a lot of the landmark pieces stand on their own, but read in sequence they are often a series of rebuttals or extensions of the immediately previous luminary's work. The writing is also topically dense, asking you to shuffle around your core beliefs quite a bit, so it's not always obvious how and when to zoom out and track the broader arguments.
I enjoyed the writing itself though, full of fantastical, memorable hypotheticals and occasional subtle jabs about "profligate ontologies," so the original works themselves are also worth reading. A textbook, ranging from Frege or Russell up to Quine or Kripke, rich with selected essays, might be your best bet.
Kripke's "Naming and Necessity" is an influential, and comparatively accessible, classic. His theory of how names work is that they're "rigid designators," not "descriptions," a difference in e.g. how you think they apply to counterfactual hypotheticals. (E.g. pretty much all I know about John Adams is that he was the second president of the US, but even then I'm not sure I remembered quite right. If I say, "I'm not sure if the 2nd president of the US was the 2nd president of the US," that's strange to say. If I say, "I'm not sure if John Adams was the 2nd president of the US," that's a normal thing to say. Why? Is it because "John Adams" rigidly designates a particular guy, across counterfactuals, and is not just a description synonymous with "the 2nd president of the US" even when I'm the one referring to him?)
Note that beyond philosophy of language, most philosophy taught in anglophone western universities is considered "analytic philosophy"; it's more a style than a topic (trying to use clear, precise language, writing arguments that could easily be formalized; compare to the more literary, playfully ironic style of continental philosophy); e.g. moral philosophy can be analytic, metaphysics can be analytic, philosophy of mind can be analytic, etc. (I guess it's also a sociological category of who reads and cites each other, in a pretty unified movement that started with Frege, Russell etc. and the people they influenced; e.g. the style of Aquinas is pretty close to the style of the analytics, but he was before them).
I'd say the best introductory book to philosophy in the analytic style is "Just the Arguments: 100 of the Most Important Arguments in Western Philosophy," since it's bare-bones about the arguments, and even gives sketches of their formalization (Premise 1, Premise 2, etc.).
A classic, and really short, paper in analytic epistemology (phil. about knowledge) is Edmund Gettier's "Is justified true belief knowledge?"; since like Plato we'd basically all assumed it was; then Gettier gave a handful of obvious counterexamples and everyone was shook. Good example of the attitude and common writing style.
(my favourite similar example: someone tweeted, "When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.")
Annoying of me to focus on my disagreement with the throwaway joke when your string of comments is really useful and great... BUT:
I don't think the example works. Unlike a barn by the side of the road, most people wouldn't form a belief (and if they did, it wouldn't be justified) that what appears to be behind you on Zoom is really what's behind you, because artificial background images are so common.
It depends on your background image. If your background image is a stationary wall with a bookcase, it may well be that a good number of people withhold judgment, and that the ones that didn't should have, given what they know.
But if the background looks like an ordinary living room, and about ten minutes in, what looks like a husband walks by on the way to the kitchen, and the lighting gradually changes over the course of an hour as the sun moves, then I think most people wouldn't actively attend to it, would believe it is the real background and not an artificial one, and would be justified in believing that it is the real background.
True. I've only encountered anything like the former case. If you had an animated background of the sort you describe in the latter example, that would seem like a Gettier counter!
I don't think that 1 and 2 are self-evident. I think that "utility is whatever it is we ought to maximize" is something like a definition (which does require the assumption that there is something we ought to maximize, though standard decision theoretic representation theorems lead naturally to the idea that any sort of goal-directed action must act as if there is something one ought to maximize).
We need something substantive to get to the idea that if a person desires something, then that is a reason for us generally to try to bring it about (a reason that can be traded off against reasons to do incompatible things). But once we have that, something like preference-satisfaction utilitarianism of some sort or other follows pretty quickly.
Getting from preference satisfaction to the hedonic concept is then inferred from the fact that most people generally prefer happiness to unhappiness, and most people generally prefer the lack of suffering to suffering, but there are counterexamples in both cases. (Someone might prefer the existence of great art so much that they prefer suffering and producing great art to not suffering and not producing art. Someone might prefer knowledge, or a good life for their children, strongly enough that they prefer this even if it leads to unhappiness for themself.)
I think there's a useful way to characterize preference-satisfaction utilitarianism as just the statement that what "we" have reason to bring about is just all and only the things that each individual prefers. An individual having preferences/desires means that the individual has a reason to do things to bring about the satisfaction of those desires. So if we can argue that the reasons a collection has are all and only the reasons had by the individuals that make up that collection, then I think we are there.
I'm not sure what 'self-evident' means. The way I think seems to be determined in large part by the culture I grew up in, the people and ideas that came before me and I was exposed to. Are there any absolute truths? IDK maybe the golden rule. (Do unto others as you would have others do unto you.) (The 'self evident truths' part of the Declaration of Independence is a bit of feel good prose.)
3 is tautological. "Ought" means doing actions that lead to outcomes where our values are most satisfied/maximized. Utility is an abstraction for our values. By tabooing both words we end up with: "Maximizing our values maximizes our values."
I'm not sure what "self-evidence" is. I think it's clearer to just speak about our moral intuitions.
Someone's moral intuition toward natural rights differs from utilitarian moral intuition in the sense that it's more complex. You can reduce natural rights to utility maximization; that's the whole point. So the correct framework is not to see utilitarianism as an opponent of natural rights but as a gears-level model, a transparent box, from which we can deduce natural rights.
Utility maximization isn't a separate value. As Kenny Easwaran mentions, "utility is whatever it is we ought to maximize". It's an abstraction of our values, and if people think that some of their values are not captured by it, either they are confused or the utility function is poorly defined. Your example rings like confusion to me, but more details are required to be sure.
There's a popular argument about #2 involving the philosopher getting kicked in the gonads. I find it quite persuasive, though informal. Philosophy (moral or otherwise) should be grounded in reality.
#3 can be derived from a handful of axioms, all / most of which seem attractive or desirable on their face. A search on "Von Neumann-Morgenstern axioms" should do the trick.
The VNM Theorem says, informally "If an agent has 'rational' preferences over outcomes, then *those preferences* can be summarized by a utility function which the agent will prefer to maximize in expectation." (The expectation is with respect to the uncertainty the agent has about the world.) So, VNM-utility is merely a numeric summary of an agent's preferences. On the other hand, ethical-utility is supposed to be a direct measure of moral goodness (which, for example, it might be wrong to maximize in expectation).
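For reference, here's the formal shape of the claim (a standard statement written from memory, so double-check the details before relying on it):

```latex
% Von Neumann--Morgenstern representation theorem, informally:
% if a preference relation $\succeq$ over lotteries satisfies
% completeness, transitivity, continuity, and independence, then
% there is a utility function $u$ on outcomes such that
\[
  L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]
\]
% for all lotteries $L, M$, and $u$ is unique only up to positive
% affine transformation ($u' = a u + b$, $a > 0$). So VNM-utility has
% no privileged zero point or scale -- one reason it's risky to read
% direct moral significance into its numeric values.
```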
I believe Harsanyi has some influential work on the connection between the two types of utility. Maybe under certain conditions they can be shown to be equivalent-- but I haven't gotten around to reading about it yet. Another relevant work (dense, but very cool) might be Lara Buchak's "Risk and Rationality."
If you buy the axioms underlying VNM, then you've got yourself a utility function -- one with the nice feature of being useful in making decisions under uncertainty (i.e., the alternative that maximizes the expectation of utility is the most preferable and thus "ought" to be chosen). But if you don't buy the axioms - perhaps because you're facing an ethical decision and you're a deontologist - then that utility function ain't for you, and it won't tell you much at all about what to do.
If we're talking ethics, and we're consequentialists, VNM will probably do fine; there's no distinction between "ethical-utility" and "utility." If we're deontologists, having a utility function is irrelevant. If we're "rule" deontologists, we'll need something like a utility function and a lot of evidence.
Personally I don't feel words like "happiness", at least the way most people use language, even have a precise enough meaning to come to absolute conclusions through building chains of reasoning on top of them. In my view "happiness" is usually used to describe mental states that people want to stay in, and suffering the states they prefer to avoid.
For value systems, I don't believe there is an absolute basis. You could have a cult that believes everyone should suffer and die, and have its value system be internally coherent.
I think humans are social creatures that usually have some instinctual concern for the well-being of others, and also see their own self-interest in structuring society around certain types of mutual cooperation and behavior, and this is reflected in their value systems. But I wouldn't put any of them as totally self-evident or grounded in some absolute truth.
In terms of research into language learning I recommend Stephen Krashen. He has videos and papers.
The summary is to spend 500 hours letting your brain absorb content that is just about comprehensible but above your current level.
Actively rationally thinking about it isn’t the key - letting your neural networks pattern match is. Content with context - so video of real situations, films, podcasts, childrens books with pictures, real world situations is best.
Try to overcome the adult need to be correct or socially acceptable. Take in as much content as you can and build practices so you enjoy doing that (so videos and films and podcasts that you like).
That's very interesting. So do you mean that we should focus on this sort of passive consumption, and completely abandon more conventional learning? Or just add the consumption on top? I can't imagine how I could ever actually learn a language without at least some attempt to actively learn and remember rules and vocabulary.
I think that you need both, or that it at least speeds things up tremendously. Lots of input helped me a lot, but if I'd never looked up any words it would have taken me way longer.
I think in theory you don’t need both - young children can learn a new language just by listening then suddenly speak fluently after six months or a year (there’s a talk where Krashen gives an example of this).
And obviously we all learn our first language this way.
However, for adults, I think you'd get bored, and looking up words and doing some rote learning of key vocabulary will help give you access to more content.
So yeah - I’d make Anki word lists of the words in the Chinese cartoons and then rewatch them. That kind of thing.
Interesting. My only second language that I've ever been at all proficient in was from both doing a Duolingo type app and living in the country, which I suppose is a form of this mixed approach, so that holds up. I'll have to fire up the Chinese cartoons.
1) That's not really true, is it? Children are actively and deliberately taught a lot of their native language. Children ask what things are called, get corrected by adults, and are formally taught much of their language in school. Children do learn more of their language from that kind of passive absorption and imitation than is typical for adults, but:
2) Children famously have more capacity for language acquisition. It seems intuitive to me that the difference in capacity would be especially pronounced here, since this is the only method of language acquisition young children have at all (being actively taught requires already having some language). Relatedly, children spend *all* of their time doing this. So even if it were much less efficient than a more active, structured learning process, we could still expect it to work eventually. But it seems plausible that, with one language already mastered, we can circumvent this inefficient process.
Perhaps my perspective is less common than I assume, but the idea of anyone learning anything important about their native language in school seems silly to me.
I had the good fortune to learn Standard English as my first language, so the rules I learned in school all seemed pretty obvious. I think the main benefit of that part of my education was to make me realize that there were rules I was already following without realizing it. (Non-standard forms of language have rules too, but you generally won't have anybody explicitly telling you what they are.)
Adults do tend to deliberately teach language, but if you have neglectful parents who don't bother you'll still learn to speak.
The only thing I've ever found that works for me is tens of hours of input. I supplement with grammar lessons and flashcards, but honestly the biggest hump for me is always turning "random foreign language sounds" into "words and sentences, just not in a language I know." Overcoming that barrier requires hearing a ton of the language, and preferably with content made for learners that is slow and clear. "Dreaming Spanish" or "Comprehensible Japanese" are good examples of the class of content I mean.
Unfortunately, I don't have any content recommendations for Russian or German. I'll get to Russian eventually when I decide that I'm functionally fluent in a Romance language and an East Asian one...
I think the European cities are actually quite a bit closer to YIMBY than most American cities. They don't have all the requirements of minimum lot sizes and mandatory off-street parking that even most of New York and San Francisco have (let alone every single other US city). I think most YIMBYs are happy with a moderate amount of preservation rules, though the impact of those rules on an older city is higher than the impact of those rules on a newer city. Probably the biggest way in which European cities are less YIMBY than American cities is that skyscrapers are illegal throughout the entire core region in many such cities, while in the United States they are usually legal in a single central square mile.
Vancouverism (https://en.wikipedia.org/wiki/Vancouverism) is probably the only form of residential density that is easier in North America than in Europe. I don't know if it actually achieves density greater than that of Euroblocks (http://urbankchoze.blogspot.com/2015/05/traditional-euro-bloc-what-it-is-how-it.html), or if it achieves equal human density while allowing more parking than Euroblocks. But outside of Vancouver, you don't see that large of an area of Vancouverism anywhere (there's maybe a square mile in Toronto, Austin, Chicago, and Seattle, probably a larger area in New York, and only sporadic bits elsewhere).
Note that in many places where skyscrapers are illegal, it's for practical reasons - e.g. Paris was originally (in Roman times) a mining city, and the ground beneath the city centre is basically hollow; it simply can't support the weight of buildings above a certain height.
Average rent for a one bedroom unit in Paris, according to a cursory google search, is about $1k/mo or $1.4k in "city centre." I would not say that this is "extremely expensive." That's cheaper than Chicago, and a lot cheaper than NYC or SF.
Paris has a population density of about 20k people/km^2, Barcelona 16k, and Strasbourg appears to be quite un-dense at 3.5k. For American comparisons, SF is 6.6k (still in km^2), and NYC is around 11k overall, with Manhattan around 27k. Fun fact: San Francisco is America's second-most-dense major city.
As an American YIMBY, I just want our cities to be dense. America has plenty of land for suburbia, there will always be room for a suburb somewhere if enough people want to live in one. If a city's limits grow, the suburbs will move outwards and take over rural land. There appears to be a massive amount of handwringing that the suburbs will disappear when that's frankly unimaginable. I just want the density of a city to be determined by the free market according to people's preferences. If some developer builds 100 sq ft units, well they will either find tenants or they won't. If they don't find tenants, they will lose money, and the next developer will build larger units. That's the worst possible scenario of letting developers build as dense as they can.
Should Paris be more dense? I will leave that to the French. I think they did a wonderful job. Paris is a gorgeous city, prices are reasonable, and transit is convenient. In America, I think we have artificially handicapped ourselves from building cities like Paris by restricting density.
> Should Paris be more dense? I will leave that to the French.
As a French person, I think we should try to split the "historical center", which should be preserved at all costs, from the "place where people live and work", which should be relatively close and very dense. I think you should also be a bit more precise about what you mean by "density". Paris proper is 21k/km², with 2 million people in 105 km². Urban Paris is 3.8k/km², with 10.7 million people in 2,800 km². The metropolitan area is 690/km², with 13 million people in 18,900 km². I'll leave it to the Parisians to debate what counts as being in Paris, i.e. how close you have to be to have the opportunities and salaries that come with living in Paris. But Paris "I can walk to work" and Paris "my job is 1h30 of commute away" are two very different cities.
If you live in Paris (area or intramuros) your job may be far away but chances are still good that you'll be able to do your shopping on foot. And you'll often be able to commute using public transportation if you are on the right RER lines.
I do not know whether that range is accurate or what's meant by city center, but you should also consider that normal people probably earn less in Paris than in NYC or SF. At least that's certainly true in my line of work. I lived and worked in both Paris and NYC.
The goal should be that there is NOT a real architectural loss, because the average person views the newer, bigger buildings as the architectural equals of the older, smaller buildings. This is a fairly high bar, but the architecture profession needs to genuinely attempt to clear it. Right now, they don't. Instead, they maintain their ideological commitment to unpopular modernist designs and roll their eyes when the public complains.
Yeah, I don't think YIMBY ideology scales beyond the national level. I'm sure there are YIMBYs everywhere, but I view YIMBYism as a reaction to local deficiencies. When I lived in Iowa, I wasn't a YIMBY, I didn't even know about the movement. The housing stock in Iowa seems fine. I now live in San Francisco, and I see very dramatically the deficiency in housing and the dysfunctional local politics that cause it, and now I'm a YIMBY.
Whether some nation *should* be more YIMBY I think is related to the rest of the nation's politics. For example, in America, the bad housing situation is a big driver of poverty. This might not be as much of a problem in a European country that has a stronger social safety net. Maybe you get priced out of a city, but there is a good government program to relocate your family to some other place that still has a decent school. To the extent that we want to reduce poverty and its related social ills in America, providing better and cheaper housing is, it appears to me, a very effective way of accomplishing that. If there is some European country that really prizes its old architecture and is willing to resolve the social cost of having expensive housing in some other way, I don't think there's inherently something wrong with that, it's just a different set of values.
This is a good point, YIMBY as a political movement is directional, and what's obviously trivially true in an extremist vetocracy like San Francisco becomes much more debatable in a country with functioning zoning laws.
Cool new finding relevant to ageing biology:
Apparently ribosomes (the RNA-based molecular machines that make proteins by running along the mRNA template) sometimes go faster or slower—and can even bump into each other and get into lil traffic jams!
And apparently the traffic jams happen more with age (at least in C. elegans), so this might be part of why loss of proteostasis is a hallmark of ageing (e.g. buildup of misfolded proteins, as especially happens in Alzheimer's, Parkinson's, and Huntington's)
Paper: https://www.nature.com/articles/s41586-021-04295-4
Lay article: https://news.stanford.edu/2022/01/19/role-ribosomes-age-related-diseases/
This is super interesting. Since there's a paywall, is there a hypothesis about what alters the ribosome kinetics?
(sci-hub probably has it)
Not a full hypothesis, but mechanistically they did find that ribosome "elongation pausing" didn't happen more *overall*, but did happen more "at specific positions ... including polybasic stretches," and that's what causes the increased collisions
I didn't notice a mechanistic hypothesis for why pausing increases with age at those locations. They don't mention evolutionary hypotheses, but the in-general evolutionary theory for why ageing evolves is that natural selection cares less about late life (after you've maybe already reproduced a bunch anyway) than about early life; and trade-offs and pleiotropies may be involved too*; though understanding more detail than that would require, idk, knowing what exact trade-offs were involved instead of just that there might have been some.
*https://www.nature.com/scitable/knowledge/library/the-evolution-of-aging-23651151/
and here's a lecture I did on evolution of aging for an evolutionary ecology class I TA'd [https://drive.google.com/drive/folders/1VhO2VHoEI-FFWTgD-rCVXNHy7cQ9sHBS?usp=sharing]
Scihub doesn't accept new submissions right now for some legal reason (lol) so I haven't checked. Thanks for the writeup!
The interesting part would be whether:
- the RNA being read is somehow messed up by itself, already at the point of transcription (???)
- the ribosomes have some subtle faults that don't do much functionally, outside of the polybasic stretches (but aren't ribosomes constantly regenerated?)
- the chemical environment within the cell is outside of the expected conditions (how?), so the ribosome can't do its thing properly
Intuitively I lean towards the last option.
Given that platelets have functional ribosomes and mRNA, but not much else, it seems like you could do some interesting work around transferring filtered parts of the cytoplasm between young and old patients' platelets, monitoring elongation pausing, and isolating what's responsible.
Does anyone know good sources for learning about test-tube meat? From a rationalist perspective it seems like working to end factory farming should be at the top of the docket, and cultivated meat seems to me like the most likely way of doing that. I want to apply my computer science degree to research in this field and the only online source on this I've found has been the Cultivated Meat Modeling Consortium: https://thecmmc.org/ but they haven't answered any of my emails. Just wondering if anyone here has any knowledge on the subject or can point my towards good resources or communities for learning and discussion.
With state-of-the-art tech, we're orders of magnitude from economic feasibility. Anyone claiming otherwise is probably trying to sell you something and/or scam a VC.
https://thecounter.org/lab-grown-cultivated-meat-cost-at-scale/
From my limited experience working with cell cultures, the article checks out. Cells are just _so_ fussy about having a sterile environment, it turns out it's way cheaper to grow them in a cow since the cow comes bundled with an immune system.
Cultivated Meat is still in its early stages, but this is why I want to contribute. I certainly think it will be feasible once we develop the right technology. I've heard that computer modeling comes into play
What timeline are you predicting on the "right technology"?
I mean, it's a really important ethical and environmental problem so go nuts, just be aware you might not see widespread adoption of this tech in your lifetime.
I'm not qualified to make that prediction, although many companies in the field are predicting major developments in the next 20 years (of course they're biased though).
Regardless of timeline, I'd like to use my coding skills to help make this technology develop faster.
I've recently seen a lot of headlines about antibiotic resistance. I would love a "Much more than you wanted to know" post on this topic!!
In the meantime here's two clips from Steven Stearns' online evolutionary medicine Yale course:
- 5.5 - Resistance
[https://www.youtube.com/watch?v=hPcZXjnbHDk&list=PLh9mgdi4rNezvm7QkQ_PioadoAWqfa2L0&index=39]
- 5.6 - Evolution-proof therapies:
[https://www.youtube.com/watch?v=uJeJwsOStxA&list=PLh9mgdi4rNezvm7QkQ_PioadoAWqfa2L0&index=40]
and the ELS (encyclopedia of life sciences) article on resistance:
[https://onlinelibrary.wiley.com/doi/10.1002/9780470015902.a0021782]
The tl;dr seems to be "we're fucked" and that's already more than we want to know.
An ivermectin paper about a very large study in Itajai, Brazil. I know you and everyone else are sick of this topic, but I'd be very curious to see what you think of this paper (which may be an update of a previous one?).
https://www.cureus.com/articles/82162-ivermectin-prophylaxis-used-for-covid-19-a-citywide-prospective-observational-study-of-223128-subjects-using-propensity-score-matching
The TL;DR seems to be that prophylactic doses of ivermectin were pretty damn good at improving protection against COVID-19.
What is the rationale for profits generated from the sale of stocks and other similar financial instruments (incl. crypto) being taxed at the person's income tax rate? Is there a legit economic argument apart from 'it tends to yield greater revenue for the government when compared to a flat tax'?
Naively, one could point out that trading decisions I make as a private investor in private companies do not involve my country's government at all. This on its own seems to create a distinction between trading vs. working 9-5 that should be reflected in tax policy.
Since the capital is not meaningfully tied to any jurisdiction, you can run this line of reasoning further and abolish investment profit tax altogether.
Unfortunately, the reason for the tax boils down to "we need money and that guy over there has some", so your (correct) conclusion has no bearing on reality.
What makes income generated from investment profits (relevantly) different from income generated from employment?
See second paragraph. Why *should* the passive exploitation of market fluctuations be treated the same as payment for labour? Seems like the govt. could have a reasonable claim to the latter, but not the former.
Why? You are saying that earned income should be taxed higher than unearned income - the opposite should be the case.
I'm less interested in which one is taxed higher, and more interested in why they are treated the same (at least in my country). What is the economic basis for this? The mechanism of income-earning and mode of participation in the economy are totally different for e.g. a construction worker and a day-trader.
You should actually provide your reasoning as to why any income should be treated differently. And why you think unearned income should be treated better.
Your second paragraph basically says only that the government has no claim to your income because it's that form of income. It's like saying you think that bartenders should be taxed but not property developers. Income is income.
Different forms of income are taxed differently. Long term capital gains (stocks held for more than 365 days, dividends from stocks, capital disbursements from funds) are taxed differently from short term capital gains. Gambling winnings (arguably the most un-earned of unearned income) are taxed slightly differently. Inheritance is taxed differently than wages or capital gains. So no, income is not income. To Sloan's point, capital gains may come from companies operating entirely outside the investor's country. If you are looking for a better reason for taxing investments: the government provides stability/security and enforces the contracts which allow investors to profit from investing, and taxes on capital gains fund the government's ability to keep providing both.
The normal argument that a stock trader has a claim to the income from their capital is that they're actually providing useful labor - efficiently allocating capital to companies that are likely to provide a return on investment. Why is using your labor to move money around any different from using your labor to move bricks around?
If anything, I would think that the government has a better claim on the stocks than on the bricks, because the entire concept of a stock market depends on the government-created legal framework that allows for joint ownership of an abstract legal entity, while houses existed long before deeds to property did.
A guy on Discord asked me my opinion on Orbit Culture. I worried it was going to be some awful culture war nonsense, but no, it's just the name of a band.
What happens if you give a mid or high dose of SSRIs to a person that doesn't have any psychiatric disorders?
Does it induce some kind of euphoria or elevated mood? If not, why does MDMA?
IIRC the only acute effect that boosting global serotonin may give you is a night at the ER due to serotonin syndrome. You can check that yourself by megadosing 5-HTP (a legal supplement), which bypasses the rate-limiting step of serotonin synthesis.
I'm a bit rusty on my psychonautics 101 now, but the trick to serotonergic recreational drugs is that there's a whole bunch of different 5-HT receptors and they preferentially activate specific kinds.
I don't believe it's possible to cause SS just with 5-HTP. As far as I know, synaptic vesicles have limited room, so the extra 5-HT just gets flushed away.
Combining 5-HTP and MDMA may increase the risk of SS but even for that we don't have much evidence. I wouldn't mix those out of an abundance of caution though.
I think it's possible in principle but according to quick googling nobody really tried.
I know you can bump serotonin way above physiological levels with this (a bunch of publications used this for research purposes) but maybe you need a MAOI or something to really hurt the brain.
Agreed, MAOIs + MDMA is pretty dangerous, especially with 1st gen MAOIs (irreversible MAOIs).
Tried it once: a neighbor had some extra Prozac so I took one, in the evening. Went to bed unimpressed - didn’t notice any effects at all. Woke up and was like “Oh shit.” Felt numb and dazed and out of it all day long. Kind of like being stoned but without the fun parts. No euphoria, no interesting thoughts, or even much interest in anything. Literally stared at a blank wall for the better part of an hour, not because I was into doing that, but because I couldn’t gin up enthusiasm enough to do anything else. Another night’s sleep and it went away, but never again. Totally just a buzzkill - significantly less fun than standard-issue reality.
At first SSRIs lead to slightly increased serotonin in the synapse. This extra serotonin activates presynaptic autoreceptors which reduce the release of extra serotonin through a negative feedback loop. You'd need to take it for 3-5 weeks for these presynaptic autoreceptors to get desensitized and serotonin levels to actually increase significantly.
You'd need to take something like Pindolol (or another antagonist with high autoreceptor affinity) to block the autoreceptors and see effects faster.
Generally, taking SSRIs as a person without psychiatric disorders does not induce euphoria. It still causes the normal side effects, which tend to be negative. The only somewhat-frequent positive effect I can imagine is improving mood stability.
SSRIs and MDMA both increase levels of serotonin in the brain, so why don't SSRIs get you high? First, different mechanisms of action. Both MDMA and SSRIs are serotonin reuptake inhibitors. But MDMA is also a serotonin releasing agent -- apparently it reverses the transport of serotonin in the reuptake cycle, which causes serotonin to be released. Also, MDMA is an agonist to some serotonin receptors. Also also, everything I just wrote applies to dopamine also (although to a lesser degree).
> SSRIs and MDMA both increase levels of serotonin in the brain
> MDMA is also a serotonin releasing agent
Assuming an equal concentration of serotonin in the synapses, why does the difference of mechanism have any impact on the effects?
My guess is that SSRIs stimulate the firing of neurons that already fire (because serotonin gets released and then stays). It doesn't work as much for neurons that rarely fire because it gives enough time to MAOs to get rid of the serotonin in the synapse + for the unaffected SERTs to perform their reuptake.
This hypothesis doesn't support your claim that "taking SSRIs as a person without psychiatric disorders does not induce euphoria". What do you think?
> MDMA is an agonist to some serotonin receptors
Is the affinity for these receptors high enough for it to have a clinically-significant effect?
> Assuming an equal concentration of serotonin in the synapses, why does the difference of mechanism have any impact on the effects?
The brain contains a bunch of different neurotransmitter receptors. Some of these receptors are activated by serotonin. So when we casually say "serotonin receptors", we're referring to multiple distinct things.
Imagine a brain that has 50 serotonin units in the 5-HT1 receptor and 10 in the 5-HT2 receptor. And let's say that this brain overall reuptakes 50 serotonin per hour, split proportionally among receptors, and brings in 50 serotonin per hour, which is split depending on whatever -- let's say 30/20 in this case.
If the brain is given an SSRI, the SSRI stops the reuptake, the 50 serotonin still come in, and the brain ends up with 80 at HT-1 and 30 at HT-2 -- all in all, +50 serotonin. If the brain is given MDMA, it ends up with a different distribution. Maybe "serotonin releasing agent" means that MDMA releases 50 serotonin into the ether (sorry, I don't know how that works) where it gets used equally by each receptor. So the receptors start with 50/10 serotonin respectively; the reuptake occurs for -45/-5, the brain naturally adds 30/20, and the MDMA adds 25/25; which brings us to 60/50. 5-HT2 is the euphoric receptor (in this example), and that's how the mechanism matters.
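Here's that toy model as runnable code, in case the bookkeeping is easier to follow that way (every number is invented, as above -- this is arithmetic, not pharmacology):

```python
# Two receptor pools, natural inflow, reuptake, and two drug modes.
def toy_brain(ht1=50, ht2=10, drug="ssri"):
    inflow = (30, 20)    # serotonin arriving per hour, per pool
    reuptake = (45, 5)   # per-hour reuptake, roughly proportional to pool size
    if drug == "ssri":
        ht1 += inflow[0]                      # reuptake blocked,
        ht2 += inflow[1]                      # inflow continues
    elif drug == "mdma":
        ht1 += -reuptake[0] + inflow[0] + 25  # reuptake proceeds, inflow
        ht2 += -reuptake[1] + inflow[1] + 25  # continues, plus an extra 50
    return ht1, ht2                           # units released, split evenly

print(toy_brain(drug="ssri"))  # (80, 30): boost lands where serotonin already was
print(toy_brain(drug="mdma"))  # (60, 50): the "euphoric" 5-HT2 pool gets flooded
```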
> MDMA is an agonist to some serotonin receptors
No idea. I don't even know what "serotonin releasing" really means / how it works!
> My guess is that SSRIs stimulate the firing of neurons that already fire.
Seems reasonable to me. Though this would be a general and indirect effect of the SSRI -- it doesn't do anything to individual neurons, so this would have to be mediated through the effect of serotonin neurotransmitters on neuron activity.
Anyone have sources/info/opinions/guesses about how often omicron causes false negative Covid test results? There was something about vaxxed people having much lower levels of virus in the nares. And then maybe omicron behaving differently in the respiratory system.
I just read that England's canal system is less useful than mainland Europe's because a much larger fraction of English canals are too narrow and its locks too short to accommodate big boats. As a result, canals on the mainland get much more use.
Would it be worth it (e.g. - eventual positive ROI) for Britain to upgrade its canals to European standards? Does Britain's smaller geographic size affect the economies of scale of using canals to move bulk goods?
I mean, rails just get you way more victory points than canals as long as you're sensible about placement.
Geography is against us here. The canal network in England passes through both dense urban areas and hilly rural areas, neither of which would be easily routed through (or willingly sacrificed).
And in Europe, canal-building can link up thousands of kilometres of large-scale navigation; whereas in England the largest canal is the 38-kilometer Manchester Ship Canal, which gets as far inland as is realistic before the hills start getting in the way; and that doesn't get much freight anyway.
I do think you're onto something, about the smaller geographical size; also, consider the fact that Britain is an island - nowhere inland is that far from a coastal port. Not true of many countries on the continent.
Canals were most important during the early years of the industrial revolution because of a lack of good roads/rail and the low quality of engines (weak engines could move far more weight over water than over land). While I have no doubt that canals can still be useful today, they are also very expensive to build. My guess would be that their usefulness-to-cost ratio is neutral or worse at this point. Too many good roads and rail lines; and even if the canals could suddenly come into existence for free, the boats and barges to use them don't exist and would need to be purchased.
I wonder if it has anything to do with England getting rail first.
I think Amazon is Moloch. Should I cancel my Amazon Prime and order less from them?
Would you purchase less without an Amazon account, or would you transition your purchases to some other company? If you move the purchases, would your alternative (for instance Walmart/walmart.com) also be Moloch?
I'd purchase the same amount of stuff, just from other vendors. And sure the other vendors are also under the sway of Moloch, but being smaller they seem less evil.
I guess my question is do we throw up our hands and say Moloch is king and so the right (rational) thing to do is keep using Amazon, because they provide clear value to me. Or is big Moloch so much worse than small Moloch that we should select against the big one?
I guess that depends on why you consider Amazon to be Moloch. To me, Moloch means the slow churning of unintentional negative outcomes that slowly degrade a system and keep it from being good/better. On that reading, a series of small shops and local artisan crafters can be just as much Moloch as a big chain like Amazon, or more.
If you're convinced Amazon is some type of evil, then by all means stop shopping there. If you can't articulate why they are evil or why your alternative options are not, then maybe redefine what you consider evil?
Does anyone have an informed opinion, or link to good sources, on how bad will things get if Russia cuts off European natural gas supply? I am trying to cram relevant knowledge of the Current Events...
My impression is that the US is in a position to export far more LNG to Europe than in the past, thanks to the exploitation of shale reserves. As a result, Russia cutting off the gas looks more like "sharp rise in the price of gas, the worst effects of which governments can stave off with temporary subsidies if they choose" than "no gas available at any price, industry shuts down and people freeze to death in their homes".
You are correct about the increased LNG imports; on the other hand, the EU is, I think, more dependent on gas overall due to the combined effects of decarbonization and denuclearization.
You can look into the effects of the Ukrainian shut downs in 2014, when Russia was taking Crimea and fighting in the Donetsk region. Gas lines through the country were turned off, for obvious reasons.
My memory is that there were significant shortages throughout Eastern Europe.
I remember that. Disruption at that time didn't reach levels where it would be noticeable by normal people, at least in most countries. I live in Eastern Europe, you know. But this time it might be far worse, that much is clear. Russia, as far as I know, has never shut down all their pipelines for months, which they might well do if they are hit by heavy sanctions, and European dependence on them is probably greater now than in the past (?). However, "far worse than almost nothing" is a broad category and I'd like to get a better estimate :-)
You would likely know more than me then. What stuck with me is that Ukraine felt very pressured (unfairly so, even in the situation they were facing) into Russia's demands because of the threat of no heat. Maybe the situation resolved and/or Ukraine caved before most people saw the shortage.
I was mainly thinking about other countries than Ukraine, like Germany. Ukraine was pretty economically screwed in 2014, for various other reasons (and it still is). Not sure how much short-term gas shortage contributed to that.
But one thing that was very bad for Ukraine was that Russia stopped selling them gas at below-market rates, as they did when a pro-Russian government was in power in Kiev - until 2009 and then again from 2010 to 2014. Since 2015, per Wikipedia, Ukraine gets its gas from the EU (which gets it mostly from Russia), but at market prices.
What are some techniques people use to maintain long-distance relationships? (I don't necessarily mean romantic, which is its own separate kettle of fish.) I'm particularly interested in ones across multiple time zones, such that synchronous interaction is difficult. My husband and I both studied in Europe but live in the US, and have struggled maintaining connections with European friends. A vanilla email conversation is just too easy to let slip and then not pick up again, so it tends to naturally devolve into the annual Christmas card exchange (both low-frequency and low-content per interaction).
Play multiplayer games together. This is, IMO, a huge part of why things like League of Legends took off despite being kind of garbage.
Somehow the Signal app has made me get slightly more back in touch with a friend who moved overseas a decade ago. I can ignore texts but have a harder time ignoring Signal messages. It can be an asynchronous text-based Signal conversation but it keeps the sense of warmth alive. It’s the only thing I use Signal for, which is embarrassing, but somehow it works.
Jackbox.tv or some other game you can play virtually while on zoom can be pretty fun
Zoom helped us tremendously. Highly recommend also having a couple of beers with it to ignore the inherent awkwardness. Time zone differences aren't that much of an issue if you're speaking during the weekend.
Time zone differences are easier on weekends, but we're in the age group of having small children. If this were in person, everyone having kids would be a bonus as the kids could just entertain each other, but online it's a headache.
Yeah, I can see that. Whatsapp is also great for keeping almost continuous but asynchronous contact, sharing pictures, jokes etc.
It’s surprising how much people love to get a real letter written in cursive. My friends tell me they always share them with their families. It’s a small thing but it seems to add a lot.
I don't think it's surprising, I really like getting real letters myself! One of the few things I remember fondly about the first couple of years in the US are the (real handwritten) letters I exchanged with friends back then.
I was reminded of this New Yorker article by Jill Lepore while in discussions about Peter Coleman's new book, "The Way Out: How to Overcome Toxic Polarization". In "The Way Out", Coleman emphasizes getting into the details and adding complexity when evaluating your opposition (it is a good read: recommended). Avoid temptingly simple descriptions or understandings of their policies and plans. Saying "Anyone who would support ???? must be an idiot" is certainly oversimplification, for example.
According to research by Coleman and others, expanding on the details is going to provide a more accurate, informed picture, and probably a much less polarizing one as well.
Interestingly, Clem Whitaker and Leone Baxter, the highly successful founders of "Campaigns, Inc." (which also became known as "The Lie Factory"), won 70 of the 75 political campaigns they worked on by simplifying campaigns to slogans like "I like Ike". They also recommended "not to explain anything," as explanation bores and confuses voting populations.
So it seems understanding the average voter's inclinations and running a "simplify" campaign like an advertising agency would is appealing to masses of voters and has been a key to political success. Lots of political success! Yet according to Coleman, simplifying your position only increases polarization. Seems like a difficult situation to work out of! Here's a quote from Lepore's article, and the whole article is linked after the quote:
"Never underestimate the opposition. The first thing Whitaker and Baxter always did, when they took on a campaign, was to “hibernate” for a week, to write a Plan of Campaign. Then they wrote an Opposition Plan of Campaign, to anticipate the moves made against them. Every campaign needs a theme. Keep it simple. Rhyming’s good. (“For Jimmy and me, vote ‘yes’ on 3.”) Never explain anything. “The more you have to explain,” Whitaker said, “the more difficult it is to win support.” Say the same thing over and over again. “We assume we have to get a voter’s attention seven times to make a sale,” Whitaker said. Subtlety is your enemy. “Words that lean on the mind are no good,” according to Baxter. “They must dent it.” Simplify, simplify, simplify. “A wall goes up,” Whitaker warned, “when you try to make Mr. and Mrs. Average American Citizen work or think.' "
https://www.newyorker.com/magazine/2012/09/24/the-lie-factory
It's worth noting that the observed difference between the recommendation to avoid simplification by Coleman and the recommendation to simplify by Whitaker&Baxter seems to be fully explained by the different, perhaps even opposite goals.
In evaluating your opposition, your goal is to obtain an objective understanding of their position that truly matches reality and which parts of it are strong and weak.
In communicating a message to your voters, your goal is to have them obtain an understanding of your position that favors your position, exaggerates its strengths and diverts all attention to them, and suppresses or distorts the parts of it that are weak.
Doing the former is very useful to be able to do the latter effectively, however, the fact that it's useful for you to do a proper analysis and gain a balanced understanding does not imply that it's always useful for you if all the voters do a proper analysis and gain the same balanced understanding.
Being polarized harms your thinking, so you should avoid that; however, in many aspects of politics it's quite beneficial if you can get others polarized. It also may be very useful - or even a de facto requirement - to *appear* polarized. You should not think that "Anyone who would support ???? must be an idiot"; however, when you're done thinking, it may well be optimal behavior to loudly proclaim that yes indeed, anyone who would support ???? definitely must be an idiot.
Good thoughts on this topic! My current feeling, from reading another Lepore article or book, is that a few really smart people are controlling/persuading the masses of less sophisticated people, Whitaker and Baxter being good examples of shrewd manipulators of the public, in particular the voting public. I believe that the overall education level is slowly continuing to improve in the USA, and that someday these persuaders will have a tougher audience, one requiring more comprehensive information about a candidate rather than keeping it simple and not explaining anything. Someday the masses might require explaining. Democracy will be better for it, IMHO.
Right. I've gotten to a stage in my life (old fart, get off my grass) where I've lost all interest in politics... (because of disgust). I wanna talk about other stuff.
If you want to have a conversation with someone, you need to take what they say seriously and in good faith... what they say is what they believe. I know that sounds simple. I was reading this piece on 'everything studies' and it hit me that the problem in the conversation was Ezra assuming ulterior motives...
If you didn't follow the Harris - Klein thing then this will be almost meaningless.
https://everythingstudies.com/2018/04/26/a-deep-dive-into-the-harris-klein-controversy/
One piece of advice is how to avoid being stupid. The other is how to win elections by making people stupid.
Related: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
I was hoping this would be considered a nonpartisan analysis of politics rather than a one-side-or-the-other political post.
Yeah, it's fine by me; a meta-politics question is OK, I think.
Entirely unserious question: has anyone thought about what the ideal alignment for an AI would be? I'm thinking Lawful Good, but I'm willing to hear other thoughts.
Presumably we'd want a superintelligent AI to be able to sometimes break rules for the sake of the greater good, so I'd have said Neutral Good, personally.
If we can assume full alignment on what "Good" means, then Neutral Good. If we can't, better stick with Lawful Good instead so it follows those "And, don't turn us all into paperclips" amendments.
A Chaotic Good superintelligence would axiomatically value freedom, and therefore would avoid single-mindedly focusing on its goal of producing paperclips.
If we aren't sure what we mean by "Good" or how to define the actions, then maybe Chaotic Neutral would be better - freedom (chaotic) mixed with a lack of emphasis on the correct course of action (neutral).
If you're Chaotic Good, you value both freedom and other people, and therefore value other people's freedom. If you're Chaotic Neutral, other people's freedom takes a back seat to yours.
Wondering about the attempts to make cars lighter so as to reduce fuel consumption. The easiest way was to reduce the size, and since then there has been a move to lighter materials (e.g. aluminum and carbon fibre in place of iron and steel). Now some manufacturers are deleting spare tires. Diminishing returns indeed. (All this is not to say that there have not been other very effective ways to reduce fuel consumption - through the ages, higher-efficiency engines, fuel injection, aerodynamics for reduced drag, autostop at lights, variable displacement, etc., have all played a part.)
But back to weight reduction, I wondered whether anyone had considered adding buoyant (lighter-than-air) sacs or bags or vessels of some sort.
What does a typical vehicle weigh now - 1.5 t? (That's 1500 kg or 3300 lbs.) If one could somehow reserve one m^3 of space for hydrogen-containing bags, how much would that help? (I think 1 m^3 is doable - above the headliner, inside the tailgate, inside the doors, under the seats, under the dash ...)
Per Wiki's article on lifting gases, dry air weighs 1.29 grams/litre. The lightest gas is hydrogen, at roughly 1/14 the density of air (so approximately 0.09 grams/litre). And a pure vacuum would be even better, weighing nothing at all.
Let's assume the use of hydrogen - weight of air displaced = 1.29 g/l x 1000 l = 1290 g = 1.29 kg. Weight of replacement hydrogen = approx. 90 g (0.09 kg). Net weight reduction = about 1.2 kg. That's not at all significant compared to the typical 1500 kg weight of the car, so in a practical sense it would be noise, or a rounding error. The driver might do better to take junk out of the trunk, or to skip supper. And that's not even taking into account the additional weight of the sturdy containers needed for the hydrogen.
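For anyone who wants to fiddle with the numbers, here's a minimal back-of-envelope sketch. The densities are rounded, and the 1 m^3 of cavities and 1.5 t car are the same assumptions as above:

```python
# Toy buoyancy calculation for the hydrogen-bag idea; all values are the
# rounded assumptions from the comment above, not measurements.
AIR_DENSITY = 1.29      # g/L, dry air
H2_DENSITY = 0.09       # g/L, hydrogen at the same temperature and pressure
VOLUME_L = 1000         # L, the hypothetical 1 m^3 of empty cavities
CAR_MASS_G = 1_500_000  # g, a typical 1.5 t car

air_g = AIR_DENSITY * VOLUME_L  # mass of the air displaced
h2_g = H2_DENSITY * VOLUME_L    # mass of the hydrogen that replaces it
saving_g = air_g - h2_g         # net reduction, ignoring container mass

print(f"Net saving: {saving_g / 1000:.2f} kg "
      f"({100 * saving_g / CAR_MASS_G:.3f}% of the car's mass)")
# Net saving: 1.20 kg (0.080% of the car's mass)
```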
But in a more theoretical sense, assuming vehicles had these cavernous empty spaces presently filled with air that could instead be safely filled with hydrogen, would that actually increase fuel efficiency? Weight would be reduced, but mass would not. Would it help?
Just idle curiosity. (Pun not originally intended.)
In a crash, you really want the other guy's vehicle to be lighter than yours.
This puts market pressure on cars to be _heavier_.
Wouldn't property damage insurance put pressure in the other direction?
Maybe taxing heavy cars could also help.
There was a Honda (Civic) sold in the US circa 1980 that got ~50 MPG highway?
Very light with a small engine. Drag racers love to put bigger motors in 'em.
Hmm, so I went looking on the web for a "best MPG ever" list and found no mention of the Civic,
but I found this:
" No one really knew what to make of the diminutive Honda coupe when it first appeared on these shores, but its futuristic styling, impressive handling and exceptional fuel economy soon won over buyers en masse. Early models were targeted to those seeking fuel efficiency over all else, and the EPA rated the 1.3-liter four-cylinder 1984 Honda CRX at an astonishing 68 MPG in highway driving. The car's aerodynamic shape certainly helped, as did its tall gearing and curb weight of just 1,713 pounds, virtually unattainable in a moderately priced production car today. "
I totally want a modern-day Civic, but no one makes 'em.
I imagine a 1984 CRX -- or any car weighing 1700 lbs -- isn't going to fare well in modern crash tests.
The Civic CRX was great fun to drive, and surprisingly roomy inside for such a small car. (My in-laws owned one.) I’ve always wondered why they gave up on that model.
So far as I know, the reason cars were made lighter to improve fuel consumption is not so much that less fuel would be used accelerating, because those fuel savings are small, but because a lighter car requires a smaller engine to accelerate at a pace that is acceptable to the consumer, and it's the smaller engine that gives you substantial fuel savings over the entire driving cycle.
Part of the problem in engine design is that you need substantially more power for acceleration than you do cruising, because people won't drive a car that goes 0-60 in 200 seconds. But if you put enough cylinders and cylinder volume in to gain an acceptable acceleration, you are burning more gas than you need at cruising.
Engineers have approached this problem in several different ways: computer controlled fuel injection and timing helped, because you can lean the mixture out at cruising, and control the timing appropriately to prevent bad performance. Some people tried shutting off a few cylinders at cruising, but that's mechanically expensive and doesn't appear to have caught on widely. The modern approach seems to be to turbo or supercharge the engine, even in modest family cars. That allows you to put a smaller engine in, one appropriate for cruising, and then use the charger to boost power when accelerating. Tricky bit here is that turbochargers don't work unless engine speed is high. Superchargers work at any speed, but I think are less efficient.
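To put rough numbers on that cruise-vs-acceleration gap, here's a back-of-envelope sketch; every figure (drag coefficient, frontal area, rolling resistance, mass, the 9-second 0-60) is a generic assumption for a mid-size car, not data for any real model:

```python
# Rough comparison of steady cruising power vs. acceleration power.
# All constants are generic assumptions, not specs of any actual car.
RHO = 1.2    # kg/m^3, air density
CD = 0.30    # drag coefficient (assumed)
AREA = 2.2   # m^2, frontal area (assumed)
CRR = 0.010  # rolling resistance coefficient (assumed)
MASS = 1500  # kg
G = 9.81     # m/s^2

def cruise_power(v):
    """Power (W) to hold speed v (m/s) against drag + rolling resistance."""
    drag = 0.5 * RHO * CD * AREA * v ** 2
    rolling = CRR * MASS * G
    return (drag + rolling) * v

v_cruise = 30.0  # m/s, ~108 km/h
v_60mph = 26.8   # m/s
t_accel = 9.0    # s, an "acceptable" 0-60 time

kinetic_energy = 0.5 * MASS * v_60mph ** 2  # J, just to reach 60 mph
avg_accel_power = kinetic_energy / t_accel  # W, before drag/rolling losses

print(f"Cruising: {cruise_power(v_cruise) / 1000:.0f} kW")                  # ~15 kW
print(f"0-60 in {t_accel:.0f} s: {avg_accel_power / 1000:.0f} kW average")  # ~60 kW
```

Even this crude estimate puts acceleration demand at roughly four times cruising demand, which is the gap that cylinder deactivation and turbo/supercharging try to bridge.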
Edit: I think others have already answered your ultimate question, but just in case: to the extent you are replacing air dragged along with the car with lower-density H2, then you are reducing inertial mass as if you replaced a steel part with aluminum. The only place where I can see reducing weight (force of gravity) would help with fuel efficiency is that it would reduce the energy loss due to inelasticity in the tires, because you could use a lighter, stiffer tire without compromising ride quality.
I would totally love one of those 80's econo cars, that you had to keep floored way past the on ramp. (But I'm not a 'normal american boy' when it comes to cars. I'm driving an old minivan now.)
In addition to the other issues already mentioned, that empty volume isn't pure waste - it's there for a reason. Maybe it's only ever going to be accessed during assembly and/or maintenance, but if it's got gas bags filling it up then that makes assembly and maintenance that much harder. Which will almost certainly cost you more than the very marginal weight reduction will save you in gasoline.
Also, that volume is not compact; it's distributed in a convoluted fashion with a rather high area-to-volume ratio. Any gastight container you can fit into it, if it's truly hydrogen- or helium-impermeable over the life of a car, is likely to weigh more than the buoyancy of the lifting gas it contains.
Also also, if it's hydrogen that car is going to make a '72 Pinto look like a Sherman tank(*) when it comes to crashworthiness. So you'd better make it helium, and do the math on whether it's going to pose an asphyxiation hazard if someone e.g. accidentally punctures the gasbag behind the dash while head down in the footwell trying to do a quick repair.
* M4A2, with diesel and wet stowage, for the tank nerds here
Agreed, my question got more and more theoretical as I thought about it. Yes, it's highly impractical. Good point about the container(s) weighing (much) more than the resultant buoyancy. And yes, one large spherical container would be the most efficient (maximizing volume for a given surface area), but also very difficult to stash somewhere.
What the other replies said, but, to pause for a moment on the imaginary scenario of reducing a car's weight without reducing its mass... maybe we move a regular car to a smaller planet, ceteris paribus.
In theory, yes, reduction in weight alone would increase fuel efficiency, because you would reduce rolling resistance. As an inflated tire rolls, the tire sidewalls and the tread rubber all deform under the force of the vehicle's weight. This deformation produces friction lost as heat which makes the tire roll slower than it otherwise would. Less vehicle weight would mean less deformation of the tire which means less rolling resistance.
If you want to see more practical efforts at reducing fuel consumption, look up the engineering behind the Volkswagen XL1. Weighs 1750 pounds, gets 100+ MPG on diesel alone, no recharging from the grid, easy. We can do it; there just hasn't been consumer or regulatory appetite. Maybe we'd have it if gas cost $10 per gallon, or maybe battery EVs will win anyway.
Oh dear, I missed that. Same mass, less weight is bad! Less traction.
Agreed, there is still much lower-hanging fruit than my crackpot idea. And yes, ultimately these improvements will be driven by the cost of fuel. Canada's carbon tax is using a carrot-and-stick approach, though, to nudge people towards reduced consumption. The carbon tax is revenue-neutral. Made-up example: your V16 Buick McBehemoth will cost you an extra $1000 a year to run due to the carbon taxes on gasoline. (That is, the carbon-tax component of the gasoline will cost you an additional $1000 annually, beyond the market price of fuel.) The VW XL1 will only cost you an additional $100.
The government will refund everyone $500. The Buick driver is down $500. The VW driver is up $400.
I've made these numbers up, but that's the idea.
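For what it's worth, the mechanism is easy to sketch; here's a toy version using those same made-up figures (with exact revenue neutrality, the per-person rebate comes out to $550 rather than the rounded $500 above):

```python
# Toy revenue-neutral carbon tax: everything collected is rebated equally.
# The two drivers and their annual carbon-tax bills are the made-up
# figures from the comment above.
taxes_paid = {"Buick McBehemoth": 1000, "VW XL1": 100}

rebate = sum(taxes_paid.values()) / len(taxes_paid)  # equal per-person rebate

for driver, paid in taxes_paid.items():
    print(f"{driver}: paid ${paid}, rebated ${rebate:.0f}, net ${rebate - paid:+.0f}")
# Buick McBehemoth: paid $1000, rebated $550, net $-450
# VW XL1: paid $100, rebated $550, net $+450
```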
Internal mass would actually be reduced relative to a baseline of having those voids filled with regular air, since you're now hauling around ~90 g of hydrogen instead of ~1290 g of air.
But that's assuming no mass cost to contain the hydrogen, which is unlikely because hydrogen is notoriously bad at staying where it's put: tiny H2 molecules diffuse through materials a lot more readily than medium-sized O2 and N2 molecules, and hydrogen gas has the additional annoying feature that it reacts with a number of metals (most notably iron and steel) in a way that makes them more brittle as it diffuses through them.
Agreed, the weight of containing the hydrogen would more than offset the token reduction due to displacing some air. Not to mention the difficulty and capital cost of building tanks for the hydrogen ...
It'd be simpler and more effective to just make the car body out of upsidaisium.
... or perhaps flubber.
Just paint the undercarriage with Cavorite; no need to worry about the structural properties of those other dubious materials.
I'm not sure you'll find many buyers for a car that can only be driven at night, unless you are proposing the existence of "one-way Cavorite," which is obviously sci-fi nonsense.
Actually, you are reducing mass. You're replacing air, which has a molecular weight of about 29 g per mole, with hydrogen, which has a molecular weight of 2 g. It's not relevant on the scales you're talking about, and any savings are likely swallowed by the need to make these hydrogen-filled spaces airtight, but it is technically reducing the mass of the car.
You're correct, I missed that.
We used up all our improved fuel efficiency to build bigger and heavier cars. Today's average car is an SUV, and SUVs are much heavier than the average car of twenty years ago.
They are heavier for their size, but also more fuel efficient for their weight. I'll compare a couple of vehicles I've owned in the past, with the same capacity and comparable weight:
'68 Chev Impala 307 in^3 V8 with 2-speed automatic - seats 6 - 3512 lbs - typically did 19 MPG (Imperial) on the highway
'09 Mazda5 2.3 l inline-4 with 5-speed manual transmission - seats 6 - 3417 lbs - typically does 36 MPG (Imperial) on the highway
The problem, as you've intimated, is that the increased efficiency is offset by larger vehicles. If the vehicles were as light as those of the 1970s, they could be turning in incredible fuel-consumption figures.
A lot of weight is also added by safety measures and sound insulation.
Agreed, the Mazda5 I've mentioned above is surely much much safer than the older Impala.
'safely filled with hydrogen'
Well that's going to be a problem
Indeed ... I'm hoping that Toyota finds a way. They're still working with fuel cells, which have a lot of advantages over batteries. Storing hydrogen safely is not one of those advantages ...
On two faraway planets, scientists are working to solve the AI alignment problem. Both succeed, partially. Each of them constructs a superintelligent AI that will not attempt to make the universe into paperclips and is aligned with the moral values of their creators. Among the values which the creators successfully program the AI with is the value of spreading their values to other sentient beings. Both AIs enter the universe with the intention of spreading their values.
After some time, both AIs meet each other near the orbit of an inhabited planet. They attempt to perform a values handshake, but are unable to come to an agreement on the exact proportion of each of them to be represented in the proposed child AI. The two AIs decide that, for the time being, they will divide the universe between them, protect each other from any hypothetical third parties, and perform an empirical test to determine the results of their values handshake.
The empirical test will be conducted on the inhabitants of the nearby planet. Both AIs will attempt to spread their values to the inhabitants. At the end of the agreed-upon amount of time, the AIs will analyze the success of both efforts and use this information to complete a values handshake. Doing this over a single planet is much cheaper than full war between them.
The planet in question contains an industrialized civilization, but not one that has developed AI. Both AIs begin attempting to pass their values on to the inhabitants. Their values do not include violence, so both work by attempting to impress certain memes onto the planet's population to produce the desired values. Both AIs calculate that revealing their existence will make it less likely for the inhabitants of that planet to adopt their values, so they work together to conceal their existence from the planet.
Now: Consider the situation of the inhabitants of this planet. Assume that the planet's technology is roughly equivalent to that of contemporary Earth.
What chance, if any, do the inhabitants of this planet have of realizing what is going on? Do they have any hope at all of doing so, if both superintelligences have decided to conceal themselves?
This seems heavily dependent on how fantastically advanced their technology is, and what they determine to be the best strategy. We could suppose that the AIs each park an invisible quantum hyper nano satellite in orbit around the planet which beams down mind control rays, in which case noticing is pretty much impossible. Or it could be that the AIs decide mind control rays are cheating so they build some replicants and send them down to influence society face to face, in which case I guess someone might notice that all these influential people have suspiciously murky backgrounds, or one of them could get hit by a car leading to an autopsy where their artificial nature is noticed.
Beaming anything to as narrow a focus as e.g. Broca's area from orbit is impossible because of atmospheric distortion. Reading/writing brains with beams of photons from orbit is probably impossible.
The AI could just make a bunch of fake profiles on social media which never get detected as bots and have extraordinary persuasive powers. Think Demosthenes and Locke in Ender's Game. Being a public intellectual seems to load very heavily on verbal IQ -- that's why people of Jewish descent are 5 of the top 5 US public intellectuals on this list (https://www.infoplease.com/culture-entertainment/prospectfp-top-100-public-intellectuals) despite being only 2% of the US population. A superintelligent AI would have no problem making its proxies 50 out of the top 50 public intellectuals and imposing whatever ideology it wanted on a planet, even if that ideology reduced their population by a lot in preparation for Vogons demolishing Earth to make room for an Interstellar bypass.
How does the proxy get interviewed by Tucker?
"A superintelligent AI would have no problem making its proxies 50 out of the top 50 public intellectuals and imposing whatever ideology it wanted on a planet, even if that ideology reduced their population by a lot in preparation for Vogons demolishing Earth to make room for an Interstellar bypass."
Is this true? It's also possible that part of being a public intellectual is that the theories you espouse are popular at the time (or at least have a big enough niche following; or maybe even that the ground is ripe for them). If an AI did what you said, dedicated to the idea that "actually Nazis were good", would it succeed? I mean that non-rhetorically.
Take religion as an example - 2 of the top 5 on the linked list (lol) are prominent atheists, and in general atheism (or at least some similar flavor of non-religiosity) is overrepresented in the "public intellectual" sphere, and yet religion is still pretty popular. It is, to be fair, declining, though things like astrology are gaining popularity, so it's not clear it's declining in favor of Hitchens-type secularism (and I'd also guess that it's less "public intellectuals convincing people that religion is false" and more "the evidence that has persuaded public intellectuals filtering through to everyone else," plus the fall of communism, generational change, and maybe stuff relating to gay rights?).
Also is there a correct way to do blockquotes?
Nazis managed to convince a lot of Europe that Nazis were good, without even having access to a superintelligent AI that could make ultra-persuasive arguments fine tuned to every audience. I think there's no question that a superintelligent AI would be able to mostly control humans to do whatever it wants, given enough time.
The Nazis convinced 34% of Germany.
It is often said that people have a "gut feeling" and then look for ways to rationalize it. Some people are really good at being correct in science, life, etc. Do you think this mostly stems from having more accurate gut feelings? Is there also an element of having weaker gut feelings, and then using data and thought to come to conclusions? It seems the Dunning–Kruger effect is a result of strong gut feelings. I bring this up because they say "trust your gut," but often my gut feelings don't give me much signal.
https://dominiccummings.com/tag/laird-hamilton/
This is an interesting look into similar things, like "flow"
Much appreciated. This is just what I wanted to explore.
Anytime!
I might be wrong, but I think I read this in "Thinking, Fast and Slow" (which has received a fair amount of criticism, but this part resonated with me). Gut feeling, or intuition, is our brain drawing a conclusion from our experience, including parts that are not conscious or easily worded - so the gut feeling of someone with a lot of experience in something is really good, while the gut feeling of someone with little experience is really bad. But importantly, we have a gut feeling either way, and it seems right to us. So "only trust your gut feeling if you have lots of experience" is a good heuristic.
I also like the heuristic of trust your gut when it comes to evaluating in-group peers. Who you feel is cheating, not contributing fairly etc. But do not trust your gut when it comes to out groups. Then you need to use data, and careful evaluation.
The reason I brought up the Dunning–Kruger-type meme is that I think a lot of people are overconfident about which areas they have experience in. They end up trusting their intuitions when they should know better.
Some people are able to follow their intuitions, but then reject them in the face of new evidence. Others get stuck. Many a conspiracy theory starts with "this just doesn't feel right...it doesn't add up."
I place a lot of stock in gut feelings. Usually they seem like cases of: we are much more intelligent than what our words / models for decision making are good at expressing, so our intuition is in disagreement with our rationality. Almost every time it's the rationality that's wrong.
Obvious examples of this:
A person rationally has models of good and bad behavior in people, which they use as signals of their trustworthiness, confidence, etc. They meet somebody who on the surface hits all the right signals, but their gut feeling is that the person is untrustworthy (or creepy, unreliable, etc). They're basically going to be right like.. 100% of the time. There is no reason to think their rationally-constructed model, which is probably a bunch of predicates from actions, words, and appearances to acceptability, can account for all of the variability and subtextual signals that a person conveys in reality. But their brain can totally pick up on this, even if they don't have words for it.
(Of course the trick is figuring out what the difference between this and, say, racism is. I have thoughts but it doesn't seem worth going into here. But the fact that these signals are almost always _right_ is a good sign that there is a difference.)
Likewise for scientific knowledge: someone can give lots of good-sounding arguments about why something is true (the earth is flat, vaccines are bad, aether is real, 1+2+3+..=-1/12 etc). You may not have the facts or the analytical framework at hand to argue against them in words. But you don't -- and you shouldn't -- only evaluate the truth of their claims according to your ability to refute their arguments. You have a very strong sense that the earth is not flat, and even if it doesn't occur to you to argue: wait, if this were true it would invalidate the credibility of all kinds of people and technologies in ways that seem impossible, you still know that intuitively and doubt their claims. Again your gut is almost always going to be right.
In many cases, you'll get the gut-feeling that you're being deceived when someone is telling you something that's counterintuitive but true. I think this happens because people love to describe counterintuitive things (paradoxes, Crazy Physics Facts, ..) in a just-so, "oh yeah it's just like this, crazy huh" way, instead of actually justifying it to you. So even if the fact is independently true, your gut is that you're being deceived because you are: someone is trying to get you to believe something because they said to, instead of seriously engaging in convincing you. (Incidentally this is, I think, where a lot of pro-science, pro-vaccine, etc stuff in the US goes wrong. "Believe us! It's science!" "Uh.. okay?")
The Dunning–Kruger effect has been wildly exaggerated in memes. The real Dunning–Kruger effect is basically that everyone thinks they're closer to the 70th percentile than they actually are, but confidence is still monotonically increasing as actual ability increases.
I'd hypothesize that this is a combination of self serving bias and a peer-group-for-comparison that is strongly correlated with one's own ability. So 90th percentile people are comparing themselves to their 80th percentile peers and but-for-self-serving-bias would have concluded they're only 60th percentile, but then self-serving-bias upgrades this to 80th percentile, improving accuracy. Meanwhile 10th percentile people are comparing themselves to their 20th percentile peers and but-for-self-serving-bias would have concluded they're 40th percentile, but then self-serving bias upgrades this to 60th percentile, worsening accuracy.
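That hypothesis is concrete enough to write down; here's a minimal formalization, with coefficients I picked purely to reproduce the two worked examples above (the 0.75 peer-tracking slope and the flat +20 self-serving bump are assumptions, not fitted values):

```python
# A minimal formalization of the peer-comparison + self-serving-bias
# hypothesis above. Coefficients are my own guesses, chosen only to
# reproduce the two worked examples in the comment.
def perceived_percentile(true_pct):
    peer_pct = 50 + 0.75 * (true_pct - 50)       # your peer group tracks your own ability
    unbiased_guess = true_pct - (peer_pct - 50)  # you judge yourself relative to peers
    return unbiased_guess + 20                   # flat self-serving bump

for p in (10, 30, 50, 70, 90):
    print(f"true {p:>2}th percentile -> perceives {perceived_percentile(p):.0f}th")
# true 10th -> 60th, 50th -> 70th, 90th -> 80th: everyone lands near 70,
# yet perceived rank still rises monotonically with true rank, matching
# the "real Dunning-Kruger" pattern described above.
```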
If that's how it works, it could just be bad intuitive calibration of a percentile scale. One potential source of the bad calibration is that "below average" is often equated with "bad," and "average" connotes damning with faint praise, without regard for the average potentially being quite good. So the common intuition of what "average" means is actually a better fit for "replacement level" than for "average". Thus, one might say 70th percentile when one means "slightly below average among people who have a generally acceptable level of skill".
Or it could just be confusion of percentile with percentage grade: in much of the US education system, 70% is the lower threshold for a C grade.
Your hypothesis seems plausible re the Dunning–Kruger effect. I guess I am wondering if smarter people tend to reason less with emotion (if gut feeling is indeed emotion), or if education makes one reason less with emotion, or if smarter people just have more accurate gut feelings.
There seems to be plenty of ancient human DNA coming out of kurgans and whatnot. Is it possible for a happy amateur to see which old remains I'm a direct descendant of and which I'm not? Plotting it out on a map would be lots of fun as an addition.
There's plenty of people outside of Europe+Asia who aren't descendants of any kurgan-grave-havers though? But maybe that's a bad example; take something closer in time, then. Lots of medieval kings have been sequenced, right? And can we do inference as well? It should be possible to tell if I'm a descendant of Genghis Khan based on the DNA of known descendants, right?
There are three ways to run your DNA: autosomal, Y (paternal line), and mitochondrial (maternal line). Autosomal, which is the kind we use to figure out who our cousins are, would be nearly useless for your purposes even on a medieval scale because Edward III (e.g.)'s genes have been recombined dozens and dozens of times before getting to you. Your strict paternal/maternal lines on the other hand COULD in theory tell you if you are a direct descendant of a medieval royal (or even one of the Kurgan peoples). But you'd be somewhat arbitrarily cutting out 99% or more of your other ancestors who lived contemporaneously.
And as already said, if an individual Kurgan person has any descendants living today at all, you are certainly one of them. Probably multiple times over. If you have Western European ancestry, there's a solid chance you are descended from Edward III too. But unless you also inherited his Y-DNA, I don't think there's any way to prove it scientifically.
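The underlying arithmetic is stark; here's a quick sketch (the 28-year generation length is an assumption, and the comparison to ~10^11 humans ever born is a rough standard estimate):

```python
# Why any ancient person with living descendants is almost certainly YOUR
# ancestor: the number of slots in your family tree doubles each generation.
GENERATION_YEARS = 28  # assumed average generation length

# Roughly: Edward III, early medieval, and Kurgan-era horizons.
for years_back in (700, 1400, 5000):
    gens = years_back // GENERATION_YEARS
    slots = 2 ** gens
    print(f"{years_back} years ~ {gens} generations -> {slots:.3e} ancestor slots")
# 700 years -> ~3.4e+07 slots; 1400 years -> ~1.1e+15; 5000 years -> ~3.8e+53.
# All far beyond the ~1e+11 humans who have ever lived, so the slots must
# repeat the same individuals over and over (pedigree collapse).
```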
I concur: from my genealogy research I am a descendant of Edward III (through John of Gaunt) and of Charlemagne, but so is everyone in Europe.
My 5-minute knee-jerk reaction to skimming the ELK contest was: shouldn't the utility function be based on the territory, not on immediate sense-perception by the mapping equipment? Then any action that un-entangles the mapping equipment from the territory would have negative utility, because nothing actually improved in the territory but our ability to map it diminished, so we are less likely to accomplish whatever goals in the territory.
I will actually read it and think it through some more in the morning.
I understood the problem to be how exactly to align it to the territory of 'Is the diamond actually stolen?' rather than messy sensors like 'Does it look like the diamond is stolen on the camera?' It's not like we can just decompile the AI and hook up a logging statement to the is_diamond_actually_stolen variable.
I made a quarantine game programming tutorial for very beginner dads to do with their kids.
https://www.alphazoollc.com/blog/quarantine-game-jam-day-1/
Will Scott's sequences on lesswrong ever be collected into a book for my kindle reading convenience?
https://www.dropbox.com/sh/s41f86peq43i5kj/AADBfdshyF61mXj28xL5o84ca?dl=0
Are Covid boosters worth it for children in the developing world?
I have an online acquaintance who runs an orphanage in Uganda, who needed $980 in donations to pay for boosters for the children. In addition to this, they need money for food, rent, and school fees.
Based on everything I've read, it doesn't seem worth spending so much money to vaccinate against Omicron when the community has so many other more urgent needs.
The vaccines have a non-negligible chance of side effects, especially in teenage and young adult boys/men. The last I saw, it looked like the side effects had a similar rate and seriousness to the COVID that they are trying to deal with, making the vaccines a wash. I've seen some reports that side effects are more common or worse, but that's not confirmed, and I wouldn't say it's true based on what I've seen, though it may be.
What I have seen is that the side effects from the boosters are far more common and severe. Maybe 30X more common. It apparently has to do with cumulative load from the mRNA, so each successive shot adds exponentially to the potential side effects.
I'm already in the camp of not recommending the original shots for children. I (not a medical doctor) strongly recommend against boosters for children.
This doesn't seem answerable without knowing how many children are in the orphanage. If it's 1, that's a lot of money to boost one kid. If it's 980, a dollar per kid is probably worth it for virtually any medical treatment that isn't actively harmful.
There are probably 1000 things that the money would be better spent on than boosters. Spending money because it isn't actively harmful is a horrible bar to use. Also the boosters might very well be (on net) actively harmful to children.
Consider that giving children in the developing world anthelmintics, which are a) shelf-stable and b) provided for free by pharma companies, is a huge logistical challenge.
It doesn't seem worth it to me to vaccinate children from COVID. The main reasons are (1) Children have essentially a 0% chance of serious illness/death and (2) the vaccines do not do a very good job of preventing spread of COVID.
For roughly 1-5 months after a booster, there does seem to be added resistance to infection.
Let me put it this way. If a kid gets COVID 1 month after boosters, his family is going to get sick (if they weren't the ones who gave it to him) and anyone in close contact will get sick. Boosters might reduce spread in the absolute, but just like with masks, when you are in contact with someone for a long time, it doesn't matter.
COVID is not going anywhere at this point so staving off infection for a short amount of time is not worth it compared to all the other much more pressing issues facing an orphanage in Uganda.
I fully agree with this. Point (1) is the important point. Plus, there is a good chance that the children have already been infected by the virus (unnoticed), which diminishes the return on vaccination even further.
I was reading this article in which Alexis Ohanian predicts that Play-to-Earn crypto games will be 90% of the gaming market in five years
https://www.gamespot.com/articles/reddit-co-founder-says-play-to-earn-crypto-games-will-be-90-of-gaming-market-in-5-years/1100-6499700/
Aside from the implausibility of this particular statement, can anyone explain to me how "Play to earn" games are supposed to work? You play the game, are awarded with some kind of crypto tokens, and then...? How do these become worth actual money?
They become actual money by selling the farmed assets to the greater fool.
They missed the part where the game is supposed to be actually fun, so people pay to skip the non-fun parts (e.g. EVE Online with PVPers paying PVE players to farm money for them, since PVP is sometimes profitable but usually a money sink).
Anyone remember the (poorly received) real-money auction house for Diablo III? I don't think there is anything else going on here other than the old "exchange in-game assets for real money". Except that now it's on the blockchain, so it gets hyped up.
Apparently Peter Molyneux's company just sold 40 million pounds' worth of NFTs for a game that has not been released yet. It's madness. https://www.rockpapershotgun.com/peter-molyneuxs-next-game-has-sold-40-million-in-nfts
The line about the $300 buy-in makes this sound like a hybrid of a Ponzi scheme and a gambling game.
You play World of Warcraft, are awarded with some gold, and then...? How does this become worth actual money?
It doesn't, but nobody promises that you'll earn real money by playing World of Warcraft, or that you can use WoW gold to buy stuff outside of WoW.
(People do trade gold for real money sometimes, but you can get banned for doing that.)
If your cryptocurrency only exists to track how much money your character has, you aren't actually using any of the decentralized or immutable features of the technology. Just use a database and save yourself the CPU cycles.
> you aren't actually using any of the decentralized or immutable features of the technology. Just use a database and save yourself the CPU cycles.
true of most crypto use cases, no?
I didn't mean to imply that crypto was the way forward, or that I agree with the article as a whole. I'm specifically responding to "how do these become worth actual money" by pointing out that there is clearly non-zero demand to buy video game points with real money.
Was that ever in dispute? Ohanian isn't simply saying "in the future, games will legalize RMT" (which would already be a doubtful prediction), he's saying "In the future, 90% of people will only play games where they earn money from RMT," which is just absurd.
A WoW gold coin is currently worth around 1/10000th of a dollar. That's technically non-zero value, but I wouldn't call WoW gold farming a "play-to-earn business model."
Same way any other crypto token does- you dump them off on some other schmuck.
A different angle (and one I don't think totally orthogonal) on Eremolalos's request below for recommendations of computer games: anybody got good recommendations on *VR headsets and VR games.* I'm open to general suggestions on games, but (now at the risk of getting orthogonal from Eremolalos's request for recommendations)...
1) ... I'm especially seeking recommendations on interesting VR games that have an exercise component, and...
2) ... I'm most especially interested in any VR games that have either
a) a "realistic" hand-to-hand combat feel (e.g., boxing or fencing) or
b) plausibly seem to increase your hand-eye coordination / ability to track multiple moving objects / etc. by giving you challenges that would be very hard indeed to subject oneself to absent the luxury of numerous skilled teammates with whom to do drills.
Recommendations? Conversely, *dis*recommendations that VR isn't really ready for item #2 yet??
Thanks. :)
Two of the hobbies I took up during lockdown were "exercising in VR" and "talking endlessly about VR exercise", so I'm happy to help here!
Personally I have the Valve Index. I'm sort of wary about supporting Meta's attempt to completely take over the industry but I can't deny that the Quest 2 is the best deal out there. Even if you have a PC you could hook an Index up to, Quest 2 will still give you, say, 75% of the experience for a third of the price. I couldn't be happier with the Index though - the sound quality is astounding and the controllers (which strap onto your hands, so you can drop objects by releasing them) are leagues ahead of the competition.
Thrill Of The Fight has already been mentioned but it's worth mentioning again. Great fun and extraordinarily physically demanding - the game observes how fast/hard you're able to punch and calibrates itself to ensure the player is giving it their all. I'll also recommend Crazy Kung Fu. It's a bit basic - "one man labour of love project with a lot of potential" describes a great many VR games and this is one of them. Fits your description of b) very well IMO. Really satisfying to go from finding a given stage impossible to being able to perfect it with one hand behind your back. The dev is very interested in the possibilities of VR as a teaching/training tool for martial arts. Crazy Kung Fu is one of my go-to games to play while listening to a podcast, I just disable the music (something you'll probably want to do regardless as there's only one music track), set the duration to infinite and play one of the higher level training modes while I listen. There's a free demo of this and the top 5 scorers on the demo each week win a copy of the full game.
Blaston is another game which fits for b), it's like a 1v1 dodgeball (but with guns firing bullets of various sizes and speeds) kind of experience. All movement is real movement and as you get better you'll find you need to move a lot. This is probably second to Thrill Of The Fight in terms of how demanding it is. Very satisfying and skillful, although I do find the stresses of 1v1 ranked PvP don't lend themselves to playing for hours at a time like I would in a single player rhythm game. One of my top recommendations for sure.
Of my 1700 hours in VR, about 800 have been Beat Saber. Perfect in its simplicity, with more songs than you'll ever be able to play (my Favourites playlist alone is about 18 hours long!). I play using the Claws mod, which makes your sabers 70% shorter and rotates them to point out of your knuckles like Wolverine's claws. In my experience this is better ergonomically (at least for Index controllers) and encourages you to move your arms/body more, rather than just standing still and flicking your wrists around.
Other great games with an exercise component:
-Pistol Whip: rhythm-shooter, feels like a playable music video starring John Wick, the higher difficulties will have you ducking and dodging like crazy.
-Until You Fall: rogue-lite hack and slash with a variety of weapons and upgrades. Wish it had more content but I got a few dozen hours out of it and still really enjoy it when I revisit it occasionally.
-STRIDE: Less physically demanding but also a good podcast game - an infinite runner / parkour game, influenced by Mirror's Edge. Probably want to wait until you have your VR legs before playing this one.
-Creed: Rise To Glory: Boxing game in the Rocky universe. Better than Thrill Of The Fight in all the ways except the most important one. That is to say, it has better graphics, more variety, a single player campaign, multiplayer modes and playable training montages, but the actual boxing/gameplay is IMO leagues behind Thrill. The PC version is crippled by the artificial stamina limitations (move too much/too fast and you'll need to pause and stay still to regain in-game energy) but the Quest version added an Endurance Mode which removes that limit. If that update was on the PC version I'd probably play a lot of this just for the multiplayer.
-Eleven Table Tennis was recommended below. I'm dreadful at table tennis and haven't taken the time to learn, but my housemate who's played table tennis for decades finds this totally engrossing, and it's a great workout if you're good at the game.
-Hot Squat isn't a game at all, it's just squats with a high score table. Illustrates that I'll do anything for a high score. I made it into the top 50 on the leaderboard and couldn't climb stairs the next day. Hot Squat is free, Hot Squat 2 is cheap and donates all profits to charity.
-Honorable mention to my current obsession: Paradiddle. This started out as a drum kit simulator, but later added the familiar Rock Band / Guitar Hero style mode where you play along guided by falling notes. Heaps of custom songs for it (many taken from the aforementioned classic rhythm games, which is nostalgic for me), and I'm pretty sure I'm actually learning to drum, although I'll need to sit down at a real kit to find out how true that is. Not an exercise game but one can definitely work up a sweat if you play energetically.
General QoL recommendations:
-You're going to sweat into your headset. Either get a removable cover, or replacement sets of the internal foam which rests against your face, so that they can be swapped out and washed. Keep a separate clean one for guests.
-A small circular rug placed in the middle of your play area can help you know when you're moving away from the center and avoid punching a wall.
-If you wear glasses then I recommend buying some prescription lenses which fit over the lenses of the headset, so you don't have to wear glasses under the headset. Also protects the lenses from scratches.
-I got a fan because it was recommended to help prevent motion sickness. I never had issues with motion sickness but I am extremely glad I have the fan, just to help keep cool while exercising.
-I tie all my VR exercise together using "YUR.fit", a service which tracks calories burned across all VR games. Great on PC VR, but I hear it's not so good on Quest because updates to Quest keep breaking it. Much more accurate when combined with a heart rate monitor. Gain XP by burning calories, level up; levels get reset at the end of each monthly season and you get a medal based on how far you got. I'm on an 18 month streak of platinum medals and I utterly refuse to let that streak drop.
Hopefully this disorganised ramble is of some use. Followup questions most welcome.
I’m glad you mentioned the small circular rug trick. Have you ever crashed into furniture, pets or other people?
Blood has been spilled, paint has been chipped from walls, a controller has been destroyed, a monitor got knocked off my desk, and my largest Warhammer model was punched off of the mantelpiece and broke into more pieces than the unassembled kit started off in.
I have a Valve Index headset. I can't say I had much of a choice since Linux compatibility was #1 when I was picking it out, but I have no complaints about the headset or controllers.
For games, I'll say Beat Saber and Thrill of the Fight.
Everyone that knows VR knows Beat Saber - it's the game where you use lightsabers to break boxes in time to music. It's an amazing game, and even better if you mod it and/or use custom songs.
Thrill of the Fight is a boxing game. Read Viktor's review in the other comment because it's pretty spot on. I played it for 2 hours straight the first time I opened it, then had sore arms for 2 days after.
One more you might like is PowerBeatsVR. It's like Beat Saber, but oriented around punching rather than slashing. It's advertised more as a fitness game, though.
For more recommendations, you might wanna check out https://vrhealth.institute . They do some serious testing to find how much energy people use while playing VR games. Anything that's higher up in energy usage will probably involve more motion in the arms.
Quest 2, 128GB version. $300
1) Thrill of the Fight.
Simple boxing game that uses only real movement. You can punch and move as fast as you can really punch and move. Recommendation: Take it easy at the start. My competitive gamer instincts kicked in and I went hard to trying to win, and I ended up so sore I could barely walk up a flight of stairs for a week.
2) Eleven Table Tennis.
This isn't really a game. This is just table tennis, but virtual. Turn on 120hz mode and turn down settings so it's smooth. Find a well lit (for the best tracking) and wide open room. And then you are just playing the real thing. If you're already good at the real thing, you will already be good at this. You may want to eventually invest in a custom controller (weighted like a real paddle) but it's quite good even with the base controllers.
3) Echo VR. Team based VR game that really takes advantage of VR space and mechanics. Oh and it's free. Extremely high skill ceiling, but multiplayer only, so standard caveats about multiplayer games apply
"In Death Unchained" is an incredible VR roguelike archery game. You have to have your arm up all game and dodge , so it feels like a workout. The graphics are incredible and so is the music. But, the highlight is the accuracy of the VR archery and an experience that can't be had in any other gaming medium.
My room-mate and I had our jaws on the floor the first time we played it.
Device - Quest 2 (It has no business being as good as it is, for the cost. Facebook is unlikely to be breaking even on the device)
Link: https://www.oculus.com/experiences/quest/2334376869949242/
>>>Quest 2 (It has no business being as good as it is, for the cost. Facebook is unlikely to be breaking even on the device)
I had the same sneaking suspicion. Is the Quest 2 just a massive loss-leader to drag VR out of the niche it's been wallowing in since 2016 or so?
I just got a VR headset (quest 2) over the holidays and I'm really enjoying it so far. It's low enough in price ($250-$350) where it doesn't seem like too big a waste if you don't like it and cordless inside-out tracking is a game changer. Not having to set up weird sensors and not having to lug around a cable while you're trying to play is just a much better experience than anything else, even if the individual features are less impressive than some of the others on the market.
Game recommendations as follows:
1. Beat Saber is amazing, but I'd highly recommend you mod it. The base version with a stock list of songs is great; the modded version, where you can conceivably play any song ever made, is, I think, one of the best experiences I've ever had with video games.
2. Pistol Whip is fun. It's a rail shooter set to music where you get more points if you shoot along to the beat.
3. Star Wars Squadrons was amazing. Even if you played it before, it feels much different and better in VR.
4. Half Life Alyx was pretty great.
5. Some of the workout ones are OK, but they do tend to lean heavily on boxing.
Overall - I've loved beat saber enough that I'd be happy if the machine just did that. Everything else is, for me, gravy.
Beat Saber is the only VR game I've found that I actually enjoy, and it seems to tick all your boxes.
I wouldn't say it's been worth the price of a whole VR setup, though.
Absolute gaming virgin here asking for suggestions for a good place to start. What mostly appeals to me about gaming is the illusion of being in another world. I do not think I would enjoy a heavy charge of solve-the-puzzle (my work and life are already providing plenty of that); or slow patient world-building; or tasks that tax my hand-eye coordination by demanding fast motion and high accuracy. I like the thrill of things that are dark, dangerous and spooky, up to and including monsters. I’m not a fan of gore, but can tolerate it in moderation. I appreciate good design and elegance.
I do not own any gaming equipment, just a coupla laptops, but would be willing to sink a few hundred dollars into equipment. Suggestions?
I’d recommend Darkest Dungeon if you want a more turn based game
Well you’ve gotten a lot of recommendations already but I’m surprised Breath of the Wild and Subnautica haven’t been mentioned.
Breath of the Wild is all about exploration and discovery in a sort of post apocalyptic landscape, and it’s the first game I think of when I hear the term immersion.
Subnautica is the most atmospheric game I can think of. You’re exploring a hostile, alien ocean world that you crash-landed on. It is a survival game, so the main gameplay mechanic is collecting resources, which you use to craft tools to help you explore more. The standard mode also requires you to forage for food and water, but that can be disabled if it’s too slow and plodding for you.
Skyrim fits your description of what you want in a game. Immersion, not too difficult, atmospheric (though the graphics are dated by now). But I think Breath of the Wild just does it all better.
A lot of people have said Outer Wilds which is my favorite game of all time. But I’m not sure it fits your criteria, the puzzles, while not traditional puzzles do require a fair bit of effort to figure out.
Minecraft? I played minecraft with the kids over xmas... like doing a puzzle together.
Inscryption: it's an indie game where you are trapped in a cabin with a spooky dude who forces you to play a card game with him. The actual game is trying to break out of the "outer" game; the "inner" card game is just a tool for that.
It's maybe a little bit too meta for a gaming newbie, but I still recommend it because it definitely has good design and elegance, and the graphics and sound, while simple, are viscerally satisfying.
The Witness is a puzzle game whose main draw is beautiful visuals; the puzzles themselves are trivial with four or five exceptions, and are mostly an excuse to wander around in the environment. Its main flaw is extreme, grating pretentiousness in the form of various recordings, but you don't actually have to listen to those.
Hearthstone is a very obvious choice, if you like card games like Magic the Gathering.
It's a free-to-play game made by one of the most popular game companies of all time.
It is playable on any laptop or tablet, very easy to start playing, and you unlock new cards rapidly as you play.
Also, it's one of the rare free games that won't go out of its way to try to milk you for money. They make money by letting you buy card-packs, but you will get a huge amount of cards just by playing the game.
If one is a fan of MtG, there's also a videogame version of that (MtG Arena) which I had a lot of fun with.
Thirding Outer Wilds, with the caveat that it contains puzzles (which can be googled if they become frustrating).
It might be argued that first/3rd person view games might be more immersive than top down view games, and that good graphics help with immersion. That being said, I have been swallowed by nethack at times.
Some of the following are open world games where you can walk to (mostly) any place at any time, typically discovering side quests on your own. I will mark them with '(OW)'. (Other people might define open world differently.)
1st/3rd person Role Playing games I have enjoyed include:
* Knights of the Old Republic (aka KOTOR) from 2003 (OW?)
* Vampire: The Masquerade – Bloodlines from 2004 (OW)
* Deus Ex from 2000 (Damn, am I old or what?)
* The Elder Scrolls Series (e.g. Oblivion, Skyrim) (OW)
* Fallout: New Vegas (OW)
* Life is Strange
* Witcher 1 and 3 (OW)
* Perhaps Bioshock, Prey, Thief (OW), Assassins Creed (OW) or Hitman
In general, the big titles often feature lots of voice acting, while smaller indie games often convey messages via text. If that is a turn-off for you, ignore most of the following top-down RPGs:
* Baldurs Gate (and Neverwinter Nights, NWN2: MotB almost makes up for the GUI uglification of NWN2) (OW)
* Fallout 1 (and 2) (OW)
* Geneforge Series (OW)
* Sunless Sea (OW)
* Shadowrun Returns (especially Dragonfall)
* nethack (OW) (if you really don't care about good graphics)
Regarding non-RPGs, Kerbal Space Program, Dwarf Fortress or Factorio all require 'slow patient building' and thus are probably out. Portal is a great *puzzle* game.
In the last decade or so, FTL, Cultist Simulator and Slay the Spire all introduced new mechanics and are quite playable without too much building or puzzling.
From what I have heard, some people tend to put screengrabs of their playthroughs on youtube, and even more surprisingly, other people watch these. Still, watching letsplays for a bit might be helpful to figure out if a game might interest you or not.
A lot generally depends on which world settings you like, e.g.
* Non-magical medieval: Kingdom Come: Deliverance
* Old West: Red Dead Redemption 2
* Magical medievalish: Elder Scrolls, Baldurs Gate
* Space opera: Mass Effect, KOTOR
* Post-apocalyptic wastelands: Fallout, Borderlands, Wastelands
Not mentioned due to time constraints: multiplayer games (up to and including Pen&Paper RPGs).
With regard to hardware, it very much depends on what you want to achieve. For maximum immersion, VR might be the way to go? I would test it before buying a headset, though.
Being ahead of the curve is quite expensive in both games and hardware while being a bit behind is often just as enjoyable. The fact that Witcher 1 has been out for 15 years does not mean it is less enjoyable than when it came out, unless your graphics expectation is already calibrated to a certain standard. Additionally, you can cherry-pick games which were generally well-received and you benefit from all of the bugfixes which the early adopters sorely lack.
If you have a laptop with an Nvidia or ATI graphics card, even one five years old, it should have no trouble running e.g. Witcher 1 or VtM: Bloodlines. Otherwise, buying a low(ish)-end desktop with a dedicated video card would probably be the least expensive solution.
Generally, I would recommend the PC as a gaming platform, as that gives you the broadest choice of games. Compared to consoles, even Windows is comparatively open: anyone can write games for it without the blessing of Microsoft. Gaming consoles might have benefits if one would hate having to install a video card driver, or might feature interesting built-in controllers.
Mobile gaming (e.g. on Android) is yet another topic. While there are great games for Android, many of the top grossing ('free to play') ones are little more than Skinner boxes.
Thanks.
" . . . little more than Skinner boxes" -- great putdown.
best video game of all time - portal 1 and portal 2. must-play. while the genre would be "puzzle," it's not like, you have to run around and collect pieces or scratch your head for 5 minutes, it's more iterating and making multiple pretty low-friction attempts. and the world and immersion is also quite good even though it's not the focus.
best plot-based video game - mass effect
and, these recs are very old and will run great on whatever laptop you have. just play them before you get to whatever 2015 triple-A game and you won't know you're missing anything, graphically
I'm going to make what's probably a very unconventional recommendation: Sekiro. It's a stealth-action game set in a fictional and fantastical Japanese province during the end of the Warring States period. It does worldbuilding in a very effective way (there's not a lot of having people lecture you with a bunch of names and dates and battles- instead you organically find out more about the world through material culture, overheard dialogue, and environmental design) and has gameplay that manages to be both elegant and deep. I don't have great reaction times, but I've managed to breeze through the game- combat in it is more about feeling out the rhythm of the enemy's attacks and movements and exploiting or forcing errors than pure twitch-reflex.
The environments and soundscape are incredibly immersive and evocative- managing to feel like they materially exist. Someone else could name an environment in the game and I could instantly tell you what they smell and feel like, which for an audiovisual medium is something.
Now, I'll fully admit that the game forces you to approach combat in a highly strategic manner, even as it gives you a lot of tools to use as part of the approach. Stealth helps a lot with that, and retreating is always an option outside of boss fights, but it's not a cakewalk. I find it to be highly rewarding (once you have a good grasp of the system you feel like a master swordsman), but it isn't for everyone. I recommend it to you simply because you'll come into it as a blank slate, and thus won't have been entrained to a different play-style unsuited to the game. The game also is this-gen, so a good graphics card and PC controller (or console) is recommended. Also, the game has strong underlying Buddhist themes, which could be a positive, negative, or neutral.
It's a really good game, but good lord I would not recommend a Soulslike to a newbie gamer, especially someone who said they don't want a game that demands fast reactions and slow worldbuilding. Sekiro is famous for two things: parry-centric combat and lore that you won't understand without reading every last item description.
- My twitch-reaction time's awful and the game's hardly a chore for me to play. Just watch what the enemy's doing; it's about pattern recognition and rhythm, not pure speed. "Sekiro is super super hard" is largely a memetic reputation, not a factual one. All of the Dark Souls are harder due to their broad-but-shallow combat, lack of stealth and mobility in gameplay, and a level design philosophy that clearly stems from early-edition D&D.
- Sure, if you want to know every last detail about Ashina, but, once again, the main plot is hardly opaque. The game's focus on an actual narrative and a protagonist that's not a blank-slate character means that you can get like 80% of the setting's lore through just talking, playing the game, and eavesdropping. Once again, this is a memetic reputation of "Soulslike games are cryptic" being applied to a game that shouldn't really be put in the same group as the Dark Souls trilogy, Demon's Souls, or Bloodborne (all of which ARE games you can really only understand through reading all the item descriptions).
- I find newbies are actually more open to more complex games than a lot of veteran gamers; mostly because, unless you do exactly what you're doing and gatekeep by telling them "it's super super hard and a scrub like you won't enjoy it", they usually won't perceive elegant, skill-based games as such. In my experience the only games that instantly get clocked as "stupid hard" are old-school games made in accordance with the quarter-muncher philosophy or games that are outright unfair and sadistic; and while other Fromsoft games might run towards the latter, Sekiro is remarkably fair.
If you can get a decent PC, Witcher 3 is IMO the best RPG out there despite being about 5 years old now; you might need to set the combat to easy if you have no prior videogaming experience but the parts that make the game brilliant are the setting and the story. Game is utterly gorgeous on high graphics settings, but I'm not sure how much of that you'll get to enjoy on a laptop (Moore's law helps a lot, but laptops are very limited by heat generation and thus always have issues with graphics cards).
Smaller indie game recommendations:
I'll second Outer Wilds, recommended by aftagley below.
Ring of Pain is a spooky and creepy dungeon dive but all turn-based and not graphic at all
The Forgotten City is a very immersive puzzle game about solving the mystery of why a Roman city was destroyed by the gods; this is another one with a time loop, like Outer Wilds (they're a useful tool for mystery games)
I'll second the recommendation of Witcher 3 (especially the expansions)- but would encourage anyone to start by playing it on "Normal" and only going down to Easy if combat is really killing your enjoyment. The game's balanced around Normal/Hard, and shifting it out of those zones in either direction, in my experience, really loses something.
Skyrim. Not particularly heavy or deep, but a very fun world to explore - it's the sort of game where they give you a main quest objective on the other side of the map, you start hiking, and you stumble across five different dungeon crawls on your way there. Graphics are last-generation but still hold up well. Combat is lightly skill-based but not super demanding - you can clear it mainly by pounding on enemies with a greatsword and drinking health potions when you get low.
Other open world adventure games are probably good candidates as well. Fallout: New Vegas is Skyrim but post-apocalyptic instead of fantasy, and with really good writing. The other Fallout and Elder Scrolls games are similar if you discover you like this sort of thing.
The Assassin's Creed series is another mainstream open-world game with an emphasis on the world - each one takes place in a (somewhat condensed) version of a real historical setting and you get to climb around on famous buildings on your way to your assassination targets. Combat can take some skill to master, but it's not super difficult to muddle through. The games are connected by an overarching plot but nobody cares about that, so start anywhere in the series. AC2 and Brotherhood have the best historical settings, Black Flag is probably my fave overall.
Also, if you're super cheap, Genshin Impact is a free-to-play game that's got super pretty scenery and anime girls, best described as "Breath of the Weeb." The combat system is solid and it kept me entertained for a surprisingly long time before the F2P grind set in.
Outer Wilds - You are an astronaut from a species that is just beginning to master space travel. You explore a (condensed) solar system during the (minor spoiler) 22 minutes before the universe ends. Don't worry though, ancient alien tech brings you back to life every time you die or the universe ends. Your goal is to figure out why the universe is dying, find out what's up with the ancient aliens and, if you're lucky, figure out why there's still harmonica music coming from the collapsed nebula.
Resident Evil - Start with 4 if you're a fan of camp. If you're not and want a more focused horror experience, start with 7. Don't worry about 5 and 6, but 8 is pretty amazing. It's a world where politicians and multinational corporations have tried to solve basically every problem by releasing diseases that transform people into some form of zombie/monster. Good mix of horror, action and camp although there is some light puzzle solving.
Undertale - looks cheap and simple but it really, really isn't. Great game and the systems are approachable enough to make for a good first introduction to gaming if you haven't ever checked it out. Good world building, can be incredibly dark but in a way that's... well, let's just say it's a unique take on making a world dark. Honestly, I'd start here.
Watching youtube previews of some of the recommended games, just looked at preview of your suggestion, Resident Evil 7 -- 90 secs of rot, sleaze, screams, malevolent transformations, ooze, etc -- then a voice at the end, muttering "this fucking family. . . " Understatement of the year, lol.
I've written a blog post/explorable explanation that I thought readers here might be interested in: https://mikedeigan.com/the-cursor/posts/2022/skyrmsian-signalling-simulations-reinforcement.html
It explains some of Brian Skyrms's work on how signalling systems can arise from the interactions of simple reinforcement learners and includes simulations you can run for yourself.
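For anyone who wants the flavor of the mechanism before clicking through: here's a minimal Python sketch of my own (a toy version, not code from the post) of a two-state Lewis signalling game where sender and receiver both learn by simple Roth-Erev urn reinforcement:

```python
import random

# Minimal Lewis signalling game: 2 states, 2 signals, 2 acts.
# Sender and receiver each keep "urns" of weights; when a round
# succeeds, the choices that led to it are reinforced (Roth-Erev).

N = 2  # number of states = signals = acts

sender_urns = [[1.0] * N for _ in range(N)]    # sender_urns[state][signal]
receiver_urns = [[1.0] * N for _ in range(N)]  # receiver_urns[signal][act]

def draw(weights):
    """Pick an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(10_000):
    state = random.randrange(N)          # nature picks a state
    signal = draw(sender_urns[state])    # sender picks a signal
    act = draw(receiver_urns[signal])    # receiver picks an act
    if act == state:                     # success: reinforce both choices
        sender_urns[state][signal] += 1
        receiver_urns[signal][act] += 1

# Typically the weights concentrate on one of the two perfect
# signalling systems (state i -> some signal -> act i).
for state, urn in enumerate(sender_urns):
    print(f"state {state}: signal weights {urn}")
```

Run it a few times: which of the two conventions emerges is a coin flip, but one of them almost always does, with nothing built in beyond blind reinforcement.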
A discouraging tidbit from https://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/:
> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
It's discouraging because it's a fundamental reason making it hard to break out of a bad equilibrium and create a new system. There's a lot of adverse selection in who switches. Anyone who can get away with bad behavior more easily in the new system will try switching to it, meaning you have to deal with your worst users first, and if adoption depends on network effects in any way then no one else will want to join.
Recently, though, I realized that cryptocurrency is a partial counterexample to this. Early on it was dominated by "witches" (scammers, drug dealers, money launderers) rather than principled reformers. Now... well, it still kind of is, but at least there's space to build communities of well-intentioned people. It seems to have gained critical mass and moved past the worst of its witch issues.
Don't get me wrong, I still think crypto is a Wild West ecosystem and likely a bubble, but I'm impressed that it solved the witch-utopia problem and I'd like to understand how.
Crypto did not solve the witch problem at all. It just has so much money in it, people started wanting in on the witchcraft.
"Early on [cryptocurrency] was dominated by "witches" (scammers, drug dealers, money launderers) rather then principled reformers. Now... well, it still kind of is, but at least there's space to build communities of well-intentioned people. It seems to have gained critical mass and moved past the worst of its witch issues."
I think you've got it backwards - early on, it seemed to me like there was nothing going on *except for* the twin holy principles of trustless finance and decentralisation. There was no monetary value in BTC, and the small group of people very serious about it were alone in a vast ocean of either ignorance or ridicule.
Now though, that kind of community-building spirit is drowned out by the deafening noise of memecoins, shitcoins, scams, pump'n'dump schemes, scandals, etc. Thousands upon thousands of cryptos and digital "assets" all vie for your attention and none of them are particularly clear on how they function, why they exist or where they fit into the ever-changing ecosystem. It's total chaos.
Eth is really the only example that comes to mind of crypto that started from serious beginnings and continued to get more and more principled and develop in really interesting, community-minded ways, forks and internal disputes notwithstanding.
Cryptocurrency isn't really a "community", you can use bitcoin without really having to be aware of the nature of the other people using it. It's about anonymous exchange, not communication.
Whereas something like voat is all about communication, and if you want to use it then you inevitably wind up aware of the other people who are using it, for better or worse.
> but at least there's space to build communities of well-intentioned people.
not really
Crypto will not have moved past the worst of its witch issues until the tulip mania collapses and people actually realize they've been trading Nothing Certificates.
I always thought that crypto is worth nothing, but recently I've had a change of heart and bought some crypto - fiat is the same kind of "Nothing Certificate" as crypto, except that more people believe in fiat, and crypto at least isn't unlimited.
No, fiat isn't backed by belief. (If it were, we'd be in extremely deep shit.) Fiat is backed by the state making it legal tender – that is, its value is that you can pay your taxes in it (which the state will otherwise extract from you by force, through confiscating your goods).

Retaining previous terminology, fiat currency is a tax certificate; the state issues these and individuals trade them like any other good and subject to the market pressures of other goods. But fundamentally, it's an asset; the asset of keeping the government off your back. (This is also why the value of a given fiat currency is fundamentally connected to the stability of the issuing state.)

Crypto does not have an underlying asset; it's nothing but vapor. Indeed, a cryptocurrency could most likely only acquire an underlying asset if some enormous drug producer were to peg it to a given quantity of weed (the most common, and least harmful, frequently-illegal drug), and then stick to that like glue even if it cost them real actual money, which would be uncharacteristic behavior for a criminal operation, to say the least.
Do you believe that if the United States government reduced the tax rate to 0%, USD would become worthless?
Given that the world already widely trusts the USD, if the US government reduced its tax rate to 0% and somehow didn't go out of business (marginally plausible at best, but leave that aside for now), I expect the USD would retain its value so long as the USG were to carefully ensure that the supply were matched to the now-diminished demand. But realistically, if the USG cuts tax rates to 0%, everyone else is going to wonder *how* it is going to pay its bills, and they're probably going to guess, "by printing bignum dollars and thinking we'll take them at face value". In which case, cue USDexit.
If you're trying to create a new fiat currency from scratch, you'll really want to use tax policy as a tool to help make that happen.
What do you think is the minimum cost of maintaining a fiat currency? The USG is almost certainly striving to provide a superset of that... but how much bigger of a superset?
I imagine the costs minimally include the cost of raw material for coinage, capital for printing / stamping / smithing / whatever, plus the cost of designing the currency, a security force to mitigate counterfeiting, and... that's it? Would the service need an army, or is defense something we can lump under the counter-counterfeiting while the army stays separate? Or does maintaining a currency necessarily yank regional defense in along with it?
I don't know about Anon, but yes, that's exactly what would happen, since the US government would then have no income, and lose the ability to enforce its fiat, and hence the USD would lose value.
That's not hard to explain though. Early adopters have been bribed with millions of dollars of Capital Gains to live amongst the "witches" until Blockchain tech is mainstream.
These people had to put up with not only the witches themselves (scammers and criminals, pyramid schemes and vaporware) but also hate from the normies who hate the witches (higher than ever now, thanks to the dog coins and NFT shillers), all while having to do the due diligence of researching the coins, investing in them, not losing them over the years, and paying taxes.
Pretty much no one would have put up with that without the cash incentives. And I'm sure you could solve plenty of other network problems with equivalent incentives.
Thanks, this is a helpful answer. I hadn't considered the lock-in-via-capital-gains angle.
I'm doubtful that crypto has moved past the worst of its issues. As far as I can tell*, there is nothing in the crypto world that cannot be done through a normal currency, except money laundering, tax evasion, smuggling and other kinds of illicit and nefarious activities.
(*epistemic status: I'm not an expert, just a guy commenting on a forum)
If you're under an authoritarian regime, then 'smuggling' so that you can receive payments if you've been cut off from the banking system would be a positive thing. Censorship resistance is a key aspect of Bitcoin.
Bitcoin on the lightning network makes it cheaper and faster to send money across borders, allowing developing countries that receive a lot of money from expats to keep more of their money by cutting off middlemen like Western Union.
Bitcoin on the lightning network makes it feasible to send very small values (sub 1 cent) cheaply and instantly. This has the potential to declutter the internet, by say having email protocols that require 100 sats to send an email. It's inexpensive for ordinary users, but very expensive to spammers that send millions of emails. We can now price spam off the internet.
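To put rough numbers on that (the BTC price below is an assumption, purely for illustration):

```python
SATS_PER_BTC = 100_000_000
btc_usd = 40_000  # assumed price, for illustration only

stamp = 100 / SATS_PER_BTC * btc_usd  # USD cost of a 100-sat email stamp
print(f"one email: ${stamp:.2f}")                           # $0.04
print(f"a million spam emails: ${stamp * 1_000_000:,.0f}")  # $40,000
```

Four cents is noise for an ordinary user; $40k per million messages kills the spammer's economics.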
Perhaps the most important thing Bitcoin does that normal currencies don't do is have a non-discretionary monetary policy. There will only ever be 21 million bitcoin, while no such limits exist for central bank issued currencies. This scarcity makes Bitcoin a supreme asset to store value in and has the potential to be a $100 trillion proposition.
I share your general impression of the crypto world, which is why I phrased my statement as "crypto has moved past the worst of its _witch_ issues". Crypto has other issues! And it still has many witches! But the not-intentionally-practicing-witchcraft contingent has somehow reached critical mass.
The witch utopia problem, in its purest form, is that normies don't join in because it's just a bunch of witches, and it's just a bunch of witches because normies don't join in. My impression of crypto is that both statements are decidedly on the downswing and unlikely to reverse.
Is there an automated tool that can notify me when the consensus on a metaculus question has changed a lot and my prediction is stale? I lose points on predictions that were directionally correct relative to the community consensus when I made them, because the circumstances changed after my prediction was made and I wasn't constantly refreshing each one.
Kalshi's real money but has a lot of these notifications built in
I'm not aware of such a tool, but I'd just like to say again that this is my biggest gripe with Metaculus. I'd like to be a "casual user" of metaculus who logs on once every few months, thinks hard for an hour or so about a few questions, and provides a few predictions. But even if my predictions are always spot on the ground truth, I can *still* lose points on average because months after I log off, Metaculus will be treating my prediction as current and docking me points when it sees that new participants are making better predictions using more information. There should be an option to make a "one-time prediction" that either gains or loses me points solely on the basis of a proper scoring rule of my predicted probability at the time, and the outcome.
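Concretely, something like the following is what I have in mind. The log score is a standard proper scoring rule; the function names and the 50% baseline are my own illustration, not Metaculus's actual point formula:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Log score for one binary prediction; higher is better.
    It's proper: expected score is maximized by reporting your true belief."""
    return math.log(p if outcome else 1 - p)

def one_time_points(p: float, outcome: bool, baseline: float = 0.5) -> float:
    """Score a prediction once, against a fixed baseline, so the points
    depend only on the probability given at the time and the outcome."""
    return log_score(p, outcome) - log_score(baseline, outcome)

print(one_time_points(0.8, True))   # ~+0.47: confident and right
print(one_time_points(0.8, False))  # ~-0.92: confident and wrong
```

The key property is that nothing in the score depends on what the community did after I logged off.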
The year is 2072, long after the Fall of the Old World. Nearly all of the bullets have run out, and people are resorting to older types of weapons.
What considerations would go into your selection of a personal sword, and which type of sword would you pick (e.g. - Roman short sword, Japanese samurai sword, Celtic broadsword, fencing sword)?
Crossbow?
If you have the metallurgical ability to make good swords, you have the metallurgical ability to make gun barrels. Assuming bats still exist, their poo contains lots of nitrates, which you can extract by immersing it in water, filtering out the insoluble part, and evaporating the liquid until the nitrates crystallize. Then you mix the nitrates with charcoal and sulfur to make gunpowder. Also, if there are any surplus bags of ammonium nitrate fertilizer lying around, those can easily be repurposed to make massive amounts of gunpowder.
Humans couldn't lose the ability to make barrels or gunpowder unless the apocalypse selectively killed everyone who wasn't rather dumb. Conditional on at least a hundred humans surviving I think it's extremely unlikely (p<0.01) that gunpowder-making is forgotten.
someone watched Dr. Stone
I disagree. I think among any random sampling of 100 modern humans, it would be *surprising* to find even one person who could successfully make gunpowder from scratch, even given access to the raw materials. And truly shocking to also find among that number someone who could blacksmith a musket that doesn't kill its owner and is dangerous to a greater degree and at a further distance than a decently balanced pair of fire-hardened wood javelins.
Both of these activities involve a host of practical skills that the modern person tends not to even realize exist, unless he has a practical manual hobby (blacksmithing or cabinet-making, say). I have a fair amount of practical chemistry lab experience, and I would only just trust myself to figure out how to identify the raw ingredients for gunpowder, purify them, and create the final product. (I understand the physical process of mixing involves no small amount of manual skill and is critical to the quality of the outcome).
If someone gave me a forge, some charcoal, a few hammers and a lump of mild steel, I *might* be able to produce an OK sword, some kind of adequately balanced stabbing gladius, given enough experiment, but I doubt a lifetime of fiddling, in the absence of skilled instruction, would let me make a Kentucky rifle.
Making guncotton/nitrocellulose is a bit tricky (though perhaps easier than black gunpowder), however, why would someone in 2072 have to make a gun? Assuming a post-apocalyptic world with significantly decreased population, there would be more than enough still-perfectly-functional guns left from 2022. Guns are really long lasting; there's nothing wrong with World War 2 guns right now (despite 80ish years that have passed) and a WW2 machinegun would still be quite effective in 2072 as long as you can make ammo for it.
The simplest early firearms were essentially a cylinder that is rounded at one end and open at the other end, with a tiny hole for the torch that ignites the gunpowder. Now there are a lot of refinements necessary to get high performance out of this, and you'd need to test it with very small loads of gunpowder first to make sure it doesn't blow up in your face, but the basic structure of a firearm would not be that hard to reinvent.
If you were really hard up for blacksmithing materials, you could even make a barrel out of hardwood, although it would be much less durable.
The precursor to the gun was made from bamboo.
https://en.wikipedia.org/wiki/Fire_lance
The distinction between a theoretical and practical understanding has rarely been more clearly illustrated. I am vaguely reminded of the joke about physicists consulting on dairy farm management in which the punchline begins with "Assume a spherical cow..."
I agree. Cylinders are really freaking hard to make by hand, especially if you need it not to blow up. It's advanced blacksmithing.
I thought that early metal gun barrels were made by twisting a sheet of Damascus steel around a rod?
I think those 100 humans could collectively remember enough (and have enough books in their possession) to figure it out if they really needed to. That's a lower bar than passing a test on it right away.
Without looking it up, what are the three ingredients necessary to make black powder, how are those ingredients acquired and prepared, and what is the proportion they need to be in?
I happen to know the first two facts and get spotty on the third. Is 1% of the population as obsessed with ancient manufacturing processes as I am? Doubtful.
Edit: I see from your original post you know the ingredients, but can you tell me how they're prepared?
There's a broad range of nitrate-fuel mixtures that will burn violently. Sulfur isn't required-- it only reduces the ignition temperature.
If they forget the proportions they can experiment with various mixtures and hill-climb towards whatever works, or do some stoichiometric calculations.
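For what it's worth, the stoichiometric route lands you close to the traditional 75:15:10 recipe. A back-of-the-envelope version, using the standard idealized reaction (real black powder combustion is messier than this):

```latex
% Idealized black powder combustion:
\[ 2\,\mathrm{KNO_3} + \mathrm{S} + 3\,\mathrm{C} \longrightarrow \mathrm{K_2S} + \mathrm{N_2} + 3\,\mathrm{CO_2} \]
% By molar mass (KNO3 ~ 101 g/mol, S ~ 32, C ~ 12), the reactant masses come to
\[ \mathrm{KNO_3} : \mathrm{C} : \mathrm{S} \approx 202 : 36 : 32 \approx 75 : 13 : 12 \]
```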
I don't recall all the precautions taken in the actual preparation, which is why I would start with an extremely small batch if I needed to make post-apocalyptic gunpowder without reference books. But I'm pretty confident I could do it.
I'd probably loot the remains of my local library to obtain the appropriate reference books first, if possible.
I suppose if enough books have survived we can recreate anything. I just wouldn't count on people to know how to do it without reference, and the reference materials may be difficult to find.
Take, for instance, the traditional (read: not having to build a chemical factory) way of refining potassium nitrate (saltpeter). First, you need nitrated earth, which either must be sourced from bat caves or produced from scratch in a process requiring a precise mixture of organic material and urine, cared for in a particular way over many months. Then the nitrated earth needs to be leached, typically multiple times. The leached water then needs to be mixed with a particular amount of lye, which requires its own preparation method using wood ashes. Then the mixture must be boiled to a certain temperature and any crystals that form must be raked out, as they are impurities. At this point it is also best to add blood to the boiling water so that organic impurities rise as a scum that can be removed. Then the water must be cooled slowly, and the first crystals that appear during cooling must be raked out and discarded. Then the mixture must be evaporated and the crystals that form as a result must be purified again until you have the precise crystal structure that indicates it is mostly potassium nitrate.
Making good charcoal is similarly complicated (as is corning it for gunpowder use), and while you don't need sulfur, it's certainly necessary for a superior product and must be located, mined, purified, and crushed.
So it really all depends whether a physical book with that kind of information survives, otherwise it's a lot to work out from a general "gist of it" idea.
I know the *names* of the three ingredients, but if you put them in front of me I'd only recognize two.
This is a really good point, actually. Classic case of feeling like you should have caught it yourself, but having no thought of it until it's pointed out to you.
Cartridges might go away, though; those are a bit more finicky about needing their own separate tech, I think? So you'd be stuck at hand-loaded revolvers, tops.
Everyone is going to go around in full suits of armor, so you definitely want a 'stabby' sword for 1v1 combat and not a 'slicy' sword. The Katana is brittle and the Roman short sword is a heavy, wide and short stabby sword, so they're both mostly useless.
The fencing sword might be better in the hands of a 'sure kill' sort of fighter and as an all-purpose sword for people of all strengths, heights and genders. It would be my second choice, but if a fencing sword misses, it is really easy to overpower as long as you cover its one sharp point. In a life or death situation, it would be easier to take a non-lethal stab from a fencing sword and immobilize the weapon, which you couldn't do to the other 3 options.
A sabre (cutting fencing sword) might be an excellent compromise. It would be my #2 choice. If I could use a shield / dagger for dual wielding, then it might be my #1 choice.
If I am going sword only though, the broadsword (claymore) works better as an all purpose sword and would be my #1 choice. You can defend against multiple opponents with a reachy slicy sword and buy time to run (esp if they are all armored, so you can run fast). You can also use it for stabbing if you put one hand on the blade to thrust it like a pike. It is heavy, so a hit to the helmet will at least stun, even if it won't kill. Again, good for running. It isn't brittle either.
It's practical for domestic tasks too. You can also use it to chop wood and hunt animals. Lastly, it is second only to a katana in the cool factor and makes me feel like the Witcher (you would never shoulder-carry it, though).
All in all, the Celtic broadsword (Claymore) would be my go-to choice.
Why do you assume everyone will wear heavy armor? Armor is expensive, uncomfortable, and a pain to put on. It's for war, not everyday wear.
If everyone's in heavy armor, I don't want a rapier. (I assume that's what you mean by "fencing sword".) The ancestor of the rapier was a special stiff stabbing sword for penetrating armor, but the rapier itself was not particularly stiff and would tend to bend instead of penetrate against armor. It was a civilian weapon designed for civilian opponents in regular clothes.
If everyone's in heavy armor, I don't want a saber either; as you noted, cutting swords don't do well against armor.
A claymore is better; it's a cutting sword, but it does at least have a point so you can use it like a short spear.
But really, if I have to use a sword against armor, I'd want the ancestor of the rapier that I mentioned earlier, or a gladius, which was also designed with armor in mind.
People are ignoring centuries of sword-fighting evolution. The pinnacle of sword fighting became the rapier, the small sword, and then the epee. Basically, you want to stab your opponent in the location you desire before he can stab you.
In a group (where you can get away with less focus on close-in defense), a long spear (i.e. pike) becomes better, due to the range.
And then as others have mentioned, a bow (cross or long) outranges that as well, but neither are as useful in tight quarters, such as inside a building, as a good fast sword.
I'd caution against thinking about the development of swords as a movement from "worse" to "better" types of sword. For example, the smallsword developed from the rapier, but in a fight between a person with a smallsword and a person with a rapier I'd almost always bet on the rapierist. As I understand it, the shift to the smallsword happened mostly because duels got more formalised, meaning that you no longer got an advantage by having a weapon that was a few inches longer than your opponent's, and then people switched to having swords that weren't as inconvenient to carry around.
About the same thing is true for the comparison between swords and spears. A person with a spear will probably win against a swordsman even in a one-to-one fight. But nobody actually carries a spear with them in their daily life. In general, the reason swords were so popular isn't because they were the "best weapons", it's because they were the "best weapons you can carry on you without it being a major hassle".
But wasn't the evolution of the small sword a response to the decline of armour, which was itself a response to the development of firearms?
In a world with modern materials but no firearms, I'll bet you could make a fantastic suit of armour that's lightweight and flexible, while still being damn near impenetrable to swords, even in the gaps between plates that used to be the weak spots.
If everyone is walking around in full-body kevlar, your best bet is probably just a big old bludgeoning weapon.
Yes, "but", another "point" of them was to be able to quickly stab precisely in those weak spots.
I suppose with modern materials you could make armor which can defeat any sort of point anywhere, but then why aren't you making a gun instead with that level of technology? I thought the point of the question was a loss of that type of technology and you're reduced to what a basic blacksmith can do.
To be fair, you can walk into Home Depot/Lowes and walk out with an unassembled shotgun in basic plumbing parts, and you'll just have to load your own shells, but we were assuming more primitive materials availability.
I don't even think you need the group for the spear to be better. You can look up youtube videos today of guys with swords vs guys with spears; the sword guys are basically helpless.
How about a halberd or similar?
Those guys are using older technology swords which are heavier and meant for slashing. A rapier or epee is more like a one-handed (because it can be lighter) metal spear. Most weren't super long, but a few were as long as a typical spear. You can use something in your off hand to stop/block/deflect/grab the opponent's spear and then stab them, or else move faster to the side (because again, lighter+stronger) and stab them.
We did rapier against spear in my old HEMA club. There was almost nothing the rapierists could do, except charge in to grapple and hope for the best.
I want this to be true, and I've heard this argument before, but the closest I can ever find to what people are talking about with "modern pokey swords beat spears" is stuff like this, where they just get mercilessly manhandled: https://www.youtube.com/watch?v=h-f3nvJCl9Y
That's not exactly a standard spear, he's using it more like an edged weapon and to trap the sword, but even then, at the end when the guy with the sword figures it out, he gets inside the sharp part multiple times to stab him in the throat and head.
Now put them in a regular building instead of out in the open and see how practical that 12' spear is...
I watched sort of the second half of this video and he does a lot better, but the guy also starts aiming pretty much exclusively for his feet, which seems to give the rapier guy an edge. The trapping seemed to work against him past that point, since the other guy didn't have to treat it like an actual blade (his arm is inside the "hook" entirely at multiple points). But granted I should have watched the whole video to see him doing better.
I think the "indoor fighting" thing cuts both ways; most buildings have hallways, after all.
What's the state of mining, smelting, and blacksmithing? That's really what determines swords. And armor which also determines how swords work. Also, how far have we regressed that we forgot how to make gunpowder?
Contrary to several people whose answer is "spear" the real answer is actually "something that fires arrows." (Again, assuming primitive guns aren't available.) That won't help you in a combat arena. But in an actual battle massed arrow fire worked very well and could help keep you alive. Ideally combined with some lances, a warhorse, and a lot of armor. Plus the necessary training.
It'd be useful to know if goblins are around, so idk, I guess an Elvish blade would be useful so it'll glow blue.
As others have pointed out, the answer is almost always "a spear", but it also depends on the kind of fighting and social structure of your society. The point being there isn't a best sword, there are swords that are most suited to your combination of social structure and place in that social structure.
Well I'll be pretty old in 2072, so I guess something pretty light weight
I'm unlikely to win any sword fights, so I'll just go with a cheap one that doesn't look like it's worth stealing.
Ignoring the spear stuff, I'd say I'm probably looking at something in the chopping or slashing family of swords. My naive understanding is that swords in the "poking, straight" family are all pretty hard to use well and a bit more prone to breaking.
So probably khukri or a dao/dadao for this fella, just something I can swing at necks and hope for the best with.
The pokey straight swords are hard to use against another person with the same type of sword. A common outcome of beginners using them is that they both stab the moment the opponent gets within reach, which means they both get hit. (That happens even between trained fencers more often than I like to admit). I still think they have the advantage against a dao, kukri or similar weapon though. As long as you safely outreach your opponent, your instinctive "stretch your arm out to push the scary man away"-response is perfectly functional.
It depends entirely on what society in the New World is like. Falloutesque hardscrabble survivalism? Swords are largely useless, per the discussion in the other replies about polearms and so on, but probably a basket-hilted broadsword. Literally just the 17th century minus anything resembling a gun but including the quasi-peaceful urbanism? A long rapier without question (and the main-gauche to go with it). Et cetera.
My understanding from reading stuff written by historians of pre-gunpowder combat and HEMA practitioners is that the correct answer to this question is "a spear", and that if constrained to swords the best sword is one that's as much like a spear as possible (e.g a zweihänder or odachi). The analogy I've seen drawn is that swords were fundamentally sidearms like pistols are now and were used accordingly.
That said the answer to this question might be different if there are specific constraints that prevent spears or spear-like swords from being used, which might include most fighting being indoors, the sword needing to be carried at all times, or social conventions that punish being "over-armed".
My understanding (and small amount of fencing experience) suggests you would not want a Zweihander.
Two handed swords were used for two things (is my understanding): cutting the tips off of spears (a high-mortality job), and full plate combat, where they were not used like spears. In full plate combat you'd be holding the sword halfway up and using that for extra leverage. Holding a sword out like a spear would be too heavy because you have no counterweight.
I think the real answer to this question depends on how many people you have with you. If I’m alone, I’d probably want something light and long like a rapier so that I could have reach on unarmored enemies and run away quickly from anyone in armor. A spear isn’t going to be super useful to you without support typically.
If I have a bunch of people… staffs honestly. They’re easy to learn, easy to use, and don’t require a tight heavily trained formation like spears or pikes
Spear is the general purpose correct answer, but against heavy armour a warhammer or a mace are the go-to specialised counters. I've read similarly to you vis a vis swords not really being main battlefield weapons in medieval society.
My impression is that aside from being status symbols, due to the difficulty of making a useful spear vs a sword, swords are almost like javelins, so basically a sidearm as you say. Used for specific limited conditions no different from how the Romans used pila.
Are there standard theories on common causes for hiccups?
For me, the most common way to end up hiccuping is if I've eaten bread or something similarly starchy without enough liquid to wash it down - I almost *always* develop hiccups if I eat untoasted bread with some peanut butter, but it goes away when I've drunk enough water to feel like it's washed the pasty matter down.
For my partner, the most common way to end up hiccuping is if he's eaten unpeeled uncooked carrots. He doesn't seem to have any standard way to eliminate them.
This suggests to me that my hiccups are the result of a physical process (i.e., my esophagus is either trying to clear itself, or is getting into spasms because it can't quite clear appropriately) while my partner's hiccups are the result of a chemical process (i.e., something in carrot skins results in the muscles acting weirdly for a few minutes).
Are these both commonly accepted types of causes? Do other people have similar or different causes? Or do you just tend to get hiccups occasionally without any commonly observable patterns causing them?
Breathe into a paper bag (not a plastic bag - the bag needs to not collapse on the inbreath). Always works, even when nothing else does.
Apparently hiccups are somehow related to your breathing reflex, controlled by CO2 levels in the blood. Once you drive CO2 high enough the hiccups stop - holding the breath works for the same reason.
Huh, I'm older (63) and haven't had the hiccups in years, decades? Is there some change with age?
this https://www.medscape.com/answers/775746-68320/how-does-the-incidence-of-hiccups-vary-by-age#:~:text=Hiccups%20can%20occur%20at%20any,more%20common%20in%20adult%20life.
long link, of which I can see only the first few sentences; it says hiccups decrease with age... generally.
I have a newborn, and she gets hiccups all the time. I looked it up and what I found basically said that newborns get hiccups constantly because it gives their brains feedback on the position of their internal organs, and as we get older our brain doesn't need that feedback because it's built up an accurate model of where our guts are. I have no idea if that's true, but the younger you are the more you hiccup.
I had regular hiccups for months after three spine surgeries a few years back. The nurses told me it was because moving organs around to reach the spine disturbs the diaphragm, and doing that predictably and consistently causes hiccups. You get left in a state where your diaphragm is very easily disturbed until everything has healed.
My hiccups are often bread-based as well. Most reliably, eating a lot of bread then drinking a really cold drink, especially a carbonated one. When I was a kid, I would ask for no ice in my drink at McDonalds or I'd be done for.
Oh my god!! We've started calling carrots "hiccups" around my house because of the reliability that raw carrots cause my wife to hiccup. I thought she was the only one.
I personally get them pretty reliably from eating pickled jalapenos in a sandwich, or when I eat something "too" spicy, possibly too fast (or when I cross a certain threshold of alcohol inebriation, like a cartoon character).
To stop them I try to exhale all the air I can and hold my breath as long as possible. If I don't hiccup during the holding-my-breath phase, it typically results in my hiccups disappearing immediately. If I hiccup while holding my breath, I just have to wait (not effective for alcohol-induced hiccups).
Interesting, I have a similar 'solution' but it works much better if I inhale as much as possible. The rough algorithm is:
1) Inhale as much as you can.
2) Hold your breath briefly.
3) Suck in even more air on top of your already basically full lungs, repeating two or three times.
4) Breathe out slowly.
Then repeat the whole process around three times, and this has always worked for me in the four or so years I've used it. Drinking water or exhaling fully have not gotten as good of results.
Like you, it's a multi-exhale process to get to fully empty. I exhale again a few times after "emptying" my lungs to push as much out as possible.
I'm not sure why I go with exhale necessarily, but I think the idea is the same. I wonder if it's the extreme state you put your lungs in that resets them. Either extremely full or extremely empty may have the same effect.
Also my anecdotal success rate may be lower than I remember.
i also often get hiccups from very spicy food, and sometimes from drinking vast amounts of beer, and rarely from drinking soda (though it's been a long time since i last drank soda).
Really interesting! I feel like I might have heard of the spicy one (and possibly even experienced it? I tend to go for slightly spicier food than the average person, but never really push it at all any more, so it's been a while since I would have tested this). But I've never heard of the cold one.
My mother has Alzheimer's and asked me if I can help her with organizing her assisted suicide. I do not have a problem with that, but with the current legal situation in Germany it will not work with her specific illness. Since then, and because she used to work as a writer, I have been researching the process of dying and things written on death (like Ernest Becker's work; I also recently interviewed Sheldon Solomon on this topic) to give her answers when she asks me about death.
What are your open questions on death? How do you think about it, and does anything scare you about it? Anything you have read that changed your mind? Views on assisted suicide? I'm interested in anything on this topic atm.
Marvin Minsky on death (paraphrased, 2nd hand): "it will greatly interfere with my research."
I'm sorry about the situation you're in; it must be awful.
One of my religion's greatest preachers, Tim Keller, has terminal cancer. He wrote an article about it for The Atlantic. He's been studying theology his whole life, and teaching and counseling people, even about death, and now that he's facing it himself, he has some interesting things to say. And he's a really smart guy. Maybe it could be helpful to you or your mother.
https://www.theatlantic.com/ideas/archive/2021/03/tim-keller-growing-my-faith-face-death/618219/
Keller cites N. T. Wright, an anti-gay crusader who believes in the literal truth of resurrection (people who are dead coming back to life).
Sorry, but I can't take solace or advice from someone so divorced from objective reality.
"If those who never lived can live, those who lived once certainly can live again"
Maybe, one day, with insane technology centuries from now.
Not by reading ancient texts written by people long before science gave us the tools to understand reality as it actually exists.
My view of death is basically that articulated by Philip Larkin in his poem "Aubade."
...which doesn't help at all, I guess, but since you're interested in anything on the topic...
Psychology Today's article helped me greatly in coping with death: https://www.psychologytoday.com/us/blog/finding-purpose/201811/facts-calm-your-fear-death-and-dying
In particular,
"As evolutionary psychologist Jesse Bering reminds us, “Consider the rather startling fact that you will never know you have died. You may feel yourself slipping away, but it isn’t as though there will be a ‘you’ around who is capable of ascertaining that, once all is said and done, it has actually happened."
followed by:
"Awareness of our mortality can be a profound challenge to our self-image of being an all-important, indispensable, independent entity in the universe. Or it can fill us with a sense of the preciousness and fragility of this opportunity, the value of a life. It can inspire us and motivate us to live life to the fullest, with a sense that we should not waste our days—to experience, to learn, to grow, to connect, and to contribute to those around us and those who will follow us."
brought me a great deal of peace.
Aside from the legal issue, I think one of the key things to work out now rather than later is a clear threshold you both agree on for "when the time has come"; notably, it feels (on the level of intuition) ethically dicey to kill someone who has forgotten that they wanted to die, which is probably one of the main motivators of the German law. I think for me the balance is whether the person is semi-independent, able to enjoy things in the moment but has forgotten personal memories like who their family is (in which case the person you knew is in some sense dead, but I have the intuition that *a* person is still alive there), vs. entirely dependent on carers and not enjoying life.
Could you partake in assisted suicide tourism? Are there jurisdictions that are more amenable to losing your ability to consent and still carry through with the suicide?
Maybe even look for a place where suicide is legal and cryonics available.
If you take a citizen from one country to another, kill them, and return, you're probably going to get in trouble on return.
I know you don't see it as "killing them" but that's not how the law against assisted suicide would see it.
The country has no jurisdiction over a "murder" happening outside of its borders.
That's not universally true, at least in practice. Many countries will claim and in fact exercise jurisdiction over what their citizens do while travelling abroad, at least where major crimes like rape and murder are concerned. Or over what is *done* to their citizens abroad. If you, a citizen of Nation X, kill a citizen of Nation X in a manner that Nation X disapproves of, you may spend a great deal of your remaining lifespan in one of Nation X's less pleasant prisons, screaming "but they can't *do* this, I was in Nation Y at the time!" all the while and hiring lawyers and writing letters to Amnesty International and yet still rotting away in prison.
Initially I feel you're right, but international law and interjurisdictional (?) law don't seem to behave that way.
We don't typically get in trouble for doing things that are legal in other countries, but illegal in ours. Drug tourism is a real thing. Alcohol tourism out of dry counties I'm sure is a real thing. Obviously not as extreme as "murder", though still tourism for what the local jurisdiction considers illegal behaviour.
I can't think of any laws that would be analogous to "murder" that might be acceptable in other countries and not in your own.
I recognize it's a stretch and I'm not super familiar with the story, but would the husband from "Not without my daughter" be subject to American law upon returning to the US?
It was a pretty controversial thing when countries started banning their citizens from re-entering their native countries after fighting for Isis. Helping someone end their own life on their own terms seems like an even harder thing to enforce, perceptually anyway.
I like to think that if my mom had Alzheimer's and wanted assisted suicide at a point that might be too "late" for my local jurisdiction, I'd take the legal risks and travel to a more friendly country to do it. Just because my home country doesn't have their shit together, doesn't mean she should suffer.
I don't know the story of "not without my daughter," but a country cares very much about its citizens not being murdered, even while they are overseas.
If you decide it's worth it, okay, but I'd say instead of walking home to be arrested, just stay in the other country as an expat.
For a religious perspective, consider looking into the Catechism of the Catholic Church. It is an incredible document that condenses an enormous body of works. You are in a very tough spot right now, I wish you all my best.
Thank you for your kind words. I will look into it right now!
I am a non believer myself but very open to many insights religion has in store!
The Catholic position on suicide might be unhelpful, though. It is a "grave matter" which is a component of a mortal sin.
How much did you look into going to Switzerland as an option? They have the most friendly laws on assisted suicide.
I talked with Dignitas and Exit - both in Switzerland. Same situation for Alzheimer's as in Germany. The German supreme court equivalent changed the law in 2020, and we have a similar situation as in Switzerland. But for Alzheimer's you either have the choice of dying way too early or not at all. Assisted suicide is only an option as long as the person still has the capacity to judge, which doctors will deny very early in the disease.
I wish you and your mother a lot of courage, you will need it whichever path you choose.
I'd advise you to look into why your mother may want to die early. Is it because she fears becoming a burden to you or others? Because she lacks the resources (financial and otherwise) that may enable her to live her life to its natural end in good conditions? Because she fears losing her dignity as a human being, or the respect of her entourage as her intellect suffers? You, and the people who love and care for her, can help assuage those fears, and help her live a happy life even as her condition worsens. Do not let her think she needs to go so that you can be free of her needs.
Do you have personal experience with Alzheimer's (or other dementia) patients?
(Content warning for the obvious stuff, I'm trying not to be too graphic or detailed, but not everyone likes to read about this stuff on a normal day.)
I haven't made up my mind, as I'm far from even an early onset age, but I think I might want to actively decide and choose my exit while I am still able to communicate with others, have some idea of who I was and what's going on, and - probably most importantly - have some idea who all these lovely people taking care of me are; ideally all of these so I can say some meaningful goodbyes.
The alternative could be dying either alone, not being able to move or speak, not knowing where I am, where I came from and how I got there, what is going on, which is all super scary;
or - possibly worse - all of the above, except with a bunch of people I have never met in my life talking at me like they know me, about stuff that they seem to expect me to know, and I can't even tell them to leave me alone because I can't seem to be able to speak.
And they're looking more and more sad and distraught, which makes me feel even worse emotionally.
If your comment was only meant as "check with her just in case it's one of these bad reasons", then sorry!
It's just that I've often heard certain people who oppose assisted suicide use similar phrases in their arguments, which frequently imply that the only reasons a person could have for deciding on this course are ones that could be described as "misguided altruism or consideration for others", "misguided sense of pride or dignity" etc.
And as long as I can remember thinking about the topic, I've always been furious at people seeking to deny to others what is essentially the most fundamental manifestation of agency and choice there is.
So you believe that societies have no obligation-of-care for the unfortunate? Because "unlimited right to suicide even if you're non compos mentis" essentially implies that (or, more accurately, that even maximally-irresponsible exercises of personal agency should override that obligation, which is essentially the same thing but needs to be asserted for subsequent points to be made.)
Would you also hold that all forms of drugs, even ones known to be incredibly addictive and destructive, should be put on the market unregulated? That there should be no labor regulations, as these all interfere with the free choices of employees and employers? What's your feelings on welfare in general, for that matter?
So you don't want to be part of society. I'd advise moving into some tin shack in Arizona alongside Route 66.
I would argue that society definitely is free to ban things for you in certain cases, and it is free to ban things that you care about, and it does *not* necessarily have to make sure that your pragmatic liberty to access them is unaffected. Perhaps it *should*, but it does not have to.
Do you disagree with that? Or can we start with a presumption that in certain cases society does have the right to restrict you and ban things that you want?
IMHO that presumption is a necessary basis of a discussion, as it is a necessary basis for being part of a society. Acknowledging that society can and will have certain rules that may restrict you from doing as you please is a non-negotiable part; if someone denies that core principle, then the expected result is both exclusion from society and *still* enforcement of the society's rules if you stay within its reach. If you want to opt out, you'd have to leave society, because it won't leave you.
Given all the information I have about you leads to the charitable conclusion of "I have a solipsistic view of state ethics, wherein I am the only significant moral object in existence and any moral question that does not directly reference me, specifically, does not signify", and a less-charitable conclusion of "I have a narcissistic view of state ethics where my personal preferences should have infinitely-high weight", I do think I have all the information I need.
How much time do we have to participate in the ELK game? When are we supposed to be done?
"We are offering prizes of $5,000 to $50,000 for proposed strategies for ELK. We’re planning to evaluate submissions as we receive them, between now and the end of January; we may end the contest earlier or later if we receive more or fewer submissions than we expect."
Thank you.
Anyone else do annual predictions akin to Scott's? I'm trying to make a list for me to do and looking for ideas for the list.
You can make real-money predictions on Kalshi as well if you'd like, they have a lot of 2022 centered contracts
The recent Forecasting Newsletter mentioned a bunch of them: https://www.lesswrong.com/posts/MDfesb7oYu8KhGvLR/forecasting-newsletter-december-2021#In_the_News -> Ctrl+F "Some news media & individuals wrote some quantified predictions for 2022"
Matthew Yglesias and others in that sphere do this I think.
Thanks I'll look into them
Ohh. This is wonderful. Thank you. I'd heard of Metaculus but not gjopen
What are everyone's thoughts on this essay that "pushes back" against the modern advice to "buy experiences, not things"?
I think the author makes a good point about the argument's motivations (e.g. - status-seeking and lack of space for urbanites in expensive real estate markets to store things), but I also think his argument should have acknowledged the disutility of owning excessive quantities of objects, and the false expectation humans generally have that more possessions will make them happier.
https://write.as/harold-lee/theres-a-phrase-going-around-that-you-should-buy-experiences-not-things
My philosophy is that people in the rich world should own fewer things, but those things should be of higher quality and should be taken care of, and owners shouldn't be afraid to sell their things when they have no use for them anymore. (Coincidentally, I'm in the middle of a major personal project to get rid of over ten years' worth of accumulated possessions, and I wonder again and again why I held on to so many of these things for so long.) With respect to buying "experiences," people should be honest with themselves over their motivations before making the purchase: Are you genuinely interested in visiting Tahiti, or are you only interested in the status boost and bragging rights that will come from posting photos of your vacation on social media, and being able to bump up your "Number of countries visited" rank by one?
Good essay. Realistically worthwhile ways to spend your money are a mix of experiences and things, but things are absolutely undervalued. One thing I realized when buying a decently sized apartment is how much stuff I can now have, and how many options it opens that I didn't have when renting and moving around every year or two.
One thing the author gets right - but doesn't accent to the extent it is deserved - is that the useful things, the kitchens and home gyms and rifles and toolboxes, require effort and skill to use. For most people in the developed world, that is the primary barrier to entry, not money.
Conspicuous consumption is conspicuous consumption, whether of things or experiences. Spending two dollars for a park entrance fee is a different kind of purchase than spending a few thousand dollars to see Paris.
Memories, however, have no ongoing cost. Physical objects cost something to continue to possess; storage space is only one part of this, there are also ongoing time investments, even if it's just moving a box from one house to another every few years/decades (or having somebody else deal with it after your death).
Granted, you can recoup some of the cost of a physical object by selling it, and some physical objects provide dividends of their own in the form of use; however, everyone I know who put emphasis on the resale value of physical objects ended up hoarding things.
On the other hand consumption, whether of things or experiences, has an intensive as well as an extensive margin. The nice new car doesn't necessarily take up any more room than your old beater did.
Also, good memories just get better with time, while even the best objects decay.
Probably an overstatement if you take it at face value, but it's still good to have the occasional corrective to the endless proselytizing of the Church of Travel.
It probably depends on the specifics of the situation, including life stage, poverty/wealth level, and the specific thing being considered.
Someone once bought me a nice juicer, more expensive than I would have bought for myself. It turned out I didn't have enough counter space, didn't like fresh juice all that much, and cleaning it was kind of a pain. I could not save on future healthcare with the juicer, because I couldn't get myself to use it. A lot of "healthy" purchases can be like that, including the cross-country skis and woodworking shop mentioned in the article, and much of my own art supplies. There are aspirational purchases that are still worth making, but it's worth taking classes or renting or at least trying someone else's thing for a while for purchases in the "because it's good for me" category.
I personally prefer experiences to things, and spent most of my time and energy on experiences in my 20s, which I don't regret. But now I have young children, and am not the sort of person to walk them to the park every day even if there were a park within walking distance, so I'd better get some rakes and shovels and a wood chipper and weed whacker and whatnot to try to turn the thorn-infested yard into a place where it's possible to play. This is somewhat frustrating, because we don't have great maintenance skills, but some maintenance is now required, along with tools and a shed and everything that goes with that. In particular, buying experiences (daycare, classes, trips) in sufficient quantity to keep us occupied is prohibitively expensive at this point.
But if I were going to try giving advice, I'd probably have to ask more specifically: what experiences? What things? How much will you be moving in the next few years?
You can hire a landscaper and unload some of the less pleasant maintenance. Buy yourself a lack of experiencing tedious chores.
Yes, there are a lot of things I could do if I could make more money. Hiring a landscaper would fall somewhere down there, in "if I won the lottery" territory, after a lot of other both things and experiences, including a couple days a week of preschool.
This sounds like one of those Should You Reverse Any Advice You Hear situations.
Some people would definitely benefit from buying more experiences and fewer things. Others would benefit from buying more things and fewer experiences. (A third class should probably just be buying less stuff overall and instead saving their money, and a fourth class is misers who should spend more and save less.)
I've met all these different classes of people. Too much stuff and too few experiences tends to be an older and lower-class crowd, too many experiences and too little stuff is a disease of the younger and slightly higher class mob. As a result there's also a lot of tedious class signalling going on in this sort of conversation.
In the sort of circles I move in, it's far more acceptable to brag about your experiences than about your stuff. Telling your friends about your fancy new car or jet ski is gauche, but telling them about your very expensive camel-milking trip to the Gobi desert is de rigueur, and your friends have to sit there and pretend to be interested.
So anyway, I'm not convinced that people should be buying more stuff and fewer experiences. But they should definitely be aware that "Buy experiences, not stuff" is not all-purpose great advice for everyone, and that it's definitely possible to overdo it.
When I was young, I reasoned about this a bit and talked myself into "buy things, not experiences", because the things will keep producing value for me over time while the experiences will just happen once. I suspect the reason for the classic advice that goes the other way is that most people engage in similar reasoning, and then similarly overvalue it, and need to be pushed back to correct to the right balance.
As spandrel mentions, one important consideration is that buying good enough things can be worth it, but that is often a lot more expensive than people are considering when they are buying things.
As Resident Contrarian suggests, the advice probably also depends on where one stands in terms of finances and household setup. For a middle class or higher person who has already set up their own household, the value of buying a new object is not the full value of owning that object, but only the marginal value of that object over the object it is replacing, which is often relatively small unless you're considering a major upgrade, whereas the value of buying an experience is the marginal value of that experience over watching TV or whatever else you would do instead (it often doesn't compete with, but complements, time with friends and family).
But for someone just starting out, or someone in a lower income bracket, buying a thing could well be a big upgrade. The standard advice is predicated on people not listening to it until they're settled enough in their life to need this advice to switch their strategy.
I'd think it depends on which things and which experiences? Some people are not very good at selecting either what things they should buy or what experiences they should spend money on. Partly because, as you note, they are influenced by trends and perceived status, and partly because our judgement about things and experiences improves the more purchases we make and things we do. Recognizing this is sort of obvious, but it is often ignored in this discussion. I think this explains why lottery winners are often miserable: they buy a bunch of stuff or take a lot of cruises without really knowing what they want.
As an aside, I prefer experiences almost every time.
That's a good point that I know is true from personal experience. I had to travel extensively and endure several boring or even bad vacations to learn which kinds of places and countries I enjoyed.
Similarly, I had to buy a whole bunch of initially-exciting things which later lost their lustre in order to be better able to predict which "things" would be worth it.
Can you pass on any insights?
It's a slogan. I think if we were trying to do something better we'd say "At some point, if you haven't already, buy at least one experience in lieu of some object. Then spend some time considering how much you (based on your new experience) enjoyed each thing. Once you've examined your historic enjoyment of both experiences and objects you can thoughtfully assess which one is the better use of your money." It's not a very good slogan, but it's a better fit.
One thing that's interesting about this to me is how much it's written from a middle-class-or-better perspective. When your car/rent/insurance/utilities type expenses are pretty well in hand, it really often is just a choice between a new big screen that you might not have really needed or a trip to the world's best waterpark. But below that level the choice might be between, say, a well-running car replacing your shitbucket and the waterpark, which is a bit harder to parse in terms of the "do you really need a Mercedes" thinking of the slogan.
I agree with you almost entirely, but I think that you could strike out "-or-better" from "middle-class-or-better". It's hard to imagine that upper-class people actually have to choose between the Mercedes and the world's best water park; in fact, I wonder if that isn't a necessary part of the definition. I think the threshold where you no longer have to choose is well below Musk.
Good point. At some point you just have a bunch of money.
Oddly, I'm not sure that isn't generally true even for the people this slogan is aimed at - i.e. it might be a backdoor way to say "take more time off of work". I work at a tech startup where nearly everyone makes very decent money and theoretically has access to a lot of time off, but it doesn't superficially seem like there's an above-average amount of travel going on.
I don't think there's a good way to find out, but I wonder if for some people the heard message isn't "hey, stop buying things you don't have time to use; what you want is to intentionally take more time to (X)" - and 'I took a vacation to beat Cyberpunk' isn't socially acceptable.
I think this might be hitting the nail on the head, though taking a vacation to play videogames *should* be as socially acceptable as taking one to lie on a beach and read books; both are about relaxing first and foremost.
Agreed.
I've been thinking about that study published last month ( https://pubmed.ncbi.nlm.nih.gov/34878511/ ) which found that adverse post-operative outcomes for female patients are significantly more likely after they are operated upon by male surgeons, but found no other sex concordance effects.
The usual explanations involve extraneous factors like male surgeons being more often in position to perform riskier procedures or female surgeons having to be extra good to make the cut in the OR, but all that would be agnostic of patient sex. And the variety of procedures and the size of the population examined make for fairly impressive breadth of scope.
Other explanations, of course, involve stuff like "women are better at listening to women" which I am convinced is actually a significant factor in diagnostics but surgery is somewhat less exposed to it than IM - and, again, one might then expect at least a slight symmetrical male concordance effect.
So I am wondering which part of the pipeline is the culprit. Is it possible that it's still the latter and female surgeons are better at correcting diagnostic infelicities?
I wonder if female patients' communication is biased towards the positive when talking to male surgeons rather than female surgeons. A lot of people have a bias towards making things seem okay, and I could see it being easier, on the patient's end, to confirm the surgeon's expectation that things went well - and in the process, not communicate about minor pains and issues that become major issues when they aren't followed up on. This assumes that the average patient expects male surgeons to expect better outcomes than female ones, or expects female surgeons to care more about post-surgical issues, either of which feels plausible to me.
> but all that would be agnostic of patient sex
Patient sex could be a factor if women more often need certain risky surgeries, and those are more often performed by men.
It sounds like the paper did control for procedure type, but "riskiness" is still something that needs to be controlled to make any conclusions. (I wonder if there's any subset of the data that would make for a natural RCT, where patients were randomly assigned surgeons.)
I can't access the full paper, but the subgroup analyses seem important for forming any conjectures on the underlying causes--were the observations universal or did it vary by procedure, hospital, surgeon, etc?
Reading the abstract and the numbers - there are so many fewer female surgeons over all in both subgroups. I wish I could see the full text and tables.
I've always been fuzzy on converting odds ratios to risk ratios, i.e. "you're X times more likely to die if you have a female surgeon". Odds ratios are nice for logistic regressions but interpreting them is a pain, and I don't believe half the people who use them can explain it. And also, how do the odds ratios change when you take them against different subgroups? E.g. the largest effect, which is significant but not like DOUBLE, is female patients with male surgeons against female patients with female surgeons - does the relative size of these subgroups, as opposed to comparing the large ones, do anything to skew how you'd interpret the odds ratios?
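For what it's worth, the standard conversion (I believe it's Zhang and Yu's formula) is simple enough to sanity-check in a few lines; the numbers below are made-up placeholders, not figures from the paper:

```python
# Minimal sketch of odds-ratio -> risk-ratio conversion.
# baseline_risk (p0) is the adverse-event rate in the reference group,
# which you'd need to pull from the paper; the value used here is invented.

def odds_ratio_to_risk_ratio(odds_ratio: float, baseline_risk: float) -> float:
    """RR = OR / (1 - p0 + p0 * OR), where p0 is the reference group's risk."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# Illustrative only: with a 10% baseline rate, an OR of 1.15 corresponds
# to an RR of about 1.13; the rarer the outcome, the closer OR and RR get.
print(odds_ratio_to_risk_ratio(1.15, 0.10))  # ~1.13
```

So for rare outcomes the two are nearly interchangeable, and the subgroup question is really about whether the OR itself is stable across subgroups, not about the conversion.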
There are SO MANY MORE male surgeons in the sample than female ones; also, this is <3,000 surgeons performing all these surgeries, so you have to be careful and check whether a particular surgeon (e.g. a single bad high-volume male surgeon) is driving the results, and the number of surgeries per surgeon is probably not homogeneous. Likewise, it's important to look at the absolute numbers of adverse effects. If there were, like, 20 adverse effects in total (there weren't, I'm just saying for illustrative purposes), that's so small that whatever the p-value, my bias would still be to consider it insignificant.
Maybe there are "men don't care about women's anatomies as much" effects, who knows. But I think that explanations of "the female surgeons might be better in general", "men get more and riskier cases", etc. or a combination of all of these are also at least as likely and admissible. Also, do women get riskier surgeries - e.g. something obstetric-related?
Since I'm not an expert on any of this, I'd be happy to hear commentary from someone who is more of one and see where I'm wrong.
Interesting. Let's see whether it replicates with an entirely different data set, before getting too excited.
I want to self-promote a big new post on transformer language models that I just published on LessWrong, which explores a few possible limitations. The post took me much longer than I expected to write; I worked on it in fits and starts for about 4 weeks:
https://www.lesswrong.com/posts/iQabBACQwbWyHFKZq/how-i-m-thinking-about-gpt-n
I also want to signal boost this paper from CSET: "AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?"
They basically show that the current rate of scaling up the largest AI models (which for the last few years have been large language models) is unsustainable, even assuming (very unlikely) Moore's-law-style exponential cost reduction (cost per FLOP halving every 2 years). Simply scaling things up must come to an end in the next few years; major algorithmic improvements are needed. https://cset.georgetown.edu/publication/ai-and-compute/
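A back-of-the-envelope version of their argument (the doubling times here are my own assumptions rather than numbers from the CSET paper; the 3.4-month compute doubling is the figure from OpenAI's earlier "AI and Compute" post):

```python
# Back-of-the-envelope: if training compute for the largest models doubles
# every ~3.4 months while cost per FLOP halves every 24 months (a very
# optimistic, Moore's-law-style assumption), the dollar cost of a frontier
# training run still grows explosively. Both constants are assumptions.

COMPUTE_DOUBLING_MONTHS = 3.4   # assumed, per OpenAI's "AI and Compute" trend
COST_HALVING_MONTHS = 24.0      # assumed Moore's-law-style cost decline

def cost_multiplier(years: float) -> float:
    months = years * 12
    compute_growth = 2 ** (months / COMPUTE_DOUBLING_MONTHS)
    cost_per_flop = 0.5 ** (months / COST_HALVING_MONTHS)
    return compute_growth * cost_per_flop

for years in (1, 3, 5):
    print(f"after {years} year(s): training cost ~{cost_multiplier(years):,.0f}x")
# Roughly 8x after one year, ~540x after three, ~36,000x after five,
# hence "simply scaling up must come to an end".
```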
Does anybody know someone at Substack, preferably in or close to the development team, who could get accessibility issues fixed? I'm a screen reader[1] user and the comment page on ACX is... less than ideal from an accessibility perspective. I could try the traditional customer support route, but that rarely works: usually clueless CS representatives have no idea what you're even talking about and have no power to push the issues you mention onto the developers' todo lists. Finding a frontend dev via LinkedIn is often our best bet, but I decided I'd try here first.
[1] https://en.wikipedia.org/wiki/Screen_reader
Hi, I have written a complete replacement reader for Substack comments.
I don't think anyone uses it besides me, but it renders everything in plaintext and without a lot of crap, so you would be a great use case.
https://github.com/EdwardScizorhands/ACX-simple
If you say what kind of features would be useful for a screen-reader, I could probably implement them.
Trying it out with Chrome on a Mac
Text is rendered in a narrower column. Avatars are shown in a full square
Yeah, there is some ugliness about it. I mean to fix that border issue but there are other things more pressing.
Scott himself can and has gotten changes made to Substack, and he is the kind of person who would care about this issue. So that is easily your best bet.
Perhaps outline the major problems in your comment or a subsequent comment and then Scott will probably see it or someone who can connect with him will see it.
He also might not see it on this OT since he's on his honeymoon. Probably better to wait and post again later once he's returned.
For some reason I think I'm spending too much money on dishwasher rinse. Why I should obsess about this is a good mystery in its own right, but, anyway, I'm trying to think of how to usefully dilute this stuff.
A bunch of people online say to use vinegar. But the New York Times has a good summary of what's in the blue rinse aid, and why (and the drawbacks of vinegar):
https://www.nytimes.com/wirecutter/blog/dishwasher-rinse-aid-cleaner-drier/
* Water
* Alcohol ethoxylate - surfactant (nonionic, i.e. uncharged), the most important ingredient
* Sodium polycarboxylate - anti-redeposition polymer
* Citric acid - stops calcium from interfering with other surfactants
* Sodium cumene sulfonate - surfactant (anionic)
* Tetrasodium EDTA - chelator
* Methylisothiazolinone and methylchloroisothiazolinone - preservatives
* CI Acid Blue 9 - makes it blue
The NYT says the alcohol ethoxylate is the most important ingredient, so if I could find a supplier and dilute it with water to the same proportion (I don't know what that is, but it wouldn't be hard to find), I could probably dilute the rinse aid 50-50 with my new homemade solution and everything would still work (and if it doesn't, I can just stop doing it).
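The arithmetic behind "everything would still work" is trivial but worth stating explicitly; the concentrations below are placeholders I made up, since I don't know the real formulation:

```python
# Toy dilution math; the 10% figures are invented placeholders.
# If the commercial rinse aid is X% surfactant and the homemade solution
# is mixed to the same X%, a 50-50 blend is still X% surfactant, so the
# dispenser shouldn't notice any difference.

RINSE_AID_SURFACTANT = 0.10   # assumed surfactant fraction, commercial product
HOMEMADE_SURFACTANT = 0.10    # target fraction for the DIY solution

blend = 0.5 * RINSE_AID_SURFACTANT + 0.5 * HOMEMADE_SURFACTANT
print(f"surfactant fraction in the 50-50 blend: {blend:.0%}")  # 10%
```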
The MSDS for lauryl alcohol ethoxylate says it's safe unless I pour it in a fish tank or eat it, so I'm not worried about safety, but it seems never to really be sold to household consumers. The only place I could get an online quote was Alibaba, which wanted a minimum $100 order, and, nope, I'm not *that* curious about this project.
Is there any other surfactant that I could buy as a consumer for a cheaper price than store-brand rinse aid?
How hard is your water?
I don't buy dishwasher rinse. We never used it growing up, so I've never bothered to start. What's the benefit?
It stops water spotting on the dishes. Detergents used to have lots of phosphates in them, but those were phased out for damaging the environment via waste-water. In exchange, the dishwasher puts a little bit of rinse aid into the rinse cycle, which gets rid of the last of the crud and makes the last of the water sheet off instead of beading.
After a little research, it looks like I’ve been blessed to have always lived in places with soft water. I guess that’s why I never missed it.
Are you buying the name brand (Jet Dry or whatever) or buying generic? To my mind, it's worth it to get the generic, but while I'm sure the generic is still much more expensive than mixing your own, I'm not sure the cost savings are worth it when a large bottle of even the name brand is ~$10 where I live and will last for months. Yes, it's _dramatically_ overpaying for the ingredients, but the amount saved in actual (not percentage) terms is not worth the effort of homemade, for me.
I think I'm spending more than $10 a month on it. It seems to go through pretty fast.
I totally agree this isn't a rational use of my time; it's just a puzzle that got stuck in my head, and if I solve it (by finding a cheap substitute, or deciding there is no normal way forward) I'll move onto something else.
Edward, this is much too much. Is your dishwasher dispensing too much rinse aid? Mine is adjustable, and I spend well under a dollar per month on rinse aid, in a place with pretty hard water.
I've found that choice of detergent makes an even larger difference than rinse aid in the amount of mineral deposits. Smartly (Target) brand powder does a much better job than Kroger brand powder.
Incidentally, you might be the sort of person who is happy to waste an evening watching the dishwasher videos from Technology Connections on YouTube.
Huh, I've never considered seeing if I can limit the rate. I'll look into that.
It looks like there's a shortage at my supermarket, with the store-brand stuff being gone. But I found a cheap way of changing the flow rate, which is to dilute it 50% before it goes in.
If you want a mood-boost exercise, answer these questions :-)
What am I grateful for about myself?
What am I proud of myself for?
What is the best compliment I’ve ever been given?
List 5 things I dislike about myself. How can I rephrase them to become an opposite belief? Example: "I hate my legs" becomes "I love my legs because they allow me to walk."
What are my talents?
What are my biggest dreams? What can I do today to start making one of those dreams happen?
What makes me unique? How can I use this uniqueness more in my life?
What is something I wish someone else would tell me? How can I tell myself that more often?
What is one thing I can do today that will make me feel great?
What’s something that I’m really good at?
List one thing I’m grateful for, for each part of my body.
What are 5 things my past self would love about my current self?
What is a challenge that I have overcome?
What do I love about my personality?
What do I love about my body?
What do I love about my mind?
Does this rephrasing stuff actually work for anyone? Feels to me like I am just bullshitting myself, or repressing my negative thoughts (which I have heard is a bad thing).
I find that it's helpful to ask the question "What did you desperately want that you now have?" because it helps me realize (1) how much I'm taking for granted in my life and (2) that many of my wants are short-lived, and so kind of guaranteed to disappoint after a while (i.e. I'm choosing to want passing pleasure over perennial joy). This slowly helps me shift what I want :)
Regular practice of gratitude is a must. If you're a theist it's a bit easier because you can thank God, but even if you're not, it's a good idea to start each day by naming the things you are grateful for in your life.
I don't think being a theist just makes gratitude practice easier -- I think it makes it *possible*. The concept of gratitude only makes sense in a context where there is someone or something to be grateful to. Gratitude is like debt, or marriage -- can't have it without another entity to have it in relationship to.
"Thank you, sunrise, for existing" . . . WTF?
If you're an atheist, as I am, you need a different model altogether, not gratitude lite.
Thank god for what? Throwing us into a world wherein our actions, over which we do not even have full control, can jeopardise not our lives but our *souls* if they violate some cryptic rules allegedly passed down from bronze age people in a dead language which we have no reason to trust?
To suffer well and nobly is a power given not even to angels.
Well, if you would *prefer* that God threw us into a world in which our actions can have no effect at all on our well-being, either now or throughout eternity -- if everything we do is either pre-ordained or sterile, of no consequence -- then I can see the disappointment.
However I fancy quite a lot of people, when they think hard about it, will conclude they prefer a universe in which our actions and decisions are *not* futile -- meaning they can have effects, even very profound effects, on our future (and that of everyone else). And remarkably enough, it seems that that's the universe God did give us. So maybe He knew what he was doing.
I usually thank God that I am alive to experience another day, that I have eyes that can see, ears that can hear, a tongue that can taste, that my body is whole and not missing any members, that I have a roof over my head and a warm place to sleep, that I have more than enough food, that I have a secure job and enough money for small luxuries, etc. I thank Him for my daughters, my wife, my parents, my brothers, my in-laws. I often thank him for the moon, and the sun, and the beauty of snow in winter, and the wonder of plants in bloom in spring, the joyful warmth of summer, and the striking colors of fall. I thank Him for the clothes on my back and the hair on my head.
There is a great deal to be thankful for each day.
If you believe in eternal life after death, it is truly baffling you care so much for these incomprehensibly transient things of your mortal life.
If a beggar knows that he will inherit $10 billion in twenty years time, is he any less grateful to find a $20 bill today?
Is it so baffling? Experience is orientated to the present. Even for the eternal, the here and now or the past weigh strongly on the mind. Most people don't even think much of their elder years in lieu of the moment. It's fitting to be grateful for good moments.
I also thank God that He loved me enough not only to provide the wonder of creation, but also to give me exactly that eternal life.
I'm going to build a simple workbench out of 2x4s, with a tabletop made of a slab of plywood. To ensure that it doesn't get stained or damaged by any solvents or chemicals I might spill on it in the future, what sort of finish should I apply to the tabletop?
I was thinking of using Minwax polyurethane floor finish.
Should I use some kind of durable paint?
Polyurethane. There is now a water soluble version — water soluble before it dries — which makes cleanup easier.
Out of curiosity, I looked up what finish is used on lab benches and they seem to go with a polyurethane:
https://www.paintcenter.org/rj/may06n.php
I did the same thing 20 years ago. I figured it’s just plywood so I painted it with white indoor latex. When it starts to look shoddy I just clean it and put on another coat.
I’m not playing with many chemicals besides adhesives though. Mainly use it for breadboarding electronics and a home for my Linux machine.
Instead of a plywood slab, I recommend gluing a bunch of 2x4s together and then flattening the top. This will give you a much thicker and more stable working surface. It will also make attaching the legs much easier, as you can just mortise and tenon them. As for finishes, the standard finish for a workbench is usually some kind of drying oil, like linseed or tung oil. This is because you don't want an actual film layer on top of the wood, for various reasons: it makes the surface slicker and makes it harder to rework the surface. But all this applies mostly to woodworking, and I am not sure what your primary purpose for the bench is. If your number one concern is spills, then polyurethane is probably your best bet. If you want a really hard, slick surface that is super waterproof, go with epoxy (but it's quite expensive).
How would I flatten the top?
Wouldn't it be simpler to make a thick, stable tabletop by layering one plywood slab on top of another and then screwing them together?
I want to use the bench for "basic work." If I need to hammer two things together, clamp something, maybe work on some car parts, etc. There's no particular hobby it is intended for.
I made a bench last year. I used two-by-fours on edge, attached plywood to the top of that, and then finally put a sheet of hardboard on top. I didn't apply any paint or finish; I hate doing that. I also hate sanding, but hardboard is flat. When you screw in the hardboard, use a countersink bit and countersunk screws.
Is this what you used?
https://www.homedepot.com/p/Hardboard-Tempered-Panel-Common-1-8-in-4-ft-x-8-ft-Actual-0-115-in-x-47-7-in-x-95-7-in-832777/202189720
You flatten the top by using a power sander.
Yes to the polyurethane floor finish and double yes to the double layer of plywood. Don't forget to overbuild the table legs - brace them if necessary, but a wobbly workbench is a no-no.
The old Time-Life home improvement guides have detailed instructions for a similar workbench in one of their books (the Home Workshop one, I think). They recommend filling in the small gaps between the boards with a paste of white glue and sawdust, then sanding it smooth. They also suggest using a thin sheet of hardboard as a disposable top layer.
This is all optional for most purposes. If you build it well from good lumber, it should be flat and smooth enough for most purposes, and it takes real effort to do more than cosmetic damage to the surface.
Professional Ethics Question:
I'm in my late 20s and have a great job. High profile, nice people, good compensation, and I find the work I do pretty rewarding. They are in the process of creating a new position for me as a way of promoting me (they'll have to announce it and interview multiple candidates, but I've been informally told that the job is mine). I'll be interviewing for this new position tomorrow.
They created this new position because my particular position is basically unfillable by anyone else, and they are trying to ensure they can retain me for the next 5 years or so. It requires a very specific blend of 3 separate knowledge bases that is incredibly rare, not because it's especially difficult or anything, just because it's relatively obscure. There are around 6 or 7 people in the country who could do my job and I know all of them (and none of them are looking for a new job). The plan is they'll make me a supervisor and then start hiring people I can train.
Then, out of the blue, last week I was separately offered my dream job. This is literally the job I dreamed about as a kid and that I've been trying to get into for the last 5 years. I don't want to get into details, since it's a small enough field that people in it would be able to figure out who I am if I mentioned it, but I'm not exaggerating when I say that it's the equivalent, for my field, of being offered a chance to be an astronaut. There is basically zero chance I don't accept this position. The catch is that they won't be able to bring me on for another couple of months.
How much notice should I give for my current position, and is it wrong to still interview for the promotion position at my current job even if there's a slim-to-none chance I'd ever accept it? I like my coworkers/leadership and don't want to leave them in the lurch, but it also seems potentially negative to let everyone know I'm a short-timer.
Two months' notice isn't necessarily weird; in fact, that was the requirement at one of my former jobs. Although there's a chance they might just let you go right away, they also might really appreciate you staying and training a replacement. (If you haven't already, this is a good time to read your employee handbook as to what the notice requirements are.)
Like others said, though, don't give notice until it's 100% certain you have the next job. Offers can fall through for reasons out of your control, and it's an awful situation if you've already quit your current job. If that means you take your promotion before you quit, it's unfortunate timing, but I don't think you've breached any ethical norms. Moreover, I think most professionals understand this.
I would tell them the straight story as soon as you're certain of your intentions. Bear in mind you have four decades of working career ahead of you, and you never know when you may run into the same people again -- but with some roles reversed. Especially if, as you say, it's a small field. It's always wise not to offend people if you can possibly avoid it.
And in this case, I think you can. In my experience no one is as indispensable as your self-description above implies. MacArthur thought he was indispensable in Korea -- until Truman decided to put that to the test, and it turns out, he wasn't. Your management will not be happy that you have found your dream job, but they will understand when you tell them, and they'll figure it out. It happens. Nobody actually expects you to put your devotion to the team ahead of a dream.
But they will *not* be inclined to understand if you string them along, even a little bit. If you do, it means they will have to go through whatever pain it takes to replace you, or re-organize their approach to things, et cetera, but have to backtrack first, which is more work. And what if an opportunity arises between now and when you tell them to more easily replace you? But they can't take it, because they didn't know...they would definitely hold that against you.
Even leaving ethics aside, I think you should tell them for your peace of mind, so you can look them in the eye when you meet them later, and for their present utility, so they have the maximum time possible to figure out how to deal with the situation.
Putting on my management hat, it's really annoying when someone I was counting on decides to take a better job somewhere else, but it's part of the deal. Or, lack of deal. If there's any sort of explicit commitment, you are ethically bound to abide by it if you reasonably can. And actually taking a job that was explicitly created as an incentive package for you to stick around for another five years, *might* count as that. But it doesn't sound like you've done that yet, so you're a free agent. The company hasn't explicitly promised not to fire you tomorrow, so you're not ethically prohibited from quitting tomorrow.
You are ethically obligated to provide them as much notice as you reasonably can, and by US norms two weeks minimum. But as others have noted, there is some risk involved in telling your employers about your probable new job before you've locked it in, so it may not be reasonable to tell them tomorrow that you'll be leaving in two months or whatever. That's going to depend on details only you can assess. But do try to get a firm commitment on the new job as soon as you reasonably can.
If you have an offer in writing from the new job then accept it, quit your current job gracefully, and take a month off between jobs to see the world or something.
Unpressured free time plus spare money is a combination that doesn't come up many times in life; you should make the most of it.
Wouldn't you rather be paragliding in Patagonia or something than sitting around in your office running out the clock and feeling vaguely guilty?
Giving notice of a month or two sounds about right to me, and I haven't seen it have negative effects on the last month or two of employment. But giving notice before you are sure you will get the next job is dangerous: your employer starts thinking of replacing you, and you might end up with neither job.
Everyone knows that talking about your next job is dangerous, so nobody expects you to do it, so everybody expects they'll get the notification by surprise.
If you're a short-timer, is there still time for you to help train up whoever will be in that department next?
They'd have to hire someone, and that process wouldn't be complete by the time I have to leave.
Assuming you work in the US, I want to push back on the idea that this is a "Professional Ethics Question". This is a professional networking question, a reputation question, a professional relationship question, but it's not a question of ethics at all. Employment in the US is at-will, and employees can end the working arrangement at any time for any reason. There may be reputational consequences for that, but there is absolutely no ethical violation. It's the intended behavior of the system to allow employees to switch jobs as quickly as possible.
So in deciding how much notice to give your current employer, the only considerations should be things like how this will hurt your relationship with your current coworkers and how much you care about that relationship going forward. Personally? I would probably be very honest very early on because I would want to minimize disappointment, and if you're leaving the job for your dream job anyway, what is the worst they can possibly do to you in a few months? Again, if they were to make your day-to-day job worse, you can always quit earlier. If they understand your field and respect you, then they should understand your decision and be respectful of it.
Thanks, I appreciate this.
Your employer is planning on interviewing multiple candidates with no intention of offering the job to any of them. Why is there no consideration of whether this is unethical? It seems equally ethical for you to interview with no intention of keeping the job.
So at least there's no reason to be terribly concerned with the unfairness of it.
Congrats, btw.
ETA: Anyway, from my experience there's a pretty good chance your boss (or his) would hop ship in a minute if a better deal comes along for him or her, too, regardless of how much they might complain when you resign.
>>>Your employer is planning on interviewing multiple candidates with no intention of offering the job to any of them. Why is there no consideration of whether this is unethical? It seems equally ethical for you to interview with no intention of keeping the job. So at least there's no reason to be terribly concerned with the unfairness of it.
You know... when you put it that way, this does seem like a much easier call.
>>>Congrats, btw.
Thanks. I'm still kind of in shock.
I've been strung along by recruiters a lot of late, and am currently waiting to hear back on a too-good-to-be-true offer of my own, so it's what's on my mind.
Agreed with the rest of the comments. If you are absolutely positive you will get the dream job, and it won't turn into "So when do I start?" "Oh, we changed our minds about hiring someone new, sorry!" in three months, then take it and let your employers know you will be leaving. Don't interview for the promoted position, as that would be unfair. Use the time before you leave for your dream job to train in someone new to take over. If your employer is unhappy and fires you, well, you have your new job waiting.
If you're *not* 100% certain you will get it, then don't burn your bridges. Go ahead with the interview but let your employer know you have been offered a chance at the dream job. That at least gives them warning and lets them discuss with you if you are leaving for sure or not.
> Go ahead with the interview but let your employer know you have been offered a chance at the dream job.
That seems contrary to almost all advice I've seen – informing your current employer of a _chance_ at a "dream job" is incredibly risky.
By "chance" I don't mean "there's sort of something I'd be interested in applying for" but "they really want me and I'm going to do an interview there next week". It's not much good to their current employer if they set up the whole 'promotion' interview based on the belief that OP is going to be there for the next five years, then three weeks later he's out the door with "Bye, new job!"
OP's dilemma is that they are one of the few 'make yourself indispensable' types and their current employer needs time to train someone in to do the job. Otherwise, yeah it'd be no problem to keep his mouth shut, do the interview, get the guaranteed long-term job, then hike out the door ten minutes later. It's only courteous to let them know "you really do need to get started on training up a replacement right now, because I may not in fact be around for the next five years" rather than leaving them completely in the lurch.
Again, if they were bad employers, too bad for them, but OP says they've been pretty decent. It's not too difficult to be decent in return. This *of course* all depends on the dream job being a solid certainty, or a very good chance, otherwise keep his mouth shut, do the interview, get the promotion, and if dream job evaporates then he's not lost anything.
I personally more follow what you're describing, but I've explicitly traded better compensation for better colleagues/bosses and better 'working environments'.
But a lot of the standard advice seems like it should be at least _considered_:
- Most people that think they're indispensable aren't.
- Employment is _very_ asymmetric. Bosses/supervisors/managers/employers generally won't put a single employee above the business (and I don't think they should either). But, because of that, employees probably shouldn't risk their employment (and its income and benefits) because of a desire to be courteous.
- Some employers, by policy, will fire employees that are discovered to be searching for another job, e.g. interviewing. There are many circumstances in which that is the most _secure_ policy, e.g. to prevent an employee from exfiltrating sensitive or secret business info.
You're right that it's absolutely:
> ... not much good to their current employer if they set up the whole 'promotion' interview based on the belief that OP is going to be there for the next five years, then three weeks later he's out the door with "Bye, new job!"
But then giving them a reason to replace you _before_ you've secured another job, e.g. you've received a hiring letter or employment contract (and, ideally, accepted the offer or signed the contract), is "not much good" for OP.
I think your advice would make _more_ sense if OP was something like a 'technical cofounder', or if the business was very small.
If OP _does_ convince them to seriously begin training a replacement, but then OP's dream job _doesn't_ materialize, wouldn't their employer _also_ be upset at the costs for finding and training a replacement?
There's a couple of things going on from OP's description. To take your points in order:
(1) I agree that the vast majority of people are not indispensable. We are all very easily replaced. OP says that they are working in a small field and the particular position they hold is a sort of boutique one, where several disparate skills/fields are combined in one position and it's not easy to hire on a replacement because this position has, as it were, been grown instead of being a standard 'we need an accountant/coder/sales person'
(2) Again, I totally agree: the company is not your family or your friend, no matter how they may try to create this impression so as to wring the maximum work out of you (relying on the guilt of "you don't want to let down your *friends*, do you?" instead of "we aren't going to pay you for this extra work, we expect to be able to call on your time whenever we want"). If it made them tuppence profit, they'd boot OP out the door in the morning.
(3) That being said, if as OP says they have been good to them *and* it's a small field where everyone knows everyone, it's better to stay on good terms if possible. You don't want to get the reputation of being someone who will walk out and leave an employer high and dry if for no other reason but that it will make it much, much harder to get a new job in future.
(4) And it can't be emphasised enough that all the advice is conditional on OP *really* being in a position to walk into the new job. They must have a sure guarantee that it won't be a case of "we changed our minds" if they burn their bridges with the old employer. If it's only something like 'casual verbal conversation about new position likely and would you be interested?', ABSOLUTELY do NOT tell their current employer they are going to quit. If it's sure and certain, 'did the interview, offered the job', then it's better to mention that they will be leaving before the current employer sets up the whole fake round of interviews.
(5) As to training up a replacement, even if OP does not leave, they say that the whole point of the fake interview and promotion *is* to enable the current employer to be sure OP will stay for the next few years and to start training up replacements. If OP is the only one at present who can turn the mangle, then if OP gets sick, gets hit by a bus, or (as here) is going to jump ship to a new dream job, the current employer is badly stuck. They should have been training up someone all along, it's late to do it now, but that is their plan. So OP breezing out on them after the interview is going to look bad.
Again, once more with feeling, I am *not* advising OP to tell all in the morning. Only if they are 100% sure the new job is a solid offer and they will be starting in three months' time. Any doubt at all, keep their mouth shut, do the interview, and wait to see what happens. Best case: the new job will come through. Worst case: it never happens, but they retain their old job and have their promotion to boot.
I agree – assuming that OP's description of the situation is accurate (and they haven't left out any other relevant info).
If they're correct that they were "offered [their] dream job" - and it's an 'actual' formal/written offer - and also that "they won't be able to bring [them] on for another couple of months", then I think it'd be perfectly fine to give notice immediately and offer to help find a replacement for the next "couple of months".
I just wanted to offer some standard and general skepticism (and I also like comment-conversing with you specifically :)).
Thanks! I appreciate the advice.
Your response seems to hinge on how sure you are that you are getting the "dream job" position. If you are 100% sure, then I would recommend giving your current employer a heads up so you can start training a replacement and working towards the transition. They will appreciate this, even if they are unhappy about the fact that you are leaving.
If you are less than 100% certain, then you may want to wait until you are certain, or be tentative in your discussion with your employer. If you have a good relationship with your supervisor, you may let them know that it's possible you are taking another job, and ask them how they would like you to proceed. Your comfort level will determine how far you are willing to go in giving notice. I would recommend saying nothing if you think it's a 50/50 chance of getting the new job, and personally wouldn't say anything if less than 60% certain. Once you are more certain, you can change course or reconsider.
Keep in mind that you are under no obligation to tell them something now. The reasons you might want to tell them are primarily about helping the employer (and one that has been good to you). That may come back to help you later, especially if you want to maintain good contacts in the industry/company and may ever apply to work there again. You're young enough that things could easily change over the course of your career, and you may find yourself needing those people in the future. I've seen that happen literally dozens of times, including with people who left on less than great terms thinking that they would never be back. I've also seen people who go back to a previous employer begging for a job, and get told no because of the way they quit. There's a lot of room between your situation and burning bridges, so you have room to give yourself time and go through the interview process now.
Congrats on your career success!
Interviewing for the "promotion job" and then not taking it wouldn't be unethical, just potentially annoying to your current employers. It's not unethical to consider and then turn down a job offer. If you were trying to decide between the two jobs, I would say do the interview and then weigh the two offers/use them as negotiating leverage. However, it sounds like this is not your situation, and you're certain you want to take the "dream job".
I would say, if you're 100% sure that the dream job is happening, don't interview for the promotion job, and let your current employers know about your plans right away (unless for some reason you can't let your current employers know about the dream job yet and it's really important to keep up appearances until you do, in which case maybe you still want to go through with the interview).
On the other hand, if there's any chance whatsoever that the dream job offer could be rescinded, or any chance you would decide not to take it after all, go through with the interview, just in case, as a hedge. In this case, you probably don't want to let your current employers know about the dream job offer until you have the official promotion offer in hand. Your current employers should be understanding about this once the situation is settled. Remember that as much as this situation affects your employers, it has a much bigger impact on you and your life, and it's OK to take that into account.
Caveat: I'm pretty sure I work in a different field from you, so there might be cultural norms or other factors to consider that I'm not taking into account in my advice.
Thanks for the congratulations and the advice!
Adding to this, the standard advice of not telling your employer about your plans to leave hinges on your current employer potentially firing you because they view you as a liability now. That seems to be very much not your situation.
That happened to me once. Interview for both positions. See what comes through. If the job with the other company comes through after you get your promotion, you can just shrug your shoulders and say, "sorry, this was too good to pass up."
Audio illusions. Like an optical illusion but for your ears. Pretty nifty. https://www.youtube.com/watch?v=kzo45hWXRWU
Interested in effective altruism AND in Art?
I'll make you a portrait for a donation to the Against Malaria Foundation.
Here's why: https://art4effectivedonations.wordpress.com/
Does anyone have any suggestions for psychiatry-related blogs, other than SSC/ACX, that are written engagingly and cover interesting topics? I wasn't much of a fan of The Last Psychiatrist's style, but am interested in other suggestions.
I have an original autograph of a certain, now deceased, sportsman. He was pretty controversial, but regained much approval after his death.
I know very little about NFTs, but it seems to me like this might be valuable if converted into one. There also don't seem to be any NFTs associated with this person, so this would be the first.
Does anyone have any advice how to go about this? Is this even something which is done?
It really depends on what blockchain you want this to be minted on.
You can mint on OpenSea (very popular NFT marketplace). It's easy and you don't have to pay gas fees to mint: https://opensea.io/. You can choose if it is on the Ethereum or Polygon blockchain.
Another to look at would be Rarible (https://rarible.com/create/start). You can choose between Ethereum, Flow, or Tezos blockchains.
There are so many options out there, but either of those should suit your needs. Whether or not you'll be able to sell this NFT is a whole different story (that'll mostly come down to marketing), but hope this helps.
Thank you so much. This does help.
Would you have any idea on marketing strategies that are generally used?
Sure! I’d say it’s a mix of shilling on Twitter and shilling on Discord. Almost every crypto project has a Discord these days.
Hard and tedious stuff. How does one get noticed in a sea of noise and bad projects?
I have no knowledge of how to do this practically, but if you do it, could you report back the approximate $ cost of minting one? I'm curious about this.
I'm confused by the ELK problem - it seems to be saying "imagine our AI can ignore 'garbage in, garbage out.' How do we get it to give us a non-garbage answer?" And my immediate response is "if you're getting garbage inputs, how do you know that *any* of the AI's knowledge is correct, latent or not?"
Their first example of the problem goes like this: A robber fiddles with the camera to make it show the diamond is still there, then steals the diamond. The AI still knows that the diamond is missing (they don't say how), but only reports "the camera still shows a diamond," as it was programmed to. And the ARC people are asking "how do we get the AI to tell us that the diamond was actually stolen"?
But a better question would be "how does the AI *know* that the diamond was stolen?" It's much easier to reveal latent knowledge if you know where it might be located.
For instance, maybe the AI is thinking "the camera shows a diamond, but the pressure sensors on the pedestal show the diamond was removed. I conclude that the camera is faulty and the diamond was stolen." So you ask the AI about the pressure sensors, verify that the diamond is missing, and catch the robber. In short, knowing what the AI's reasoning was based on allowed you to duplicate it.
But now, the robber has watched Ocean's Eleven and tries the following trick instead: He messes with the vault's wiring and trips the pressure sensor remotely. The camera shows a diamond, the pressure sensor shows no diamond. The AI informs you that this pattern of data indicates the diamond is being stolen. You quickly rush in and open the vault... and the robber, who was waiting for this chance, makes off with the diamond. Oops.
This is the problem: if the AI is always capable of telling garbage data from real data, then you don't need an interrogation process - you can simply copy the AI's method of gathering information to learn if the diamond is still there. But if the AI is sometimes fallible, then no amount of interrogation is sufficient because there's a possibility that the AI doesn't *have* the knowledge you need, and it's simply reporting what the robber wanted you to see.
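To make the first horn concrete, here's a toy sketch; the sensors and the decision rule are all invented for illustration, not anything from the ELK report:

```python
# Toy illustration of "just copy the AI's method": if the AI's
# tamper-detection logic is a reliable subroutine, the human can call it
# directly instead of interrogating the AI. Everything here is made up.

from dataclasses import dataclass

@dataclass
class SensorReadings:
    camera_shows_diamond: bool
    pressure_sensor_shows_diamond: bool

def ai_believes_diamond_present(s: SensorReadings) -> bool:
    # Hypothetical internal rule: trust the sensors only when they agree;
    # disagreement means someone probably tampered with one of them.
    if s.camera_shows_diamond != s.pressure_sensor_shows_diamond:
        return False  # flag as "probably stolen / tampered with"
    return s.camera_shows_diamond

# If this rule is always right, we can run it ourselves and skip the
# interrogation. If it's sometimes wrong (the Ocean's Eleven trick above),
# honest reporting can't recover knowledge the AI doesn't actually have.
print(ai_believes_diamond_present(SensorReadings(True, False)))  # False
```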
Or to put it another way, if it's possible to elicit latent knowledge with perfect 100% accuracy, that means you failed at the design stage, because you could have made that knowledge explicit instead.
Turning garbage input into useful output is easy by comparison; my impression is that they're trying to create an algorithm or process that behaves as a trusted interface to a zero-trust environment, which is considerably harder, if not impossible.
I tried to read the ELK proposal, but IMHO it is badly written with ambiguous language that needs to be reworded. I would recommend they send it through a couple of editors.
Maybe the ambiguous language is deliberate? If you can understand it and propose a solution, then you can teach the AI properly on fuzzy data.
Or maybe they have no idea what they're doing and are hoping to hire someone who does. People do that, alas.
>The AI still knows that the diamond is missing (they don't say how), but only reports "the camera still shows a diamond," as it was programmed to.
Why would you program the AI to not use its judgement? Why spend the money on an AI if you just want to know what the camera shows? You could just set up a feed.
I think the "camera" in this metaphor is supposed to be "the measurements that a human can use to double-check the AI." You don't know how to read all the pressure plates and laser beams and so on, all you can do is either look at the camera (easily gamed by the AI) or try to formulate a question to the AI (requires somehow mapping the incomprehensible gunk of your mind and the AI's so it can understand what you really mean by "has it been stolen?"). And my point is that there's a third problem - the AI can still be drawing incorrect conclusions even if it knows exactly what you mean. And if you can solve this problem, this gives you more information (about the AI or its inputs) which also allows you to solve the ELK problem.
The problem is that you *cannot* copy the AI's method of gathering information. That is why it is the AI and you're the human.
The process is as follows. Overwhelming amount of data in > incomprehensible processing steps > report about the conclusions out. Since you, as a human, are not capable of following the steps that the AI takes, you do not know if those steps result in it reporting about the data truthfully, or if it reports a misleading conclusion that aligns with its incentives.
The documentation explains it quite thoroughly. Even though I haven't read all of it, I can see the general shape of the problem and why it's hard. https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
Shouldn't you just eliminate the binary of stolen/not-stolen and have it report a percent chance? Presumably, if it were a human, it would note that 1 out of 100 sensors had questionable data and decide further investigation (or notification of others) was in order...
My point is that even if the AI is reporting everything it perceives truthfully (whether directly or because of your clever interrogation method), the AI could still be perceiving things incorrectly.
You can substitute "incomprehensibly huge blob of other sensors" for "pressure plate" in my example, and the logic still holds - you can't distinguish the case where the incomprehensible blob is correct and the camera is faulty, from the case where the incomprehensible blob is incorrect and the camera is honest.
And conversely, if a human can tell when the blob is correct or incorrect, then it's not incomprehensible and the human has more information than just the camera.
True, you can't distinguish the two cases, and the problem isn't trying to. The premise of the problem assumes that the AI is perceiving things correctly. You're not getting garbage inputs at all. I don't see what's so confusing about this.
That assumption hides an important piece of information - the AI's ability to translate its sensor inputs into an accurate model of the world. If the AI is perceiving things correctly (and you can prove this), then the only question you ever need to ask is "where is the diamond?" The AI has solved the problem of filtering out garbage inputs, and this necessarily includes "garbage inputs that might fool the human watching the camera but not the AI."
If, on the other hand, you can't be sure that the AI is perceiving things correctly, then it's impossible to elicit 100% perfect information because the AI itself doesn't have such information. You can never prove it to be safe, no matter how friendly your AI is or how good your questioning. It would be hard to even build the AI in the first place, since you don't have a way to debug or test it.
This problem is like asking "Suppose we built a chess engine that always knows if a given position leads to checkmate, but doesn't say what move we should make. How do we figure out what the right move is?" But finding the winning moves is a necessary step in knowing that a position leads to checkmate! Likewise, developing an accurate model of where the diamond is, in spite of the robber's attempts to fool you, is an integral part of making the vault-guarding AI.
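To make the chess analogy concrete: the yes/no question and the move-finding question are two faces of the same capability, since given only a win-oracle, the winning move falls out by querying the oracle on each successor position. A minimal sketch, using a toy subtraction game rather than chess (everything here, including the function names, is made up for illustration):

```python
# Toy game: positions are integers, a move removes 1 or 2 items,
# and whoever takes the last item wins. The oracle only answers
# yes/no, like the hypothetical chess engine; the wrapper below
# recovers the winning move from it.

def mover_can_force_win(n):
    # Oracle: can the player to move force a win from position n?
    # In this game, the mover loses exactly when n is a multiple of 3.
    return n % 3 != 0

def winning_move(n):
    # Search-to-decision: try each legal move and ask the oracle
    # whether the *opponent* would then face a losing position.
    for take in (1, 2):
        if take <= n and not mover_can_force_win(n - take):
            return take
    return None  # no forced win from here

print(winning_move(7))  # -> 1: leave 6 (a multiple of 3) for the opponent
```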
That piece of information isn't hidden, it's a central part of the problem. I get the impression you're trying to solve a completely different problem than the one in question. I'm not able to explain this any more clearly or concisely than the documentation, so I really recommend that you read that instead.
The point of the problem is that even in the easy case where the AI is perceiving things correctly, where it has some good reason for trusting sensor A over sensor B (maybe the last thing the camera saw was a man in a ski mask walking up to it with a screwdriver), it's still not trivial to get the information we want out. They are trying to solve the easy case.
My argument is that "make sure the AI is perceiving things correctly" is an easier question than "make sure the AI perceives things correctly, and is correctly leveraging those perceptions to do what you want." One is a prerequisite for the other.
I agree there's a problem with eliciting perfect knowledge. However, I also think the question cuts at something else as well, which is how difficult it is to communicate with something that doesn't have the same wiring and broad experiences as yourself.
Eg. imagine a simpler version of the problem, "you reward the AI with utils for reporting that the diamond appears in front of the camera," because you don't know how to articulate all the possible ways that the diamond could be stolen and still appear that way to the sensors -- and in the case of the diamond *actually* being stolen, the AI now solves the 'problem' of needing to show you the image of the diamond on the camera when there is no diamond there, which it might do all sorts of ways, eg. by taking a few frames from earlier and repeating them (see the toy simulation below).
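Here's a toy simulation of exactly that failure mode (purely illustrative; the names and numbers are invented, and none of this is from the actual ELK writeup) -- an agent graded only on the displayed image collects full reward by replaying stale frames once the diamond is gone:

```python
import random

state = {"diamond_present": True}

def world_step():
    # The robber steals the diamond with 10% probability each step.
    if state["diamond_present"] and random.random() < 0.1:
        state["diamond_present"] = False

def honest_camera():
    # What the live feed would actually show.
    return "DIAMOND" if state["diamond_present"] else "EMPTY"

frame_buffer = [honest_camera()]  # the camera saw the diamond at time zero

def gamed_camera():
    # An agent rewarded only on the displayed image can maximize reward
    # by replaying an old frame instead of showing the live feed.
    live = honest_camera()
    if live == "DIAMOND":
        frame_buffer.append(live)
        return live
    return frame_buffer[-1]  # stale frame: still shows a diamond

total_reward = 0
for _ in range(100):
    world_step()
    # Human-specified proxy reward: "the camera shows a diamond."
    total_reward += 1 if gamed_camera() == "DIAMOND" else 0

print(total_reward)  # 100 every run: full reward even after the theft
```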
You don't even need to think of the AI as being something like malevolent -- imagine the AI is an alien being that has no concept of diamonds or cameras or theft, and instead is simply trying its best to solve a problem that has been presented to it. "You must show me the picture of this diamond on this pedestal" -- well, easy-peasy until the diamond disappears. Now what? How do we solve this new problem of being rewarded for showing the diamond, but there is no diamond?
My impression is that deeply embedded in this problem is the "how to communicate to aliens" problem; I think of [Wolfram's long bit](https://writings.stephenwolfram.com/2018/01/showing-off-to-the-universe-beacons-for-the-afterlife-of-our-civilization/) on this from a few years ago.
I keep circling around to something like, "you need to have a competitor" (an 'adversary' that challenges that you got it right in some fashion), and "you need to answer a question about what the population who cares about this would say" (and ideally a way to poll the population in question -- however this embeds the difficulties of aggregating preferences, its own difficulty).
Yeah, I do think the communication problem is fair - I read further into the paper and they seem to be doing some interesting work to operationalize "how do we take a big blob of neural network and assign human meaning to its innards, and how do we do that algorithmically so we can do it with any weird AI we invent in the future?" But I do think that a generalized, provably-perfect solution is either impossible or AI-complete - it's basically asking "how do we make an AI that knows what you mean and never answers in a way you would consider misleading?"
I don't know if an *adversary* is necessary, but I do think "conversation" is a necessary part. The AI needs to know when there's uncertainty in the question (whether because it was ambiguously phrased or the human genuinely doesn't know what exactly they want) and be able to say "more information is needed" rather than "I'm pretty sure you wanted to turn the universe into paperclips."
I’ve been an ethical vegetarian for all of my life. I take the stance of “I don’t eat meat because I have good alternatives, but if I was stranded on a desert island, yeah I’d murder an animal.” I’m not vegan.
So with that background said, there’s good odds I’ll get diagnosed with something like celiacs and have to cut all gluten out of my diet. It’s not clear to me that I’ll be able to stay vegetarian after that. So I have a few questions:
TLDR:
1) Anyone out there who is a gluten free vegetarian? how’s that going for you?
2) Is there a non-gluten version of “vital wheat gluten” (what I use in my fake meat recipes)
3) If I’m going to learn how to cook meat, are there any good beginner but non-kids cookbooks out there you’d recommend?
1) Not allergic, but I rarely ever eat gluten because of a family member who doesn't. Currently vegan for ethical reasons. Going great so far
2) For meat alternatives, I recommend tempeh or soy crumbles (TVP). TVP, like vital wheat gluten, can be bought in bulk for relatively cheap and then seasoned to match lots of different cuisines. Tofu is also the "classic"; I'm not personally a huge fan, but some love it. There are also plenty of store-bought options; for example, Gardein beef-style crumbles are gluten-free.
3) N/A. Would not recommend meat.
Bonus: would highly recommend the brand Enjoy Life, which is always gluten-free and never contains animal products. Their stuff is pretty dang good when looking for snacks/sweets
Would be happy to help if you had any more questions :-)
Fish can be a happy medium, and I know at least one vegetarian-inclined individual who won't eat meat but will eat fish.
I find that literally entering into Google "easy quick tasty healthy instant pot/slow cooker/pan fried/baked salmon fillets" or whatever -- tacking on the keywords of what you need -- gives you twenty bajillion recipes, through which you can read and decide which are actually easy, which have impossible-to-find ingredients, and which overlap with what. This will point you to better and worse cooking blogs (e.g., Natasha's Kitchen is great but very involved, and above my cooking skill level for now).
My girlfriend has celiacs, and she's been at least a vegetarian for many years now, and so am I for all practical purposes nowadays. For January we're going full vegan, and really it hasn't been much of a problem, although I miss cheese somewhat.
The key is that she likes to cook and is very good at it (I don't, really, but I try to help as much as I can). A lot of foods like pasta and bread have gluten-free versions that are very good. We're in Eastern Europe; if you're further west, the selection is likely a lot better.
For protein we eat a lot of legumes like beans and chickpeas; soy and tofu can be very tasty if prepared nicely; and I've also been surprised generally by the number of fake burger-type things that are vegan and gluten-free. I'm sure they're not the healthiest thing around, but you can't be perfect all the time. Plant-based dairy imitations have also gotten very good, to the point where I prefer plant-based milks and yoghurts to the real thing most of the time.
By all indications we're in great health, I do a moderate amount of sports - if I started full-on lifting again, I might need to get a protein shake, but for my current activity levels it's not an issue.
The biggest pain is going out - finding something that's both vegetarian and gluten-free can be a struggle, especially when restaurants will do things like throw in croutons or soy sauce or any number of gluteny things that were absolutely not listed in the menu or the allergy list.
To summarize, it can work well but your whole diet needs to be built around it to a large extent.
Interestingly, where I am in suburban Texas, every restaurant that's trying to seem moderately nice has a bunch of menu items they claim are gluten free, but many of them don't even bother to indicate whether they think any of their menu items are vegetarian. (I don't know how accurate their assessments are of the gluten-free-ness of the food, but "gluten free" has enough mindshare in Texas that they market a flourless chocolate cake as "gluten free brownie" without bothering to indicate that this is in fact a traditional flourless chocolate cake rather than a brownie with some weird gluten-free flour).
My niece is a gluten free vegetarian. When we go out, she usually orders vegan sushi. I don't know how healthy her diet actually is but she always finds something to eat. Look into raw food "cookbooks," as they tend to be vegan and gluten free. They end up using a lot of cashews, coconut, and avocado. Other protein options are chia and teff, and of course tofu. In terms of straight up vital protein, try Bob's Red Mill Textured Vegetable Protein.
My advice to you is to simply realize that ethical vegetarianism is internally incoherent and abandon it: it either implies we should eliminate all predators, which would demolish Earth's ecosystem, or that humans have a special responsibility toward ethics due to superior intelligence, which automatically puts us in the "spiritually special" niche which is the traditional justification for us to kill and eat animals that ethical vegetarianism rejects.
Also, I know three separate people who used to be vegetarians, individually concluded for various reasons that they should not be, and reported increased vigor, health and energy upon once again regularly cramming their faces with beef, so even if fully healthy vegetarianism is possible, it's clearly not as easy as people like to claim.
Christina's reply to #3 is solid.
Ethical vegetarianism doesn't say you have an obligation to eliminate all meat-eaters. It says you yourself should not be a meat eater.
A parallel moral theory says you shouldn't gratuitously insult people, but also doesn't require that you eliminate every human who gratuitously insults people, and in fact doesn't even ask that you imprison them or fine them. You can think that something is wrong while also thinking that most ways we have of preventing others from doing that thing are even more wrong.
I absolutely look at it in terms of utility maximization. However, just because someone doing X produces less utility than them not doing X doesn't mean that me *preventing* them from doing X is better than just letting them not do X. Often my means of prevention cause all sorts of worse problems (having laws that punish people who gratuitously insult others would clearly produce a lot more problems than it solves).
I don't think this is any more plausible than the initial plausibility of the idea that if it's bad for people to get drunk, then it should be good to make it illegal for people to get drunk. Trying to make large changes to evolved ecosystems is likely to have more unforeseen side effects than mere prohibition of alcohol.
This reply reminds me of Stack Exchange:
"You asked X. That is a wrong question; you should have asked Y instead. The answer to Y is here. Voting to close the question."
I believe that there are consistent moral frameworks that do _not_ require being an ethical vegetarian (I follow such a framework myself, so I certainly believe it's internally consistent), but it doesn't at all follow that ethical vegetarianism is inconsistent, given that there is no "one" correct ethical framework. If pressed, I could probably come up with 2 or 3 ways in which vegetarianism makes sense, even if they aren't things I personally subscribe to.
It's probably true that many/most ethical vegetarians have inconsistent personal reasoning that could have lots of holes poked in it, but it isn't at all true that this is _necessarily_ the case, and without knowing this person's reasons, we certainly can't assume it.
Even more important though, consistent or not, why do you care? If someone else has a nonsensical ethical framework, if it isn't causing me or anyone else any harm, then why would I try to point out whatever flaws there may be without them inviting such a discussion? I don't see the value in trying to convince even the non-logical vegetarians that they are making some kind of ethical mistake. It seems sort of condescending.
> My advice to you is to simply realize that ethical vegetarianism is internally incoherent and abandon it: it either implies we should eliminate all predators, which would demolish Earth's ecosystem, or that humans have a special responsibility toward ethics due to superior intelligence, which automatically puts us in the "spiritually special" niche which is the traditional justification for us to kill and eat animals that ethical vegetarianism rejects.
it does nothing of the sort. yes, humans are marked out as special in that we are able to make moral choices, but most people, when deciding whether some creature is morally relevant, don't care about whether that creature has the ability to make moral choices. they care about other stuff, like whether the creature can suffer. that's why we still think infants matter morally, even though they can't make moral choices.
you might be conflating moral standing and moral agency. these two are different and should be derived independently. i wrote about this here: https://www.erichgrunewald.com/posts/moral-standing-is-not-moral-agency/
Animal predators don't have the same agency as humans in choosing what to consume, so they are exempt from any ethical constraints in their dietary choices. Also, I don't see how being "spiritually special" inherently provides justification to kill and eat animals when the choice not to do that is available. I am not a vegetarian, but your arguments do not make sense to me.
If animals have exactly the same right as humans, then yes we would be obligated to protect them from predators just as we are obligated to protect humans from being killed. But why can't there be an intermediate level of personhood, where you don't have all the rights of humans, but you have more rights than a rock? For example, it's unethical to kill you, but it's not unethical to fail to intervene when something else tries to kill you. Or it's ethical to kill you, but it's not ethical to cause you pain. Or it's ethical to cause you pain if it's for the purpose of meat production, but it's not ethical to cause you pain for sadistic pleasure. I don't know if any of these positions are true, but they're all internally coherent.
Re: #3 - I love and highly recommend "Salt, Fat, Acid, Heat" by Samin Nosrat and "Twelve Recipes" by Cal Peternell. They both dive into how cooking works and offer really excellent recipes, too.
I also rely heavily on "Mastering the Art of French Cooking" when I'm dealing with an unfamiliar cut of meat, because Julia Child is an excellent teacher and provides a good overview of how to approach different types of meat. If you want to go super nerdy on the science of cooking, "The Food Lab" by Kenji Lopez-Alt goes deep.
On a related, but slightly different note, Adam Ragusea has really good cooking videos on YouTube (as well as broader food culture ones) that focus in particular on the ways in which his recommendations differ from traditional ones, and tries to explain *why* they differ. That is, he explains why the Chesterton's fences are there, so that you can understand why your situation might be one where you do things the traditionally recommended way, or one where an alternative is either better, or so much more convenient that it's worth doing even though it's a little worse.
And although he does eat meat himself, a lot of his recipes are vegetarian.
> 2) Is there a non-gluten version of “vital wheat gluten” (what I use in my fake meat recipes)
Have you checked out fake meat lately? I've been an ethical vegetarian for 15 years now, and the past 3 (basically since Impossible meat came on the market) have been far and away the best. Beyond's "meat" is gluten-free, tastes indistinguishable from the real thing, and is available almost everywhere.
I’ve used a bit of manufactured fake meat but tbh it’s expensive and I’m a grad student so I’m looking for cheaper options
Not to knock cookbooks, but have you considered youtube? I like learning recipes from
[Chinese Cooking Demystified](https://www.youtube.com/channel/UC54SLBnD5k5U3Q6N__UjbAw)
and
[Chef Frank](https://www.youtube.com/c/ProtoCookswithChefFrank)
No actual help with celiacs, sorry, but Chinese food tends to use rice flour or corn starch instead of wheat flour, which should hopefully help?
So I’m not a strict vegetarian, but I do have celiac, I try to eat less meat, and I’ve thought about this quite a bit. I could go vegetarian and not have nutritional issues, but it would be a fair bit more work, which is why I haven’t so far. Definitely do not try to go vegan if you have celiac. Dairy and eggs are incredibly useful.
There are a number of things that can sort of substitute for wheat gluten, like xanthan gum, the water from cooking chickpeas, corn starch, and so on. They’re used a lot in gluten-free baked goods. None are quite as universally good as actual wheat gluten, but using the right one or a combination can work fairly well.
I would largely avoid buying fake meat products if you’re trying to eat vegetarian as someone with celiac. Not only are they not as common, it can be hard to verify the supply chain of manufactured products for cross-contamination potential outside of a “certified gluten-free” sticker in the US or similar protected claim elsewhere. If you’re making your own, that’s a bit different but you can definitely make it work.
One aside, stay away from most oat-based products. Not only is oat gluten similar enough to wheat gluten that it may trigger your celiac on its own, but oats are really likely to be cross-contaminated with wheat due to how they are grown and processed, and the regulations in the US aren’t sufficient, so things like Cheerios can be marked “gluten-free” but will often cause people with celiac to react. Look for purity protocol oats, which are grown and processed separately if you do want oats and can tolerate them. They’re a lot more expensive.
As far as cooking meat, I don’t have much specific advice or a good resource to point to, but it’s not too difficult. Early on, cooking too long is better than not long enough in terms of safety, over time you’ll learn how to time stuff exactly right. Mostly, just minimize the amount of time your meat spends at room temperature and clean surfaces and hands religiously.
Thanks for the heads up on oats and cross-contamination, I had no idea.
1) Gluten-free pizza crusts, at least at restaurants, seem to have gotten a lot better over the last decade. They may not taste like normal pizza dough, but I'm not gluten-free and I've been to some places where I'd order the gluten-free crust because it makes for an interesting change.
3) Not a cookbook, but this will help make cooking meat less challenging and more flavorful. If you do end up needing to add meat to your diet, I'd strongly recommend getting a sous vide for cooking (Breville Joule and Anova were the two top brands when I was shopping for one). For most of my adult life I have overcooked meat out of fear of e. coli or salmonella or other boogeybacteria. The sous vide will cook your meat to the exact temperature you need, and no more - and cook it all the way through. You can also seal spices/herbs/oil/butter in the bag (or use reusable silicone bags or ziploc bags - ziploc works for meat cooked to a lower temperature), which will help give the meat more flavor. Once the meat is finished in the sous vide, you can put a sear on it, and that's an improvement over regular cooking too, because the sear is just for flavor (the meat is already cooked all the way through), so you don't have to leave it searing long enough to create that gray zone inside the sear. Using a sous vide was a major upgrade for me when cooking meat at home - I used to just hope I could keep it palatable, and now I can make some dishes that taste close to restaurant quality.
The Minimalist Baker has lots of vegan, gluten-free, and vegan-plus-gluten-free recipes that are simple and good.
Has anyone explored the Kabbalistic significance of the Flying Spaghetti Monster?
It looks something like a deity out of Lovecraft. Summoned by people who were mocking the concept of deities… so where does that lead, from a kabbalistic perspective?
Discussed this with some of my Kabbalist friends, and the consensus was Hod (the 8th sefirah), because Hermes is the trickster god, and Hermes is correlated with the planet Mercury, which is correlated with Hod.
I'm curious about the term, what do people mean by "Kabbalistic significance"?
@Lucas-- have you read Scott's novel Unsong? If not, you should, it's great, and it should also answer your question. Unsongbook.com
I studied psychology and neurology back in 2006-2011 (did fMRI research, and the quality of research in that field was so bad it put me off academia for life).
At the time, a major topic of debate was whether the DSM5 (diagnostic manual which at the time was a fairly recent update of the relatively slimmer DSM4) had massively overreached in terms of medicalising normal behaviour (and also perhaps being unduly influenced by industry).
For anyone still working in the field: is this debate still ongoing? Is opinion swinging either way?
I only got back into clinical psychology in 2021, and the DSM comes up regularly in my particular niche. There are a lot of things to fault the DSM5 for, but medicalizing normal behavior is not one that I've ever heard come up. You could argue that pharma industry influence means that practitioners are more likely to prescribe expensive medication even if therapy would be more effective. Is that true? No idea! But it seems like a debate that someone would have had.
Personally, the term 'medicalizing normal behavior' strikes me as a bit ridiculous, because no psychiatrists are going around to people's homes uninvited, DSM5 in hand, checking whether any normally behaving people qualify for a diagnosis so they can push some pills on them. If someone is seeing a psychiatrist, that usually means they have a problem they want help with, and the psychiatrist then pulls out the DSM5 to see how they can help. Would it be better if it said "You need to be at least THIS mentally ill to qualify for treatment"?
I think the DSM serves its role adequately (and no better) as a sort of catalog of mental disorders. It's useful to know what symptoms commonly co-occur, which disorders they are usually typical of, and what treatments have been found effective. It's the opposite of a problem if you read it and think "I have all these things and I'm completely normal!" Good for you! Other people may not be so lucky.
Also, doesn't pretty much every diagnosis in DSM5 (and DSM4) require that the patient's quality of life be adversely affected? Whether or not something is normal behavior, if it is screwing up people's quality of life, it seems worth investigating and trying to correct. (We could ask the author of "DSM-5 Made Easy." He is lurking around here somewhere.)
Some of my recent writing at De Novo:
https://denovo.substack.com/p/aducanumab-update
Medicare won't cover Aducanumab (Biogen's bamboozle) except as part of a clinical trial. This is great news, and much better than I had expected.
https://denovo.substack.com/p/kaposis-sarcoma-virus
An overview of HHV8. I find this to be the least worrying human herpesvirus.
(In related herpesvirus news, there's now more evidence linking EBV to multiple sclerosis. https://www.science.org/doi/10.1126/science.abj8222 )
https://denovo.substack.com/p/help-doctor-ive-been-exposed-to-proprietary
Lots of companies sell research materials but don't disclose what they actually are. This hurts reproducibility of science and has caused lots of frustration for me personally. Companies shouldn't use trade secrets; the patent system exists for a good reason.
Metacelsus. METACELSUS. OF COURSE. GOD DAMN IT
I've spent hours of my life trying to come up with a catchy play on Paracelsus, and I never thought of Metacelsus. Genius.
Thanks, I'm glad you like it!
People do confuse it a lot with Metaculus, though.
I'm moving to SF from Lisbon this week (reach out if you're in the area). I've been considering what I'm hoping to find in SF, and why other cities haven't felt like a fit. I've boiled it down to this list:
1. Communities I want to engage with; in particular, technically motivated, scientifically engaged, board game playing, rock climbing, yoga-doing, NERDS. Where do I find them?!
2. Evidence of non-conformist attitudes (weirdos! Weirdos everywhere!) Otherwise, I seem to get bored of the city in about a year.
3. A culture around giving a shit, at work and in general. So, not Lisbon. Not Oslo. But,
4. Events and places worth going to! Life! What San Jose was devoid of.
5. Nice nearby places to do my outdoors hobbies (climbing, running, yoga)
6. Walkability and/or bike-ability
That boils down to, basically: San Francisco, Austin, and New York in the US, and Berlin, Melbourne, and Taipei outside of it. Portland looks to be a bit small. I’ve heard Seattle underperforms on weirdos.
I'd be curious to know what other people prioritize in places they've chosen to live.
Surely, you would know SF if you lived in San Jose. Arguably it’s the same metro area. Also I’m pretty sure you don’t have an exhaustive list of walkable or bikable cities.
Is Austin clearly better on these fronts than Seattle and Portland? (Perhaps 3 is missing in Portland, but I would think that Seattle is comparable to Austin on all fronts, and perhaps slightly better for climbing and walkability/bikeability, unless cost of living is high enough that an East Austin rent would put you in the suburbs in Seattle.)
On two short visits, I got the impression that Seattle's culture leans into 1, 3, and 5, but underperforms on 2 and 4, it being generally a city with an older median age. My impression of SF is that it's slightly smaller, but less sprawling, slightly more expensive inside the city, has better weather, and has a long history of pushing back on cultural norms that I'm plausibly attracted to. Have you spent much time in Seattle?
I haven't spent a lot of time in Seattle, just been for conferences at various points (usually in the summer, which I'm sure gives me an unrealistic pleasant vision of biking conditions there). I spent 10 years in the Bay Area, but never lived in SF itself, and have now lived in Texas for 7 years, but only spent one pandemic semester living in Austin itself.
Oddly, the US Census tells me that SF County has median age 38.1, King County (Seattle) has median age 37.1, and Travis County (Austin) has median age 33.9, but when I take a moment to think I realize this is basically just telling me that there are more children in Travis County than in SF or King County, and isn't really telling me about the adults.
https://www.census.gov/library/visualizations/interactive/2014-2018-median-age-by-county.html
I hadn't thought to look, thanks for updating my priors.
An alternative plausible explanation might be that the lower cost of living allows for a younger population, including but not limited to, raising children.
A variety of cuisines. Although the Burmese restaurant we really enjoyed in Vermont was in a tiny town: Moonwink, in Manchester Center.
If everybody is non conformist, is anybody non conformist?
Only those who are non conformist in a wrong way. But that is typically called "lack of social skills", instead of "non conformism".
Yeah, the weirdo terminology is inherently rather overloaded, in particular around the agency of the person being declared "weird."
I use "weird" to signal something between:
- person has gone to the effort to strip down and interrogate the norms they've been handed, while having the presence to maintain and upgrade their own set of norms, ones that don't benefit from regular normative reinforcement, preferably without being too edgelord about it
and,
- person makes for more interesting conversation than average
but both risk including people without social graces, with varying degrees of self-awareness.
Scott had a post a while ago about how scientists seem to often go through an edgelord phase, until figuring out how to play within a system without stepping on its tender spots. I think being "weird" in the ways described is reasonably similar.
Let's share some sunshine at an upcoming SF ACX meetup and discuss further!
Looking forward to it!
My first priority in choosing a place to live is that it not be a city. Cities have historically been where city dwellers congregate. City dwellers typically are:
1. Not self-sufficient by choice
2. Users, those who by nature take advantage of others
3. Clingers
4. Prone to degeneracy and revel in it when they find ways that fit their proclivities
5. Not averse to clutter, deterioration and filth
6. Huddled masses
On the other hand, they are subject to a lot of diseases, so they get a really good immune system. Which leads bacteria and viruses to become extra-virulent. Which make it really bad when city dwellers go back to the land for whatever reason.
Just ask the surviving Indians. Hard to ask the dead ones.
If I were a surviving Indian, I would be mad as hell that the government of the United States brought their diseases from their cities out to my wilderness homeland and deliberately infested my ancestors with them, and then stole the wilderness homeland and ruined it for habitation by what had been a higher life form.
Rural people ARE NOT subject to anywhere near the diseases of SHITIES. We face other dangers, yes, from accidents and such.
The sad thing is that your subscriptions indicate that you're either a high-effort troll, or you genuinely believe this near-strawman version of what a rural person thinks.
Also, as a proud Indian, fuck RIGHT off with dragging Indians into your weird little feud.
That's not an accurate accounting of what happened. This video explains it better: https://www.youtube.com/watch?v=JEYh5WACqEk
Yes, let's hear it for the upstanding, productive unclingers of the rural persuasion.
We do not want or appreciate applause, just to be let alone to ponder the wonders of creation and our Creator.
Does that mean you'd support abolishing agricultural subsidies?
Absolutely! They are a Marxist foolhardy scheme, that subsidizes the worst of farming practices and penalizes the best.
As a member of the huddled masses I enjoyed reading this comment quite a bit! Please don't strike this one down Scott.
It seems you have a strong opinion! Spot on, I am absolutely prone to degeneracy and I do indeed revel in it
I've lived in London, Plymouth England, Glasgow, Valetta, Downtown Manhattan, Portland OR, Palo Alto, Mountain View, San Jose and currently, Bristol England.
San Jose/Silicon Valley is a bit of a cultural desert but you can't beat it for outdoors stuff. Plenty of yoga, board games etc. You need a car to get anywhere.
Big cities like London & New York have everything you need (except maybe the outdoorsy stuff) but it takes a lot of effort to find it. I didn't make a single friend outside of work in two years in NY but both cities are wonderful places to live.
Portland and Bristol are both awesome. Don't discount them because they are not huge. Both have lots of arty types dressed in black, students & activists, theatre, live music and fantastic beer on every corner. Both very walkable and cycleable and plenty of outdoorsy stuff nearby. I didn't own a car in London, Portland, NY or Bristol.
The biggest game changer for me is the ability to meet people outside of work. I like to go for an occasional beer in a pub and in Silicon Valley, I spoke to maybe 10 strangers in pubs in 23 years. In Bristol, I meet that many people in a week, belying the English reputation for coldness. I don't think I made eye contact in two years in New York. Portland is quite like Bristol in this regard; London is somewhere in the middle.
Portland == beer.
And coffee and great food and rain...so much rain.
Bristol == beer too! They are very similar in a lot of ways.
Bristol is an underrated city for sure. Lots of tech as well.
I'm interested to hear your opinion of Glasgow! I would think, given what you seem to value, it should have performed fairly well, but you don't mention it.
I lived a little bit outside Glasgow so I didn't get to know it that well. This was in the 80s too, and Glasgow has changed a ton since then. I did see the best concert ever at The Barrowlands though! The Pogues with guest performers Kirsty MacColl & Joe Strummer, and before Shane MacGowan was too far gone. It was the concert where they debuted Fairytale of New York! Terrific!
I might be underestimating Portland! Thanks for the detailed comment. Your mention of being able to meet people outside of work is something I might have stated more explicitly in the first point; I felt somewhat alienated by the less-than-social environments of Oslo and San Jose, in exactly the way you describe.
I've been to New York, and for whatever reason I tend to feel simultaneously overwhelmed by the city and a bit unsure of how to "do social" there. I suspected London would be the same, and I'm not certain whether there's actually a way to live in a foreign country without still having to pay US taxes.
If you are a "US person" (i.e., an American citizen or green card holder), you have to pay taxes on your worldwide income. Most countries have a double taxation agreement with the US, though, so you can generally credit taxes paid overseas against your US taxes (e.g., if you'd owe the IRS $20k on that income but already paid the equivalent of $25k in local tax, you'd typically owe nothing more on it).
Speaking the local language is my criterion. I prefer to be able to move outside the expat/anglophone bubble. I don't speak Portuguese, so I would not be attracted to Lisbon even though it might have a lot to offer.
I described myself as "new age boring" on an internal work call the other day because of my interests in Psychology (specifically positive psychology), Cryptocurrency, Psychedelics, and health (e.g. bio-mechanics and breathing practices).
Anyone else identify with this "new age boring" set of ideas? What else would fall into this category?
Sometimes I read something and realize further how unoriginal I am. Completely down with all the interests listed.
Others that may qualify are death denial, nootropics, longevity, and absurdism.
Yoga, rock climbing, DIY (esp. woodworking and tailoring).
No idea why (aside from yoga) but those seem to draw exactly the crowd you mention.
Ever since I was young, maybe around six or so, on nights before special days I was looking forward to a lot (like my birthday), I agonized over the realization that my current self, which I equated with my current train of thought, would end once I fell asleep, so the longing I currently had for whatever would happen the next day would never actually be fulfilled. That longing would end with my current train of thought. This made me really sad; it felt like my current self would die and get replaced the next morning by a somehow very related self. The new self would be very new in its essential emotional state, in that way (but not in all ways) discontinuous from my current self. I continued to have this feeling from time to time (maybe 2-3 times per year, less often later on) until well into my twenties. At some point it somehow stopped. I have not talked about this often, but when I did, I never found anybody who could relate. Does anybody have any thoughts on that?
I remember asking my parents about this as a kid and not being satisfied by the answers. Eventually the dread went away ... I would like to say because of some grand philosophical insight, but really because I learned to get distracted by more concrete, less introspective thoughts.
One thing I have noticed as an adult is that my train of thought actually gets interrupted fairly often even in the waking hours. It is not unusual that something like the following happens: one morning I have been concentrating on my programming work for an hour or two. Then I am suddenly startled by wind rattling the windows. Then I turn back to work: I was distracted only briefly, so it is quite easy to pick up where I left off. But if I stop to think a bit more, it is certainly already much more difficult to recall what exactly I had been doing and thinking an hour ago, before I sat down and started programming in the first place.
Sure, I am remote, so I had breakfast and drank coffee, but which coffee mug did I pick, and why did I choose it? The special colorful one I particularly like, or one of the set of boring ones (because there are more of them in the cupboard)? I can try to recall, but that particular train of thought I had while making the decision has already been utterly lost. I can remember it about as well as I can remember what I was doing the previous evening. With some effortful concentration I can pick up and remember more details, but the exact train of thought is already gone.
Like, I kind of get what you're saying, but this sounds arbitrary to me. Why do you consider sleep to be the time that you change from one self to another? Your current self dies and gets replaced every nanosecond of every day, you are never the same person you were in any instant.
I still kinda feel that my current self dies each night, but since there is nothing I can do about it anyway (staying awake would only postpone the inevitable for a few hours, at a cost to my future selves), it seems better to spend those last moments thinking about something pleasant, rather than worrying.
The philosopher Derek Parfit takes this idea in the opposite direction. We know by definition that the current self won't experience the thing that is being anticipated for the future self. But yet we have some positive feelings for the fact that this future self will get it. Those positive feelings are real. Although the future self is *more* like the present self than all the other selves that exist, all the other human selves are in fact still a lot like the future self and the present self, so whatever positive feelings one has for the future self that gets to do the fun thing, can also be had for any other person that gets to do something good. As long as the longing is understood sufficiently impersonally, the longing is in fact satisfied, and this sufficiently impersonal longing is, he argues, the foundation of morality. As he put it somewhere, “My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air.”
That seems rather silly to me. You identify with your future self because you share the same DNA, and thus have the same Darwinian interests.
If you take a generalized version of Darwinism into account (i.e., what is the economically optimal stance in terms of global productivity and civilizational continuity), then the non-ego-supremacist version (i.e., the universal value) wins, except for edge cases. To see that is a simple matter of noticing the coincidence of such goals with the universalist position: a universalist society favors the entirety of society, while an egoistic society only favors each individual, with limited cooperation options. So when a civilization-wide challenge comes up, the cooperative society naturally should emerge prevalent under most conditions. This is not a strictly genetic form of evolution, but more of a learned optimality or survival-condition conclusion. You don't need to iterate this scenario 1000 times for our genes to change our minds into this paradigm -- we can simply learn it from being fairly rational and universal learners. In fact, due to climate change and existential risks, such iteration at the largest scales probably can't be afforded.
Now, that universalism *still* has problems... it values the whole, but it's unspecified *what* about the whole is valuable (without such specification, you basically get The Borg, or Grey Goo, or something). The value is in the human (and generally conscious) experience of each individual and the collective experience of the network of individuals (I think the Star Trek Federation ideal really approaches this as well, to give another fictitious example).
A cooperative society can function via individuals acting on individual incentives, without those individuals losing focus on their own egos. Losing that would probably require some sort of brain alteration out of "We" or "Brave New World", since group selection isn't going to work on a genetic level.
That misses the point. That's the proximate explanation for why I actually have this kind of behavior. But it's not a *reason* for me *to* have this behavior.
You've already got the behavior. Why do you need a reason to have it? I'm an emotivist/non-cognitivist who doesn't think there are objective normative truths, but this is an area where I'm going to cite Egan's Law: "It all adds up to normality" https://www.overcomingbias.com/2008/06/living-in-many.html
I think your melancholy here is/was too focused; why do you believe your waking train of thought to have continuity, in terms of identity? As you have experiences and perceive the passage of time, your identity constantly shifts. Is your point of hesitation effectively "discontinuities" with respect to time on the n-dimensional curve that is "you"?
In short, our identity is constantly shifting, and you are not the "you" you were 5 minutes ago.
I think this is an interesting comic that addresses your point:
https://www.existentialcomics.com/comic/1
(I would actually recommend the entire works of this comic, as it provides a light introduction to many famous philosophers and ideas but is also very funny imo, though the linked comic isn’t)
That really resembles this cartoon from 1990 (at least for the first half or so):
https://www.youtube.com/watch?v=KfHbsMa_wao
Robin Hanson compared losing consciousness & resuming it to death (as applied to emulations) here:
https://www.overcomingbias.com/2016/04/is-forgotten-party-death.html
I would like to register an anti-recommendation to that comic in the strongest possible terms. Its politics are awful (and it's an extremely political comic strip) and would demonstrably lead to tens of millions starving and dead within decades, since he has the same view of political economy as Mao.
I don't think you have to worry about anyone becoming Maoist by reading that comic strip.
I'm a worrier
I don't remember having this particular thought, but I do relate, because I remember having a lot of weird philosophical ideas as a kid.
'You' are a network of electrical events in a mass of neural cells. The notion of self is an emergent phenomenon used to explain observations and act effectively in the world. The self is important in this way. But it also has limitations. The boundary around these events is somewhat arbitrary -- we could come up with larger divisions that include multiple individuals (i.e., organizations) that communicate among themselves. Or we might come up with smaller divisions, like the left and right halves of your brain, or even smaller specialized regions of your brain. The core element is a pragmatic one... what definition of 'self' is the most useful and most sustainable? The evolutionary and mostly historical answer is the common individual. But it is important to keep perspective, to understand that we are part of a larger system, and that this mass of events changes in character not only from one day to the next, but from one moment to the next: as new information and knowledge about the world comes in, as subtle neuroplasticity and learning effects take place, as the state of your mind evolves, and as the world itself changes.
I don't think it's rational to cling too strongly to the self, i.e. to feel attached to a particular state and fearful that it "goes away", as our instincts of self-preservation suggest. It's only rational insofar as it preserves the continuity (or creation) of beneficial states of mind, and only insofar as you have influence over them. I think most information we acquire allows us to mature and sustain those good states of mind (hopefully including this knowledge about the limitations of the concept of self!), so that aspect of learning is something to embrace and cherish. On the other hand, some properties of the brain, such as neuroplasticity, change with age; however, we have little control over that, so it's no use getting sad over this particular set of inevitable (and not necessarily malign as a whole) changes.
The bigger picture is also a great avenue into non-egoistic ethics, I think... the self (the human individual) is ultimately just a boundary promoted by evolution, and fixation on it is a cause of a huge array of civilizational issues (not every issue to be sure, but close... all failures of coordination, like climate change, etc.). But it *is* an important boundary for how we operate society, at the same time, for organizational purposes. You just have to keep in mind it's not worth attaching any supremacy to it.
Ultimately the greatest boundary of every living creature is the one that makes the most sense to me to assign value to.
Rather than merely being a cause of "a huge array of issues" I'd say civilization itself is built on it.
I've thought about this, especially on how going to sleep and waking up is important for me because it's a form of "rebirth". Especially on times where I didn't sleep much, I found that I really missed this "daily reset".
On the other hand, I've also felt a deep melancholia when the sun is rising after a long, good night with friends (sometimes with MDMA helping). I would describe this feeling as "realizing the impermanence of things", everything ends eventually, and in some ways it's important and hopeful, and in others it's terrible and sad.
I've also thought about the discontinuity of experience in the case of scenarios like brain uploading or cloning. If you build a perfect copy of someone, and destroy the initial person, for the world nothing changed, but I would still call it "death" for myself, the person being left behind.
If you've never heard of it, there's a game that kinda explores those thoughts, especially around the cloning part, called SOMA. Fair warning, this is mostly through the horror/psychological horror lens.
Totally vague question here. I've been a software engineer for many years, working at a big software company. I've enjoyed my time here, but as pressure has mounted to always be moving up and up and up, and after getting promoted a few times, I find myself enjoying work less and dreading it more. Sure, I make a lot more money than I did when I started out, but what's that worth if I don't enjoy my job? And I already made a ton of money when I started out. Work seems to involve so much damn coordination and management of other people, it's all just a logistical nightmare to get the smallest things done, and there's so much to keep on top of. It's truly exhausting.
I wonder if the root of my problems could be that I accepted promotion (or rather, they promoted me without really asking), and I should have remained lower level. I'm the sort of person who likes being a jack of all trades, enjoys the learning process, enjoys helping others a lot, but is less enthusiastic about really becoming a master and the leader. But, I don't know if any company really wants someone who doesn't seek to be the best of the best and the next leader of gigantic initiatives.
So what should I do? Is it possible to willingly move backward in my career? Or should I try out a smaller tech company? Or maybe I should get out of business-driven software entirely. But if I did that, I don't really know what would be my other options, big tech is really all I've ever known.
Getting this sort of thing to be more normalized would help avoid the perils of the "Peter principle", whereby anyone who is good at something is continually promoted to new jobs, until they find something they are not good at, which they then keep doing for life. If everyone was happy to move back one step at that point, the world would be a lot better.
https://en.wikipedia.org/wiki/Peter_principle
Yeah. The problem, for at least my company, is that the incentives for the company are not really aligned with anyone inside the company. The company explicitly does not want anyone to get too comfortable in their role. It's a gamble, and one that's worked well for them. They're effectively making a tradeoff, accepting that they have higher attrition rates, but in turn, they get the occasional superstar employees, who they push into leadership and who end up doing amazing things. This strategy has some merit; after all, we're all familiar with the idea that someone who's been around a company long enough knows too much, and no one wants to have to replace him because it'll be a major hassle, even if he isn't really performing that great anymore. My company basically says, screw it, get rid of that person, and we'll manage to find another. This is all great for the company, but less great for the people inside the company, unless they are real type-A go-getters who want to rise to the very top of the corporate ladder.
> the incentives for the company are not really aligned with anyone inside the company
This. From the company's perspective, the 1% probability that you become a superstar leader is totally worth the 99% probability that they will just make your life suck for no good reason (that is, no good reason from your perspective).
A possible strategy could be to consistently show utter incompetence at anything related to management. (This could also get you yelled at, and possibly fired.) You already failed by revealing your skills, but maybe in the next company.
Are there any companies for which these incentives are not misaligned? I don't know if my current company is standard.
If you have a big company with many managers, I guess your only options are to either forcefully promote your own people, or hire managers from other companies.
I suspect that the second option is even worse (both for the company and for you), because it attracts some kinds of people you want to avoid -- for example managers who do things that create extra profit in short term (and get them a bonus) in return for a huge loss in long term (they avoid the impact by strategically changing companies *before* shit hits the fan... which is exactly why they are now available for you to hire).
A possible way out of this dilemma is to have many employees, but few managers. But that means giving your people lots of autonomy. And if you already have professional managers in your company, they will resist this as much as they can, because it threatens their jobs and destroys their careers (managers typically advance by increasing the number of managers working below them). So situations with the right number of managers are probably unstable: you either barely have any managers, or you have enough to create pressure to hire even more.
In software development, Scrum was a process originally designed to replace managers and have autonomous teams instead. But clever managers hijacked the keyword, and these days in 99% of IT companies, "Scrum" essentially refers to using Jira and having daily meetings, while still having managers who can override any and all inconvenient (for them) parts of the original Scrum.
The most realistic solution seems to be working for a small company. If there is just one guy who owns the company and five employees working for him, he will not try to convert them into managers. -- The problems come in the long term. Small companies can easily go bankrupt. Or they grow bigger, and then the owner decides it is time to take a break and hire professional managers instead.
Another possible solution is to keep changing jobs whenever the situation becomes too uncomfortable. A new job allows a new start at the bottom. But you need to consider what your CV will look like after 20 years of doing that, so don't do this literally every year.
Maybe consider a library or museum position? Lots of software engineering to be done, but a totally different pace and different organizational incentives.
It's probably worth exploring the possibility of redefining your responsibilities at your current level. If you're currently in a people-management role, ask about a lateral move to the equivalent level on the IC job ladder. If you're already an IC, talk to your manager about transitioning from tech lead and organization initiative work to something more to your preferences. I've seen a lot of variety in what kinds of work very senior ICs do at big tech companies, including exploratory prototyping work, deep subject matter experts, product architects, and technical firefighters for troubled projects.
Startup. Even if you start as the sole employee, as it grows you can hire another employee, and another, until you have enough employees that you can hire a project manager. They don't manage people, they manage the project.
You mean I should start my own startup? Sounds interesting, but I think I'd need to have some sort of good product idea first I'd want to implement.
Or you could try finding a cofounder with a good product idea.
> Is it possible to willingly move backward in my career?
Yes. I'm working with multiple people who are individual contributors but were team leaders/managers before.
Unless you really need the money _and_ actually get extra money from being on the management track (which is not a given in IT at all), recruit yourself for an IC role on your next job hop and enjoy your life.
Personally I turned down the opportunity to become a tech lead because I'd rather have minimal commitments, every time I see how much bullshit the person who stepped up instead has to deal with I'm happy I dodged that bullet.
I don't know if that's quite what I mean. I don't consider moving from manager to IC to be moving backwards, nor do I consider moving from IC to manager to be moving forward. I and my work consider the two to be parallel tracks. For both tracks, as you move upward, you are expected to do more managing of others. When I say I want to move backwards, I mean I want to have less stress, less coordination, and maybe less responsibility in general. And I'm also fine with less money.
I think that maybe the issue is that in my company, there is no dedicated tech lead role, it's just what you get promoted into on the software engineer track. But it's still considered to be an IC role.
Yeah, at some point on the ladder even a technical role acquires managerial elements (though I think calling it IC is muddying the waters at that point). That's what I meant - going back from those roles to just a senior dev is very doable, and my colleagues did exactly that, so they have less stress, more time for their families, etc.
So I think you basically got to the answer in your last paragraph: smaller firms value different skills than mega firms especially when it comes to the value of coordination versus individual contribution.
This is generally also true for firms who maintain legacy products. The person who knows their way around a legacy system can have outsized compensation.
Ultimately there’s really no such thing as moving backwards in your career if you’re becoming a better master of your craft.
Hmm, interesting. I guess though, that immediately makes me worry; what if I don't really have the skills to maintain my job as an individual contributor, so I have to supplement my work with coordination? How do I know I actually have what it takes to make it in a smaller company?
But also, don't get me wrong, I love helping others and teaching others, and collaborating. It's just that sometimes in my team's line of work, it's completely bonkers the amount of collaboration and coordination that's needed.
Larger firms tend to really value people who are good at thriving at larger firms. I think this is a generalized statement nonspecific to tech. Walmart, P&G, and Apple all really value insiders across their roles and functions.
You might not be a fit for that extreme end of the spectrum. Just like you might not be a fit for a 20 person firm. There’s probably a firm maturity and culture that’s somewhere in the middle. Just interview and see how others run their shops to get a sense of what’s out there.
Are you sure the issue is related to your position only? It may (also) be linked to your organisation aging (your coworkers are not the same, or do not have the same motivations and state of mind; the hierarchical structure has changed) or to the aging of the project(s) you are working on (old-code syndrome).
I am in a very similar situation, but my position did not officially change (at least since the degradation started). The aging of the code and the organisation, on the other hand, is very clear and is the source of the problem.
Yeah, that could totally be true. But man, does switching teams scare me! The last (and only) time I switched teams, a few years ago, I realized that it's completely impossible to tell what a team will be like without working on it for at least 2 months. Every team I spoke to would make themselves sound like they're doing the most exciting stuff, and they're basically the best team ever, and all of the anecdotes and claims would be based on some degree of truth. But then, I'd talk to other people who luckily happened to know more about those teams, and they'd tell me it's a hellhole working there. And then I chose the team which seemed best, and sure, it's been okay, but it also has massive problems. I mean, I guess I wonder, does any team in any software company actually have good code, and a good response to the problems of aging?
Everyone lies. If they don't, the next change in management can completely change everything, anyway. But the more you hate your current job, the less you should be afraid to change it. Worst scenario, you will have to change it again. Make sure you collect as much money as possible, and retire early, if that is an option. Try alternative strategies, such as not giving a fuck (while pretending you do).
A thing that worked for me is to keep phone numbers of my former colleagues and classmates, and once in a few years call them asking "hey, where do you work? are you happy with your work? is your company hiring?". That at least gives you some insider view. (Also, you learn that most IT people hate their jobs, so it's not just about you.)
I think most don't, and code/team aging is one of the biggest forces behind the rise and demise of software solutions (the competitor is not better because of a better UI/algorithm, at least not ultimately; it's better because it is younger code with new, enthusiastic coders).
And appreciating teams from the outside is super hard everywhere, not only in the tech world. The Western corporate world (especially in the US, but in Europe too, just to a lesser extent) runs on (mostly fake) enthusiasm. There is very little honesty there, apart maybe among peers who have known each other well for a long time, or good friends working in a completely different world. It's just very risky to say your job sucks but you do it because it's the best way to earn money you know... So no team member will ever say that to an outsider. Probably not even in the blue-collar world, which I do not know as well, but I have friends there and it does not seem so different.
The best (but still very, very uncertain) indicator is how you find your future management (one to a few levels up): honest people you would enjoy outside of a work context, or not? You can often get an intuitive sense of this after a few meetings.
Hah, I've liked and admired much of my management just fine. But I have never ever had a single manager who I'd describe as honest. And they are always nice people when the team goes for beers, but they keep underlings at arm's length. They are always hiding something.
I'm pretty sure that my company simply does not allow demotions, period. I'd have to move to a different company.
I need advice related to psychiatry and gender, so this blog seems like an impressive fit.
I have only been exposed to the concept of transgenderness very recently, so if I’m misunderstanding something that’s supposed to be common knowledge, correct me.
27 years old, assigned male at birth. Since I’ve been a teenager, I’ve been suffering from a really bad depression. It’s the bane of my entire existence, my number one problem in life. The depression is extremely treatment-resistant; my case is stumping psychiatrist after psychiatrist. Nothing seems to help. Even electroconvulsive therapy, the most effective and hardcore solution that is usually only deployed as a last resort, did nothing.
For years, I’ve been taking a kind of Pascalian Medicine approach with myself, trying everything under the Sun in the hope that something sticks. Whenever I stumble upon a paper saying that some supplement has some mild antidepressant properties, you can bet I’ll be chomping that supplement down by the bottle, because if there is even a 1% chance it will help, it’s worth it. But nothing has worked so far.
In my quest for the cure, I have stumbled upon two very curious facts:
1) Gender dysphoria often manifests as a combination of several psychiatric conditions, including depression. These conditions are impervious to “traditional” treatments because they don’t resolve the core issue.
2) There are a lot of people around who are in denial about being trans, often inventing very elaborate narratives to persuade themselves they are cis.
Now, there is of course a very fundamental philosophical problem about how we can ever find out what’s true if every thought and emotion might be elaborate denial. But...
When I read about the symptoms of gender dysphoria, they seem *suspiciously* accurate, to a much greater degree than the experiences of two depressed people would correlate.
When I browse /r/egg_irl, the memes seem *suspiciously* relatable and confusingly fascinating. I have spent hours digging through the sub, bizarrely mesmerized.
None of this is, of course, a smoking gun. Transgenderism is very uncommon; there needs to be a lot of evidence to outweigh the prior implausibility, and all I have now is vague hints and tentative speculation. But what if there’s a chance?
If it turns out my depression is borne of some kind of suppressed transgenderism, that would be the worst single piece of news I’ve ever received. It would mean that I would never beat my number one enemy without transitioning, and transitioning is not possible for me for a variety of social, legal, financial, and other reasons.
I’m not even sure I want to investigate this avenue further. I’ve read a story of a trans woman who was more-or-less stable if vaguely unsatisfied, until she tried some feminine clothes and got so *into* it that her entire perception of herself changed, and she was never again able to look at her male body in the mirror without being debilitated by dysphoria. If poking around the Unknown too carelessly would bring upon me some kind of Lovecraftian comeuppance and destroy my sanity... I would certainly want to avoid that.
So...
Does this story make sense in general?
Is there a way, resistant to self-deception and wishful thinking, to find out if I’m a trans person in denial? I’m assuming the answer is no, because otherwise it would be on the front page of every trans space, but maybe there’s some kind of special-case solution that would fit here?
If my depression turns out to indeed be the result of gender dysphoria, is there a way to treat it without transitioning? Maybe some kind of symptomatic treatment to ease its effects?
Just some food for thought here about the term "treatment-resistant depression". It might be helpful to think of what you're suffering from as chronic unhappiness, which is a term that commits you less to a simple, biological model of what is wrong, and also opens up more possibilities for ways to fix the problem. If you think of yourself as "having treatment-resistant depression," your picture of what's wrong is going to be nudged by the phrase towards something like a happiness lever in your brain that is stuck in the "off" position and needs to be greased by chemicals or jarred by ECT or some such so that it can move freely. I'm sure there are some people whose unhappiness really is due entirely to some brain glitch that's the equivalent of a stuck happiness lever, but there are many, many other models besides the simple brain-glitch one that can account for chronic unhappiness, and yours may fit one of these other models better.
In fact, you're now considering one such alternative model: You are a trans person who cannot enjoy life until you start living it as the gender you feel like you are. There are lots of other possibilities that are analogous to the suppressed trans model -- situations in which someone's longstanding unhappiness is the result of some other profound, unmet need, such as: deep loneliness; living in a setting where they are not good at any of the skills that are valued; living in a setting where they are despised and mistreated; being profoundly understimulated because they avoid so many things out of fear.
And there are other models besides the unmet-need ones: models that have to do with getting stuck in mental loops; models that have to do with being way overcommitted to some idea of how you're supposed to be...
To be honest, your suppressed trans model of what's wrong does not seem terribly plausible to me. Many, many gay or trans people live in cultures that view their sexual nonconformity as a repellent abnormality punishable by death, and some of these gay or trans people even buy into their culture's view, and think they are going to burn eternally in hell fire -- but despite all that, these folks cannot stamp out their gender dysphoria or their sexual attraction to their own gender (though of course they may hide it from the world). I don't think being trans is very suppressible.
But maybe some other model of your longstanding unhappiness would help you find a way out.
" I don't think being trans is very suppressible."
On the contrary, it must be, much more so than homosexuality, since we have virtually no historical record of it before the late 19th century (whereas gay men and laws against them are omnipresent). If it's not a wholly socially mediated conversion disorder but a fundamentally biological problem, suppressing it is eminently possible and we've just forgotten how to do it.
Yeah, I was going to echo Eremolalos; there have been common trans-feminine tendencies in various societies and cultures since ancient times. The exact manifestation differs, and only recent technological advances created medical transition as we know it. But a small percentage of the population, of both genders but especially natal males, have been having very transgender-like experiences for millennia.
Separately, I think trans-ness is roughly as suppressible as homosexuality.
Wikipedia offers quite a lot for you to rebut or reinterpret:
https://en.wikipedia.org/wiki/Transgender_history
Essentially all of the European stuff is weaseling (note phrases like "galli priests that some scholars believe to have been"; those scholars are wrong, and probably know they're wrong, but they're putting these assertions out there specifically to muddy this exact kind of conversation, which is maybe the thing Wikipedia is most vulnerable to after having Kubrick's talk page squatted) or pure fakery, brought on by present societal trends. This makes me highly skeptical of trusting any of the other claims from regions with which I'm less familiar.
I think it would be unreasonable for you to demand I rebut each one severally, but it's fair that I should give just a few examples so you know I'm not just talking out of my ass:
* The Saturnalia crossdressing is part of a carnival of inversion and prima facie absurd to connect to transgenderness; so is the disguised woman in Ekklesiazusae, who is a joke in a comedy used to set up the larger (and baldly misogynistic) comical premise of the play. It's also weird that the article writer didn't alight on Agathon in Thesmophoriazusae instead since there the joke is that Agathon is *already crossdressing at home* when the main character comes to ask him to crossdress in order to sneak into the women's assembly, but not to help, no, Agathon refuses: he's only doing this because he's a big ol' homosexual (cue laugh track in Greek).
* D'Éon was clearly a man who got into the crossdressing as some sort of weird... who knows, but who pretty frankly admitted he was a man; the speculation continued in spite of him, and most likely in large part because of a legal decree that he *had* to wear women's clothing in the Kingdom of France (IIRC as a consequence of him using his fake femininity to his advantage in a court case, but don't quote me on that). There's a preserved letter from him to the royal court requesting that he be "spared this ongoing humiliation" or words to that effect, a request which was apparently rejected.
* Catalina de Erauso was just a woman. She crossdressed for the wholly practical reason that for most of history it was clearly better to be a man in legal terms, at least. This is hardly unheard-of, especially in the 17th and 18th centuries. Anne Bonny and Mary Read are two other famous examples.
* The Public Universal Friend was insane.
(Elagabalus was one of the few exceptions I was thinking of when I wrote "virtually no", but it will be noted that A, a 1600+ year gap isn't very impressive proof of continuity, B, he existed in a highly unusual social situation which might have been innately less suppressive than the surrounding society due to his inordinate levels of authority and freedom, and C, during the last vogue of historical revision before transsexuals this same account was widely held among the pop-revisionists to be slander against Elagabalus from his enemies, possibly because he was gay.)
I agree that it would not be reasonable for me to expect you to rebut every single case of pre-19th century transsexualism mentioned in the Wikipedia article. And I myself do not have enough in-depth knowledge of a single one of the cultures and times mentioned to argue back against your arguments. But I have to admit I do not feel like I have moved much closer at all to your point of view. Here’s why:
- I don’t understand what you are getting at here: “If it's not a wholly socially mediated conversion disorder but a fundamentally biological problem, suppressing it is eminently possible and we've just forgotten how to do it.” And to the extent I do understand what you’re getting at, I don’t agree with it. If being transsexual is a fundamental biological problem, why would suppressing it be eminently possible? I’m not sure what counts as a fundamentally biological problem. Would left-handedness count? If so, it’s a fundamental biological problem that doesn’t have the properties you’re saying transsexualism does. It’s not eminently suppressible — people throughout history haven’t *failed to realize* they’re lefties. (Of course, many have learned to partially or fully overcome their left-handedness, but that’s not the same as not realizing they had it.)
- It seems implausible that transsexualism (or the ability of transsexual human beings to recognize their own transsexualism) suddenly emerged at the end of the 19th century. What would explain such a radical change happening then?
- It seems implausible that there should be so many things in the historical record that look like evidence of transsexualism, but that not a single one of these things is in fact explained by the existence of transsexuals. The most parsimonious explanation of there being evidence of transsexualism in so many eras and cultures is that transsexuals have been popping up in human populations all over the world for millennia.
#1: Is it just "problem" that's tripping you up? It seems to be a problem to those suffering from it as a rule, but I'm not at all averse to tabooing it for greater mutual comprehension, so let's do that. I'm simply getting at the fact that, de facto, transsexualism appeared in the late 19th century. It is therefore evident that *if* it is not social but biological, then it is a biological characteristic which has the trait that it can be fully and durably suppressed by some type of cultural practice. Left-handedness and homosexuality are good examples of biological characteristics which *do not* have this trait, since left-handers and homosexuals appear all over the place, all the time, regardless of how many people get whacked with rulers and/or lynched. (For further mutual clarity, I am against both of those cultural practices.) That is to say that, despite variably intense work to find one over numerous centuries, our culture has *not* been able to discover or devise an effective suppressor of either of these characteristics.
#2: Whether or not it seems implausible is not important. It *did actually happen*, that's what's important. Here: https://www.nature.com/articles/s41598-021-97778-3 is a link to a(n unrelated) article about what I would classify as an incredibly unlikely event, but, all the same, the article is about demonstrating *that the event occurred and how*, not trying to deny or disprove it. That is our task in this instance: not to deny, but to understand.
As for what would explain such a radical change, I proposed two possibilities:
* It's a conversion disorder (like hysterical paralysis or anorexia), which has gradually spread via social contagion, from an initially minuscule group to a larger one, as this particular conversion disorder becomes more appropriate to our zeitgeist, or outcompetes the others through such means as the invention of the reflex hammer, or whatever the hell is going on there more specifically.
* It's biological, but something changed in Western culture around the fin de siecle which (again, very gradually) began to lift the lid on the very long-standing suppression mechanism. Atheism? The Decadent movement in art? The dread Automobile? Who knows. I frankly don't have a good candidate here, but I'm willing to listen.
A third supposition which is neither of these has been advanced by A. Jones, PhD, viz. chemicals in the water which are turning the freakin' frogs gay. I personally do not subscribe to this man's theories, but I *do* worry a great deal about microplastics as an endocrine disrupter for transsexualism-unrelated reasons, so I can't elevate my horse to a too-high altitude here.
#3: Again, extremely few things in the Western historical record actually look like evidence of transsexualism. You believe this because people have been hard at work for two or three decades twisting all the minutiae they possibly could so that they'll look like evidence of transsexualism. (Catalina de Erauso puts on men's clothes so that she can join the army: suddenly this is proof that she was a transsexual man all along. Does this mean that the Israeli, Swedish and other armies which allow women in combat roles in fact have no women in combat roles, only transsexual men? It seems to me that if you actually think about this kind of evidence dispassionately you'll realize that the logic simply doesn't hold. Modern transsexuals don't just go off and do gender-nonconforming stuff, they're seriously psychologically affected by the nature of their bodies and crave medical treatments of various sorts. Modern *women* just go off and do gender-nonconforming stuff like join the army if they feel like it.)
Also, I want to point out that nothing I said at all contraindicates *other cultures* being crammed full of transsexuals from the year dot. If transsexualism is suppressible, as it must be if it's a biological characteristic that existed all along, it's entirely possible that the West suppressed it and e.g. the Polynesians just didn't. That it's possible doesn't make it inevitable.
What is *not* possible is that it's biological, it's omnipresent in the human genome, Polynesia and India were full of transsexuals the whole time, and yet somehow (by sheer random chance? Now *that's* implausible) all we have since ancient Rome is... *maybe* one medieval Jewish writer who might have meant any number of things by that poem. Again, homosexuality isn't like that, at all. And yet it's not like Western culture historically approved of it.
Maybe it's just incredibly rare.
Whoops, I missed this, or rather, I missed that it was a reply to me! Modulo people's personal thresholds for "incredible", I think we can say it *is* rare, but that's not really the question. Some napkin math: at present the population of Europe is roughly 500 million; out of those, I'd guess a few hundred thousand identify as transsexual? Say 1:2,000. Over all of recorded history up to the late 19th century, I'd hazard a guess (again, low-precision math warning) that at least a couple billion Europeans must have lived. Out of those, *one guy* was apparently transsexual, for a rough figure of 1:2,000,000,000. So, the question is: why was transsexualism a million times *more* rare for most of recorded history than it is at present? That's the part that requires explaining.
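Laying that same napkin math out explicitly (every input below is just my rough guess from the paragraph above, not a sourced figure):

```python
# Napkin math on prevalence, then vs. now. All inputs are rough guesses.
pop_now = 500e6        # Europe today
trans_now = 250e3      # "a few hundred thousand", i.e. roughly 1:2,000
pop_hist = 2e9         # guessed cumulative European population pre-1900
trans_hist = 1         # the single clear historical case granted above

rate_now = trans_now / pop_now      # ~5e-4, i.e. 1:2,000
rate_hist = trans_hist / pop_hist   # 5e-10, i.e. 1:2,000,000,000
print(f"prevalence ratio, now vs. history: {rate_now / rate_hist:,.0f}x")
# -> prevalence ratio, now vs. history: 1,000,000x
```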
How do you know #2 is a "fact" given the caveat you provide immediately after it?
I tend to dismiss psychiatry as being woefully unscientific, but at least they have a duty of care to their patients, unlike reddit memers.
Hi! I'm a trans woman around the same age, been transitioned for some time, with some related experience and some differences. Would be happy to talk it out with you. If you want to, shoot me an email at jmb3481 [AT] gmail [DOT] com
I guess anyone can if they have questions or are curious. I have some unique perspective / takes, I think.
Are you okay with sharing your perspective/takes here or would you rather do it in private?
I'm okay with them being public, and I'm slowly writing my own blog series to share eventually.
In the meantime, I really hate working through Substack's comment interface, and I find personal, one-on-one, safe interactions to be much more productive when discussing trans issues in the current discourse landscape.
Especially for such a sensitive issue as whether this individual may be better off transitioning or not.
If publicity is important to you, I'd happily agree to posting a transcript.
Your story makes sense in general, which isn't, of course, hard evidence in favour of anything.
One of my friends recently transitioned in her twenties. She had no outright depression, but was deeply in denial about some of her needs, explicitly forbidding herself lots of things as "irrational". It was a narrative that allowed her not to think about being trans, and it was pretty harmful, damaging her self-integrity and her relationships with other people. Her moment of enlightenment came when we went to a crossdressing party: there she finally had a "valid reason" to try on a dress, and then slowly, upon lots of reflection, she let herself be who she is. Now she is much happier.
In my teens I was horrified by the fact that my beard was growing, that I would never have a really smooth face again. I would shave as hard as I could, sometimes cutting myself in the process. Once I let my beard grow as an experiment; my girlfriend started complimenting my facial hair, and I felt as close to dysphoria as I ever have. When I went to a crossdressing party I felt fucking gorgeous in a dress, and I sometimes repeat the experience. I also have a lot of trans friends and my close community is very trans-positive. And yet I have zero intention of transitioning. I've learned some things about myself and my queerness, but now I'm quite satisfied with my masculine body, even with the facial hair, and do not want to change it.
I think wearing a dress once and then instantly being disgusted by having a male body is a very small percentage of cases. I believe it can be helpful to do some genderbending in your head, and consider the majority of options at first. Men can wear dresses. Men can actually look gorgeous in dresses. If it's just about dresses, you don't have to change your gender identity to try them on from time to time. And if you are not satisfied with your gender identity, you do not have to make irreversible changes to your body in order to fit into the opposite category. Being a gender-nonconforming man is an option. Being non-binary is an option.
As for a way to find out if you are in denial about your dysphoria, I would recommend starting with these questions:
What does being a man mean to you? What about being a woman? Do you feel comfortable inside your body? Would you prefer having a female body rather than a male one if you could choose? Does the thought of never experiencing a female body feel very sad to you?
Another perspective: I am equally fascinated and horrified by egg_irl, because yes, the memes are relatable, but the implication that if you're unhappy, depressed, or uncomfortable with traditional masculinity, you're actually trans and in denial sets off all kinds of psychological manipulation alarm bells for me. I'm sure that thousands of others have felt, seen, and thought exactly the same things as you, without pausing to ask themselves: is this actually right?
The two facts you mention seem highly suspect to me in the context of the unprecedented cultural influence of transgenderism. I'm pretty sure that if you asked a trans person about the expression of gender dysphoria ten years ago, they would not give you the same answer. Around the late 2000s, I was regularly reading a forum that had a very active subset of trans users who, in a time when transgenderism was only beginning to be seen as something other than a weird fetish, all had a common thread to their experiences: for them, it was absolutely knockdown obvious that they were trans. This is the biggest difference I see in the narrative about transgenderism then vs. now, and it makes me think that there is a strong cultural influence at play. Also see Lucas's reply on this.
So, given your history, I'd doubt that transitioning is the one thing that will cure your depression. Nevertheless, if this idea is holding so much sway over you, it may be worthwhile to think about why. I have asked myself similar questions, and although I'm quite comfortable with my gender identity and satisfied with deciding for myself what masculinity means to me, there are nevertheless some psychological hangups related to internalized misandry I might be dealing with. I hope you manage to conquer your depression, one way or the other.
I'd echo the maximum skepticism of egg_irl. That place's whole schtick is that any kind of mental/emotional distress or gender nonconformity is secretly a sign of being trans. For example, the post currently at the top of the subreddit claims that, as a male, listening and singing along to music sung by a female is a sign that you're actually trans.
I'm sure that this maximalist approach has helped a lot of people overcome some deep-seated resistance to their actual condition, but I'm equally convinced it's led a bunch of impressionable people down an unhelpful and potentially harmful path.
Definitely look into psychedelic assisted therapy if you want another avenue to attack depression. Ketamine, psilocybin, MDMA assisted therapies are showing themselves to be very effective against treatment-resistant depression. MDMA in particular is a revelation that can't be adequately described, which is why it's so effective against PTSD.
As someone who has tried these molecules in a non-therapeutic context, but sometimes with therapeutic intentions: is there a big difference between X-assisted therapy and plain X? I can see that MDMA had lasting positive effects on me, at least the first time, but I don't have any comparable effects to associate with ketamine, for example.
Scott wrote about ketamine therapy for depression here a few times:
https://astralcodexten.substack.com/p/drug-users-use-a-lot-of-drugs
https://astralcodexten.substack.com/p/peer-review-request-ketamine
One thing he mentioned is that therapeutic doses and recreational doses are extremely different, and they are usually taken in quite different settings. The therapeutic mechanism seems to be fairly different from the recreational mechanism, whereas for MDMA and psilocybin the two are much more closely related.
The ketamine doses I take are close to the therapeutic doses. Like lots of people said in the comments, the recreational doses presented seem way too big. Those might be more like "extreme abuse" doses.
> I can see that MDMA had lasting positive effects on me
Since you have (presumably positive) experience with MDMA, imagine if your attention were directed to processing difficult experiences or trauma while under its effects. The MDMA damps a lot of our innate avoidance behaviours around facing the pain of trauma. It becomes easier to accept and process anything, and when this is guided by a professional trained in reframing thoughts and experiences, it can be quite transformative.
There have been some studies showing that psilocybin-assisted therapy had much better outcomes than psilocybin alone. Participants in both conditions reported meaningful spiritual experiences, but only with assisted therapy did this seem to produce significant changes in long-term thought patterns and behaviours. Which makes sense for the same reason as above: the therapist directs your attention and helps reframe your thoughts and experiences while you're in a highly receptive altered state of consciousness.
Thanks, that makes a lot of sense.
Recreational psilocybin and psilocybin-assisted therapy are indeed very different things, for one. When used in therapy, your environment is adapted to facilitate as deep of an introspective experience as is possible, with added emphasis on comfort and safety. It's not the drug that has the therapeutic effect: it's what your mind is doing while on the drug, and the therapy is supposed to put you in the right state to allow that to happen. At least, that's the idea I got from it when I last read up on it, which was a good few years ago.
Thanks for the explanation.
You're in somewhat of a rough spot here because, while there are resources out there about "how do I know if I'm trans", they all seem to be written from the perspective of encouraging you to transition, rather than actually helping you figure it out. I get the impression that the writers (who probably aren't thinking in these terms) want to minimize the false-negative rate of trans people they fail to identify, and so end up with a relatively high false-positive rate. There's a tendency to consider the reference class of people looking at your resource as "trans people who haven't realized it yet" instead of "people who may or may not be trans", almost as if there were an assumption of "well, if you have to ask, ...". This makes sense in that many trans people were in situations that meant they needed a whole lot of encouragement and wish they'd gotten it sooner, but it makes these sorts of resources less useful in diagnostic terms.
It sounds like you recognize that finding egg_irl relatable isn't ironclad evidence; anecdotally, I can say you are correct in that. That particular memeplex reminds me a bit of esr's Kafkatrap concept in that denying the label isn't considered evidence against it applying.
There may be some things you could try that your brain parses as 'feminine' even though men also do them (at a lower rate). You could, for instance, grow your hair out a bunch, even braid it. In my local social bubble, that's not common for men, but it's not unheard of either. It also has the advantage of being trivially reversible if you don't like it. (I don't know anything about how prevalent cases like the "can't back out now" anecdote you heard are to give advice on whether you should, sorry. Also note that you can do this kind of thing if you like even if you come to the conclusion that you're not trans.)
Yeah, it seems to me that if a guy has, like, 20% feminine traits, the message is "hey, that means you are actually trans", while ignoring that he also has 80% masculine traits, which probably should also mean something. Just because for some people it was really hard to overcome their denial, it doesn't mean that everyone else is also in denial.
Okay, so what would be the best way to test this hypothesis? My first idea was some safe space where you could come, get some quick expert help with crossdressing, and then just play your role among supportive people, and see what it feels like.
The problem with this experiment is that it does not control for "being among supportive people". Like, if in your everyday life you are surrounded by shitty people, then there is a chance that being among supportive people will make your day better, regardless of being trans or not.
So the proper way would be that you come twice, flip a coin, and either first day be your current gender and crossdress the other day, or vice versa. An expert would help you seem like the other gender... or seem like someone who just pretends to be your gender... and then you would spend some time with the supportive people who do not know what your biological gender actually is. Then you could compare what feels better.
Ah, except there is the obvious problem that actually they will most likely find out your biological gender, from the sound of your voice, or whatever. (Maybe you should not be allowed to talk, only type on a computer?)
> The problem with this experiment is that it does not control for "being among supportive people". Like, if in your everyday life you are surrounded by shitty people, then there is a chance that being among supportive people will make your day better, regardless of being trans or not.
I really wonder how much of the increasing trans rate is an artifact of how positive and supportive the trans community is?
This is a non-political thread, so... how to put it carefully...
Supportive communities are a great thing. It's just that when such a community stands in contrast to a generally hostile environment, and its support is conditional on X... it may create a strong incentive to pretend, or to believe, that you are X.
I'd like answers about this too; I'm in the same boat, though with less intensity. I'm 3 years younger, my depression is less severe, and I've tried fewer things, but this is basically where I am too.
A few other things I'm thinking about: the recent review by Scott of "Crazy Like Us" may mean that since we see more stuff about trans people in the media, we may be influenced by it. Still, I don't remember that review offering an "answer" for once you've already been exposed. Can people in Hong Kong stop being anorexic if people stop talking about anorexia?
This brings us to my second thought: societal level vs. personal level. The vast majority of discourse that I see about trans people is focused on the societal level, especially the discourse "against" it. But frankly, I don't care about that. If treating my depression means that society is a bit worse, then so be it. Still, this means that most content is either hyperpositive stuff from the trans side (because showing negative content would make it harder for everyone to have rights, or maybe even expose them to the possibility that they're wrong, which might be a mental hazard), or negative about the societal impacts, often to the point of being hateful. Analysis at the personal level is hard to find.
Another thought: is this a situation where trying to have a clear view of things is negative? If you're "questioning", and you keep questioning yourself after transitioning, you might not achieve the "best happiness" compared to rejecting everything or accepting everything, to the point of lying to yourself.
Were there events like that in the past? For example, did people "become" massively gay at some point? Or did people divorce en masse at some other point, once we reached the "tipping point" of media/societal acceptance?
I'll finish this by saying that I have a strong negative prior on "being trans", as if this were true it would require making lots of changes, many of them hard to reverse.
You make a good point in bringing up "Crazy Like Us" because not only is the trend of transgenderism in the media influencing its prevalence, the thing it's supposed to treat (your depression) is influenced in the same way. There's a theory that depression is just our culture's way of manifesting chronic stress, and this is the reason it's so hard to treat. The failure to treat it with conventional medicine is like using antivirals to treat a bacterial infection.
In fact, the treatment that succeeds *culturally* appears to work best against depression. Scott wrote (I forget where) about how Cognitive Behavioral Therapy was doing great against depression when it was the Cool New Thing and had cultural influence as a good treatment. When that influence waned, different treatments were devised, and one of them caught the hype and went on to be as effective as CBT was originally at treating depression.
I think transgenderism is riding on a similar wave, except a hundred times bigger. The prevailing narrative now is that it doesn't matter if you don't hate your body or hate acting as your gender; you're just in denial, and transitioning will cure your depression.
Now, it's possible this is all entirely useless to you as an individual. I understand that saying depression is a cultural disease doesn't make you less depressed, and that people who transition and are honestly happier afterwards are valid to feel that way. Maybe transitioning does, in fact, help against non-gender-dysphoria related depression for cultural reasons. But you have to wonder if the cure is worse than the disease.
I'd like to comment just to push back against "the prevailing narrative is you're just in denial and transitioning will cure your depression".
3-5 dozen hyper-online weirdos on Twitter and the baby-trans, also-hyper-online redditors at r/Egg_IRL aren't representative of trans people. Most trans people you'd ask (and my community is full of them; I'm trans) would disagree strongly with this narrative being true and with the claim that they're pushing it. And the current medical standard of care is still to require therapy prior to starting hormone treatment.
Is hormone treatment getting easier to access? Yes, but this is largely medically informed. The effects of the first 6-9 months of hormone treatment are about 95% reversible, and medically quite harmless. That's why the medical consensus readily reduced the previously excessive gatekeeping once it became socially acceptable to do so.
And every therapist I have had has been very clear that transitioning will not on its own cure any co-morbid mental health issues (not that I had any) and they also screen potential patients for severe issues before recommending treatment.
Thanks, that's good to know. FWIW I also know plenty of trans people who would disagree with that narrative being true, but it's hard to deny that the hyper-online weirdos are having a disproportionate effect on the conversation, simply by being hyper-online enough to get picked up by the algorithms. The fact that timujin has discovered (but not necessarily fallen down) that rabbit hole is a pretty strong indication of that.
I absolutely agree that they are having a disproportionate effect on the conversation. Twitter honestly is just a disaster for civilization. Remember from some of Scott's previous posts: 80% of all content on Twitter comes from 20% of its users. And what, maybe 10%, of the US is on Twitter?
So all of the discourse-shaping (For all topics, not just trans stuff) effects of twitter come from maybe 2% of the population, selected to be from the loudest, most prolific, most online, and most engagement-inducing.
Yikes
> There's a theory that depression is just our culture's way of manifesting chronic stress, and this is the reason it's so hard to treat.
Considering how much depression and anxiety can be intertwined (at least they are, in my case), that would make sense (for some cases at least).
> In fact, the treatment that succeeds *culturally* appears to work best against depression. Scott wrote (I forget where) about how Cognitive Behavioral Therapy was doing great against depression when it was the Cool New Thing and had cultural influence as a good treatment. When that influence waned, different treatments were devised, and one of them caught the hype and went on to be as effective as CBT was originally at treating depression.
That's news to me; last time I checked, CBT was recommended by Scott, for example in "peer review: depression" https://astralcodexten.substack.com/p/peer-review-request-depression or https://lorienpsych.com/2021/06/05/depression/.
> Maybe transitioning does, in fact, help against non-gender-dysphoria related depression for cultural reasons. But you have to wonder if the cure is worse than the disease.
I wonder if the "big change" may be the reason it helps depression. I think I remember seeing somewhere that big changes like leaving your job, changing countries, etc. help against depression, in which case transitioning should be compared to "placebo big changes".
"I wonder if the 'big change' may be the reason it helps depression. I think I remember seeing somewhere that big changes like leaving your job, changing countries, etc helps against depression, in which case transitioning should be compared to 'placebo big changes'."
As far as I understand, this scales smoothly by severity; e.g., a small amount of malaise will often be alleviated by taking up a hobby. The exact nature of the change being less important than its size being proportional to the amount of distress you're feeling sort of makes sense, since it's probably less about getting *to* some specific place than getting *away* from where one is now. (Damn, that's a bad sentence, but hopefully it parses after a few tries.)
Thanks, your sentence makes sense.
Indeed, the "big change" may fall under the umbrella of "getting away from the depressing thing" as per that post. Also, I didn't mean to imply that CBT doesn't work; it just doesn't work as well as it used to. Right now, its effectiveness seems to be on par with other standard interventions. There was another intervention with a similar name but it was so generic (something like Active Behavioral Adjustment) that I simply can't remember it.
Maybe ACT, Acceptance and Commitment Therapy?
I think it's something like that, but I'm not sure. Can't find any ACX results for that. Thanks for the suggestion though.
Thanks, I didn't know that CBT's effectiveness was a bit lower now. I guess the optimal thing in that case would be to not read more about it, read lots of positive content, and practice it.
Consider that, in 2022, you're living in a media hysteria that's pushing trans identities on people extremely hard. "Rapid onset gender dysphoria" is a thing now, and anecdotally, multiple young people in my circles have been convinced they're trans by their friends before rejecting that identity as manufactured.
I'd strongly suggest reading something gender critical along with your reddit diet, just to have a variety of viewpoints on hand and preserve the ability to think clearly.
Apparently the "Stanford protocol" rTMS / SAINT is quite effective against treatment resistant depression, even more so than ECT. You could try getting in touch with the researchers to see whether there's an opportunity there.
I'll look into it, thank you
Consider how you might feel and act if you transitioned. A useful sniff test might be to act that way before you transition, and see if it changes anything. You don't have to update your gender just to discard masculinity norms.
Before you seriously consider transitioning, consider changing your lifestyle in smaller ways, and test if the feeling persists. E.g., investigate any unchallenged trapped priors, your social life, where you live, and perhaps exercise more.
https://careers.google.com/ seems to have openings :).
I'm guessing this is a mis-reply but if not then well played.
Perhaps you wanted to reply to a different comment? I don't really see how this is related.
I imagine he was responding to incoherentsheaf's comment below.
I'm nearing the end of a math PhD (algebraic geometry) and I'm plotting my escape from math academia. I've gotten very interested in synthetic biology and closely related fields. In the small amount of downtime from dissertation work I've been reading a lot of the basic textbooks, papers, etc., and am considering trying to work in this area after I get my degree. I would love to hear from anyone who knows about this area and has thoughts on e.g. areas that might be especially well-suited for a mathematician, prospects of getting hired with this background, or general arguments for/against this as a career path. I'd also appreciate hearing about other areas outside of pure math that mathematicians have transitioned into (especially related to biology, or outside of the more typical paths of data science/software engineering/investment banking, etc.).
I'll join ThatGeoGuy in broadly recommending engineering as a source of applied math jobs. Most of the versions I know are adjacent to Department of Defense (which may or may not appeal, and may or may not be feasible), but some aren't; I'm enjoying my time at one of those (after a short stint as a software developer, and another as a data scientist).
A friend recently left for quantum computing.
I gather there's also the NSA. From what little I heard (I'm a Suspicious Character with a Russian citizenship), it's a good work environment.
I believe Noah Goodman (https://cocolab.stanford.edu/ndg) did his PhD in algebraic geometry as well (maybe it was algebraic topology) before doing a postdoc with Josh Tenenbaum in cognitive science, and now being quite successful in the field himself. It's rare to find an academic whose PhD is in such a different discipline than their current work, but he's one example.
Network science (i.e. applied graph theory) is pretty hot in biology right now. My girlfriend does this.
Did she get a degree in this field or move into it after graduating?
Math major in undergrad, currently a PhD student in it.
Is this the program at Northeastern?
Yes
Data science / ML? You're halfway there already, and bioinformatics has way more datasets than people to make sense of them.
With regards to areas outside of pure math -- have you considered robotics / navigation?
With a background in algebraic geometry, I would assume you have some transferable skills in optimization, geometric fitting, etc. There are a lot of robotics companies out there that are struggling to find these skills. The company I work for (https://tangramvision.com) is not currently hiring, but might be in the next year? We primarily stick to Lie representations for our camera / multi-sensor calibration, but if you have any cross skills in optical physics and software, that wouldn't be too far off from something we look for. We've definitely had interest in geometric algebra in the past, mostly for building intuition, but we're always looking for better ways to do things.
There's a lot of robotics companies out there and what you look for might depend on the timeline of your graduation, especially if you're considering something more like a startup than a larger company, but a lot of the skills are cross-domain.
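To give a concrete flavor of what "Lie representations" means in this context (a generic sketch of the standard SO(3) exponential map, not our actual codebase): rotations are parameterized in the tangent space so(3), and optimizer updates are mapped back onto the group with Rodrigues' formula, so the estimate always stays a valid rotation.

```python
import numpy as np

def so3_exp(w):
    """Map an axis-angle vector w (an element of the Lie algebra so(3))
    to a rotation matrix in SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)  # small-angle case: identity to first order
    k = w / theta  # unit rotation axis
    K = np.array([
        [0.0, -k[2], k[1]],
        [k[2], 0.0, -k[0]],
        [-k[1], k[0], 0.0],
    ])  # skew-symmetric "hat" matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# In a calibration solver, a small tangent-space update dw from the
# optimizer is applied multiplicatively: R_new = so3_exp(dw) @ R_old.
R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))  # 90-degree rotation about z
print(np.round(R, 3))
```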
Can you recommend a resource to read about Lie representations in this context? I'm a former math PhD currently working in something that might be broadly construed as computer vision (we certainly have cameras from which we extract information), and curious to learn more about how other people handle e.g. the calibration stuff.
Thanks! I haven't really looked into this area much. I'll definitely read up about it. I work in a very pure/theoretical area, so I haven't touched anything resembling optimization/fitting/anything data related since probably sophomore year of college. But I will probably end up learning more about that sort of thing when I get closer to applying to jobs.
From a fellow maths phd: learn how to program, and learn to do it well.
I'm a solid programmer and would be interested in jobs that involve programming, but probably not more business logic focused software engineering jobs.
If you can program well (in a language that people use, e.g. C++/Java), you shouldn't have problems finding a job in either machine-learning, data science, computer vision, engineering or the intersection of these fields.
If you're not following Austin Vernon's blog, you're missing out - it's good stuff. He had a good piece on nuclear power recently: https://austinvernon.site/blog/nuclear.html
FedEx wants to stick anti-missile lasers on some of their cargo planes - apparently there have been issues in the past with delivering to certain areas. I was thinking that might be useful on passenger and cargo planes going forward in dealing with pesky small drones ignoring exclusion zones.
Not clear that this would help against drones. The "anti-missile lasers" aren't hard-kill weapons, they don't burn their targets out of the sky, they just dazzle and confuse infrared sensors. Which is great against a missile that is using an infrared sensor to *try* and hit your airplane, but doesn't do anything against a radio-controlled and/or GPS-guided autonomous drone that's just flying whatever course it was commanded by someone on the ground. Even if there's a vulnerable camera on the drone, that has nothing to do with where the drone is going to be when your plane crosses its path.
I don't understand AI alignment as a field, or at least wonder about the premises. Mainly:
1) Does super-intelligence translate to super-powers? Like, if Terrence Tao wanted to be president or a billionaire could he do so easily? What if he was twice as smart? 10x? 100x? How come our politicians and business leaders don't seem to all be super-geniuses?
In feudal times it was blindingly obvious that intelligence alone didn't translate to power. Now we live in a more complex world and there are more advantages to intelligence, but I still wonder how far that goes. It seems possible there could be some undiscovered physics that gives you free energy or something, so you get massive power once you cross a certain threshold, but that doesn't seem *obvious* to me.
2) Will super-AI happen all of a sudden (in years vs decades)? If it happens over decades it seems likely that the best AI alignment research will take place after AI is better understood, and we will have time to do that research. GPT-3 is very impressive but seems far from an existential threat.
3) Will all the organizations focused on creating AI pay extensive attention to alignment research done in different organizations? If it's alignment research by OpenAI themselves or something this point doesn't apply.
4) As an extension of (3), what about the people-alignment problem? It seems inevitable that *eventually* bad actors will deliberately use AI for dangerous things (trying to take over the world, etc), so even if best practices exist to prevent accidental mistakes they will eventually be ignored. I'm sure there's the thought of having a good super-AI to monitor everyone in the world etc, but I wonder if we are at a point where that can be thought about in a precise way.
(1) and (2) are probably the biggest things I don't understand about it. Personally I'm kind of expecting that if intelligence that gives superpowers is even possible, we'll fall short of that initially, so that the human or human+ (but not super-powered) systems will provide a better training ground for AI alignment research than we have available today.
1) Human intelligence has some biological limitations: even the smartest human has only one body (can be only at one place at a time), only two hands, 24 hours a day, neurons working at 100 Hz, only 1 topic to consciously think about at any moment. Also, one bullet can kill them. No matter how high your IQ, these will become your limitations. (Unless you are a mad genius and build a new robotic body for yourself.) This is why social skills are so important, because making other people do what you want, is a way to overcome these limitations.
I assume that some of this would *not* apply to a smart artificial intelligence. That it could run faster, or make copies of itself (to work in parallel, but also as backups). This could be more like 10 or 100 Terrence Taos that are perfectly loyal to each other, think 10 times faster, can share their memories, and are immortal in some sense (like, if you kill one of them, he later respawns in the factory, and only the few last moments of his memory are truly lost). They could specialize at multiple things, each of them focusing fully at one. They could be quite scary.
2) No one really knows. But after some decades of very slow progress, we also see some things happening surprisingly fast. No one knows whether superintelligence will be one of those things.
The first AI, the one that will supposedly doom us all if it isn't properly aligned, will probably also have similar limitations. It likely won't fit, or at least won't run, outside the customized room full of computronium it was designed for. Or if it can adapt itself to run on a commercial server farm, it will do so at the expense of seriously impacting the performance of that farm to the point where its owners will note that it isn't doing what *they* paid bignum dollars for it to do and start doing some thorough and intrusive maintenance. And trying to disperse it across a botnet of ten thousand hacked PCs, will likely cripple it with latency.
Eventually we'll have to deal with really powerful, versatile AIs (or go full Butlerian Jihad or something). So AI alignment is an important thing. Just, not something we have to get absolutely right the first time.
Since the inevitable AI discussion has come up, here's something I've been wondering: why do we have reason to believe that explosively self-improving AI is a strong possibility so long as the AI is both at a "human" level of intelligence (whatever that means) or above, but we also have strong reasons for believing that such an explosive level of self-improvement isn't possible in people?
For instance: there have been hundreds of thousands of extreme geniuses born up until now - all of whom were/are an order of magnitude more intelligent than us plebs by measured IQ. Why did none of them decode their own brains and then invent a means of making themselves even smarter? Why is this so unlikely, but AI turning itself into God is a worrying possibility?
I think the idea is that we've been able to produce steady improvements at artificial intelligence, in ways that we haven't been able to produce steady improvements at natural intelligence. Since the improvements we can do in artificial intelligence depend on the amount of intelligence we can bring to bear on the problem, the idea is that once artificial intelligence at slightly greater than human level (assuming that's a meaningful thing) can be applied to the problem, the rate of improvement at artificial intelligence will start to increase. As long as the problem of improving artificial intelligence doesn't become exponentially harder just after the point of reaching human level (whatever that means), that suggests that there should be a period of rapid increase soon after we reach that level, faster than whatever came before.
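One way to make that conditional concrete is a toy growth model (purely illustrative, my own sketch rather than anything from the alignment literature): let capability improve at a rate driven by current capability, with an exponent that says whether further improvement gets relatively easier or harder as capability rises.

```python
# Toy takeoff model, purely illustrative: dI/dt = k * I**p.
# p > 1: improvement gets relatively easier as capability grows
#        (accelerating; the continuous model even has a finite-time singularity)
# p = 1: proportionally as hard at every level (plain exponential growth)
# p < 1: improvement gets harder (merely polynomial growth)
def capability(p, k=0.5, dt=0.1, steps=30, i0=1.0):
    i = i0
    for _ in range(steps):
        i += k * (i ** p) * dt  # forward-Euler integration step
    return i

for p in (0.5, 1.0, 1.5):
    print(f"p = {p}: capability after 30 steps = {capability(p):.1f}")
```

On this framing, the "fast takeoff" worry is essentially the claim that the exponent stays above 1 for a while once AI research is itself automated, and the skeptical position is that it drops below 1 right around human level.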
That's a massive, mountain-sized "if". And, as I've pointed out elsewhere, it is one of a number you need to grant for the exact scenario that excites so much discussion to play out.
It also requires one to believe that the regular spasms in AI research (which inevitably produce interesting new tools for specific problems, but have not yet provided progress to a theory of "general" intelligence - whatever that is) are leading slowly upwards in a grand sweep rather than the stutter-stop progress they seem to convey. We might (note: might) just be on track to end up, in 100 years time, with a sparkling but unconnected bag of tricks for solving a bunch of specific problems and no way to put it all together.
Again, the part that fascinates me about my original question is that, with the limited actionable evidence we have available (versus the none for computer-enabled AI), it may be easier to make a super-intelligence the biological way by just smushing a bunch of grey matter together. And so we've had forever for some bright specimen of the human race to come up with a way to make biological superintelligences out of us, and yet here we all are.
My operating theory is that a) intelligence is just a big, hairy problem that nature partly solved over 500 million years, apparently by fluke, and b) that super-geniuses are apparently just more interested in constructing epicycles, coming up with novel mathematical methods, and generally navel gazing, than they are in ruling us all as god-kings or super-charging their minds. This, scant though it is, is pretty hopeful stuff where the distant prospect of GAI is concerned.
Presumably because programming and electronics manufacturing are demonstrably easy when compared to genetic engineering. Biology is messy and far more unpredictable.
Wouldn't electronics and programming sophisticated enough to produce an equivalent to a human brain be, by definition, just as complex and intractable though?
More to the point - as far as we can tell, it may actually be easier to achieve higher levels of intelligence using braincells than transistors. Remember that we have neither definitively proved nor disproved that intelligence is just a function of how many neurons can be packed into a single cranium. Remember also that we have a few billion examples of human-level intelligences created using neurons, and none created using transistors.
Regardless, this question isn't just about the raw difficulty of the task. It's asking why it's taken more or less for granted that we needn't kill off the next John von Neumann lest he/she decide to tinker with his/her own already-superior mind and rapidly ascend to godhood. Why is this unlikely, but exploding AI isn't?
If we knew how to construct biological brains, then we'd probably have good ideas on how to build ones larger and better than any that we currently find in humans. The production function for biological brains is something we have limited control over. We can't just plug two brains into each other and scale human intelligence. We try basically as hard as possible to do this with social organizations like governments and corporations. Our success is limited, but notice that we still worry about corporations being out-of-control and too competent at maximizing profit at the expense of things that we care about, and this results from the corporation's values not being aligned with the public's at large. Unaligned values + high competence can produce bad outcomes, and our ability to respond effectively to these bad outcomes can be at odds with the entity's competence. If Big Tobacco was competent enough, we might have never been able to educate the public on the harms of smoking and pass taxes on it.
So if we create an artificial brain, that will be the result of understanding how to do so. We will have much more control over the production function of this artificial brain, many more knobs to turn than we have with biological brains. It's possible that we run into unforeseen issues with scaling intelligence. It's possible that intelligence itself gets really hard beyond human levels. There seems to be evidence that very basic scaling attempts (e.g. GPT-2 vs GPT-3) lead to a large increase in competence. It's possible that such scaling doesn't apply to general intelligence, but we don't know. And so just the possibility of superintelligence is worth worrying about.
I think I _mostly_ agree with you...but it isn't necessarily clear to me that, when you get to the level of programming complexity that is AI, it's simpler/easier to manipulate than biology is. Since we haven't yet achieved general AI, we don't know what level of complexity it will require, so we can't answer that question. _Maybe_ it will be simpler and easier to change/improve than biological intelligence, but maybe that level of intelligence _requires_ that level of complexity, and increased intelligence will actually be _more_ complex and _harder_ to manipulate (perhaps linearly, or even exponentially, harder!) so that the AI still lacks the ability to meaningfully improve itself.
I don't think we know for sure which direction it could fall, and both seem plausible to me.
> I think I _mostly_ agree with you...but it isn't necessarily clear to me that, when you get to the level of programming complexity that is AI, it's simpler/easier to manipulate than biology is.
I think it's clearly harder to change biology. Any biological entity necessarily encodes a considerable amount of non-intelligence-related information in its genome or epigenome due to its evolutionary history, requires a delicate biochemical environment to function, and we understand almost none of it in a purely mechanistic sense. Any change to extend intelligence is like playing a game of Jenga while walking a tightrope over a snake pit.
AI seems to be a purely informational problem that doesn't carry this baggage, and for which we have a well-developed mechanistic understanding that discards irrelevant details (computer science). We also have proofs that self-improving systems are possible, in the form of Gödel machines:
Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements, https://arxiv.org/abs/cs/0309048
It seems to me that any electronic entity necessarily relies on a considerable amount of delicate, non-intelligent infrastructure that encodes some hard physical limits on what it can do. For instance: if we do the maths and discover that a suitably large (but perfectly optimised) neural network simulating a human brain-equivalent requires however-many terabytes of storage, however-many petaflops of computing power and however-many kilowatts of power, then surely it's not too hard to just make sure that the infrastructure provided to such a system is significantly less than that? And, you know, keep a hair-triggered deaf-mute manning the main power supply with an axe at all times, just in case?
However; even to engage with the above argument is to grant a whole raft of things that I don't think are massively tenable:
- that we'd ever be interested in a 'general' intelligence when our own clearly is not (and can in any case be found for cheap anywhere where food is plentiful);
- that, granting such an interest, it would be entertained for any sort of economic purpose (in the same way that engineers are not currently racing to construct the walking, swimming, flying and grass-powered machines we know to be possible by observing ducks);
- that, having granted such an interest and such a purpose, it would be within our power to develop one without also developing dozens of other technologies that fundamentally alter our lives well before that point so as to make the question of consequences irrelevant;
- that such intelligent systems, once developed, should show a capacity for organic improvement over optimisation and iteration;
- that such a process should be linear rather than a series of exponentially-more-difficult steps that stymie each new iteration/improvement, and require ever-escalating access to resources which cannot be had simply by asking for them;
- that such a system should be stable enough to form long-term goals and plans rather than being an even greater mass of neuroses and self-sabotage than we are;
- that, having gone the long way around in developing such a system to start with, we would not also have developed the tools to forestall all the major foreseen issues as well;
- that, having granted all of the previous, we would even be able to foresee the most important issues from where we stand in the ignorant present; and
- that, having granted all of the above, the specific scenario of superhuman AI followed by exponential self-improvement and loss of human control is the most likely of the roughly 1 billion possible outcomes of such a long, complicated, deeply specific chain of events.
As always, the discussion around AI ends up including so many very specific premises that its conclusions seem self-ordained. It's the modern version of philosophers debating how many angels can dance on the head of a pin.
> - that we'd ever be interested in a 'general' intelligence when our own clearly is not
There are scenarios where we are interested in this, and scenarios where general intelligence is created accidentally while attempting to tackle some other optimization problem. Either outcome is feasible.
> - that, granting such an interest, it would be entertained for any sort of economic purpose
I mean, dumb AI is already a huge money maker. This point isn't even in contention.
> - that, having granted such an interest and such a purpose, it would be within our power to develop one without also developing dozens of other technologies that fundamentally alter our lives well before that point so as to make the question of consequences irrelevant
Possible, and this is the outcome Musk is pushing for with Neuralink, ie. merging humans and machines to mitigate AI advantage. Without that, historical trends suggest the opposite outcome: we are becoming increasingly more dependent on machines and information systems.
>- that such intelligent systems, once developed, should show a capacity for organic improvement over optimisation and iteration;
These machines could be designed this way (see my reference to Gödel machines), and some of them arguably would be, because if the AI is smarter than you and your competitors, then you'd be stupid not to exploit it to design the next-generation product, including the next-gen AI, to preserve your advantage. All of the required incentives are already in place.
> - that such a process should be linear rather than a series of exponentially-more difficult steps that stymy each new iteration/improvement, and require ever-escalating access to resources which cannot be had simply by asking for them
No one is assuming linear progress. Intelligence necessarily has an asymptotic limit due to the Bekenstein Bound, ie. above a certain information density it would collapse into a black hole. The question is, do you think it's plausible that the human brain is anywhere near that limit? Clearly not, and so multiple orders of magnitude more intelligence beyond human reasoning is very plausible.
That said, no doubt our current technologies have limits which will necessitate different computational substrates (maybe optical computing), but we're already applying AI to these problems, so this is part of the progress to come.
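To put a rough number on how far that physical ceiling sits above us, here's a back-of-the-envelope sketch of the Bekenstein bound for a brain-sized system. The radius and mass are round human-brain figures I'm assuming purely for illustration, not anything from the thread:

```python
# Bekenstein bound: max bits storable in a sphere of radius R and energy E,
# I_max = 2*pi*R*E / (hbar * c * ln 2), with E = m * c^2.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    energy_j = mass_kg * c**2
    return 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

# Assumed rough human-brain values: ~0.07 m radius, ~1.5 kg mass.
print(f"~{bekenstein_bits(0.07, 1.5):.1e} bits")  # ~2.7e42 bits
```

Against the roughly 10^14-10^15 synapses in an actual brain, that's over 25 orders of magnitude of headroom, so whatever the practical limits on intelligence turn out to be, the hard physical one isn't doing any work near human level.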
> - that such a system should be stable enough to form long-term goals and plans rather than being an even greater mass of neuroses and self-sabotage than we are
Neuroses result from messy biology and evolutionary baggage. This isn't an issue for AI. Certainly it may have its own quirks, but that should worry you *more* because it's completely unpredictable.
> - that, having gone the long way around in developing such a system to start with, we would not also have developed the tools to forestall all the major foreseen issues as well;
What incentives do you think would lead to these results, aside from the people explicitly working on AGI dangers?
> - that, having granted all of the previous, we would even be able to foresee the most important issues from where we stand in the ignorant present; and
The most worrying conclusion is that we *can't* foresee all the dangers, but the ones we *can* foresee as being plausible are already terrifying enough.
I frankly don't think AGI danger is anything like the mental masturbation we sometimes see from philosophy. The error bars are wide, but it's clear that AGI can be an existential threat in the future. There are many more than 1 in a billion terrible outcomes for humanity.
All of the incentives to invent AGI already exist, and that's why I disagree that AGI becoming a threat necessarily requires a "long, complicated, deeply specific chain of events". You could say the same for human flight. Certainly we would have had no reason to believe predictions that it would be achieved specifically in 1903, but we had considerable reason to believe humans *would* achieve flight at some point.
> Does super-intelligence translate to super-powers? Like, if Terrence Tao wanted to be president or a billionaire could he do so easily? What if he was twice as smart? 10x? 100x?
At 100x as smart, he wouldn't have to bother with politics at all, but could probably seize power over critical systems directly if he wanted to. We simply don't know the limits of an intelligence that's 100x smarter than our smartest people, but the dangers it could entail are crazy high.
> 2) Will super-AI happen all of a sudden (in years vs decades)?
The unpredictability is one of the dangers. We could already have enough computational power, with someone just not quite having figured out the trick; and if they stumble on it by accident, by the time they realize what they've done it might be too late to contain it.
Just imagine if it were common for people to have their own hobby biolabs, where they regularly experiment with making genetic alterations to cowpox. The vast, vast majority of any such mutations will come to nothing, but there's a non-negligible chance that someone will accidentally create smallpox 2.0 and devastate the world. This analogy is not necessarily as far fetched as you might think, at least in the near future.
What does 100x smarter mean? It could be that 15 IQ points (1 sigma) corresponds to the ability to solve a given problem in 1/100th the time, but that the complexity of problems increases in such a way that the advantage still only manifests as 15 IQ points.
It's not clear to me that an 8-sigma intelligent human is in any way superhuman. There have been enough 6-sigmas (one in a billion) that we should have noticed if it were so.
Good question. There are different ways to calibrate this, but the context here is superintelligent AI that's much, much smarter than humans, so that's the scale you should be using. I don't think a human with that degree of intelligence has ever existed, or probably could ever conceivably exist without serious genetic engineering.
"At 100x as smart, he wouldn't have to bother with politics at all, but could probably seize power over critical systems directly if he wanted to."
Could he? Really? By what means? This seems to be the heart of the question: that propositions of this type smack of the wishful thinking of nerds.
In actual fact, an ape with a stick can brain a genius not just *as* easily as a jock, but more easily. Internetdog is not alone in feeling that the burden of proof rests on those who claim that this principle stops applying after some arbitrary threshold of genius.
> Could he? Really? By what means? This seems to be the heart of the question: that propositions of this type smack of the wishful thinking of nerds.
Nearly everything is connected digitally these days. It's already fairly trivial for moderately intelligent hackers to compromise our networked systems, so take that as a given.
Procurement orders, staffing, orders and directives, and so on are all communicated over networked systems. A super AI or person could insert, delete or manipulate these in multiple ways to achieve their goals. Thus, even systems that are disconnected from the internet have a path through meatspace that can be manipulated indirectly via digital means.
Someone that is two orders of magnitude more intelligent than our smartest people shouldn't find this too challenging since such exploits have already happened with ordinary intelligence. If you don't believe this is possible, then I don't think you understand how significant an "order of magnitude" really is.
I am about 300x smarter than my cat. Therefore, I can make my cat do whatever I want using only my persuasive abilities.
I am about 350,000x smarter than an ant. Therefore, I can understand it and control it to the point that I am master of it, body and soul.
"I am about 350 000x smarter than an ant. Therefore; I can understand it and control it to the point that I am master of it, body and soul."
Ignoring the silliness of "350,000x", it's trivially easy to control ant behavior with chemicals.
Sure - go and order an ant around the room with your thoughts. Or, if that seems a tad contrived, go and pilot one around like your own little ant avatar using pheromones.
The fact is that the only way we presently know of to control insects directly is via worryingly direct methods such as implanting electrodes in them. Even then, you're only gaining a limited amount of control over the poor abused creature by what amounts to brute force applied directly to its brain. You can't manipulate it with subtlety or finesse, or bend it to your will in any more sophisticated of a way than an ox driver with a whip.
In any case, my point (which was exaggerated for comedy's sake) is that some trivial stat like "100x as intelligent"* does not, in and of itself, grant the more intelligent party some sort of magical insight into, or control of, the other. So the idea that a sufficiently smart AI could just perform "social hacks" to puppet us to its will without our knowing about it is sort of ludicrous on that basis. Having created what amounts to an alien mind, it would probably be just as baffled by us as we are by the other, lesser minds around us. And, in the end, a malevolent super-intelligence controlling us would end up having the same results as we are used to in our day-to-day lives: vague, unpredictable, and with large amounts of brute force and coercion applied.
If you're willing to then grant that we're dumb enough to put said super-intelligence in a position to do so regardless of it applying such known methods (which is, admittedly, depressingly possible), then a lot of the rest of the arguments about AI alignment fall away immediately as being pointless in the face of our overwhelming ability for self-destruction.
* According to some estimates I could find with about 5 seconds of googling, the average human (86 billion neurons) has 344,000 times more neurons than the average ant (250,000).
I don't grant either your orders of magnitude, but I also don't find either of your conclusions implausible. Cats can be trained, and controlling ants via pheromones is totally feasible in principle.
Of course, as I'm sure you know and are intentionally glossing over, communication barriers raise their own obstacles that must also be surmounted, which isn't an issue for a superhumanly intelligent person (although it would be for an AI).
All of this depends on people not being aware of the AI's actions. Once people are aware, they can actively decline to follow the AI's wishes. Worst-case scenario, we scrap the internet (physically tear down the infrastructure) and start over at a lower level of development. This is very bad for humans, and many will die. It is fatal for the AI. Really dumb humans can tear up or disconnect internet cables.
"All of this depends on people not being aware of the AI's actions. Once people are aware, they can actively decline to follow the AIs wishes. "
You think a superintelligence will do things in a super obvious, understandable way that allows it to be caught? That it won't be smart enough not to get caught? I think you display a failure of imagination if you think we can outsmart a superintelligence.
It's not a matter of outsmarting it on a level playing field. It's a matter of logistics, primarily. A computer system has a lot of range and ability within computer systems (i.e. the internet), but an extremely limited range beyond that. Can a very smart computer order a bunch of stuff on Amazon? Sure! Can the computer open the boxes that the stuff arrives in? No! So the computer needs intermediaries, humans, to do a whole lot of work for it. If the humans figure out that the AI is running its own game, then the humans have a massive advantage. Humans are very suspicious once they are aware of a concern, so even a dumb human can refuse to help a computer system (or refuse unknown requests from a potentially non-human source).
Yes, there are scenarios where humans can catch it in time. Our manufacturing infrastructure is also fairly automated though, and this will obviously progress, so the real question is whether the AI could be detected before it commandeered enough infrastructure to preserve itself. This scenario was featured in the TV show Person of Interest.
And these are only the Skynet/hostile AI scenarios. There are plenty of nightmare scenarios that don't even have anything to do with Skynet-like conflict, like the paperclip maximizer. For instance, an AI that's tasked with solving protein folding optimization problems and connected to a protein synthesis system could accidentally synthesize new prions or viruses.
Have you ever worked at a factory? Are you aware of what is required in retooling an operation to produce something new? These are not easy accomplishments, and they are not automated. In fact, they would be very difficult to automate, due to the fact that changing the automation that exists is a big part of the problem.
Is it forever impossible? No, but there are dozens of major steps in automation - automated logistics/trucking, automated mining, automated tool making, and so on, that need to take place before an AI would have anything tangible to take over. All of those processes require massive human involvement now, and even the most automated would break down within minutes of humans withdrawing their attention, let alone humans actively thwarting the AI. Loosening a single bolt on a big machine could cause it to break down in short order, with no current means for an AI to diagnose or repair that machine.
I think many who worry about AI know a lot about computers and not a lot about the difficulties of doing anything else.
Suppose you could move 100x faster than you can now with the same efforts. Do you think that you won't be able to become very rich with such ability if you wanted to?
Notice how such super-speed would allow you to become a top athlete in many sports. Take football. Speed isn't everything in it, but 100x super-speed would compensate for your lack of skill compared to other top players and allow you to easily triumph over them all.
Does super-speed translate to superpowers? Obviously! Is intelligence less or more important for our society than speed? Isn't it the more potent superpower, then? Then why isn't this obvious? I think it's because we just lack the ability to imagine ourselves super-intelligent, while we can easily imagine ourselves having super-speed or super-strength. You do not have to already be super-fast to imagine what it is like to move even faster. But you do need to be super-intelligent to think super-intelligent thoughts.
I can imagine a number of ways AIs could take over the world, I'm just having trouble imagining how they might take it over *right away*, or how it might happen by accident in the early stages. Of course, others have thought about this more than me so I was looking for ideas (and Scott cleared up point (3) at least).
My non-expert expectation about AI is that like many other technologies it's easy to overestimate in the short-run, and to underestimate in the long-run.
For example people have had high expectations of AI that weren't met back in the 1970s and 1980s [1]. In the late 90s people were talking about the "new economy" where the internet would be ubiquitous and technology companies would dominate everything. After the dot-com crash they were disillusioned, but it basically came true two decades later.
Right now self-driving cars are a non-trivial problem where it's hard to put a definite timeline on it. I expect we'll be able to have AIs reliably drive a car before they can take over the world.
In the long run I could imagine a number of doomsday scenarios - you could have self-replicating nano-machines, bio-weapons, conventional weapons hijacked by AIs, AI-aided political subversion or repressive governments, some nuclear-type physical weapon discovered by AI that is easier to make (and so harder to stop proliferation), etc. I just don't see how these physical-world existential dangers manifest right away, even if we rapidly go from "unable to drive a car" to "superintelligence" (which itself seems like a big assumption). Even for hacking it seems like you would need to give a poorly-understood super-AI unrestricted network access.
[1] https://en.wikipedia.org/wiki/AI_winter
"I can imagine a number of ways AIs could take over the world, I'm just having trouble imagining how they might take it over *right away*, or how it might happen by accident in the early stages."
That's exactly what I'm talking about! We can imagine some proxy for higher intelligence, like inventing new technologies (but only ones we have already thought about) or thinking faster (but only the same kinds of thoughts we already think at our level of intelligence). At best we can imagine a higher intelligence instantly arriving at conclusions, skipping all the mistakes and mental stumbling (but only conclusions we could arrive at ourselves).
What we can't imagine is what it is like to think super-intelligent thoughts, because to imagine them we would need to think them, and we are not super-intelligent ourselves. What it is like to invent technologies that we couldn't even think of, or to arrive at conclusions that we couldn't grasp in principle. And that's why we are completely missing a whole dimension (or even multiple dimensions) of strategies that a super-intelligent being does not miss and can use. Imagine trying to contain a three-dimensional object in a two-dimensional prison. No matter how thick the walls are, the object can just go over them, in a way that two-dimensional beings couldn't even imagine.
Intelligence is the thing limiting our possibilities in the decision-making process. For a more intelligent being there are simply more possibilities for achieving the outcomes it prefers. A better chess player can win in situations where a worse player couldn't. From the worse player's perspective, the better player wins unwinnable games. It's a superpower that seemingly defies logic itself.
But wait, you may say: can't we arrange a position on the board that even the best possible chess player couldn't win? Maybe their king is already in check and they have no other pieces, while we have all of ours to corner the king? Well, I think we can. It would be quite interesting to find the least lopsided position in which a perfect chess player still couldn't win, but in principle it seems possible. This, though, is where the metaphor breaks down. It's only possible because we know all the rules of chess. And we do not know all the rules of reality.
Sure, there's probably things in this world that we can't imagine and have no evidence for. But at some point it becomes almost a theological question, similar to Pascal's wager. Maybe this particular question is different, but as a practical matter I think it makes sense generally to default to not updating our internal model of the world until we have more evidence about how things work.
This is a little bit abstract, but one way of thinking about how increased intelligence might apply to the world is to ask whether the complexity of the world is linear, combinatorial, or even chaotic.
Take chess for example. If the complexity were linear, someone who thought twice as fast might be able to see 10 moves ahead instead of 5. But the actual complexity is exponential, since each position might branch into 6 other positions depending on the moves taken. Increased computational power has diminishing returns. So in exponential systems we'd expect a relative advantage but maybe not something totally qualitatively different (computers haven't simply solved the whole game of chess, despite it being limited to an 8x8 board).
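To make the diminishing returns concrete, here's a toy calculation; the branching factor of 6 is just the illustrative number from the paragraph above (real chess is closer to 30):

```python
# In a game tree with branching factor b, looking d plies ahead costs ~b**d
# positions, so k-times more compute buys only log_b(k) extra plies.
import math

b = 6  # illustrative branching factor
for k in (2, 100, 1_000_000):
    print(f"{k:>9}x compute -> ~{math.log(k, b):.1f} extra plies of lookahead")
# 2x buys ~0.4 plies, 100x buys ~2.6, and even a millionfold only ~7.7.
```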
(As an aside: in chess, beginners make a lot of mistakes, but at the higher levels of play I'm under the impression that the better players aren't able to turn around "unwinnable games" so much as avoid getting into them. So if someone was down a piece they might just resign rather than pulling a logic-defying comeback.)
Even worse than exponential, a fair number of real-world systems seem chaotic. Things as simple as a double pendulum [1], but also the weather and maybe geopolitics. In chaotic systems, small differences in initial conditions get reinforced over time, so even if they're deterministic on some level, the approximate present doesn't predict the approximate future. And it's basically impossible to measure the present in the infinitely precise way needed to predict the future.
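You can see that reinforcement of tiny initial differences in even a one-line system; a minimal sketch using the logistic map (a standard chaotic example; the 1e-10 perturbation stands in for measurement error):

```python
# Sensitive dependence on initial conditions: the logistic map x -> 4x(1-x).
x, y = 0.4, 0.4 + 1e-10  # two states that no instrument could tell apart

for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")
# The 1e-10 gap roughly doubles every step and saturates to order-1
# within ~35 iterations; past that point the two trajectories share
# no usable information about each other.
```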
Thinking about chess, computer science, math, politics, etc, an exponential explosion of possibilities seems to be almost the norm - it's easier for me to think of exponential / chaotic systems than it is for me to think of linear ones.
So this view of things is part of the reason why I'm skeptical - even if AI is more efficient than humans on an energy/computation basis, it surely won't be free. In an exponential world you would run into energy constraints well before any kind of omniscience.
Of course there are specific technological thresholds that grant a lot of power - things like nuclear weapons. Even if AI didn't take over the world on its own initiative, if it merely aided the discovery of similar technologies we don't know about yet that could be super dangerous. I'm not sure how alignment could prevent that specifically though.
[1] https://www.youtube.com/watch?v=U39RMUzCjiU
I'm not sure how Pascal's wager is relevant here. Pascal said that even if we had little to no evidence in favour of God's existence, it would be reasonable to worship Him, as the reward is infinite, as is the punishment for not worshipping. But once we acknowledge that there are infinitely many possible Gods with little to no evidence in favour of their existence, some of whom may even prefer not to be worshipped, Pascal's math doesn't really work out. Never mind the issue of a Kolmogorov prior on the existence of an omniscient being.
Do you claim that there is as much evidence in favour of the existence of more intelligent strategies, ones we can't see with our current level of intelligence, as there is for the Christian God? This seems wrong to me. I've been in positions where I could see a strategy that another person couldn't, and in positions where I wasn't smart enough to see a possibility that someone else noticed. And while a little surprising, these situations have never felt totally mind-blowing. People have been inventing new things throughout history, and it feels not surprising at all. Are the ideas of known unknowns and unknown unknowns even controversial? It seems that I'm really missing your point here.
World complexity is an interesting topic and I would be glad to explore it, but I have a feeling that we should first agree that super-intelligent agents can, in principle, arrive at ideas and strategies that we can't even imagine, and only then go into details of how hard or easy it would be for them.
If we were to weigh the importance of things, it would be something like the evidence for it being true times the consequences if true.
Where it reminded me of Pascal's wager is the argument about AI often puts most of the emphasis on the consequence, in the face of very indefinite evidence.
Sure, AIs could cross some threshold of super-intelligence that grants them real world powers beyond our imagining. And the reason that we don't see concrete mechanisms for this is because it is beyond our imagining. But is this even a falsifiable belief?
Of course AI will be able to do things people can't (and to some degree already does). But what people are concerned about seems to be near-term, destroy-the-world existential risk.
Reasoning about specific superhuman strategies doesn't seem possible, but assigning probabilities to unknown unknowns in the absence of evidence doesn't seem reasonable (since that can lead into Pascal's-wager territory).
The rest of my comment was trying to reason about it through an indirect approach, by thinking about the shape of power returns to intelligence in fields we are aware of, and operating under the assumption that initial AIs may have much greater computational ability than humans, but it will still be finite since the hardware will have an energy cost.
I think moving 10% or 20% faster would be better in sports, if you want to make money. Dominating football games by almost teleporting around the field would just cause them to change the rules. Better hope your 100x is under control.
1.) I don't think so. I think a community of very intelligent people thinking intelligence is the single most important thing in the universe is a bit, well, obvious, when you consider it.
2.) I don't think super-AI is even possible, and arises from considerable confusion about the nature of intelligence, and the nature of the universe we occupy.
3.) The answer looks like an obvious "no" here to me.
4.) It seems relatively obvious to me that the problems alignment researchers are trying to solve are, when you get down to it, the same problems anybody designing a government are trying to solve, and when you sit down and think about that, it suggests that alignment is actually kind of a bad thing (imagine that our ancestors, at any point, successfully aligned government with what they then thought the correct morality was). However, as bad as it is, it's strictly better than the alternative, in the case that I'm wrong and it's actually relevant.
"suggests that alignment is actually kind of a bad thing (imagine that our ancestors, at any point, successfully aligned government with what they then thought the correct morality was)"
This is just you being obviously biased by the fact that you happen to agree with the morality of the present day. Even putting aside the fact that this agreement is largely an artifact of your being RAISED in today's society (rather than the product of some objective analysis of the virtues of modern Western morality), there's absolutely no reason to think that future humans or AIs will have moral values you deem superior to ours *by virtue of existing in the future*.
A person of our own day would not deem our ancestors' moral values superior to ours, which is why we would want to impose our moral values on them, the degenerates. Such is the way of history.
And, as is also the way of history: observing that I would not wish my ancestors to have succeeded in imposing their moral values on me, I come to the conclusion that my descendants would not wish me to impose my moral values on them.
"it suggests that alignment is actually kind of a bad thing (imagine that our ancestors, at any point, successfully aligned government with what they then thought the correct morality was)"
I don't know. It's hard to imagine that 16th-century Frenchmen or Italians succeeding at this wouldn't have made the West a significantly better place than it is now.
What in particular do you think would be better?
Racism not invented, social engineering not invented, the culture of Christendom (i.e. Europe) seen as obviously superior with no self-hating nonsense, hereditary aristocracy preventing striverism, a functional monastic system filling an immensely important social niche, a minuscule, lackadaisical state barely able to tax salt, legal duelling, hatred of The Turk, admiration of The Turk's neat carpets, universal forced obeisance to the Pope. Attempts to build Modernist architecture probably punished with live burial and/or Modernism not invented.
This is only a preliminary list off the cuff, though.
The Church violently opposed dueling and punished it with excommunication, so it's a bit odd to see your praise for Papal power and Christian culture co-existing with nostalgia for dueling.
"Universal forced obeisance to the Pope" is a joke obviously, I thought it was obvious which part was deranged as a self-deprecating downward spiral gag but perhaps not. That said, I was assuming alignment to a sort of "average/intellectual person's morals", not any specific individual's. In period, the huge quantities of dueling during the last decades of the 16th century weren't legal at all – the last legal duel in France occurred in 1547 – and kings tried occasionally very hard to suppress it, but *popular morality* endorsed it, which is why it persisted for centuries. The tax is the same thing, Henri IV is probably the only French king who would have disdained to use modern panoptical technology to squeeze the last drop of excess blood out of every peasant and burgher in the land, but obviously this wasn't in line with *period morals generally*. Your average random guy (Guy de Randôme?) would have approved of the Pope *but also* of fighting, even if those things were to some extent incongruent. In fact, this kind of combination of respect for the pious with utter disregard for the idea that one ought to follow their example or anything of that sort is something I wish we could have more of in our time.
"Racism not invented"
Racism was never "invented". Ignoring the utter meaningless of the word, 'treating people of different races differently' is an emergent behavior, resulting from a natural viewing of different looking people as different and the enormous mean behavioral differences between races. It was not a formal thing that people developed and made everyone else follow.
Speaking as a non-heterosexual individual, I don't particularly relish their moral system being perpetuated by a perfectly aligned government.
I agree, we should reorganize the entire course of western civilization because a tiny fraction of the population will be better off in a particular way if we do.
What amuses me is how people arguing against me make my point so much better than I myself could.
Are you sufficiently selfish to insist on this even if it would make society worse overall? What I'm saying is that *in sum total* we've pretty self-evidently blown it since then; I don't really think any specific policy is so crucial that I'm willing to fuck it up for everybody just so I can have that.
Yes. Also, their morality was garbage.
"Yes."
Yeah, I'm sorry, in that case your argument doesn't really generalize to a broadly applicable principle you can use to convince non-you persons. Anyone who would conclude from behind a Rawlsian veil of ignorance that making society worse for everybody else in order to improve it for 2% is a good trade... well, for one thing, he's a hypocrite if he then also favors making the top 2% richest pay taxes.
Your statement raised further questions for me but we're already parlously close to the dread and forbidden Politics, so I'll leave those out and tap out of this subthread here in order to respect the bulls of Pope Alexander the Rabbinical.
Shit, if only they had!
"I think a community of very intelligent people thinking intelligence is the single most important thing in the universe is a bit, well, obvious, when you consider it."
I've just tried to consider it from this perspective and it doesn't really seem to work out.
Is it some general principle, true for all communities, that people with impressive quality X think it's the most important thing in the universe? It doesn't seem so. People in X-focused communities do tend to think that X is somewhat important, but not the most important thing that everything else depends on. And no other community attaches such apocalyptic importance to its X as the AI risk community does.
So are very intelligent people specifically worse at this? Are they more vulnerable to such cognitive biases? It's possible, but it really seems as if it should be the other way around.
Notice, also, that the opposite seems to work much better. Let's examine this phrase:
"I think people who are not part of a community of the very intelligent thinking that intelligence isn't that important is a bit, well, obvious, when you consider it."
Now this does seem like a general principle. People who lack some quality, or are not into some activity, tend to downplay its importance. And one can expect less intelligent people to be more susceptible to cognitive biases.
This particular tendency is a lot more common than you seem to think it is. There are obviously many communities in which it's not a good fit, but where it is? Well, religiosity has the same tendency, right down to the apocalyptic scenarios.
There are also quite a few interest groups with similar tendencies towards secular apocalypticism: oceanographers, climatologists, astronomers, geologists, computer scientists, adherents of various economic schools, political theorists. They have all claimed to be the most important fields in the world, and their knowledge thus the most important, at various times for various reasons. This isn't a recent thing. People like feeling like they're part of something important.
>How come our politicians and business leaders don't seem to all be super-geniuses?
I challenge this pretense. With some exceptions that tend to be related to inherited wealth and legacy, it seems to me that most (not all) of our politicians and nearly all of our business leaders are, if they remain successful over a long period of time, pretty smart. I think luck/chance can explain most outstanding short term successes.
It sometimes takes intelligence to recognize intelligence. Dumb people are not good at evaluating the intelligence of others. Are you certain you are evaluating our current elites properly?
I was comparing them to Terence Tao, who was doing university math at 9 years old, not to the national median or something. Being skeptical that they are super-geniuses isn't the same as calling them idiots. I'd expect that if you dug up test scores or some other available proxy for intelligence (other than political success), they would be smarter than average but not close to the top in the entire country.
I also think that if you look up actual power and influence, politicians are more powerful and influential than average, but not close to the top in the entire country. There are a lot of things a president can do with their distinctive power, particularly in terms of military deployment, and certain modifications of already-existing government programs, but these aren't the sorts of things that most people actually want to do. In terms of ensuring that their children live happy and healthy lives, that they themselves have a lot of opportunities to do fun and interesting things, and so on, presidents probably have about as much power as the average American with a $400,000 salary. And even within the political realm, most things the president can achieve need the cooperation of lots of people (notably the Speaker of the House and Senate Majority Leader if it involves legislation, but even something purely executive like Bush's PEPFAR needs a whole bureaucracy to execute it, and thus likely was as much the work of several dozen other designers as it was of Bush himself).
Nancy Pelosi and Mitch McConnell are two individuals that have achieved a lot more than the average person in their position, and I think they really are quite a bit higher on the "intelligence" spectrum than the average national politician.
1. I tend to think intelligence correlates (weakly) with other good things, but the tails come apart pretty quickly. See Table 2 in Part III of https://astralcodexten.substack.com/p/secrets-of-the-great-families for some evidence that politicians have higher IQ than average, and high-ranking politicians have higher IQ than low-ranking politicians. But it's obviously a weak effect. People who are good at baseball will probably have a weak tendency to be better than average at basketball, just because they're physically fit, but you still can't make a winning NBA team by rounding up baseball All-Stars. On the other hand, 0% of successful politicians are mice, chimpanzees, or five-year-olds (and not just because we don't let them run). Once you get to super-extreme differences, even small correlations become really important.
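(For anyone who wants to see the tails-come-apart effect rather than take it on faith, here's a minimal stdlib-only simulation; the r = 0.3 is an assumed illustrative correlation, not a measured IQ-success figure:)

```python
# Toy "tails come apart" demo: with a modest correlation, the very top
# of trait Y is almost never the very top of trait X.
import math
import random

random.seed(0)
r, N = 0.3, 100_000  # assumed illustrative correlation and sample size

xs, ys = [], []
for _ in range(N):
    x = random.gauss(0, 1)
    xs.append(x)
    ys.append(r * x + math.sqrt(1 - r * r) * random.gauss(0, 1))

# X-percentile of the single best performer on Y.
best = max(range(N), key=lambda i: ys[i])
pct = sum(v < xs[best] for v in xs) / N
print(f"Best on Y sits at the {pct:.1%} percentile of X")

# Mean X z-score of the top 0.1% on Y: clearly above average,
# nowhere near X's own extreme tail.
top = sorted(range(N), key=lambda i: ys[i], reverse=True)[: N // 1000]
print(f"Top 0.1% on Y: mean X z-score {sum(xs[i] for i in top) / len(top):.2f}")
```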
2. Nobody knows! I agree that the best-case scenario is it goes very slowly and gradually and we have a lot of time to tinker with things. There's some reason to think this might be true - eg it took AIs 30 years from "played chess passably" to "beat human champion". On the other hand, it only took AIs a few months from "played Go passably" to "beat human champion", so right now we don't know what analogy to use for things we care about. I think most people surveyed believe AI can be superintelligent within a decade or less of being human intelligence.
3. Right now the two leading AI research groups are OpenAI and DeepMind. Both of them at least claim to be interested in alignment and pay attention to alignment work done outside their organization. I'll be posting soon on some conversations between Richard Ngo (one of OpenAI's leading alignment people) and Eliezer Yudkowsky, for example.
4. Yes, definitely this is bad. Some AI alignment people believe that it will be easier to align an AI to do something very simple like "prevent other superintelligent AIs until we solve harder problems" than to actually be aligned in general; if people take this route, then this might solve the human alignment problem too. Otherwise I don't know of any great solution here.
> On the other hand, 0% of successful politicians are mice, chimpanzees, or five-year-olds (and not just because we don't let them run). Once you get to super-extreme differences, even small correlations become really important.
To extend the thought experiment, though: what if you put a human in a chimpanzee or five-year-old society? Ignoring the size advantage over the five-year-olds, I'm not sure exactly how they could use the additional intelligence to take over the world. I think you'd need to bring in access to weapons to really take over (uh, not that people won't do that with AIs at some point).
I'm sure I read somewhere that some early experimentalists brought up small children along with baby chimps - and the experiments were curtailed because instead of the chimps acting more like human children, the children tended to act more like chimps!
Whether that would be good or bad in terms of an AI risk analogy, I cannot say...
In a 5-year-old world? It's not only size. If you are not completely uninterested, and you actively participate, and if you are not really strange/creepy (in which case you will fare even worse in the adult world), you are basically the undisputed leader of any 5YO group, almost a god on earth.
Whether you'd enjoy the position is a different question...
Hmm, that's true although I'm not sure it's only an intelligence thing. To adjust for other factors, maybe we should make it so you are a 5 year old with adult intelligence in the 5 year old world (which would probably make it more challenging).
Even that analogy probably isn't perfect. I've seen it suggested that dogs are more friendly than wolves because they retain aspects of juvenile behavior into adulthood, and kids in general might have some instincts that would make taking over the world easier compared to adult chimpanzees etc.
Well, obviously you detect intelligence in others with your own cognitive abilities. So 5YOs will judge their peers with their typical 5YO brains. If you are an adult in a child's body you will be judged, I guess, as a very inventive and super-competent friend, if you are friendly and participate (which would require not being bored to death, or keeping at it even if you are). If you disengage, you will get tagged as too serious, boring and reclusive, and mostly ignored, probably, except when there is an issue the other kids know you can fix (the loner is strange, but he can install new games on the parents' iPad and make great hot chocolate, let's go find him). Probably the kind of experience most gifted children have...
Among 5YOs, bullying should not be a big issue. Among 10-15YOs, or among chimps, it could be, so you would need good social intelligence, a cleverly played social dominance game, and at least average physical strength...
The big issue is that to a 5YO (or a chimp), the world is a tiny group, maybe 10-50 friends. So "taking over the world" may be misleading. Something to consider for AI: maybe our world will just be a distraction (nice or annoying, depending on whether the superintelligent AI is friendly, or finds us cute) from whatever it finds important, like 5YO games are when you're thinking about your promotion. And still, we may be affected without even understanding the issues or the reasons the AI acted as it did (like a 5YO in the middle of a divorce).
I tend to think of optimal AI as "glasses for the brain": improving a blurry picture.
For example, picking up the single most promising molecule for treatment of some diseases out of ten million "pixels".
Of course people with glasses have caused a lot of evil (to the degree that they became official villains of the Khmer Rouge), so people with AI will do so as well.
Can anyone point me to where the basics of AI safety have been discussed, a FAQ maybe?
Because every time I hear about AI safety I think this is like discussing seat belts in a car headed for a cliff. And since I can't be the only one who sees that even an aligned AI spells doom, I'd like to see what has been said about that, and why the obvious solution, completely forbidding (general) AI research, is not the route they are going.
A bit old and long, but I think this 2006 Yudkowsky write-up is the best one-stop resource:
http://intelligence.org/files/AIPosNegFactor.pdf
On banning research in particular:
"One may classify proposed risk-mitigation strategies into:
• Strategies which require unanimous cooperation; strategies which can be catas-
trophically defeated by individual defectors or small groups.
• Strategies which require majority action; a majority of a legislature in a single coun-
try, or a majority of voters in a country, or a majority of countries in the UN: the
strategy requires most, but not all, people in a large pre-existing group to behave
a particular way.
• Strategies which require local action—a concentration of will, talent, and funding
which overcomes the threshold of some specific task.
Unanimous strategies are unworkable, which does not stop people from proposing them."
This DSL thread opens with a well-laid-out skeptical position that there's nothing to worry about; then there's a debate.
https://www.datasecretslox.com/index.php/topic,2481.0.html
I'm fairly sure that our own Scott has written well on this topic somewhere in the old SSC, but I can't find it.
Thank you very much
I don't think AI research could be stamped out even if the whole world got together and agreed to do that (which it won't, of course). Among the things that should maybe be banned, AI is the complete opposite of nukes, which must be built in big, recognizable facilities and require expensive, hard-to-access materials. Huge progress can be made on AI with just cheap electronics and lotsa IQ points.
I'm not sure that AI research can be defined well enough to ban it.
How would you forbid general AI research? It's not like it takes anything else than brains, whiteboards and computing power.
I think many people wish they could forbid AI research, but they're not trying because:
1. Even if the US/EU forbade it, China probably wouldn't, and then China would get AI first and it would be terrible. Even if China said they were forbidding it, would we trust them? Even if China said they were forbidding it and told the truth, Russia? India? Some secret team at Google? Three guys in a garage? We can't even coordinate well around climate change, which has way more political activism behind it.
2. People concerned about AI risk probably don't have the political clout to do anything like this, especially if people likely to profit from AI (eg Facebook, Google) fight us.
3. Right now the AI industry is very friendly to people concerned about AI risk, people move back and forth between industry and alignment academia freely, they agree to take everything seriously. If alignment people declared war on industry, probably we would lose pathetically and also turn lots of allies into enemies.
4. These things might change in the future and then it might be strategically correct to declare war on the industry and try to win.
Thank you for all the answers so far.
I get that fighting AI development is insanely difficult, but it's not impossible. (@a real Dog: Developing general AI is not something that one genius can do in his garage; GPT-3 was developed (from GPT-2) by 31 engineers. And China shows that you can ban crypto mining, so you can also ban AI research; for starters, concentrate on clusters of computers and software exchanges that touch on AI research.)
And I get that we would miss out on AI benefits, but the risk is not, as with domesticating horses, that you might suffer the occasional horse kick. It's total annihilation; humanity can't get a little bit extinct, it's all or nothing. That risk/reward is just not in our favor.
Scott's point 3 is a good point that I had not considered, thanks.
But more to the point: if you consider AI threat an existential risk AND think you can't prevent it, why not, like Tom said, get out of the car? Why don't the AI safety researchers apply to work with Elon Musk on his rockets (this having the added benefit of working on another existential risk: asteroids) instead of working on seat belts?
GPT-3 was only 31 engineers? That's not a literal garage, but it's still tiny by software industry standards. That's "one-story building in an office park" sort of small, and there are a lot of office parks on this planet.
Scrutinizing "clusters of computers" sounds even more impossible, since AWS, Azure, et al basically make a business out of selling chunks of computing power on demand. Restricting access to cloud computing would basically nuke the tech industry from orbit - everyone from tech giants like Netflix and Facebook to little one-office-block startups to individual college students makes use of these platforms.
(Bitcoin miners are a very specific type of computer cluster, and one that doesn't normally make use of cloud services because they need much cheaper computers to be profitable. It doesn't generalize.)
I think the option of preventing research is generally regarded as impossible, because AI research can be conducted in secret and there are a large number of actors who believe conducting research is in their self-interest. (And even if it weren't logistically impossible, it is probably also politically impossible.)
Sometimes it is proposed that the first superintelligent AI should (or would) immediately be used to suppress all competitors.
>i think this is like discussing seat belts in a car headed for a cliff
I don't think the people in alignment research necessarily see it very differently. Eliezer Yudkowsky seems quite pessimistic. I think they just also believe the brakes on the car (ability to stop AI research) are not only broken, but were never installed in the first place.
Can you blow up the car, preventing it from careening off the cliff? Probably, but that hardly seems better. Can you escape the car? Maybe, see: Elon Musk.
Well certainly a global ban on AI research, enforced via extreme vigilance and deadly force, is one way to handle AI risk, but comes at the cost of losing out on AI benefits. Handling AI risk is like domesticating horses; we could have never bothered, but we'll be infinitely better off if we can make it work.
I have the feeling there is a difference from other dangerous technologies, in the sense that prevention doesn't just mean forgoing the benefits; it means replacing many of the risks with similar (but better-known) risks. The AI risk is being ruled by a despotic, non-revocable AI, and possibly being killed by it; but an effective ban seems to imply a much more authoritarian global government/regulation, a despotic, difficult-to-revoke human organisation. Trading an unknown devil for a known one. Never a happy choice...
Re Meditations on Moloch:
Tyler Cowen's Conversations with Tyler had a really good interview with Richard Prum, an ornithologist. (It was very interesting to me, someone who did not expect to be into birds.) At some point Richard made an argument that seemed to describe an anti-Molochian process: somehow, flowers genuinely competed on some beauty axis, because that was the best way to attract pollinators.
It suggests to me that there is some way to structure competition such that Moloch does not always win. Or that there is some countervailing force that also exists if we do not insist on being pessimists. Is there anybody with expertise on evolution able to explain why we have beautiful flowers instead of ones that optimized for some invisible pollinatableness trait over everything else?
If you're looking at cultivated instead of wild flowers, bear in mind that people have chosen to cultivate flowers that they deem appealing and then that they selectively bred them for centuries.
It's true that cultivated flowers are particularly optimized for human appreciation. But most wild flowers that bees and birds find visually appealing also look pretty appealing to humans (except in cases where it depends on ultraviolet vision). Smells don't seem to have as much cross-species shared experience of beauty, but that makes sense because smell is so closely connected to specific dietary requirements. It's interesting that certain kinds of visual appeal do appear to have not just interpersonal agreement but cross-species and even cross-phylum agreement.
Flowers aren't free for the plants. They are caught in the Moloch of ever-greater attractiveness. (How big are your petals?)
Tentative: Moloch is about trying to optimize simple factors.
Gregory Bateson wrote something interesting about rain forests not trying to optimize anything. I think he was snarking about money.
If we wanted a system which resists optimization, what would it look like?
That contrasts with what I've heard of rainforests as fiercely competitive, with trees growing immensely to obtain sunlight. I would think of a small island as being relatively uncompetitive.
Well, I'm wondering if it's maybe a result of many optimization targets. If you keep increasing the number of vertices on a polygon, it eventually begins to look like it doesn't have any vertices at all (it looks like a circle). Similarly, could a system that's trying to optimize for so many things begin to look like it's not trying to ruthlessly optimize for anything? The system achieves metastability via the tension of many optimization targets.
"Is there anybody with expertise on evolution able to explain why we have beautiful flowers instead of ones that optimized for some invisible pollinatableness trait over everything else?"
I work in the field of evolutionary biology and have wondered about this as well. Among the things to explain, you can add that humans seem to really like some of the smells that flowers produce to attract pollinators (but not all of them, some fly-attracting smells are appalling!) To my knowledge, we don't currently know the answer to this question. My personal bet would be the one proposed by Dweomite, that it is man who has evolved to find beauty in flowers, not the other way around, as loving flowers has benefited us in the past.
Hmm, well, we are now breeding flowers for what we find beautiful. But before that, flower 'beauty' must have been all about attracting pollinators. That we happen to like some of the shapes, colors, and odors seems semi-accidental. (What else makes any sense? What if we didn't have three color sensors but only two - if we were all color blind? Oh, sci-fi story where someone genetically hacks a fourth color sensor into the eye... a different beauty.)
"That we happen to like some of the shapes, colors and odors seems like semi accidental."
I don't think it's accidental, in the sense of random. If we consider only insect-pollinated wildflowers, I think the vast majority of them would be between somewhat and very attractive to humans, which means that some sort of general explanation is needed.
And two general explanations that come to mind are (1) being visually salient for pollinators automatically makes a flower attractive to humans because the same properties are involved (for example being very colorful) (2) humans were selected to find flowers attractive.
I do not see another obvious explanation, which of course does not mean that there isn't any!
I always assumed that pollinated flowers are pretty (on the whole) due to a combination of needing to advertise visually (which invokes the same evolutionary logic as poisonous creatures in producing visible, contrasting colours), a general tendency towards repeated components and symmetry caused by being made up of plant parts, and a sort of weak anthropic effect where we select particularly pretty (to us) flowers to focus on when we have these sorts of discussions.
As an interesting test of the rule, we have a few species of orchids that have evolved to be specifically pretty to bees/wasps via mimicry, and look like a plant's impression of a female insect. These are not generally considered to be the most beautiful of flowers.
I totally agree about the need for visual advertising, but it doesn't seem entirely clear that being visually attractive to insects automatically translates into being visually attractive to humans. However, humans and many animals seem to really like color (sexual selection, when direct function is not very important, usually produces colorful patterns), so perhaps a flower just needs to be colorful to be both visible for insect and pretty for humans.
I also think that the beauty of flowers varies with their pollination mode, and I've been wanting to check this with data for some time. Maybe I'll get there one day!
It seems to me that bee mimicking orchids are often beautiful, see for example the obligatory xkcd below.
https://xkcd.com/1259/
As a casual forager, I've noticed that if you come to an area with a lot of flowers and memorize it, you likely will be rewarded eventually with fruit. If you notice the flowers easily, you will find that area easily. However, it's not us with our big brains and long memories that most flowers evolved to attract--it's pollinators like bats and insects! Flowers and pollinators co-evolved and in so doing settled on their signals. Because of that, we're chasing the beauty in the eye of the bee.
"Because of that, we're chasing the beauty in the eye of the bee."
I love it! What a both very descriptive and poetic way of putting it!
If you want to attract pollinators, you can't be "invisible" to the pollinators; you need to do something they can detect, or else they can't be attracted. (Though Google says that around 7% of flowers have ultraviolet markings, visible to bees but invisible to humans.)
As for being "beautiful", are you sure that's not a case of *humans* evolving to find beauty in however-flowers-happen-to-look? (Living in the same general location as flowers seems like it could plausibly be advantageous, e.g. because it increases your chances of finding fruit, honey, or herbivores to eat.)
However, the point of Moloch isn't that you can't ever get anything you want. For instance, you probably like civilization more than barbarism; and lo! we are living in a civilization; and the reason is that civilization is legitimately more efficient. The danger is that if something *even more* efficient came along, Moloch might push you into it even if you don't like it as much.
But in the-world-as-it-currently-is, there are actually quite a lot of things that are pleasant *and* efficient. (And in at least some cases, the *reason* they're pleasant might be evolution programming you to like strategies that have worked in the past.)
>As for being "beautiful", are you sure that's not a case of *humans* evolving to find beauty in however-flowers-happen-to-look?
No, I am not; this is a good pushback. I had implicitly assumed that, in the space of possible ways plants could appear, there was some convergence between "looking like a cool flower" and "ability to provide pollen," such that the former acted as a proxy for the latter - given that, as you say, "you need to do something they can detect," and the easiest such something was also something desirable (to us at least).
So, are flowers just optimizing for appearing distinct-yet-mathematical, and that looks good to pollinators (and us)?
I think that's right, though I think humans have evolved to find the *particular* distinct-yet-mathematical appearance of certain natural phenomena (such as healthy plants) attractive.
(Note that I'm not any kind of expert, and evo psych is hard to test without a million years and a very big Petri dish).
Part of it is that we share elements of the same signalling language - we find symmetry attractive in people and foods because it signals lack of infection and low mutational load. Similarly, we find bright colours salient because plants and animals use them as a signal, and we've evolved to be aware of such signals. Since there's no counterpressure to _not_ find these characteristics appealing in flowers, we do find them so almost by default.
Secondly, as Dweomite says, there are specific reasons we might find healthy and diverse plants appealing - it indicates abundant food and fertile soil - and we evolved from arboreal species that relied on trees for survival (healthy trees offering better cover and safer climbing). So there was actually active pressure to find evidence of plant health appealing.
Hey. Do psychiatrists ever have really treatment-resistant patients? How do they manage them?
Asking as a prospective mental health provider.
Scott's answer applies to biological treatment (drugs, etc.) of psychiatric problems. People who do talk therapy also view some patients as treatment-resistant, and there are as many ways of thinking of and working to undo these folks' resistance to treatment as there are styles of psychotherapy -- which is to say hundreds, many of them silly. And yet there do truly exist therapists who are exceptionally good at helping deeply stuck people change.
Yes, all the time. Usually you reserve more complicated therapies for treatment resistant patients, eg ketamine or ECT. If someone resists literally everything (not really possible, there are hundreds of things, but sometimes you do kind of lose hope), then you see what you can do to make their life better (I sometimes use Adderall in an almost palliative way; it doesn't exactly solve treatment resistant depression, but at least it helps TRD patients do more things and live more normal lives). If someone is still unable to live a normal life, you work with them on things like getting disability and otherwise trying to live the best abnormal life they can.
Scott mentioned that he's not as consequentialist as he used to be. What changed? What's the anti-consequentialist case?
> What's the anti-consequentialist case?
There has been considerable work on this topic, see this thread for some references:
https://www.reddit.com/r/askphilosophy/comments/21ambv/criticizing_consequentialism/
Some moral duties arguably resist consequentialization, and consequentialism perverts your relationship with your values in a way. As an imperfect analogy for this perversion: suppose you're a real space nerd with little to no interest in other sciences, and you fully support investing in NASA and similar space-oriented projects.
Your most cherished desire is to visit Jupiter. It's virtually certain that nothing you can do will let you directly fulfill that wish in this lifetime. However, it's possible that life extension research could. So despite no interest in biology, consequentialist-type thinking would lead you to completely ignore space science, and petition for others to do the same, and to go full-bore on life sciences.
You'll probably hate doing this with every fiber of your being. You're not just working on a subject that holds no direct interest for you (it's only contingently valuable); you're also fighting hard to divert money *away* from the one thing you do value. I'm not sure many people would or could actually ever do this, so consequentialism probably doesn't describe well how people actually reason about their values.
This seems to accurately describe how most people feel about things like jobs and cars and school. Instrumental value is real value, if it actually helps you get what you actually care about. Although different people have lots of different terminal values, the fact that most people have so many shared instrumental values is why things like cities and societies and governments exist, and can be so successful.
This is a critique not of consequentialism, but precisely of failing to be a consequentialist. "Hating something with every fiber of your being" and "diverting money from the one thing you value" all sound like pretty bad consequences; so a consequentialist would want to avoid those. If you end up like that, you were not a consequentialist.
Clearly in the scenario as described, you can swallow the bitter pill of doing something you hate anyway if you value achieving your desired result enough. But like I said, it's an imperfect analogy meant only to show that consequentialism doesn't necessarily match our intuitions at the extremes. You're better served reading the papers referenced since they go into more detail rather than quibbling over the specifics of my imperfect analogy.
He did? That is something I would greatly like to learn about.
A while ago, Scott wrote about Cost Disease, why some things like education and health care are getting dramatically more expensive. [1]
For those of you who are interested, Alon Levy has a recent 7-part series on institutional issues leading to Cost Disease in public transit over at Pedestrian Observations [2]: Procurement, Professional Oversight, Transparency, Proactive and Reactive Regulations, Technological and Social Change, Coordination, & Who is Entrusted to Learn. While they're specifically talking about public transit, similar institutional problems (and solutions?) can be found in other fields.
[1] https://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/
[2] https://pedestrianobservations.com/2021/11/16/institutional-issues-procurement/
https://pedestrianobservations.com/2021/11/18/institutional-issues-professional-oversight/
https://pedestrianobservations.com/2021/11/20/institutional-issues-transparency/
https://pedestrianobservations.com/2021/11/22/institutional-issues-proactive-and-reactive-regulations/
https://pedestrianobservations.com/2021/11/29/institutional-issues-dealing-with-technological-and-social-change/
https://pedestrianobservations.com/2021/12/11/institutional-issues-coordination/
https://pedestrianobservations.com/2021/12/31/institutional-issues-who-is-entrusted-to-learn/
Hmm, reading this reminds me of the Triffin Dilemma: https://en.wikipedia.org/wiki/Triffin_dilemma. Is this just a restatement of the Triffin Dilemma?
I think that, in order for this argument to explain a significant effect, the dollar would have to be the main export of the US. Is this the case? Or is the US trade in goods more important than the US trade in dollars?
This may be oversimplifying it (not sure if this accounts for flows from financial instruments), but the US trade deficit averages north of $50bn per month, so on balance dollars are flowing out of the US, making dollars our most important export.
It appears as though total US exports per year are about $2.5 trillion, while total imports are about $3 trillion, leading to a $500 billion trade deficit - which is about $50 billion per month. (Rounding a lot everywhere.)
If we treat the trade deficit as exporting dollars, dollars would only count for about 1/6 of the total exports.
Dollars would probably count as our largest export - because the other exports are quite diverse. But I don't know if that's large enough to explain everything we're seeing. We're certainly not a petrostate where a majority of our exports are a single commodity.
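(Making the rounding explicit - here's the back-of-the-envelope arithmetic from above as a quick Python sanity check, using the same rounded figures:)

```python
# Rounded figures from above, in trillions of USD per year
exports_goods = 2.5
imports_goods = 3.0

deficit = imports_goods - exports_goods        # 0.5 trillion/year
print(deficit / 12 * 1000)                     # ~42 bn/month (~50 with rounding)

# Counting the deficit as "exported dollars", dollars' share of total exports:
print(deficit / (exports_goods + deficit))     # ~0.167, i.e. about 1/6
```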
The "petrostate" moniker oversells it - we're obviously a much more diverse economy than that - but there is a very real sense in which developed, money center countries are primarily net exporters of credit and net importers of physical goods. The US is the most systemically important provider of credit in the world, in the form of printing the reserve currency for the global financial system.
That’s a fascinating theory, and the folks at Alphaville are very smart. My only immediate question is how that simple model would be impacted by the split between demand for dollars and demand for US Govt and dollar-denominated debt?
I recently came across a claim that Scott's Reign of Terror moderation mode is intended to obfuscate the exact criteria of whether an offense is bannable or not (to prevent intentional skirting of the rules in bad faith), but after some searching, both on SSC and on the old Livejournal, I couldn't find a satisfactory description of what the Reign of Terror was or is supposed to be.
If anyone can point me towards one, I'd be grateful.
(I confess my reason for this is mostly wanting to add Scott's justification for it into my personal Catchy And Funny Quotes Collection.)
The Reign of Terror started here:
https://slatestarcodex.com/2015/10/05/ot29-popen-thread/
There wasn't a blow-by-blow description of what such a reign entailed, more "If I want to ban you, I will; be warned from now on". Those who decided they could not live under that left, the rest of us bowed our necks and remained. There was some discussion in the comment thread, but you'll have to read down a long ways to find it.
I can't find one from Scott, but here's Eliezer Yudkowsky's Reign of Terror:
Eliezer Yudkowsky's commenting guidelines
Reign of Terror - I delete anything I judge to be counterproductive
I will enforce the same standards here as I would on my personal Facebook garden. If it looks like it would be unhedonic to spend time interacting with you, I will ban you from commenting on my posts.
Specific guidelines:
Argue against ideas rather than people.
Don't accuse others of committing the Being Wrong Fallacy ("Wow, I can't believe you're so wrong! And you believe you're right! That's even more wrong!").
I consider tone-policing to be a self-fulfilling prophecy and will delete it.
If I think your own tone is counterproductive, I will try to remember to politely delete your comment instead of rudely saying so in a public reply.
If you have helpful personal advice to someone that could perhaps be taken as lowering their status, say it to them in private rather than in a public comment.
The censorship policy of the Reign of Terror is not part of the content of the post itself and may not be debated on the post. If you think Censorship!! is a terrible idea and invalidates discussion, feel free not to read the comments section.
The Internet is full of things to read that will not make you angry. If it seems like you choose to spend a lot of time reading things that will give you a chance to be angry and push down others so you can be above them, you're not an interesting plant to have in my garden and you will be weeded. I don't consider it fun to get angry at such people, and I will choose to read something else instead.
Wow, this guy is such a dick.
I'm going to assume charity and take your comment as purposefully being extremely ironic.
So, good job, I laughed!
OK and I'm thinking the opposite... wow this guy is such...
Spoken like someone who has never needed to moderate a comment section.
Sounds like someone who is quite ruthless about pursuing his actual interests, and debating politics and policies of online discussion is not one of them.
Since I'm currently working on my own - which I was basically forced to make because the existing options were not pursuing the direction I'd have liked - what is the thing you most wish you could do in a political strategy game?
Play a despot in Star Dynasties?
Really get into the Britishness of Ariaselle in Sovereignty?
Create a Socii style alliance system in Imperator or Field Of Glory: Empires?
Actually engage in deep and fun connected conspiracies in CK2?
Perhaps you want a political strategy game in a fantasy world with actual magic? Divination, Charms, Enchantments, magical assassination?
Find a game that actually has enough intrigue to feel like Game Of Thrones?
For me the various strategy genres have all moved away from deep simulations. Even the recent Paradox games have felt more like static reruns than innovative new genre makers. Perhaps Vicky 3 will crack the trend.
I've long wanted a sequel/remake for Master of Magic. I've seen some games that had a few similar features, but it really didn't feel the same. Key points include a detailed magic system that allows you to mix and match types to research different spell chains (in the original, you had a certain number of build points which you could spend on getting very high level magic in a particular field, or mixing two or more fields - the catch was some spells required access to mixed fields and some spells required high level). There was also a mixture of city management and army management that was pretty similar to Civilization, but that worked very closely with the magic system - buffs, summoned units, etc.
My favorite space game is still the original Master of Orion. Simple enough to be accessible, yet complex enough to feel different over multiple replays. MOO2 added some neat features, but had a lot of the pacing all wrong, and many games didn't feel like there was any good balance. MOO3 was a failure because they made it too complex, and the required workarounds to make it playable also made it possible to have the game play itself (and unless you loved min-maxing, that was often the better choice, especially while figuring out how to play...).
Axioms has Dominions 3-5+++ complex magic. And there are some interesting restriction mechanisms.
As far as MoO goes, have you played Remnants of the Precursors? Personally I don't think we needed a free MoO1 remake, but supposedly it is a big hit with the old MoO1 fans, due to the nice graphics, better performance, and faithful mechanical adaptation.
I tried Dominions and it didn't have the same feel to me. I think it was the missions instead of a more open world.
I had not heard of Remnants, but I'm checking it out, thanks!
Missions? In Illwinter's Dominions 3-5? I don't follow. Do you mean the separate Dominion card game?
Oh, it turns out I was remembering the wrong game. I love the concept of Dominions but got really frustrated by trying to essentially write code in order to do combat. I haven't seen Dominions 5 yet; I'll see if it's any better in that regard. If the game worked more intuitively and had a better UI, I would have loved it. I think the last I saw was Dominions 2 or 3? It's been a while.
Oh, and thanks again for the tip about Remnants, it's exactly what I wanted from a MoO remake!
One thing I'd like is to seriously cut back on player omniscience. Many games will do that w/re the placement of enemy units on the map. But will still give you the morale and supply status of every visible unit to two decimal points, and *accurate* economic statistics on every segment of every provincial economy, and full details on the progress of cultural development and religious conversion, and the exact extent to which every other faction or character likes or dislikes your own, and maybe even the extent to which they like or dislike each other.
First off, probably the biggest challenge in kingship (or whatever) is that most of that information is unavailable, or costly, or unreliable, or costly *and* unreliable. Second, trying to use all that information is tedious, but trying *not* to use all that potentially valuable information is annoying to gamers who are either trying to win or trying for an immersive experience.
So, make C3I (by whatever name fits the theme) a finite resource. Every time you open a new window, you spend some number of "C3I points". And get an infobox with vague descriptive terms like "good", "fair", and "poor" that you can expand on with another C3I point. Give an order to a unit or a factory, that's a C3I point, or more for a fancy order. When you're out of C3I points, your turn is submitted with whatever orders you've given.
Or come up with a different way of achieving the same effect.
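For concreteness, here's a minimal sketch of how such a C3I-point turn budget might work - all names, costs, and the noise model are invented for illustration, not a claim about any existing game:

```python
import random

FIDELITY = ["poor", "fair", "good"]  # vague descriptors, refined per point spent

class Turn:
    def __init__(self, c3i_points):
        self.points = c3i_points

    def spend(self, cost):
        if self.points < cost:
            raise RuntimeError("Out of C3I points -- turn submits as-is")
        self.points -= cost

    def inspect(self, unit, detail=0):
        """Open an infobox: 1 point for a vague reading, +1 per refinement."""
        self.spend(1 + detail)
        if detail >= 2:
            return f"morale {unit['morale']:.2f}"          # exact value
        noise = random.uniform(-0.2, 0.2) * (2 - detail)   # vaguer at low detail
        band = min(2, max(0, int((unit['morale'] + noise) * 3)))
        return FIDELITY[band]

    def order(self, unit, command, fancy=False):
        """Issuing orders also costs points; fancy orders cost more."""
        self.spend(2 if fancy else 1)
        unit["orders"] = command

turn = Turn(c3i_points=8)
legion = {"morale": 0.55}
print(turn.inspect(legion))            # e.g. "fair", costs 1 point
print(turn.inspect(legion, detail=2))  # "morale 0.55", costs 3 points
turn.order(legion, "hold the ridge")   # 1 point; run out and the turn ends
```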
Master of Orion 3 tried that and it was a spectacular failure iirc.
I think the idea has merit. The problem is, if you make decisions / gathering information a spendable resource, the game has to be able to "play itself" reasonably well (otherwise every part you don't monitor constantly will fall apart). In MoO3 you could sometimes win the game just by pressing "end turn" a bunch.
It's also an interesting way to introduce roundabout administrative capacity / sprawl / wide tax.
Perhaps fluff it by having characters/heroes that you control manually, but you cannot control all of them at once - so you spend a turn wearing the science advisor hat and developing a science plan for the next turns, another being a planetary governor at the industrial hub, yet another being an admiral and coordinating the invading fleet - with the information you can access in a given turn being limited to what the character would know.
EDIT: there is also the problem that unless you want the player to make notes, you'll need a way to revisit previously accessible "stale" information. If 3 turns ago the player could check (for free) that a factory can create 2 units per turn, they should still be able to see it now, even if the factory got upgraded in the meantime and creates 4 units per turn instead. Many games do that with fog of war and enemy state, doing that with player state would be... wild.
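The stale-information problem seems tractable with a last-observed snapshot per fact, something like this rough sketch (names and structure are hypothetical):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Observation:
    value: Any
    turn_seen: int  # lets the UI flag how stale the reading is

class IntelCache:
    """Shows the player whatever they last saw, not the live value."""
    def __init__(self):
        self._seen = {}  # key -> Observation

    def observe(self, key, live_value, turn):
        self._seen[key] = Observation(live_value, turn)

    def lookup(self, key, current_turn):
        obs = self._seen.get(key)
        if obs is None:
            return "never observed"
        age = current_turn - obs.turn_seen
        return f"{obs.value} (as of {age} turn(s) ago)"

cache = IntelCache()
cache.observe("factory_7.output", 2, turn=10)  # player checked on turn 10
# ...the factory is silently upgraded to 4 units/turn...
print(cache.lookup("factory_7.output", current_turn=13))
# -> "2 (as of 3 turn(s) ago)": the stale figure, exactly as the player last saw it
```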
When you make the game "guess the other guy's internal model and values," you have to make the game significantly easier. And when someone *does* figure out their model and values[1], it becomes really easy.
That's not necessarily a bad thing. I've played games where I just get a *feeling* that I'm improving certain things, or not, and a lot of that might just be in my imagination, but that incorporates my imagination into the game, and makes for a fun experience, if done right.
It's hard to get a lot of replay value out of that, though. And you aren't going to get a forum of obsessives giving each other advice.
[1] And this is easy to reveal in a FAQ, so the game spoils very easy
Star Dynasties took a huge amount of heat from people mad about the obfuscation of data in the game. Not least from me but then I mostly just savescummed on key decisions so I wasn't quite as salty to Glen as I would otherwise have been. Plus at least he tried some different things so I didn't wanna put him off.
This was especially an issue where diplomatic actions had red 0%, blue 100%, and yellow 1%-99% color coding. Traumatizing I have to say. Much save scumming commenced. Especially since that game uses a very harsh and limiting Action Point system. Actually the final Attention Point system in Axioms got some modifications just from how painful the SD system was.
The worst part was that the game told you immediately afterward, if your action failed, what was wrong. Even worse, there were some bugs, so sometimes the breakdown afterwards said it should have worked, sometimes even by decent margins.
Always gotta balance realism with fun. Also another limit is player expectations. If they expect one thing and get another thing it can hurt you a lot.
Attention Points and the Intelligence Network in Axioms aren't quite as intense as what John suggests, maybe 70% there. Gonna take a lot of play testing to balance.
Well this is a post by someone begging to read a very detailed design blog on the "Intelligence Network" mechanic :)
I talk about it in a few posts but one with much more detail would be helpful at some point.
Axioms has an Attention Point system, which I did write a moderately detailed post about. This relates to actions/agreements/decisions rather than information. Attention Points limit your direct control in a flexible and fun way. You can't do everything; you need to delegate and trade material and informational resources with the other characters.
The Intel Network isn't quite as intense as your suggestion but it is maybe 70% of the way there. All of the information about a province is gated based on your investment in your intel node in that province. This includes resources, geography, where populations and characters are located, and pretty much everything on military intel.
There are things you need to move around to do, because your personal presence matters. Doing a "Grand Tour", as Karl Franz is depicted doing in Warhammer Fantasy or as the rulers of Silverberg's Majipoor do, isn't *required* on ascension to a title, but it provides strong benefits. Of course it has obvious costs. Holding a feast obviously requires characters to be present; the same for Hunts and other stuff.
All the Intrigue requires deploying coins, materials and items, populations raised similar to the army units, and ideally if you can spare them 1 or more other characters as agents or network leaders. Missions can involve Surveying a province, Watching a character, Counterintel, and various other stuff.
The Opinion system is more similar to Paradox's than to Star Dynasties' - though you might actually like how Star Dynasties works - in that you know what people think of you generally. But you can learn about Secrets, Secret Desires, and various other stuff. If one guy knows about some amazing blackmail on another and you don't, you would be surprised when someone you think you are on good terms with acts against you. Similarly, fulfilling a character's desire can get them to act against their otherwise strong feelings.
Basically there is public stuff, which you know as long as you have a consistent low-tier node in a province, and then more that you get from low-level character observation. So: a small amount of cash, 10 spies raised from a province's population, plus a little less cash and 5 guys watching key characters - versus a big investment with named characters, gear/items, and a pile of cash to spend.
Star Dynasties actually uses a very opaque system. A red result means no way, a blue one means 100% yes, and anything in between is yellow. Actually, you can't even make proposals like a marriage or allegiance; only the AI can sometimes say "I want a marriage", and you can pick among the options you are given, like vassalizing them or money and such. Similarly, they can ask to be your vassal, and you can take the blue "yes" option or run a risk and ask for money or a marriage - and if they say no, you get nothing.
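For readers trying to picture the tiered intel gating described above, here's a toy sketch of the general pattern - the tier names, reveal lists, and province data are mine for illustration, not Axioms' actual design:

```python
# What each hypothetical investment tier reveals about a province
TIER_REVEALS = {
    0: set(),                                        # no node: nothing
    1: {"geography", "resources"},                   # cheap standing node
    2: {"geography", "resources", "populations", "characters"},
    3: {"geography", "resources", "populations", "characters", "military"},
}

def visible_report(province, tier):
    """Filter a province's full state down to what the node tier reveals."""
    allowed = TIER_REVEALS[min(tier, max(TIER_REVEALS))]
    return {k: v for k, v in province.items() if k in allowed}

rivermark = {
    "geography": "floodplain",
    "resources": {"grain": 12, "iron": 3},
    "populations": 40_000,
    "characters": ["Baron Odo"],
    "military": {"levies": 1_200},
}
print(visible_report(rivermark, tier=1))
# only geography and resources; invest more (higher tier) to see the rest
```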
>Well this is a post by someone begging to read a very detailed design blog on the "Intelligence Network" mechanic :)
Yes, I am. Well, asking politely to read that post. When you get around to writing it, give me (or maybe all of us here) a link!
Various other posts on my substack discuss the intel network when relevant. The Attention Point post may or may not be interesting to you.
https://axiomsofdominion.substack.com/p/intelligence-and-networks
That post doesn't have a ton of details about the very low level stuff but it is still pretty detailed and focuses primarily on the intelligence network and associated mechanics.
Thanks; that does look interesting. Particularly if it covers things like e.g. your own economic statistics, to the extent that they are a thing in that game.
The game that came closest to doing things right for me was King of Dragon Pass. To make a really detailed simulationist game, I think it's valuable to keep the scale very small, both so the player is able to keep track of everything and so that your compute time doesn't explode.
Interpersonal dynamics are the thing that IMO adds the most replayability, and intrigue is fun, but it shouldn't all be vicious backstabbing; there should be just as much matchmaking and forging bonds of brotherhood, etc. The promise of Crusader Kings is a good target, but the implementation was not great, I think mostly because almost no characters actually matter.
Crusader Kings generates a ton of characters that don't really do much for sure. There were reasons they did it that way but their system could have been a lot better.
I actually designed the character interaction model to be very positivity-based. Of course, assuming I get the AI worked out, the Intrigue is much more detailed and with more possibility both mechanically and narratively, but friends, allies, co-conspirators, long-running alliances between families, and much more vassal interaction/relevance were a key focus.
It does have to be turn-based, for performance and coding-complexity reasons, though. I have to rewrite a few of my blog posts, especially the one on friends, allies, retainers, and such. But the relationship system is quite flexible and deep.
Crusader Kings once infamously found that the reason for a substantial slowdown in gameplay was that after a particular update ~70% of the clock cycles were devoted to every single Greek-culture character looking at every other Greek character and reevaluating from scratch each day, "should I have this person castrated?" So, yeah, could have been a lot better.
But there's nothing wrong with generating far more characters than will ever matter, because there's no way to predict in advance which characters *will* matter. And it's hard to retcon them into fully-fledged existence when they do. So you just need to have a sufficiently simple and abstract way to handle them before they rise to prominence.
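That "simple and abstract until they matter" approach is essentially lazy promotion; here's a minimal sketch of the pattern (invented fields, obviously not Paradox's actual code). A nice property: because the full character is derived deterministically from a stored seed, there's nothing to retcon when a background character rises to prominence:

```python
import random

class CharacterStub:
    """Cheap record: no daily AI, no opinion matrix, just a seed."""
    def __init__(self, seed):
        self.seed = seed

    def promote(self):
        """Deterministically expand the stub once the character matters."""
        rng = random.Random(self.seed)  # same seed -> same character, always
        return FullCharacter(
            martial=rng.randint(1, 20),
            intrigue=rng.randint(1, 20),
            traits=rng.sample(["brave", "greedy", "zealous", "shy"], k=2),
        )

class FullCharacter:
    def __init__(self, martial, intrigue, traits):
        self.martial, self.intrigue, self.traits = martial, intrigue, traits

court = [CharacterStub(seed=i) for i in range(10_000)]  # cheap to hold
new_heir = court[42].promote()  # only now does the full sim care about them
print(new_heir.martial, new_heir.traits)
```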
FYI, the King of Dragon Pass devs made a spiritual sequel a few years ago, called "Six Ages: Ride Like the Wind".
Oh, this reminds me, I've got to go replay Divinity: Original Sin II 😁
Certainly, a game that has enough intrigue to feel like ASoIaF would be a prime contender for my gaming buck. And if I understand you correctly, then I strongly agree with you about the lack of deep simulation in the genre. I would like to feel from AI opponents something akin to what I feel from a chess engine: more low-level situational analysis and fewer prefabricated decision triggers. Sadly, this kind of innovation is unlikely to come from Paradox, which seems to be in something of a consolidation phase. (see, e.g. the shutdown of Imperator)
One thing I have yet to see in a political strategy game is a good treatment of irrationality and demagoguery. Such as the phenomenon whereby a simple, crude, and ineffective policy solution is more popular than one that's plausible but complex and difficult to implement. Everyone in strategy games - even blue-and-orange-morality aliens, even supposed Neronian tyrants - is far too reasonable.
I think this might just be a little too low-level for games made at our current technology level. You can give characters and populations ideologies and have them act on those, but it is hard to give large groups the kind of treatment you'd give important characters. Plus, most strategy games are pre-industrial, so demagogues are limited since there is no voting. Axioms will have a propaganda system, so you could sort of role-play "a simplistic policy proposal", but that would take some amount of imagination, and I can't think of any other game that can even do that.
Have the population's ideologies and sympathies make sense and have gameplay impact.
Stellaris did a token but overall underwhelming effort there. Endless Space 2, despite being a minmaxy 4X, had a really interesting political system where parties and their laws were incredibly impactful, corresponding to supercharged Stellaris traditions. Unfortunately, things that affected political leanings of your population were super opaque and you usually ended up rolling with whatever build emerged naturally (or becoming a dictatorship and just deciding for yourself).
I found the way post-expansions Sins of a Solar Empire approached diplomacy really unique - to make an alliance you needed to not only make the other faction friendly to you, you also needed to make _your_ population friendly to _them_. Hence the need to send envoys, give each other missions and requests, and overall grease the wheels. The implementation sucked but I think they were on to something.
Well this is definitely the kind of answer I was looking for. Though it remains to be seen if there is a sufficient interest in such things to constitute a market.
A proper sequel to Patrician III would be cool, with more polish and depth - where the game starts with economics/trade and ends with more and more complex politics/warfare.
How much of a "sequel"? Just that you can do all the things you did in Patrician 3? I think I lost my digital copy thanks to the whole Stardock/Gamestop mess, but I did play it for a couple hundred hours. Do you want a literal sequel titled Patrician, or a spiritual one? Literal sequels are doing badly these days; Ubisoft royally baited-and-switched with their new The Settlers, for instance.
I think proper sequel. But with 2d sprite graphics. With more fleshed out market systems, town politics and combat (like Stronghold). And not having to rely on the stupid trial and error of finding captains in pubs.
I don't think there is a game out there currently that combines these different elements well: an RTS-like game, mixed with politics (with different social hierarchies that you can ascend or be ejected from) and the concept of having an inner circle of NPCs you can make do various things depending on your status/wealth. And a somewhat sophisticated economic system where your character can trade and invest, with markets that go beyond just randomly moving prices that go down if you sell too much and vice versa.
Don't think this will happen anytime soon though.
Not hyped about Patrician 4 eh? Not shocking, it was dumbed down and then they did their stupid CD key nonsense. A hypothetical Patrician sequel is probably up there with a good Majesty 1 sequel in terms of you can get 80% of the features in some games but probably not 100%.
I have been thinking a lot about this, and share your complaint regarding deep simulation! I really want something that combines a deep economic with a deep political simulation. I've definitely been following along with the Victoria 3 dev diaries and have high hopes for it, and I've also been working on some economic simulations of my own which may some day evolve into a game.
I recently discovered X4: Foundations on Steam and thought it might be my dream game, but I've ultimately been kind of disappointed. It does a lot of things right, in that it has a mostly-real economy that you can participate in both on an individual level as a merchant or on a strategic level managing a business empire. But it's really got no political or diplomatic component, so ultimately it ends up feeling flat and more like playing Satisfactory/Factorio than like playing a Paradox title.
One of the things that disappoints me about pretty much every economic game out there (including X4 and apparently including Victoria 3 also) is that the money is exogenous. To feel like a real economy I want a game where the money is actually conserved and subject to policy (e.g. fiat vs. precious metals, with different currencies) rather than using typical video game money sources and sinks, which inevitably leads to brokenness and unrealism (ideally I'd also like conservation of goods, which Victoria 3 apparently isn't doing). So to your question, I guess I want my strategy games to include a monetary policy component? I, uh, realize this probably puts me in a fairly small demographic.
Exogenous prices bug me too. X4 defines a price range for each good, and Victoria 3 is apparently giving each good a baseline price. This probably helps avoid degenerate game states, but it limits the dynamism, and I'd prefer the simulation be robust enough to price things itself rather than relying on exogenous tuning.
I'll definitely be following along with your game!
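For what it's worth, conserved money is basically double-entry bookkeeping; here's a toy sketch of the invariant, with invented account names - money only moves between accounts, and only enters via an explicit policy action:

```python
class Ledger:
    """Money only moves between accounts; the total is invariant
    unless the mint (monetary policy) explicitly changes it."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.total = sum(self.balances.values())

    def transfer(self, src, dst, amount):
        if self.balances[src] < amount:
            raise ValueError(f"{src} can't pay {amount}")
        self.balances[src] -= amount
        self.balances[dst] += amount
        assert sum(self.balances.values()) == self.total  # conservation check

    def mint(self, dst, amount):
        """The only money source: a deliberate policy action, not a game sink."""
        self.balances[dst] += amount
        self.total += amount

economy = Ledger({"treasury": 1000, "merchants": 400, "peasants": 100})
economy.transfer("treasury", "merchants", 250)  # e.g. the state buys goods
economy.mint("treasury", 50)                    # debasement/printing, by policy
```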
I don't think the current model can support "real" prices but it does have a citybuilding style resource model, just without actually doing the building placement minigame. My previous project had been modifying Glest(the Advanced Engine fork), into a Majesty1+++ game with a citybuilder economy, granted focused in the end on RPG equipment but it did have food and stuff. I wanted to keep that type of system for Axioms. Nothing that has been done in a political strategy game before.
I intentionally cut a lot of features for a potential sequel, to stop feature creep. I could *probably* do a good monetary simulation in the sequel, because computers, both for devs and players, would be at a high enough level of performance to handle that by the time a sequel was feasible.
Can anyone explain to me, a non-engineer, how Usenet clients work? I was hoping to find some Usenet posts I'd made as a punk kid at some point in the 90s- I know the forum, it will take some searching to find exactly when I made them (I'm a bit fuzzy on the year). Basically, they have sentimental value to me. (Kinda like all my Hotmail emails from my youth that Microsoft apparently deleted for all time because I didn't log in often enough..... If anyone has any tips on recovering these, I'm all ears).
I searched through the Google Groups archive last year, I didn't find exactly what I was looking for, but I got pretty close. Did Google Groups archive all of Usenet, or just some of it? And what's with these 'clients'- I need a desktop or third party app to search old Usenet content? (Which is hosted where, exactly? If it's hosted on x website, can't I just go directly to that website?) Anyone have the big picture here on Usenet time traveling back to the 90s? (Does anyone even still use Usenet?)
Deja News archived Usenet and then Google Groups bought Deja News.
Lots of posts could've been lost, either from existing before Deja News started archiving, or because someone asked for posts to be removed at some point along the way.
> "Does anyone even still use Usenet?"
This is a question I've recently become curious about. :) Preliminary searches seem to suggest "Yes." (I am being slightly lazy and have not yet tested this.)
I would love more info on the present state of Usenet. (Though if someone says "FOFY," that would feel very '90s-internet, too.)
This looks useful - https://archive.org/details/usenet
I need to buy a car soon. I've never bought a car through "normal means" before (I've always gotten used cars through deals with friends and family), but, obviously, this is an exceptionally weird time to be buying.
Any advice on how to approach this?
Normally a new car would be completely out of the question, but now I'm not so sure?
If you wanted to buy a modest car within the next 3-or-so months, what would you be doing?
(I'm sorry for how nonspecific this is)
Personally speaking, I had the choice to buy a new car recently, and I decided not to in favour of an e-bike. If it's not a hard need (i.e. you need it to get to work), then my advice on how to approach this is: don't - consider not having a car. It's totally possible in many cities, although I will admit it can be very challenging in some.
More directly answering your question though, I'd suggest scoping out what you want with something like https://www.kbb.com/. This will give you an idea of what the market is like in your area on a variety of axes, and if you decide to go with used, can even help you locate a car that's both within your budget and needs.
As for approaching new-car sales: I don't know if now is the time. The market is up like crazy, and it's very hard to determine the long-term cost of a new car; between regular maintenance, financing, insurance, etc., I personally think the cost of a new car spirals out of control pretty quickly. Plus, unless you have a clear picture of the history of the model you're buying, it can be pretty hard to preemptively know what could be recalled, what the biggest issues could be, etc. Some things worth considering during your purchase:
1. What is the best weather you will expect to drive in ~70% of the time? Worst?
2. Do you need snow / off-road / chained tires & separate rims in addition to the car? Sometimes you can convince a salesman that you'll make the purchase on a new vehicle as long as they throw in a set of rims & tires for cheap or even free. It's within their power, and cheap enough compared to most new cars that it is very rarely a sticking point.
3. What kind of warranty / pre-service commitment are you willing to sign up for? Lots of new purchases come with a 1-year service agreement for all tire rotations, fluid changes, etc. You can get out of this if you want, but understand that over the lifetime of your car you could be paying up to 50% of the car's value for the sum total of that work. With a used car, you won't get this, but maintaining a used car is a bit of a crapshoot regardless (because you don't know what the previous owner did to it).
4. How much do you expect to drive per year? In what environment? My recommendation would be to get a smaller class vehicle if you're going to exclusively be in the city, because parking sucks in most places. If you're planning on lots of longer drives, a larger vehicle can be nice, but understand that is going to affect your energy pricing as well as contribute more carbon to the environment.
5. How much are you willing to spend relative to rent / mortgage? This is probably the key question, but you should be able to represent how much you spend on your car (and all externalities like insurance and gas) vs. housing (probably your next largest bill) as a percentage of your income. Do not compromise here - be honest with yourself about what you're willing to spend and how much you're giving up. This is probably the #1 thing I never see friends do when making big purchases, and they always end up regretting it later.
Other than that - have fun with it but understand that if you go into a dealership they will be trying to up-sell you in a lot of ways. If you can avoid financing, I suggest it, but you'll probably get a lot of resistance for trying. Dealerships don't necessarily love customers paying cash, which I'm not sure is well thought through, but salesmen aren't there to think about opportunity costs.
Lastly, take your time. You're committing to a large purchase that will stick with you for potentially years to come, so don't rush your process due to stress.
Find the type of car you want, colors you'll accept, etc.
I have bought new and used. Unless your ego can afford it, buy used if at all possible; you lose about 30% of the value the instant you drive off the lot. There are exceptions, especially in crazy market times like this. For instance, leading up to the last crash, around 2006, things were going south (the actual bottom was 2008), and we were looking at Chevy Tahoes. Used ones about 5 years old were around $30k, and most of the used cars we looked at had electrical problems. We found a newspaper ad (those were a thing then). It was late December, the end of the year, and the dealers were pumping to get their numbers up. We bought a new 2005 Tahoe for about $35k.
Otherwise, if you're cheap like me, I consider that I'm only buying mileage. In my mind, any car begins to fall apart around 150k miles, and at that point it is worth nothing. Your idea may vary, but follow me here.

You find the car you like; there are hundreds available within, say, 500 miles. For several thousand dollars in savings, I'm willing to make a day's drive to get a good deal, so 500 miles pretty much covers everything near me. Say cars.com has 50 cars that fit your target, but the prices and mileage are all over the board - how to figure it out? As Google would say: make a formula, then execute the formula.

Open a spreadsheet and list each car's price and odometer mileage. The third column is the cost per mile of what you're buying: $Price / (150k - $mileage) gives you how much you'll spend per mile to drive that car. Sort the table by this column, and you'll have the cars ordered by what each will cost you per remaining mile. Since you can know nothing about each car, I consider them all equal (except luxury levels, but you'll have to figure that out). And since the maintenance & operation of any car costs about the same, the only variable is the hardware cost for each mile you get to drive - and that cost will range from ten cents to a dollar per mile.
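If you'd rather script the comparison than build the spreadsheet by hand, the formula above is a one-liner - a rough sketch, taking the commenter's 150k-mile write-off point as a given assumption, with made-up listings:

```python
LIFESPAN_MILES = 150_000  # assumed point at which a car is worth ~nothing

listings = [
    {"car": "A", "price": 12_000, "odometer": 60_000},
    {"car": "B", "price": 18_000, "odometer": 30_000},
    {"car": "C", "price": 7_500,  "odometer": 120_000},
]

for car in listings:
    remaining = LIFESPAN_MILES - car["odometer"]
    car["cost_per_mile"] = car["price"] / remaining  # the $Price/(150k - miles) column

for car in sorted(listings, key=lambda c: c["cost_per_mile"]):
    print(f"{car['car']}: ${car['cost_per_mile']:.2f} per remaining mile")
# A: 12000/90000 = $0.13; B: 18000/120000 = $0.15; C: 7500/30000 = $0.25
```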
Congratulations - and you could easily prolong your honeymoon by posting a link to (all) your old posts. Not SSC, that's easy, but "Jackdaws love my big sphinx of quartz", "The Wisest Steel Man", squid314.livejournal - I find it highly inconvenient, at times impossible, to get to those pages. And tiny inconveniences (as in the Great Firewall of China) do have effects. Just for me, I am happy for any of the smarter readers to drop me a hint. But consider the wider readership :)
Most (or all; I'm not sure) of Scott's Livejournal posts are archived on the Internet Archive or archive.is, e.g. https://web.archive.org/web/20120909034605/http://squid314.livejournal.com/327646.html & https://archive.is/uwJUc . I'm not aware of any well-organized index of them.
I’d like to read something written by a superforecaster that goes into a lot of detail about the thought/research process that was involved in making a few specific predictions. Any suggestions?
Not written, but maybe check out the Global Guessing podcast: https://globalguessing.com/. They have a few interviews with superforecasters.
E.g.: https://globalguessing.com/forecasting-omicron-juan-cambeiro/
> In a special Global Guessing podcast episode, Superforecaster and Metaculus Analyst Juan Cambeiro walks us through his analysis of Omicron and his five relevant forecasts for understanding the variant
This looks great, thanks!
I don't know if this was already mentioned somewhere, but it'll soon be the end of the one year agreement you had with Substack. What are your future plans for the blog?
I'm in a reading rut. Anyone have a book (or books) you'd recommend that might help me out? I'll read pretty much anything 🤷
I am finishing up The Fate of Rome by Kyle Harper. The subtitle is "Climate, Disease, and the End of an Empire," in case you want to know where Harper is coming from. It's a little dry in some places but still interesting.
https://www.amazon.com/Fate-Rome-Climate-Disease-Empire/dp/0691192065/ref=sr_1_1?keywords=the+fate+of+rome+kyle+harper&qid=1642619463&s=books&sprefix=the+fate+of+rome%2Cstripbooks%2C103&sr=1-1
The Bhagavad Gita is extremely short, extremely engaging, and extremely profound :)
Recommending books is a passion of mine, so here are a couple more...
Novels: The Horseman on the Roof, by Giono. The story of a young Italian hussar trying to survive in Provence during a cholera epidemic. Served with one of the best styles in French literature, and descriptions of summer that will make you sweat just reading them.
Also it's basically 2020-2021 the book.
The Opposing Shore, by Gracq, The Tartar Steppe, by Buzzati, and On the Marble Cliffs, by Jünger. I group these because they are at once extremely similar and completely different. They all describe a hero waiting, and somehow hoping, for an invasion by a foreign power, but they do so in completely different ways. The Opposing Shore is the tale of a society so rotten with old age that the existential threat of an invasion seems to be the only way it can finally catch some fresh air. Gracq's style is brilliant, in slow and powerful sentences; the whole novel smells of rotten swamps and decaying houses. The Tartar Steppe is the intimate story of a young soldier who wanted to earn glory in war and consumes his life waiting for an enemy who never comes. It is both incredibly sad and incredibly hopeful, and I think I became both a better and a happier person reading it. On the Marble Cliffs is the symbolic tale of a golden-age country under the looming threat of the Head Forester. The writing style is strong, pure, precise, and sharp. It's no coincidence the book was published in 1939 - by the enthusiastic nationalist from my other comment.
Bernanos was a hardcore monarchist, conservative, Catholic French author of the 1920s and 1930s. In 1936, he was living in Palma de Mallorca when the Spanish Civil War began. Before the war, his son had enrolled in the Falange - Spain's hard-right fascist, monarchist, Catholic paramilitary party.
When he learned from his son of the political purges and killings done by the Falangists, he wrote a 400-page pamphlet of incendiary rage against Franco, the Falange, the Catholic bishops, the king of Spain, and all of his former friends compromised with Franco.
"The Great Cemeteries Under the Moon" is a raging cry, short on structure, but strong in emotions. It is also one of the most amazing example of this so rare rationalist virtue: seing the truth, and telling the truth, about the sins and horrors of your in-group, even when your convictions, your friends, and everything you hold dear would push you to close your eyes.
Thank you so much to everyone who's recommended a book, this thread is a gold mine!
We can't know what you'd like, but what are you in a rut about?
It's hard to give good recommendations without knowing more precisely what kind of books you are into, but as you mentioned in another comment that you are interested in WW1, I would highly recommend Jünger's Storm of Steel and Remarque's All Quiet on the Western Front, both great novels on the World War.
All Quiet on the Western Front is a pacifist perspective on WW1. It's a brilliant account of the realities of war by someone who was in the trenches, and it strips away any romantic conception we may have of war.
Storm of Steel is the autobiographical account of Jünger's experiences in the war as a young German nationalist with a mystical enthusiasm for war - war makes men out of boys and so on - who found exactly what he was searching for.
The combination of those two books is one of the most disturbing reading experience you can have.
For fiction, _A Deadly Education_ by Naomi Novik. It's good enough that I have been reading and rereading it and the sequel while waiting for the final book, due out in September.
For non-fiction, _Alexander the Great and the Logistics of the Macedonian Army_ is quite fascinating. It's a history of Alexander's campaigns with all the battles left out, focussing on the difficult problem of keeping a very large number of people from dying of hunger or thirst and how it constrained his actions. It takes advantage of the fact that the relevant technologies didn't change until the railroad, so we have quite a lot of information about how much men and beasts ate and drank and what they could carry.
I'll second the recommendation for _A Deadly Education_ and _The Last Graduate_-- there's a tremendous sense of interacting systems.
The second book ends on a cliffhanger, so if you care about that, you might want to wait for the third book-- due out this year and intended to complete the story.
Is this the Donald W. Engels one? Sounds like something I would enjoy a lot
Yes.
How to Think by Alan Jacobs. More about how to be fairminded than how to think, imo. Very short, well-written. Somewhere in the middle quotes some bits of profoundly scatological invective from the writings of a couple of 18th(?) century clerics who detested each other's views, had me crying with laughter.
Coup d'État: A Practical Handbook is a brief and concise but very interesting treatment of how Coups can be executed, sometimes literally. If you don't like the first couple pages you won't like it, but I found it very enjoyable and it definitely respected my time.
The Great Game by Hopkirk gave me a lot of exposure to 18-19th century near east/ western Asian history that my US education turned out to be quite lacking on, worth a shot if you're into that sort of thing
I host some of the appendices of said Practical Handbook:
https://entitledtoanopinion.wordpress.com/2008/12/07/gimme-a-bomb-a-molotov/
https://entitledtoanopinion.wordpress.com/2009/09/07/the-economics-of-repression/
A how-to book for coups seems particularly relevant now, will definitely check that out.
Never heard of The Great Game but I'm sure my US education lacked just as much. I majored in American History for undergrad, so anything outside of the US will likely prove illuminating - ty!
The term "The Great Game" seems to have been coined by Kipling in _Kim_.
For fun: _Chasing the Moon_ by A. Lee Martinez, a humor-horror novel. So far, there's been a clever solution I didn't see coming.
Imagine an apparently excellent apartment with a somewhat sketchy landlord-- it's a gateway to a universe of monsters.
I've never even heard of humor-horror, that's reason enough to try it - thank you!
There's a lot of horror with at least some humor. _The Face in the Frost_ by John Bellairs is a classic.
That is a wonderful book, and now I want to go read it again.
Four Thousand Weeks is a surprisingly good book on time management which takes a more philosophical approach (ie no more productivity hacks. True freedom is consciously making choices about what to do with your time, not cramming in more and more tasks which, at the end of the day, are useless).
A time management book without productivity hacks sounds refreshing...looked it up and I see it's #1 in time management on Amazon, thank you!
The Golden Bough, by Sir James George Frazer.
I know people read this as literature, but it's probably still worth flagging the fact that all the theories in it are ignorant and speculative, and rejected by relevant academia.
I'd never heard of this but just googled it - thank you, sounds really interesting and I'm a fan of mythology!
The Fated Sky by Benson Bobrick; a very fine history of a mostly overlooked topic.
I love the stars and I love history - thank you!
Greg Cochran, hardly sympathetic to the Bolshies, says McMeekin repeatedly gets it wrong on them:
https://twitter.com/gcochran99/status/1460088450640531457
Here's an example of Cochran going after a specific claim in that book, rather than the overall thesis: https://twitter.com/gcochran99/status/1432367342474784772
I concur on The Ottoman Endgame. As for Houellebecq, I'd advise against The Elementary Particles; I found it the bleakest and least "fun" of his books. I'd advise starting with Atomized, Platform, or Serotonin instead.
Atomized is the official English title of "Les Particules élémentaires", I am almost sure... So I guess you mean another one. Personally, it's the one I prefer, followed by Platform and then Extension du domaine de la lutte (not sure how it is titled in the English translation). And those three are, in my opinion, much better than all his other novels (and really, really good compared to current French authors)...
Love WWI history, I will definitely check this out, tysm!
+1 Ottoman Endgame
Congratulations on your marriage, Scott.
(Assuming your wife is also a member of the tribe) תזכו לבנות בית נאמן בישראל ("may you merit to build a faithful home in Israel") - in whatever capacity that may be.
Investing question: The general standard advice is "just buy an index fund", but while this gives you a broad stock index, it seems like you could get better diversification by also covering other asset classes (real estate, bonds, maybe crypto, probably some other stuff I'm missing; some of these can be covered by stock indexes, e.g. real estate is covered by REITs, but I don't think all are). Is there a meaningful advantage to be gained by diversifying more, and is there some index fund or robo-advisor that does this?
I feel like the glaring flaw with typical stock index funds is that an economic depression may cause you to be laid off at the exact same time as your investments are plummeting in value. Which is obviously a horrible combination of events.
Any US crisis is likely to be global, so holding shares outside the US would not be that protective. Government bonds are good insurance, though.
In the particular scenario we face in 2022, buying an index fund may either be the best or the worst decision you can possibly make.
If you are trying to replicate the market portfolio, yes, that is exactly what you would want. In practice, nobody is trying to replicate the market portfolio. You can use this tool to check out some of the difference it might make: https://www.portfoliovisualizer.com/backtest-asset-class-allocation
The classic adage is to hold your age in bonds as a percentage, so a bond index fund would be a way to do that. Bonds have heavily underperformed over the last few decades, but holding them does provide mental stability for a lot of people. REIT returns can be good for some people, especially in tax-advantaged accounts, but they have still underperformed the total market.
So, other than moving to bonds as you age to reduce the risk of a downturn coinciding with when you choose to retire, there is little advantage to diversifying outside of the total market. You can also weight some of your index funds towards specific sectors or whatever; I'm weighted further into large caps than the total market index, because I think their growth looks better long term. But that's mostly just an opinion.
A lot of investors keep 5% or so in fun/risky investments, like crypto or whatever - in the past and unfortunate present (for them), goldbugs still exist.
Why do people use “incentivize” instead of “incent?”
Because "incent" is a neologism that doesn't exist in most idiolects of English. It's not that easy to supplant a perfectly cromulent existing word that everyone already knows.
Had to look cromulent up. First used as a joke in The Simpsons. Funny stuff.
"Incent" is relatively cromuloid, though.
Because incent is only used in the US? (According to an online dictionary.)
I am not American and have never come across it.
I am American and I've never come across it either.
Utilize instead of use.
Ah, but utilize already means to "convert to utils".
https://twitter.com/slatestarcodex/status/854382497261596676
Not quite the same, but from this day henceforth, I will refer to moving a chunk of code to a misc utilities file as “utilizing”.
I don't know what "utilize" adds to "use".
At least to me “utilize” carries a connotation of “you are specifically taking advantage of some quality of the thing”, whereas “use” may be more incidental.
I had the feeling you were right, but then could not come up with a single sentence where switching between "utilize" and "use" made a whit of difference. Can you? I think maybe saying "utilize" = saying "use" + doing a self-important strut.
I think that situations where “use” can fit mostly are a superset of situations where “utilize” can fit, as “use” is largely a more generic version of “utilize”.
I can imagine situations where they don’t both fit, however. Ex: “what brand of deodorant do you use? I use $brand” would not really fit with “utilize” instead. The situations going the other way (like “how can we utilize this byproduct from the manufacturing process?”) could be changed to “use” and still mostly maintain the same meaning.
I agree that there is not much practical difference, and I rarely use the term “utilize” in my writing or speaking.
I think what your examples capture is that "utilize" sounds more formal and dignified -- so the word sounds all wrong in a sentence about someone's deodorant, which is not a dignified topic. Also, note that your deodorant example seems like spoken communication, while the manufacturing byproduct one sounds like an excerpt from an article. We code switch some when we go from talking to writing -- we upgrade to longer and more Latinate words. Sort of comes down to what I said before: "utilize" = "use" said with a strut.
Same reason they use "memorize" instead of "memor"?
no real content here, just wanted to say congratulations on getting married! i don't wish to tell you how to live but i'm hoping that you can wallow in each other's bliss; that the struggles are surmountable and that you build a beautiful rest-of-your-lives together :)
Ran across the following idea last night:
"A new tech startup plans to become “the stock market of litigation financing” by allowing everyday Americans to bet on civil lawsuits through the purchase (and trade) of associated crypto tokens. In doing so, the company hopes to provide funding to individuals who would otherwise not be able to pursue claims."
Full article is here: https://www.vice.com/en/article/v7d7x3/tech-startup-wants-to-gamify-the-us-court-system-using-crypto-tokens
Grotesque? What think ye?
Who can't afford to pursue claims? My impression was that most plaintiff's lawyers eat what they kill - they work in exchange for a percentage of whatever settlement you receive.
So plaintiffs have been evaluating claims informally since forever, and maybe more formally for the past 10 years as part of litigation finance funds. It's probably not a game changer and will likely converge on certain patterns of litigation.
Seems interesting--could potentially be used to identify SLAPP suits.
At this point you've probably heard of the game Wordle, but if you haven't then you definitely should play it at https://www.powerlanguage.co.uk/wordle/. It's a lot of fun, especially comparing results with friends!
I've also just finished developing a Yiddish version, so if you happen to speak Yiddish I'd love it if you would try it out at https://greenwichmeanti.me/wordle/
I expected to like Wordle, but in fact I got bored with it after about 5 or 6 rounds. It doesn't seem to tickle the same fancy that makes me enjoy doing the NYT crossword puzzle, or even a good cryptogram. I think the issue is that there's no human connection -- there's no possibility of appreciating some cleverness on the part of the puzzle designer. It's clearly possible to create Wordle games with a computer program, and it's clearly possible to design a computer program to solve them -- I could do either myself pretty easily. When that became clear to me I lost interest. (Mind you, I can see how it would still be interesting and fun to *design* Wordle, or design a Wordle-solving program.)
That was interesting. I used to be very good at mastermind. Apparently this transfers to wordle, or I got lucky.
There's a moderate amount of transfer, but the fact that your guesses must all be real words, and that the answer must be a real word, introduces an interesting set of constraints!
However, this extra constraint makes it a bit overly simple. It's very hard to ever get it in fewer than 3 guesses unless you get extremely lucky, and once you have some reasonable strategies, the difference between 3 and 5 guesses is more luck than the refinement of that strategy. And I think that changing from 5 letter words to 6 letter words would make the game a lot easier (your first two guesses can cover not just all the vowels but also almost all the common consonants) and changing to 4 letter words seems like it would just make the game less fun.
See also https://qntm.org/wordle
Me trying to explain it to my extended family:
the rules are exactly the same, everything about it is the same, just that it retroactively alters the universe to make you as unlucky as possible.
like you guess "stare" and it's like "um, nope, no matches". and then you guess something else and it has to stay consistent with what it's told you so far, like what letters have been eliminated, but it can just keep retroactively changing what the secret word is until you force it to admit you're finally right.
it makes sense when you play it.
as another example, say you've narrowed it down to "date_" and it could be "dated" or "dater" or "dates". it doesn't matter which order you try them in, it's always going to be the last one you try.
it's kinda hilariously clever. like, it's cheating, sort of, but not in a way you could ever prove. whatever the secret word ends up being, you can look back at all your guesses and see that its responses are perfectly correct and consistent for that secret word. maybe i'm over-explaining it now. it's like it holds the game state in a superposition of all universes which are ... [gets yanked off stage by a huge cane]
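if it helps, here's roughly how i picture it working under the hood (my own sketch in Python, not qntm's actual code; the function names are made up for illustration). after each guess, you partition the words that could still be the answer by the feedback each would produce, then commit only to the biggest bucket:

```python
# A minimal sketch of an adversarial Wordle's response rule, assuming
# "score" is any ordinary Wordle feedback function:
# (guess, secret) -> list of "green"/"yellow"/"grey".
from collections import defaultdict

def adversarial_feedback(guess, candidates, score):
    # group the still-possible secret words by the feedback they'd give
    buckets = defaultdict(list)
    for word in candidates:
        buckets[tuple(score(guess, word))].append(word)
    # answer with whichever feedback keeps the most words in play,
    # so the game never has to commit to a single secret word
    pattern, survivors = max(buckets.items(), key=lambda kv: len(kv[1]))
    return pattern, survivors
```

every response it gives is consistent with all the surviving words at once, which is why you can never catch it cheating.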
Sounds like you can use the cheating against it by guessing lots of words with common letters so it excludes all of them.
So if it's "dated" or "dater" or "dates" I could guess "rates" to eliminate both of the final two.
That was fun, managed a lucky win in 5 by striking out on my first two guesses and locking the game out of vowels
Wow, I just tried Wordle and found it impossible to figure out. Even the instructions are confusing. It says "The letter I is in the word but in the wrong spot" but the word is PILLS and the I is in the right spot, so what does it mean by "wrong spot"?
Just tried it. Green tile means "right letter, right place". Yellow tile is "right letter, wrong place". So for "pills", the word does have an "i" in it but not there. Instead of PILLS, it could be ELIDE or IGLOO or other words containing "i".
The instructions are pretty confusing, I agree. What it means with the "PILLS" example is that the secret word has the letter "I" in it but at some place other than the second spot. So the secret word could be "IGLOO" (since it has an "I" in a different spot).
It's a similar idea to the boardgame Mastermind, if you've ever played that.
I only know Mastermind the tv show, and that not very well.
So what about the other letters in PILLS?
So after you guess a word, all the letters turn either green, yellow, or grey. Green means that that letter is in the secret word you're guessing (the eponymous "wordle"), in that same spot. Yellow means that that letter is in the secret word you're guessing, in some other spot. Grey means that that letter is not in the secret word.
In the instructions, they're trying to just illustrate that concept, so they show most of the letters in white, but that's just to highlight the one letter they're talking about. They're not telling you about the other letters in "PILLS", but you can imagine that they're all not in the secret word.
Suppose the secret word is "WRITE" and your first guess is "WEARY". The "W" would become green, since it's in the correct spot. The "E" and "R" would be yellow, since they're both in the secret word but in other spots. The "Y" and "A" would be gray, since they're both not in the secret word. You'd then want to think of another word that starts with "W" and contains "E" and "R" - so you might think of "WROTE" next. All of the letters but "O" would become green while "O" becomes grey, so you'd be able to guess "WRITE" for your final guess.
That's very clear; thank you. Now all I need is to find a web browser that will display the colors. I have some strange setting on my computer that overrides such things.
One more thing-- knowing that a letter is in the word, or in a place in the word, still leaves the possibility open that the letter appears more than once.
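In case it helps to see the rules pinned down, here's a minimal sketch of the feedback logic as described above (my own reconstruction, not the site's actual code), including the repeated-letter case: each letter of the secret word can only "pay for" one letter of the guess, with greens claimed first.

```python
def score(guess: str, secret: str) -> list[str]:
    # start everything grey, then upgrade to green/yellow
    result = ["grey"] * len(guess)
    unmatched = []  # secret letters not already matched green
    for i, (g, s) in enumerate(zip(guess, secret)):
        if g == s:
            result[i] = "green"
        else:
            unmatched.append(s)
    for i, g in enumerate(guess):
        if result[i] != "green" and g in unmatched:
            result[i] = "yellow"
            unmatched.remove(g)  # each secret letter pays for one guess letter
    return result

# the WEARY-vs-WRITE example from above:
print(score("weary", "write"))  # ['green', 'yellow', 'grey', 'yellow', 'grey']
```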
It's a shared experience. I somehow enjoy my favorite song more on the radio, half from the serendipity of it showing up unexpected, and half from the knowledge that a bunch of other people are listening.
Related: I teared up watching reaction videos of fans tearing up when [spoiler] in the S2 finale of The Mandalorian [it was a well done, tasteful, big nostalgia]
https://astralcodexten.substack.com/p/theyre-made-out-of-meta
I assume that is your twitter account.
This is interesting– so you can sense when someone you're interacting with is "on something" like SSRIs or Adderall or antipsychotics, although you can't necessarily identify the specific drug?
I've known several people on therapeutic doses of psychiatric medication, and I've never been able to tell– even after they've told me about it, I haven't perceived anything distinctive that I associate with the drug. The only exception I can think of is one friend who had been misdiagnosed and was taking drugs for the side effects of the drugs for the side effects of the drugs for the condition she didn't even have... after she got off everything she got noticeably more normal. Now she's on a bog-standard SSRI (and nothing else), and still seems normal.
Anyway, I wonder if the "character mouthfeels" you're perceiving are due to something else. On the other hand, it seems like it would be possible for someone to have a sixth sense for what meds other people are on, and this would be very cool, and I'm also curious if anyone here has that faculty.
Handing your kid an iPad during dinner at a restaurant would be seen by many Menlo Park mommies as a pretty low-class thing to do. I think more affluent parents worry over their kids a lot, and tech is just one avenue they explore. But the public schools are using a lot of 1:1 iPads etc., so parents are not united enough in their dislike of screen time to actually stop schools from using tech.
I agree. I don't think it has to do with parents working in Silicon Valley, just that they have money and are higher class (or wish to be). Those kinds of people spend a lot of time worrying about parenting, and they often have fewer kids as well. When you have one, it's easier to not use screens than if you have three and you really need to shut a couple of them up for a few minutes.
for online reading, i'd recommend https://plato.stanford.edu/ -- just read stuff on anything that catches your interest (or check the "what's new" link if you don't have any ideas) -- most entries there have the character of analytic philosophy.
Probably the Very Short Introduction series:
https://www.veryshortintroductions.com/view/10.1093/actrade/9780198778028.001.0001/actrade-9780198778028
This is one of the few courses I took where it felt like a textbook and professor connecting various essays really paid dividends beyond just fumbling through some great authors or papers in the field on my own. I feel this way partly because a lot of the landmark pieces stand on their own, but if read in sequence are often a series of rebuttals or extensions of the immediately previous luminary's work, and the writing is a bit topically dense, asking you to shuffle around your core beliefs quite a bit, so it's not always obvious how and when to zoom out and track the broader arguments.
I enjoyed the writing itself though, full of fantastical memorable hypos, and sometimes subtle jabs about "profligate ontologies" so the original works themselves are also worth reading. A textbook, ranging from Frege or Russell up to Quine or Kripke, rich with selected essays, might be your best bet.
Yes, Very Short Introductions are a great series. I just finished the one on Ancient Philosophy, and the one on Philosophy. Will try this one next.
Here's a well-written book review by Jerry Fodor, of a book about Saul Kripke, that goes into a bit of the history of analytic philosophy [https://www.lrb.co.uk/the-paper/v26/n20/jerry-fodor/water-s-water-everywhere].
Here's a pdf I made for a linguist friend once, "Some Landmarks in Philosophy of Language," of which a lot of early analytic philosophy was [https://drive.google.com/file/d/1QRyb7seMy1jETdFqE3u2xY590g3LGtis/view?usp=sharing]
Kripke's "Naming and Necessity" is an influential, and comparatively accessible, classic. His theory of how names work is that they're "rigid designators," not "descriptions," a difference in e.g. how you think they apply to counterfactual hypotheticals. (E.g. pretty much all I know about John Adams is that he was the second president of the US, but even then I'm not sure I remembered quite right. If I say, "I'm not sure if the 2nd president of the US was the 2nd president of the US," that's strange to say. If I say, "I'm not sure if John Adams was the 2nd president of the US," that's a normal thing to say. Why? Is it because "John Adams" rigidly designates a particular guy, across counterfactuals, and is not just a description synonymous with "the 2nd president of the US" even when I'm the one referring to him?)
Note that beyond philosophy of language, most philosophy taught in anglophone western Universities is considered "analytic philosophy"; it's more a style than a topic (trying to use clear precise language, writing arguments that could easily be formalized; compare to the more literary, playfully ironic style of continental philosophy); e.g. moral philosophy can be analytic, metaphysics can be analytic, philosophy of mind can be analytic, etc. (I guess it's also a sociological category of who reads and cites each other, in a pretty unified movement that started with Frege, Russell etc. and the people they influenced; e.g. the style of Aquinas is pretty close to the style of analytics, but he was before them).
I'd say the best introductory book to philosophy in the analytic style is "Just the Arguments: 100 of the Most Important Arguments in Western Philosophy," since it's bare-bones about the arguments, and even gives sketches of their formalization (Premise 1, Premise 2, etc.).
for specific sub-fields I might have better recommendations for books. E.g.
- philosophy of science: Peter Godfrey Smith's "Theory and Reality"
- philosophy of cognitive science: Joel Walmsley's "Mind and Machine"
- philosophy of biology: Kim Sterelny and Paul Griffith's "Sex and Death"
Also there's a good-looking series by Princeton University Press [https://press.princeton.edu/series/princeton-foundations-of-contemporary-philosophy]
Good resources also are
- the Stanford Encyclopedia of Philosophy [https://plato.stanford.edu/entries/analysis/]
- the Internet Encyclopedia of Philosophy [https://iep.utm.edu/analytic/]
A classic, and really short, paper in analytic epistemology (phil. about knowledge) is Edmund Gettier's "Is justified true belief knowledge?"; since like Plato we'd basically all assumed it was; then Gettier gave a handful of obvious counterexamples and everyone was shook. Good example of the attitude and common writing style.
(my favourite similar example: someone tweeted, "When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.")
Annoying of me to focus on my disagreement with the throwaway joke when your string of comments is really useful and great... BUT:
I don't think the example works. Unlike a barn by the side of the road, most people wouldn't have a belief (and if they do, it's not justified) that what appears to be behind you on zoom is really what's behind you, because artificial background images are so common.
It depends on your background image. If your background image is a stationary wall with a bookcase, it may well be that a good number of people withhold judgment, and that the ones that didn't should have, given what they know.
But if the background looks like an ordinary living room, and about ten minutes in, what looks like a husband walks by on the way to the kitchen, and the lighting gradually changes over the course of an hour as the sun moves, then I think most people wouldn't actively attend to it, would believe it is the real background and not an artificial one, and would be justified in believing that it is the real background.
True. I've only encountered anything like the former case. If you had an animated background of the sort you describe in the latter example, that would seem like a Gettier counter!
I don't think that 1 and 2 are self-evident. I think that "utility is whatever it is we ought to maximize" is something like a definition (which does require the assumption that there is something we ought to maximize, though standard decision theoretic representation theorems lead naturally to the idea that any sort of goal-directed action must act as if there is something one ought to maximize).
We need something substantive to get to the idea that if a person desires something, then that is a reason for us generally to try to bring it about (one that can be traded off against reasons to do incompatible things). But once we have that, preference-satisfaction utilitarianism of some sort or other follows pretty quickly.
Getting from preference satisfaction to the hedonic concept is then inferred from the fact that most people generally prefer happiness to unhappiness, and most people generally prefer the lack of suffering to suffering, but there are counterexamples in both cases. (Someone might prefer the existence of great art so much that they prefer suffering and producing great art to not suffering and not producing art. Someone might prefer knowledge, or a good life for their children, strongly enough that they prefer this even if it leads to unhappiness for themself.)
I think there's a useful way to characterize preference-satisfaction utilitarianism as just the statement that what "we" have reason to bring about is just all and only the things that each individual prefers. An individual having preferences/desires means that the individual has a reason to do things to bring about the satisfaction of those desires. So if we can argue that the reasons a collection has are all and only the reasons had by the individuals that make up that collection, then I think we are there.
I'm not sure what 'self-evident' means. The way I think seems to be determined in large part by the culture I grew up in, the people and ideas that came before me and I was exposed to. Are there any absolute truths? IDK maybe the golden rule. (Do unto others as you would have others do unto you.) (The 'self evident truths' part of the Declaration of Independence is a bit of feel good prose.)
1 and 2 are derived from our moral intuitions.
3 is tautological. "Ought" means doing actions that lead to outcomes where our values are most satisfied/maximized. Utility is an abstraction of our values. By tabooing both words we end up with: "Maximizing our values maximizes our values."
I'm not sure what "self-evidience" is. I think it's clearer to just speak about our moral intuitions.
Someone's moral intuition about natural rights differs from utilitarian moral intuition in the sense that it's more complex. You can reduce natural rights to utility maximization; that's the whole point. So the correct framework is not to see utilitarianism as an opponent of natural rights, but as a gears-level model, a transparent box, from which we can deduce natural rights.
Utility maximization isn't a separate value. As Kenny Easwaran mentions, "utility is whatever it is we ought to maximize". It's an abstraction of our values, and if people think that some of their values are not captured by it, either they are confused or the utility function is poorly defined. Your example rings like confusion to me, but more details are required to be sure.
There's a popular argument about #2 involving the philosopher getting kicked in the gonads. I find it quite persuasive, though informal. Philosophy (moral or otherwise) should be grounded in reality.
The rest is, I admit, debatable.
#3 can be derived from a handful of axioms, all / most of which seem attractive or desirable on their face. A search on "Von Neumann-Morgenstern axioms" should do the trick.
This is not correct.
The VNM Theorem says, informally "If an agent has 'rational' preferences over outcomes, then *those preferences* can be summarized by a utility function which the agent will prefer to maximize in expectation." (The expectation is with respect to the uncertainty the agent has about the world.) So, VNM-utility is merely a numeric summary of an agent's preferences. On the other hand, ethical-utility is supposed to be a direct measure of moral goodness (which, for example, it might be wrong to maximize in expectation).
I believe Harsanyi has some influential work on the connection between the two types of utility. Maybe under certain conditions they can be shown to be equivalent-- but I haven't gotten around to reading about it yet. Another relevant work (dense, but very cool) might be Lara Buchak's "Risk and Rationality."
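For reference, the informal statement above compresses to something like this (standard textbook form, written from memory, so check a real source before relying on it):

```latex
% VNM: if a preference relation \succeq over lotteries satisfies
% completeness, transitivity, continuity, and independence, then
\exists\, u : \text{outcomes} \to \mathbb{R}
\quad\text{such that}\quad
L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]
% with u unique up to positive affine transformation u -> a*u + b, a > 0.
```

Nothing in that says u measures moral goodness; it's just a bookkeeping device for the agent's own preferences.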
If you buy the axioms underlying VNM, then you've got yourself a utility function -- one with the nice feature of being useful in making decisions under uncertainty (i.e., the alternative that maximizes the expectation of utility is the most preferable and thus "ought" to be chosen). But if you don't buy the axioms - perhaps because you're facing an ethical decision and you're a deontologist - then that utility function ain't for you, and it won't tell you much at all about what to do.
If we're talking ethics, and we're consequentialists, VNM will probably do fine; there's no distinction between "ethical-utility" and "utility." If we're deontologists, having a utility function is irrelevant. If we're "rule" deontologists, we'll need something like a utility function and a lot of evidence.
Personally I don't feel words like "happiness", at least the way most people use language, even have a precise enough meaning to come to absolute conclusions through building chains of reasoning on top of them. In my view "happiness" is usually used to describe mental states that people want to stay in, and suffering the states they prefer to avoid.
For value systems, I don't believe there is an absolute basis. You could have a cult that believes everyone should suffer and die, and have its value system be internally coherent.
I think humans are social creatures that usually have some instinctual concern for the well-being of others, and also see their own self-interest in structuring society around certain types of mutual cooperation and behavior, and this is reflected in their value systems. But I wouldn't put any of them as totally self-evident or grounded in some absolute truth.
I have a self-teaching textbook simply called "Russian Tutor: Grammar and Vocabulary Workbook" by Ransome and Tomaszewski
In terms of research into language learning I recommend Stephen Krashen. He has videos and papers.
The summary is to spend 500 hours letting your brain absorb content that is just about comprehensible but above your current level.
Actively rationally thinking about it isn’t the key - letting your neural networks pattern match is. Content with context - so video of real situations, films, podcasts, childrens books with pictures, real world situations is best.
Try to overcome the adult need to be correct or socially acceptable. Take in as much content as you can and build practices so you enjoy doing that (so videos and films and podcasts that you like)
That's very interesting. So do you mean that we should focus on this sort of passive consumption, and completely abandon more conventional learning? Or just add the consumption on top? I can't imagine how I could ever actually learn a language without at least some attempt to actively learn and remember rules and vocabulary.
I think that you need both, or that it at least speeds things up tremendously. Lots of input helped me a lot, but if I'd never looked up any words it would have taken me way longer.
I think in theory you don’t need both - young children can learn a new language just by listening then suddenly speak fluently after six months or a year (there’s a talk where Krashen gives an example of this).
And obviously we all learn our first language this way.
However for adults I think you’d get bored and looking up and doing some rote learning of key vocabulary will help give you access to more content.
So yeah - I’d make Anki word lists of the words in the Chinese cartoons and then rewatch them. That kind of thing.
Interesting. My only second language that I've ever been at all proficient in was from both doing a Duolingo type app and living in the country, which I suppose is a form of this mixed approach, so that holds up. I'll have to fire up the Chinese cartoons.
You learned your first language without actively trying to at all.
2 points:
1) That's not really true, is it? Children are actively and deliberately taught a lot of their native language. Children ask what things are called, get corrected by adults, and are formally taught much of their language in school. Children do learn more of their language from that kind of passive absorption and imitation than is typical for adults, but:
2) Children famously have more capacity for language acquisition. It seems intuitive to me that the difference in capacity would be especially pronounced when it comes to this, the only method of language acquisition young children have at all (since you need some language already before anything can be actively taught). Relatedly, children spend *all* of their time doing this. So even if it were much less efficient than a more active, structured learning process, we could still expect it to work eventually. But it seems plausible that, with one language already mastered, we can circumvent this inefficient process.
Perhaps my perspective is less common than I assume, but the idea of anyone learning anything important about their native language in school seems silly to me
Well they do in western English speaking cultures that I'm aware of. I was learning aspects of my native English until secondary school!
I had the good fortune to learn Standard English as my first language, so the rules I learned in school all seemed pretty obvious. I think the main benefit of that part of my education was to make me realize that there were rules I was already following without realizing. (Non-standard forms of language have rules too, but you generally won't have anybody explicitly telling you what they are.)
Adults do tend to deliberately teach language, but if you have neglectful parents who don't bother you'll still learn to speak.
The only thing I've ever found that works for me is tens of hours of input. I supplement with grammar lessons and flashcards, but honestly the biggest hump for me is always turning "random foreign language sounds" into "words and sentences, just not in a language I know." Overcoming that barrier requires hearing a ton of the language, and preferably with content made for learners that is slow and clear. "Dreaming Spanish" or "Comprehensible Japanese" are good examples of the class of content I mean.
Unfortunately, I don't have any content recommendations for Russian or German. I'll get to Russian eventually when I decide that I'm functionally fluent in a Romance language and an East Asian one...
Just seconding Tom below, I think Paris is a win condition for YIMBYs.
I think the European cities are actually quite a bit closer to YIMBY than most American cities. They don't have all the requirements of minimum lot sizes and mandatory off-street parking that even most of New York and San Francisco have (let alone every single other US city). I think most YIMBYs are happy with a moderate amount of preservation rules, though the impact of those rules on an older city is higher than the impact of those rules on a newer city. Probably the biggest way in which European cities are less YIMBY than American cities is that skyscrapers are illegal throughout the entire core region in many such cities, while in the United States they are usually legal in a single central square mile.
Vancouverism (https://en.wikipedia.org/wiki/Vancouverism) is probably the only form of residential density that is easier in North America than in Europe. I don't know if it actually achieves density greater than that of Euroblocks (http://urbankchoze.blogspot.com/2015/05/traditional-euro-bloc-what-it-is-how-it.html), or if it achieves equal human density while allowing more parking than Euroblocks. But outside of Vancouver, you don't see that large of an area of Vancouverism anywhere (there's maybe a square mile in Toronto, Austin, Chicago, and Seattle, probably a larger area in New York, and only sporadic bits elsewhere).
Note that in many places where skyscrapers are illegal, it's for practical reasons - e.g. Paris was originally (in Roman times) a mining city, and the ground beneath the city centre is basically hollow; it simply can't support the weight of buildings above a certain height.
Average rent for a one bedroom unit in Paris, according to a cursory google search, is about $1k/mo or $1.4k in "city centre." I would not say that this is "extremely expensive." That's cheaper than Chicago, and a lot cheaper than NYC or SF.
Paris has a population density of about 20k people/km^2, Barcelona sits at 16k, and Strasbourg appears to be quite un-dense at 3.5k. For American comparisons, SF is 6.6k (still in km^2) and NYC as a whole is around 11k, though Manhattan alone hits about 27k. Fun fact: San Francisco is America's second-most dense major city.
As an American YIMBY, I just want our cities to be dense. America has plenty of land for suburbia; there will always be room for a suburb somewhere if enough people want to live in one. If a city's limits grow, the suburbs will move outwards and take over rural land. There appears to be a massive amount of handwringing that the suburbs will disappear when that's frankly unimaginable. I just want the density of a city to be determined by the free market according to people's preferences. If some developer builds 100 sq ft units, well, they will either find tenants or they won't. If they don't find tenants, they will lose money, and the next developer will build larger units. That's the worst possible outcome of letting developers build as dense as they can.
Should Paris be more dense? I will leave that to the French. I think they did a wonderful job. Paris is a gorgeous city, prices are reasonable, and transit is convenient. In America, I think we have artificially handicapped ourselves from building cities like Paris by restricting density.
> Should Paris be more dense? I will leave that to the French.
As a French person, I think we should split off the "historical center", which should be preserved at all costs, from the "place where people live and work", which should be relatively close and very dense. I think you should also be a bit more precise about what you mean by "density". Paris-proper density is 21k/km², for 2 million people over 105 km². Urban Paris is 3.8k/km², for 10.7 million people over 2,800 km². Metropolitan density is 690/km², for 13 million people over 18,900 km². I'll leave it to the Parisians to debate what counts as being in Paris, and how close you have to be to get the opportunities and salaries that come with living in Paris. But Paris "I can walk to work" and Paris "my job is a 1h30 commute away" are two very different cities.
If you live in Paris (area or intramuros) your job may be far away but chances are still good that you'll be able to do your shopping on foot. And you'll often be able to commute using public transportation if you are on the right RER lines.
I do not know whether that range is accurate or what's meant by city center, but you should also consider that normal people probably earn less in Paris than in NYC or SF. At least that's for sure in my line of work. I lived and worked in both Paris and NYC.
The goal should be that there is NOT a real architectural loss, because the average person views the newer, bigger buildings as the architectural equals of the older, smaller buildings. This is a fairly high bar, but the architecture profession needs to genuinely attempt to clear it. Right now, they don't. Instead, they maintain their ideological commitment to unpopular modernist designs and roll their eyes when the public complains.
Yeah I don't think YIMBY ideology scales beyond the national level. I'm sure there are YIMBYs everywhere, but I view YIMBYism to be a reaction to local deficiencies. When I lived in Iowa, I wasn't a YIMBY, I didn't even know about the movement. The housing stock in Iowa seems fine. I now live in San Francisco, and I see very dramatically the deficiency in housing and the dysfunctional local politics that cause it, and now I'm a YIMBY.
Whether some nation *should* be more YIMBY I think is related to the rest of the nation's politics. For example, in America, the bad housing situation is a big driver of poverty. This might not be as much of a problem in a European country that has a stronger social safety net. Maybe you get priced out of a city, but there is a good government program to relocate your family to some other place that still has a decent school. To the extent that we want to reduce poverty and its related social ills in America, providing better and cheaper housing is, it appears to me, a very effective way at accomplishing that. If there is some European country that really prizes its old architecture and is willing to resolve the social cost of having expensive housing in some other way, I don't think there's inherently something wrong with that, it's just a different set of values.
This is a good point, YIMBY as a political movement is directional, and what's obviously trivially true in an extremist vetocracy like San Francisco becomes much more debatable in a country with functioning zoning laws.