Google DeepMind is hiring research scientists and research engineers for its AI Safety team, which focuses on alignment, evaluations, reward design, interpretability, robustness, and generalisation. Apply here: https://boards.greenhouse.io/deepmind/jobs/3049442. Also, please spread the message if you can.
I have MRSA. After one very painful outbreak of multiple abscesses, a 10-day script of doxycycline, and an incision and drainage, it went away. It came back within a few weeks, with a small pimple tripling in size and growing painful enough to keep me awake at night. I got another script for 10 days of doxy, but am having an absolute nightmare of a time getting a referral to an infectious disease specialist, as I don't have a primary care doctor and was only treated at urgent care. I do have a derm appointment forthcoming, however. Am I just screwed? Is this going to keep coming back and ruining my life? I worked in a hospital as an aide and am thinking about quitting and dropping out of nursing school after this.
Knew someone who got an MRSA infection at the site of her incision for gastric bypass surgery. It was quite a large, badly infected area, and for a while a visiting nurse was seeing her every day. The nurse told her that if she could get to the ocean and just soak in it for a period daily, that would be the best possible treatment. Don't know how valid it is, but worth knowing about and trying.
About seeing the derm. Here is how to get in fast if your appt. is a ways off: Get on a wait list, if they have one. Then, whether they do or not, call every morning about 15 mins after they open and ask whether they have any cancellations. Be friendly and chatty, not pushy -- like "haha, yep, it's me again, how're you doing this morning? Just thought I'd check and see if you'd had any cancellations today." Calling people on a cancellation list is a pain, because most people don't answer, and if they do, most can't come in on short notice. If you establish yourself in the staff's mind as a nice person who *really* wants to come in soon, and will save them the trouble of calling people on the list, you'll be able to snag a vacant time slot quickly.
If it's any solace, I was diagnosed with MRSA about 15 years ago. I was getting these super-pimples, very painful, and they grew much faster than ordinary skin infections, and about 40% of the time they turned into abscesses which needed to be drained and so on.
In short I can't recall exactly what I was given, but I underwent an extended course of antibiotics, and I've been (AFAIK) MRSA-free ever since.
This is of course anecdotal -- I may just be lucky; the diagnosis may have been incorrect, etc. But I can absolutely say I don't miss freaking out every time I had the slightest skin imperfection. I would counsel you to be an annoying squeaky wheel with wherever you get health care, and insist on a referral to a qualified specialist. The dermatologist should be able to point you in the right direction.
Good luck! I assume you are US-based (these sorts of "can't get there from here" medical bureaucratic f-ups seem to be uniquely American). Just insist and keep calling. Doctors have a duty of care, and some of them at least take their obligations seriously. You just need to find the right one.
Seeing in our host's latest post [ https://astralcodexten.substack.com/p/links-for-may-2023 ] a link to an interesting article on desalination, I wondered if any reader can help with a question I have on extracting salt from sea water.
I once had a challenging online exchange with someone who disputed my contention that optimal techniques for extracting salt from saline solution such as sea water could be different to those for extracting pure water from the same. It seemed I was a "cretinous mong" for assuming there could possibly be any difference.
When I pointed out he might be correct for 100% separation, ending up with a pile of salt on one side and distilled water on the other, but the same does not follow for partial separation of either one or the other, the consensus from other participants in the discussion was that he was the mong! But I digress.
I had read that a brilliant technique had been discovered for partially extracting salt from seawater by adding the water to a mixture of a pair of organic compounds in which the solubility of the salt depended on small temperature differences of the mixture. Some of the dissolved salt but none of the water would mix with the compounds, and the water formed a separate layer on top, as if the organic mix was oil.
Changing the temperature (I forget whether up or down, but probably the latter) by only a couple of degrees reduced the solubility, so that some salt would precipitate out of solution and could be filtered out. Then simply skimming off the salt-depleted sea water, adding a fresh supply, and cycling the temperature again meant the process could be repeated.
I forget the name of the compounds though. As organic molecules often do, they had long names, such as poly-di-methyl-tetra-thingummy-jig, and out of idle curiosity I would love to be reminded what they were, not that I plan any salt extraction myself!
Throwing this out there for philosophy fans, math mavens, and those interested in schizophrenia. I just finished a short novel by Cormac McCarthy, “Stella Maris”.
It’s a very engaging, short (200 pages) read, styled as a series of conversations between a young woman who is a mathematical genius and her psychiatric counselor.
I won’t try to sketch the plot beyond saying she checked in with a toothbrush and $200,000 in cash.
It’s really pretty good stuff. Cormac McCarthy can write with the best of them, and the ongoing conversations are pretty intriguing, name-dropping Wittgenstein, Schopenhauer, Pascal, Jung, Gödel, von Neumann…
Our friend Vinay Gupta is still going, still involved in crypto, and still enlightening the ignorant, and I am genuinely pleased to hear about him, courtesy of an unexpected link from the drama-lamas:
I was a little worried given that we'd heard nothing more about Luna or from the man himself, but it seems he was simply going deep with his giant footprint. I wish him well!
It all works out well. First time she's gone viral. Also, she gave the emo Elmo cake to the parents for free.
This resonates with what went wrong with the Japanese moon lander-- the radar report seemed weird, so the lander started ignoring everything from the radar and crashed.
The cake is much less consequential, but the baker was surprised to hear that the cake was for a fourth birthday, and she smooths that over, thinking that maybe the four-year-old is a Wednesday Addams fan. Fortunately, she has enough flexibility to ask about the theme of the party-- Sesame Street. This is why humans will defeat AIs. (Just kidding.)
*At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
*Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
*Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Be Cautious: Abuse in LessWrong and rationalist communities in Bloomberg News
It's interesting to contrast the level of specificity between the first and the second halves of your quotes. "Unspecified man did an unspecified thing to an unspecified woman; she made an unspecified complaint to the police, with an unspecified outcome." vs "In year A, a person B, working at C, donated $D to organization E."
What does "one rationalist man" even mean? Is it someone important, or just a random guy who maybe reads LW or ACX and/or maybe participated at some public rationalist event? Does participating in this open thread make someone "a rationalist man / woman"?
"She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police."
The euphemism treadmill has gotten so bad I have no idea what that is intended to mean. It could cover everything from "tried to kiss me when I wasn't in the mood" up to full-blown rape.
Reading this post plus some other related Reddit threads got me wondering: do non-aphantasic people feel that they get any tangible benefits from mental visualization, or is it basically just a form of entertainment? Sasha seems quite eager to "cure" what he views as a mental disorder, but I am aphantasic and to my knowledge I've never encountered any difficulty as a result. Like many other aphantasics I didn't realize that anyone could have mental visualizations until recently - I thought allusions to this ability were just a weird figure of speech.
As far as I can tell, the only practical impact that aphantasia has on me is that I tend to skim the imagery-heavy parts of novels because I don't get anything out of them. But I don't have trouble e.g. doing spatial transformation problems or planning move sequences in board games.
Does anyone with a strong ability to form mental visualizations/imagery feel that it plays an important role in any types of tasks or reasoning, and if so which ones?
When I was a kid, my parents would often chastise me for looking at the ground. I often looked at the ground, or a wall, because it provides a flat, blank canvas across which I can project my thoughts. If I can't look at such a surface, thinking becomes slightly harder. If I have to look at an irregular texture (especially someone's face), thinking becomes much harder. On rare occasions, the imagery is so strong that I forget what's in front of me. The resolution and saturation are very weak, but the opacity can be significantly greater than 0%.
The tasks where I don't use this are when I'm A) memorizing lists and B) counting integers. I use imagery for nearly every other category of system 2 reasoning. For example, trying to make sense of Bayes' Theorem was difficult until I invented for myself an "office building" model, which consists of 3 floors connected by conic sections, and where each circle of the conic section corresponds to the numerator and denominator of P(), and multiplication/division can move the circles to different floors via something that resembles vector addition. It would be easier to explain with an animation, but I've never seen one in the wild.
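(For reference, the formula being visualized is presumably the standard statement of Bayes' Theorem -- my transcription, not the commenter's notation -- with the "circles" corresponding to the probabilities on the right-hand side:)

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```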
It's hard for me to say whether it's absolutely necessary to my mode of thought. I imagine aphantasics might use foreign strategies. I lean toward "yes, it probably makes certain things easier than they would be otherwise", but am open to being wrong.
I'm not aphantasic, but neither am I one of those (lucky?) people who has the ability to conjure up strong/vivid mental imagery. I can see things in my mind's eye but they only appear in flashes and they're sort of... ghostly, I guess? It's like a mix between an image and a concept. They're not vivid at all. And I'm not even sure exactly _where_ they appear. I'm tempted to say "above/behind my eyes" but it's not really that. It's not really in any particular location.
(One interesting detail that I discovered as a child is that I also don't have complete control over what I'm visualizing. I remember attempting to visualize a chair rotating clockwise and then attempting to visualize it rotating counter-clockwise and being frustrated that it kept switching back to clockwise on its own. Although now I can hardly even keep the image in my mind long enough for a single full rotation.)
This is very much in contrast to my dreams, where my mind often seems to come up with very vivid and well-defined but completely fabricated locations, and I can recall them in detail even after I wake up. Occasionally I'll even revisit a previously-dreamed-of location and will think "oh, this place again", sometimes literally years later.
My weak visualization skills do occasionally come in handy, in particular when trying to solve simple geometric problems. For example, it's not too difficult for me to visualize a circle with an angle marked from the center and the associated sin/cos/tan lines. But if I were to, say, attempt to visualize the process of adding two 2-digit numbers together using the standard column method, I wouldn't be able to keep the actual numbers stable in my mind's eye long enough for it to be of any use, let alone modify the image as I calculate sums.
My guess is that having a strong visualization ability would come in handy as an artist. I was discussing this topic with a friend of mine (who has aphantasia) whose partner is an artist. He said that according to her, when she visualizes something, she sees it in full and vivid detail (e.g. an apple isn't just a reddish blob, it has all the shading, varied color patches, specular reflection, etc. as a real apple does).
Do people with aphantasia dream? For me the imagery in dreams is the main value of mental imagery, which is purely an aesthetic value. But I would think everyone would have to dream, whether or not they remember them, since our visual world in waking life is mostly also dreamed up by our brains.
Personally I do dream, and I guess I do get some slight mental imagery while dreaming. My dream imagery doesn't have any color nor does it have much detail. But I do sometimes get a general "outline" of my surroundings in the dream, e.g. the shape of a building or the edges of an object.
Mostly my dreams are conceptual - I have a non-visual awareness of what is happening in the dream as it progresses, kind of like what happens in my head when I read a fictional story.
Overall my dreams play a negligible role in my life and I forget them immediately unless I really try to hold them in consciousness. But I know many people whose dreams affect them a lot (for better or for worse). To your point, I would guess this is strongly correlated with how vivid their mental imagery is when dreaming.
Hey, does anyone with a strong math background have any potential connections or ideas on this?
Suppose that I have n matrices A_1, …, A_n ∈ R^(m×m) with m ≫ n. Can I find n new matrices B_1, …, B_n ∈ R^(n×n) that have the same 3-way cyclic traces:

∀ i, j, k: Tr(A_i A_j A_k) = Tr(B_i B_j B_k)?
By analogy, if I had n vectors v_1, …, v_n ∈ R^m, it would be easy to construct new vectors u_1, …, u_n ∈ R^n that have the same inner products (by choosing an orthonormal basis for the span of the v_i and then writing each v_i in that basis). Parameter counting suggests there should be matrices B that match a given set of cyclic traces (we have n^3 parameters to pick and only ~n^3/3 constraints), but I have no idea how you could pick them "naturally" and don't have any reason beyond parameter counting to think they exist.
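(To make the vector analogue concrete: factor the Gram matrix G = V V^T, and the rows of any square root of G have the same inner products. A minimal numpy sketch -- the function name and the quick check are mine:)

```python
import numpy as np

def compress_vectors(V):
    """Given n vectors (rows of V) in R^m with m >= n, return n vectors
    in R^n with the same pairwise inner products.

    Any factorization G = U U^T of the n-by-n Gram matrix G = V V^T
    yields rows of U with identical inner products."""
    G = V @ V.T                         # Gram matrix, shape (n, n)
    w, Q = np.linalg.eigh(G)            # eigh handles rank-deficient G
    w = np.clip(w, 0.0, None)           # clear tiny negative round-off
    return Q @ np.diag(np.sqrt(w))      # rows are the new u_i in R^n

# Quick check
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 100))           # n = 4 vectors in R^100
U = compress_vectors(V)
assert np.allclose(V @ V.T, U @ U.T)    # inner products preserved
```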
I think I figured this problem out. Basically the idea is to make the B_i's block matrices where only the top i-by-i block of B_i is nonzero. (By analogy if I construct the v_i by Gram-Schmidt then only the first i entries of v_i will be nonzero for each i.)
NOTA BENE: Since this question was originally asked by none other than Paul Christiano (of OpenAI, FHI, the Alignment Research Center, et al.), I have to ask if there is a theory-of-AI-related motivation behind this question.
In more detail (the exact calculations are pretty complicated; I might repost this with more details on MathOverflow, and it'd be my first post ever there if so):
0. Start the algorithm by setting B_1 to have Tr(A_1^3)^(1/3) in the top left corner and zeroes elsewhere (so that Tr(B_1^3) = Tr(A_1^3)).
1. For each iterative step i, where 2 <= i <= n:
1A. Choose the top (i - 1)-by-(i - 1) block of B_i by using the constraints Tr(B_jB_kB_i) = Tr(A_jA_kA_i) for 1 <= j, k <= i - 1; this should give you a system of (i - 1)^2 linear equations, which should have a unique solution you can find with Gaussian elimination/the other usual linear algebra tricks. (By the way, this system of equations has a block-triangular structure since the B_j do; you can basically find the j-by-j subblock for j from 1 to i - 1 before finding the (j+1)-by-(j+1) subblock.)
1B. Choose the entries of B_i in the i-th row AND the entries in the i-th column, except the (i, i) entry which will be chosen in the next step, according to the constraints Tr(B_i^2B_j) = Tr(A_i^2A_j) for 1 <= j <= i - 1. This should give a system of i - 1 bilinear equations in 2(i - 1) variables, so I think if you choose the free column appropriately then the free row has a good solution (here the free parameters appear), or you could choose the free row first and solve for the free column instead.
1C. Lastly, choose the (i, i) entry of B_i according to the constraint Tr(B_i^3) = Tr(A_i^3). This should give you a depressed cubic in one variable; once you find the coefficients (the bulk of the computation in this step), you can solve it very cheaply using your choice of either Cardano/Vieta/Lagrange's algebraic method, trigonometric/hyperbolic functions, or Newton's root-finding algorithm.
There may be some hiccups due to the linear systems in step 1A being singular or the cubic equations in 1C having either one or three real solutions, but between all the free parameters and the fact that you can reorder the A_i's for free in any of n! possible ways, I think that this algorithm should work for any set of A_i's in general position (i.e. if there are no magical algebraic cancellations).
If all this amazing stuff fails, you can fall back on the ridiculously overpowered algorithms of either *gradient descent* (using \sum_{i, j, k} |Tr(A_iA_jA_k) - Tr(B_iB_jB_k)|^2 or something like that as the loss function), or *homotopy continuation* (although be warned that you will probably get a ton of complex solutions, and an even larger number of tracked paths that diverge to infinity).
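(A minimal numpy sketch of that gradient-descent fallback; the function names and hyperparameters are mine, so treat it as an illustration of the loss above, not a tuned implementation. The gradient uses the identity d Tr(B_i B_j B_k)/d B_p = (product of the other two factors, in cyclic order)^T for each slot that B_p occupies:)

```python
import numpy as np

def cyclic_traces(Ms):
    """Return the tensor t[i, j, k] = Tr(M_i M_j M_k)."""
    n = len(Ms)
    t = np.empty((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                t[i, j, k] = np.trace(Ms[i] @ Ms[j] @ Ms[k])
    return t

def fit_Bs(As, steps=20000, lr=1e-3, seed=0):
    """Gradient descent on sum_{i,j,k} (Tr(B_iB_jB_k) - Tr(A_iA_jA_k))^2
    over n matrices B_i of shape (n, n). steps/lr are arbitrary
    illustration values, not tuned."""
    n = len(As)
    target = cyclic_traces(As)
    rng = np.random.default_rng(seed)
    Bs = [rng.normal(scale=0.5, size=(n, n)) for _ in range(n)]
    for _ in range(steps):
        resid = cyclic_traces(Bs) - target
        grads = [np.zeros((n, n)) for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    c = 2.0 * resid[i, j, k]
                    grads[i] += c * (Bs[j] @ Bs[k]).T   # slot i
                    grads[j] += c * (Bs[k] @ Bs[i]).T   # slot j (cyclic)
                    grads[k] += c * (Bs[i] @ Bs[j]).T   # slot k (cyclic)
        for p in range(n):
            Bs[p] -= lr * grads[p]
    return Bs
```

(For small n the brute-force triple loop is perfectly adequate, and np.abs(cyclic_traces(Bs) - target).max() gives a cheap convergence check.)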
EDIT: I got Paul Christiano's affiliations slightly wrong (he's not a *cofounder* of OpenAI, although he did work there).
The Hindus came up with this interesting scheme for categorizing things according to their overall tendency. I accidentally independently confirmed the existence of these tendencies.
I believe superego was supposed to represent all authority voices from outside, like parents, teachers, society, priests, gods.... Often they are oppressive, but sometimes they are supportive.
> Within each of these ego states are subdivisions. Thus Parental figures are often either
> more nurturing (permission-giving, security-giving) or
> more criticising (comparing to family traditions and ideals in generally negative ways);
or, using the gender stereotypes, the nurturing mode is typically associated with mothers, commanding/critical mode with fathers. But of course in real life, anyone can do both.
*
That said, in the Hindu model there are *three* forces (sattva, rajas, tamas), all of them unconscious (pulls, not decisions), therefore none of them fits the "ego/parent" in the psychoanalytical trinity. (The psychoanalytical ego would be the part of the mind that responds "yes" or "no" to the individual pulls.) Then again, in the transactional analysis we have:
> Childhood behaviours are either
> more natural (free) or
> more adapted to others.
So maybe we could map rajas to the adapted child, and tamas to the natural child, although this is not a perfect fit (it seems too harsh to call all natural/unrefined instincts destructive, some of them are mostly harmless, or maybe just a little harmful in excess).
Okay, no reason to try too hard to match different models; I guess they are just different ways to cut the same cake. But it is interesting to notice that people from different cultures made similar observations, which suggests they probably reflect something real about the described thing.
"Suppose we train a model to predict what the future will look like according to cameras and other sensors. We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good to us.
But some action sequences could tamper with the cameras so they show happy humans regardless of what’s really happening. More generally, some futures look great on camera but are actually catastrophically bad.
In these cases, the prediction model "knows" facts (like "the camera was tampered with") that are not visible on camera but would change our evaluation of the predicted future if we learned them. How can we train this model to report its latent knowledge of off-screen events?"
"... you could view ELK as a subproblem of the easy goal inference problem. If there's some value learning approach that routes around this problem I'm interested in it, but I haven't seen any candidates and have spent a long time talking with people about it."
Is there anyone here who is experimenting with using AI in fiction or poetry? There are all kinds of ways to do that. Had an exchange on here with Antelope 10 who is doing something along those lines. Anybody else? Or anybody know of web sites, blogs or whatever for people interested in this sort of experiment?
Hacking the Brain: Dimensions of Cognitive Enhancement
"Whereas the cognition enhancing effects of invasive methods such as deep brain stimulation53,54 are restricted to subjects with pathological conditions, several forms of allegedly noninvasive stimulation strategies are increasingly used on healthy subjects, among them electrical stimulation methods such transcranial direct current stimulation (tDCS55), transcranial alternating current stimulation (tACS56), transcranial random noise stimulation (tRNS57), transcranial pulsed current stimulation (tPCS58,59), transcutaneous vagus nerve stimulation (tVNS60), or median nerve stimulation (MNS61)"
I've heard a lot about how we've entered a "Digital Dark Age" since so much early internet content has been deleted. But, knowing what we know about the NSA, hasn't the agency probably been using web crawlers to catalog the whole internet since the early 90s? Isn't there a better-than-even chance that all of it is still saved on servers in some secret underground warehouse?
Unlikely, except perhaps for the non-US stuff. The NSA has a charter to target foreign communications only.
If there were an effort to dredge up the old Internet, I'd expect it to more likely resemble discoveries of old hard drives from companies as they go bankrupt. After that might come NIST. After that, dumps of non-US intelligence agencies, but I expect them to be missing a great deal, and whatever gets released will be generations after the fact.
I don't think you can know that, given the essential secrecy of their work. One of the key snarls of any watchdog organization is that the rest of us only notice when they break a rule. If they follow their own rules, that's necessarily unknown to us.
Do you have evidence that they don't follow their mission that _isn't_ from a source with a prior chip on its shoulder or equivalent vested interest?
There are epigenetic disorders like Angelman and Prader-Willi syndromes that won't be detected by standard methods. Otherwise no, although current tests aren't 100% sensitive (for various technical reasons).
Probably there are a number of rare ones. I don't think we as yet have a full list of every possible SNP or mutation that is A) compatible with a viable fetus and B) results in a serious disorder.
And realistically any commercial service would offer something that looks at some longish list of possible likely defects, covering 98% of cases, rather than a full genome screen.
I've read that corporate price gouging is part of the reason inflation is so bad in the U.S. now. But how is this possible in a free market? I thought competition between companies ensures that everyone's profits go to zero. Price gouging is only supposed to work over long periods of time if all firms collude to keep prices high. If just one firm defects by lowering its prices to attract more customers, then the arrangement falls apart.
Here's an article that claims corporate greed is fueling inflation:
'The pandemic, war, supply chain bottlenecks and pricing decisions made in corporate suites have created a “smokescreen”, said Lindsay Owens, executive director of the Groundwork Collaborative, which tracks companies’ profits. That obscures questionable price increases, she added, and allows businesses to be portrayed as “victims”'
Everyone is always trying to price gouge whenever they can. So there is no reason to suppose there has been some recent increase in this behavior and that it is suddenly creating extra inflationary pressure.
Scratch someone complaining about "price gouging causing inflation" and you will find a Marxist right under the surface.
> This is the correct graph of corporate profits as a share of GDP (after further adjusting for the fact that companies have to pay real costs to offset declines in their capital and inventory stocks resulting from their operations). You will immediately notice that corporate profits as a share of output -- i.e., profit margins -- have been remarkably stable ever since the latter half of 2010.
I should note that *economic* profits go to 0 in a competitive market--not accounting profit, which is how the word "profit" is generally used. However, it is still the case that accounting profits in a competitive market should not just rise across the board for no reason.
*I mean, they're high nominally, because of inflation. But they aren't high compared to anything else, and trying to say that this caused inflation is simply circular.
I know less about Australia, but I do know that you're looking at a time period that is dominated by COVID and Australia had one of the strictest policy responses to it. Australia removed border restrictions for the vaccinated in February 2022 (https://en.wikipedia.org/wiki/COVID-19_pandemic_in_Australia#2022); since March 2022, wages are up 13% and profits 5.7%. This choice of data points is somewhat arbitrary and not very robust (I think if you use the previous one, December 2021, profits are up more). But I would definitely hesitate to generalize anything from 2020 and 2021 Australia.
In any event, the above comment specifically referred to "price gouging" which can't be evaluated just by looking at high-level profit numbers. Why haven't they been engaging in price gouging before? Inflation has been low the past ten years, why haven't they just raised prices before now? Even if you want to blame it on corporations, presumably something changed in the past few years.
You probably mean how is this theoretically possible in a perfectly competitive market, but few markets are perfectly competitive, and the serious business of economics is to understand how in practice different systems have behaved at different times in different places rather than to theorise abstractly.
We don't have a free market. Also, what you describe is generally not possible in a free market where incentives are effectively randomized. If there's a strong external pressure pushing everyone in a specific direction, like taxes or other financial incentives, then you get what looks like coordination. The excuse of supply chain disruptions let them hike prices to gouge customers.
That post says, "Thus, if rising markups caused the rise in inflation, one would need to explain why firms, across the board, suddenly and simultaneously increased markups."
The reason is obvious: supply chain disruptions decreased supply, which raised prices temporarily on some goods. Consumers were then primed to accept the excuse of higher prices due to supply chain disruptions, *even when this was not the case*, and profit motive does what it does, and everyone pounced on this excuse. I mean, *why wouldn't they*? It makes perfect sense.
For the same reason they don't normally: if you can profitably offer your product for less, you make more money by squeezing out the competition.
Maybe somewhere along the supply chain it's totally feasible for everyone to start voluntarily charging more, but I'm mostly just seeing everyone complain that all of their costs are up and they don't have much choice. At the company I work for, all of our material costs remain much higher than they were, and labor-wise our starting wage is 70% higher than it was pre-pandemic. We are now, for the second time in the last few years, looking at raising prices across the board, because how else do you survive?
> if you can profitably offer your product for less, you make more money by squeezing out the competition.
Unless they find another equilibrium driven by an external factor where they make even better profits than they otherwise could by disrupting the equilibrium.
> At the company I work for, all of our material costs remain much higher than they were, and labor-wise our starting wage is 70% higher than it was pre-pandemic.
Sure, that's inflation driven by rising costs, but that means your profits would not meaningfully go up. This is not the case across many industries, which are seeing record profits. I think Katie Porter summarized it succinctly here:
I think your analysis holds true if the "pandemic, war, supply chain bottlenecks" are all fictitious. In that case, the underlying market realities don't support price increases and the zero-defector scenario you need to maintain prices above market is extremely unlikely.
If the pandemic/war/supply chain effects on pricing are real, though, then all your market actors have (a) obvious incentive to raise prices to account for these factors, and (b) a less-obvious incentive - since prices are not increased or adjusted on a day-to-day basis, an actor setting prices on day (X) probably doesn't want to set their price at "fair price as of day (X)," but rather "fair price as of day (X + future date)." You don't want to set your price at the market rate today only to immediately start losing money tomorrow, so the incentive is to overshoot by whatever you can get away with. Assuming all market actors would have this same incentive (along with the everyday incentive to maximize profit), then your odds of defectors drop off and a sort of indirect collusion to keep prices constantly ahead of the curve (in other words "above market") becomes possible. Not in the long term, but certainly in short bursts during the right period of instability.
Japan's moon lander crashed because there was a surprising but correct reading from the radar going over a steep crater wall, so the lander assumed the radar was broken and then didn't know where the surface was.
Shouldn't the path have been pre-tested so the radar reading wouldn't have seemed weird? Yes, but the landing site was changed rather late, so the route wasn't tested.
Games tend to leave out the unreliability of sensor systems. I've seen a similar complaint about military games, which tend to assume reliable information and reliable ability to transmit orders.
Also, for life generally, I wonder how often people ignore true but surprising information.
Phantom Brave is a Disgaea-adjacent game with infinite random dungeons. I once made a level 15 dungeon with 3 floors, and every single floor in the dungeon was a special floor with boosted enemies, so the supposedly level 15 dungeon was never weaker than level 30.
For unreliable ability to transmit orders, percent chance to hit is a good approximation. Something like Battle Brothers with 60% hit rates and permadeath gets really annoying. (I don't remember if Battle Brothers also had unreliable quest levels, but it wouldn't surprise me.) There's also any RPG that doesn't let you control your characters directly: Dragon Quest 4, Persona 3, and such.
Is there a difference between unreliable information and uncertain information? Those games include uncertain info. Not sure about unreliable info, though; presumably that means you have information that sometimes turns out to be wrong. I don't think I know of a board game that does that, but perhaps there is a video game that does something like that.
Very often. When the covid pandemic started, it was clear that mortality increased exponentially with age. Almost every policy ignored that.
Then again, when vaccines became available, data soon showed that the vaccines do not prevent infection and spread. It was ignored by many countries, which introduced vaccine mandates only after this information became available.
(And then the very same people also constantly get irrationally excited about new information precisely _because_ it is surprising, and then like that sensation so much that they become closed to learning that the sexy new intel was not in fact true....)
In real life, surprising information should require confirmation. The information source CAN be wrong, and I think that bias should be the expected outcome.
In science, progress is not made as much by experiments where the result is "Eureka!" as when the result is "That's strange..."
It has been said that it's an iron-clad rule of Hollywood screenwriting in recent years that under no circumstances is a man allowed to rescue a woman.
Is this actually true? Are there mainstream Hollywood examples of a man rescuing a woman in (say) the past five years?
If there is any truth to that claim, I think it has to be restricted to female protagonists only, and probably only female action-movie protagonists. And there is certainly no shortage of data points that might point in that direction; most recently I understand that the new "Little Mermaid" has rewritten the ending so that it is Ariel rather than Eric who kills Ursula in the final act. But even so, I think the claim is overstated. If it were even mostly true, I'd expect it to apply to "The Force Awakens", and IIRC Finn saves Rey during the lightsaber fight with Darth Emo.
I'd also heard it recently in the context of the new Peter Pan remake. Peter Pan isn't allowed to save Wendy, or Tinkerbell, or Tigerlily, which leaves him with very little to actually do. Meanwhile Tinkerbell is now black which means she's not allowed to have any negative characteristics, removing her jealousy and her betrayal and leaving her with nothing to do in the story either.
I don't think Finn saves Rey in the end of Force Awakens though, it's the opposite. Finn gets his arse handed to him in the first fight, then Rey takes the lightsaber and does much better.
The MCU is one of the biggest franchises ever, and among the accusations from its detractors is the repetitiveness of its plots, so let's look at some of its recent works.
Dr. Strange rescues America (the female character) in his most recent movie, Multiverse of Madness; that's the bulk of the plot. The Eternals includes at least one scene where Ikaris (male) rescues Sersi and Sprite (female) (ok, technically these 3 characters are all aliens created by some godlike being, but they take gendered human forms). I'm pretty sure that Simu Liu's character saves his female friend in Shang-Chi at least once. I haven't seen the latest Ant-Man yet, but it seems like he has to save his daughter; the second film, from 2018, involves him saving Hope's mother. Spider-Man has to save MJ, his love interest, in the climax of the last 2 movies.
Yeah the original statement sounds silly. It might be standing in for the more reasonable and accurate statement "about 60% of movie plots used to involve men saving women, now only 5% of them do" or something like that.
Was just watching Shazam: Fury of the Gods last night and there were multiple counterexamples, both of bystanders/victims (Freddy in particular makes a point of rescuing attractive women) and powered supporting characters.
I basically don't even watch movies anymore and can think of 5 recent counter-examples just off the top of my head....that claim (which I've never previously read or heard of) sounds like just another bit of culture-war trolling.
I've seen it argued a few times that AI-X-risk might act as part of the Great Filter which prevents civilizations from colonizing the stars. But it strikes me that the opposite should be the case. Isn't it more likely that a superintelligence that destroys humanity is *more* likely to colonize galaxies than a planet without such a superintelligence?
Perhaps the absence of obvious aliens should lower our estimation of AI-X-risk.
Not necessarily. One can certainly imagine a number of scenarios where the AI destroys an alien civilization without having any plan to expand itself. Either because it just doesn't plan ahead and accidentally destroys itself along with them. Or because it's happy running forever on limited hardware.
Of course, one can also imagine the aliens themselves being happy in a limited region of space, but with a biological organism it's more natural to assume it would expand.
Since we're not all made of paperclips, I assume that paperclip-making AIs don't exist(*). But if they did, they'd need to have that level of foresight because it's probably *less* foresight than is required to recognize that the AI needs to kill its owner before the owner says "now that I think about it, that's enough paperclips".
As for "one can imagine scenarios where the AI [doesn't expand]", that's not even remotely adequate for a Great Filter. For that, you need to to be impossible to imagine scenarios where the AI *does* expand, because the naive Drake equation suggests that there are very very many and it only takes one.
* Probably because the one intelligent race in the Milky Way hasn't gotten around to inventing a true AGI yet.
As I get older, I've become more and more aware of various irritating quirks in the way my mind seems to work (which I guess is just a more delicate way of saying "I'm dumber than I want to be"). I suspect that most low-level functions of the brain are either hard-coded or are developed at a very young age and are thus very hard/impossible to change but I'd be interested in hearing if anyone has any relevant experiences.
> Has anyone here ever managed to change something fundamental about their thought process or mental abilities?
Yes, improved focus with meditation. Changed habits of forgetting people's names 5 seconds after meeting them by concerted effort to retain information. These are simple cases, but maybe you're thinking of something more profound? I think intentionally repeating a behaviour until it becomes automatic can change a wide range of default behaviours.
Thanks for the reply. Those are both related to some things I would like to try to change.
Regarding focus, it almost feels like I have some sort of problem with "micro-focus". Like I'll very briefly lose focus and go on autopilot, and that will throw off what I'm doing. This is a huge hindrance when attempting to play music - I'll attempt to execute some passage I know well but my fingers will just play the wrong notes for no reason. Another lesser example would be hitting the wrong button in a video game for seemingly no reason. I go back and forth between believing this is focus-related and motor-planning-related.
What types of struggles did you have with focus and how has it improved since you took up meditation?
Regarding names, I'm also pretty terrible at this. I've made efforts in the past similar to what you describe. But for me, the act of recalling information which I definitely already know is itself often also more of a struggle than it should be. In addition to names, I also often find that I know _of_ a word that I want to use (i.e. I know there is a word associated with a definition I have in mind) but I just can't seem to conjure up what the word actually is.
Another one I find interesting is that some people's brains really do seem to be "multi-threaded" in the sense that they can do several things at once to a high degree of precision. Going back to music, sight-reading seems like one example of this, especially those who have the ability to read a bit ahead. Another (funny) example is this Game Grumps video (https://www.youtube.com/watch?v=vDQOEXNzGPw) in which Arin is fighting a boss which requires highly precise maneuvers while also coming up with improvised monologues. Compared to my seemingly single-threaded brain, this ability seems different in a very fundamental or hard-wired way.
It’s not as strong an example as those listed, but I became far less face-blind in the first 5-10 years of my adult life. I was never as bad as “the man who mistook his wife for a hat”, but I was very clearly bottom 5%, I would say.
I think a lot of my face-blindness had to do with eye patterns, and caring. No problem with eye contact, but I wouldn’t scan people’s faces in a way that would give me the right identifying info. But more importantly, at a very low level I was just not trying to remember how people looked. I think that was in part related to youthful egocentrism. As I got older, I started to care about others more, and it became much easier for me to recognize people! It also helped the caring process to be painfully embarrassed by face blindness a few times.
Thanks for sharing your experience. Did this eventually become natural or do you still have to consciously expend a lot of effort on it? I'm similarly terrible when it comes to names and I've gone through periods of time where I've made an effort to improve (e.g. I'll write down someone's name soon after I meet them or I'll repeat it to myself for a few days). Sometimes I think I'm doing better but then it will strike out of the blue, like the other day when I simply could not for the life of me recall the name of an acquaintance I've known for many years but hadn't seen for a while. I have a similar issue with words in general.
Do not be worried. The whole reason they're "forever chemicals" is that they are stupidly inert and biounavailable. Surgeons have been coating implantable medical devices with fluoropolymers as long as they've been able to.
Here's a question for any of y'all that have the (mis?)fortune to work with obscene quantities of money on a regular basis: What is the qualitative difference between things that cost a MILLION dollars, and things that cost a BILLION dollars?
IMHO it is worthwhile to consider the time parameter, rather than thinking merely in terms of purchasing physical objects. Somewhere between 100M and 1B would "buy": 1) Never having to even think about working for wages + 2) Lifetime "ad libitum" consumption (i.e. buy arbitrary houses anywhere in the world, chartered flights, arbitrary medical procedures, etc. "as if they were ice cream cones") + 3) Being able to maintain (1) and (2) indefinitely without having to obsess over market events and fiddle with investments personally.
AFAIK oligarchs only start buying physical objects costing 100M+ (mega-yachts, etc) after they've firmly nailed down 1+2+3.
This is re-hashing what others have said, but my personal take on large wealth inequalities is: above ~10 million dollars, the only thing that you can buy with your wealth is slaves.
Any great mansion/yacht can only be managed with a team of full-time maintainers.
Even high-end supercars now come with a team of mechanics and are delivered from racetrack to racetrack.
Owning a company is basically being a feudal lord: you own the land, tools, sometimes the very homes of your employees.
Even charity is buying people: you have decided that the world should care about malaria, and suddenly, due to your donation, thousands of people work on the subject who would have gone on with their lives, or worked on other charity topics, if not for you.
What you are buying with >$1E10 is *institutions*. If you imagine that all institutions are really run by "slaves", you're being silly and not worth engaging. And slavery isn't a word you should be using in any context where people plausibly could mean it literally, if you don't mean it literally.
I should have used a word like "serf" which more accurately describes my sentiment. But my picking of such an intense word is not innocent. There is a class of people who are able to command the full work-time of another human being, and there is a class of people who cannot.
I'm at that limit where my parents could afford a full-time nurse and I cannot; and I'm aware that there is a frontier between people richer than me and poorer than me.
That still doesn't work. "My parents could afford a full-time nurse" is shorthand for "my parents could make a public offer of a certain amount of money for which there exist certain people who would be willing to do all the things we describe as nurse services for 40 hours per week". It is NOT shorthand for "my parents are able to spend the amount of money set in the 5th Edition Papers & Paychecks GM Manual as sufficient to cast a level 7 Serf-Geas spell compelling other people to perform nurse duties in the real world".
The reason it's the first one more than the second one is that anyone, including people advertising that they'll perform nurse duties, can decide not to take this or that nurse job, and can factor offered payment into that decision. The only catch is that there might be only so many of those jobs, so anyone who wants to nurse that badly might have to accept the payment being offered. But if they do, then they're willing by definition. They're not serfs. They're free to search for other types of work if no nursing jobs are offered on terms they like.
From what I've seen of corporate software products at scale, the annual budget for a project or feature is at the millions level: organizations of 60 to 100 people often command projects that cost on the order of millions of dollars. It takes billions of dollars annually to fund software products run by thousands or tens of thousands of developers; these products are amalgamations of hundreds of smaller organizations' products and features into a big-name flagship. Think, like, any big software product that you can name right now off the top of your head.
A billion dollars: your typical Bay Area infrastructure project.
(This is partly a serious answer. A million dollars is a lot but it's still more or less human-scale-- think a decade's worth of productivity at a full-time job in a high-paying industry. On the other hand, nothing an individual would want *or* accomplish is worth a billion dollars. At that scale you're exclusively talking institutional budgets and objectives.)
There’s a new feature that appears starting at a few million, and definitely complete by the time you reach $200 million: Things now come with Staff. If you buy a sufficiently expensive Thing, you need someone whose full-time job is to operate it or maintain it: a captain for your yacht, a pilot for your plane, a machinist for your robotic machine tool, a sysadmin for your data center, etc. A few million dollars is enough to hire a person for life, so when the price is substantially above that, why not hire an employee to worry about the Thing full-time?
Epistemic status: I spent a few years being the Staff for multi-million dollar billing systems for cell-phone companies. When we sold a billing system, we sent at least one engineer with it, to transition it to the customer’s staff over a period of six months or a year. And we were always ready to pitch in, if the customer had a problem.
Not someone who falls into this category, but I’d say the biggest difference is that a million dollars can get you objects and Things. A billion dollars gets you some things but it’s mostly about the institutions, organizations, and People attached to those things. You’re not buying objects, you’re buying force that can be applied to a particular problem.
Because I’m in “time to reinvent myself and redirect my career” mode, I’m forever getting emails/ads re training to be a UX/UI designer, and I’ll admit to being intrigued. I know all the reasons why or why not such a career path would suit me, but I’m very unclear if these training offerings are legit/worth paying for or if they are just the online version of an ITT Technical Institute quasi-scam. If there’s a community that would know the answer, this one is it.
So, are things like this https://designlab.com/ux-academy/ legit and worth the money? Is the idea sound but there are better options? Or is it all just a load of bollocks?
The UI/UX field is extremely oversaturated right now, in part because of these boot camps. Many companies have also seriously cut back their design/research teams, and the largest demand is for experienced, senior designers. I would not recommend jumping into this field right now to anybody.
I did a coding boot camp about 6 years ago and have worked as a programmer since. These types of programs work if:
- You are already employable in a different job, meaning you probably have the soft skills that will help you get and keep a job.
- You treat the program as a job and not school. Take advantage of any programs they offer and spend 8+ hours per day on it.
- The program should be able to cite specific companies and roles people have gotten after graduation.
- They should offer some amount of free resources for job hunting - job boards, job fairs, networking, etc. - with industry people who are not just graduates of the program.
- They should offer some kind of money-back guarantee if you don't get a job despite complying with all their standards (in my program this meant you could retake the course for free).
- They should have a curriculum that looks like a college curriculum, with tests and deadlines and such - not just general descriptions of stuff you will learn.
- They should be willing and eager to provide contact info for graduates who can talk to you about the program.
- They also shouldn't let everyone in; there should be some amount of screening.
I can't speak to that specific program but the things above are the types of things I would look for.
I think it's like learning to code -- everything is out there if you feel you can set your own agenda and go through it. The courses are very useful if you don't want to do that.
The best way to learn UI design is to try and reconstruct websites and apps you see in Figma. Then, try and reconstruct a service but for a different 'vibe'/user. What would Airbnb's design feel like if it were for executives? Or for young families?
It's free, it's fun, and it gets you some portfolio projects.
UX design is the step back. There's the 'micro' -- also called interaction design -- which is concerned with specific goals. How does the user sign up? Find a thing? Book a thing? Good practice here is to take a bunch of flows, use them, figure out what's annoying about them, and try to redesign them.
The macro -- also called service design -- is more about what users care about and all the other interactions that are required to make the 'find a thing' flow work. How much information should you give them? How many options? What people / data are needed to find the information to present to the user? etc. This I think you can learn from trying to create your own products.
I did a degree in user centred design and dropped out halfway through because my internship was more useful (and much better paid!) and got a job fine after that. I've not done the courses, but I expect they're all fairly decent, and probably all have a pathway into jobs.
I can't stand Twitter any more, and it's the place where I get info about new developments in AI -- new tweaks that improve performance, new applications for AI, and occasionally a new idea about Alignment, FoomDoom and related matters. Where else can I go to stay about as updated as a non-tech person (I'm a psychologist) can be? I can't read all the new papers -- I need summaries in ordinary language.
And by the way, I'm leaving Twitter because AI Twitter is going the way of Medical Twitter, which has been a cesspool as long as I've been following it, with pro- and anti-vax, mask etc. people hating each other's guts. Now I'm seeing the same dynamic starting in the AI discussions, and it seems to me that what nudged the exchanges into hate-fest land was Yann LeCun, who hasn't the faintest idea how to debate and support his ideas, but instead moves instantly into implying or outright saying that those worried about ASI are fools, crackpots, etc. Here's one of his tweets:
- Engineer: I invented this new thing. I call it a ballpen 🖊️
- TwitterSphere: OMG, people could write horrible things with it, like misinformation, propaganda, hate speech. Ban it now!
- Writing Doomers: imagine if everyone can get a ballpen. This could destroy society. There should be a law against using ballpen to write hate speech. regulate ballpens now!
- Pencil industry mogul: yeah, ballpens are very dangerous. Unlike pencil writing which is erasable, ballpen writing stays forever. Government should require a license for pen manufacturers.
I've been enjoying twitter for many years, and I think a key to that enjoyment is to only read tweets from people that I follow, liberally muting users and keywords, and turning off retweets from selected people I follow.
Naw, won't work. I don't read Twitter for fun, I read it for up-to-date info about topics of a lot of interest to me. AI is currently the main one. I'm following various bigwig spokespeople for organizations and points of view, plus a number of people who just post articles about new developments. If I want to know what the leaders are thinking and planning, I have to follow them. However, people have given me some good ideas here for keeping up without visiting the birdshit site.
I subscribe to this AI newsletter: https://www.bensbites.co -- not sure if it's exactly what you are looking for, but it feels like all the AI stuff from twitter just put in an email. Mostly focused on AI products and big headlines.
>Where else can I go to stay about as updated as a non-tech person (I'm a psychologist) can be?
HackerNews and Lobsters are the places you can go. They have the disadvantages of (1) not being exclusively about AI (2) having a majority demographic of programmers, so posts are more often than not still technical (3) [recently on HN] being so circle-jerkily against LLMs that even I, normally an LLM skeptic, got sick of it.
But they are still good choices.
Some youtube channels I unearthed out of my subscriptions
I quite honestly don't get it. He is just one guy, after all; surely there are only so many people on AI twitter that Yann LeCun can insult in one day?
And I don't get why you take it so personally. Twitter is where respected and respectful people go to be dumbasses; Yann LeCun is lashing out, probably not even meaning to insult the people he's lashing out at, just because.
Anyway, I think you're identifying too much with your opinions/predictions about AI. Take a step back and reevaluate whether you have made it too big a part of your identity, eh? Keep your identity small: http://www.paulgraham.com/identity.html
>AITA
No you're not, you just let one person's jerkass behaviour over the internets get to you. Easy mistake to make, done it countless times myself.
Why does following the bleeding edge superficially excite you more than understanding the basics for real? Over the weekend I revisited Rumelhart and Hinton 1986, and it is still powerful.
I'm a psychologist. Bought a fat book on machine learning and will be working my way through it this summer. Will probably also take an online course, then read something about kinds of machine learning models, tweaks, etc. Following the bleeding edge doesn't excite me. I have read enough here and on Zvi's blog to take the risk seriously. Who *doesn't* want to follow the news on something involving serious risk to them and the people they know, the life they know? I'm not a fucking ambulance chaser, you get that?
Specifically about the ballpen, and the printing press in general: I love how some people ignore the fact that the printing press completely ended the early Catholic domination and brought about humanist civilisation all over the world.
If you were a pope in 1440 it would in fact be a correct move to worry about that new fangled technology.
I agree with you. The technology is here; it will be of enormous benefit, and it will cause a lot of pain. The biggest threat I see from AI is the way it is greatly expanding our capacity to delude ourselves; truth and fiction are at the heart of this whole AI conversation, imo. Is someone lying to me or is someone telling me the truth? Most of us can discern truth from fiction because we have a model of the real world, and words are tested against that model, or at least against other words that have already been tested against that model. (It's probably possible to have a fairly complicated conversation about water, among chemists perhaps, and never use the word "water", but the whole discussion is predicated on a concrete shared understanding of water; we all know it when we see it. An understanding of water on that level could be considered a mutual conspiracy.)
An AI, learning everything about us through our language, is never going to have that model of the world. Words will always be "understood" (another hopeless word to use when you're talking about an artificial intelligence) in terms of other words. That's an amazing skill, and you can get a lot done with it, but truth and fiction are off the table. In the pure realm of language, those words are meaningless.
This recent case of the lawyer who submitted a brief that was created by an artificial intelligence is pretty interesting. The brief was well written, perfectly intelligible, and filled with citations to cases that did not exist. The lawyer claimed that he specifically asked the AI if the cases were real, because he checked one out and it wasn't. Apparently the AI told him that, yes, that case wasn't real, but all the other ones were. You could say that the machine lied to him, but is that really the best way of describing it? It presumes something that I don't think an artificial intelligence has, or could possibly ever have: a meaningful sense of true or false.
I get a lot of my AI updates and creative prompt ideas from LinkedIn. There's a ton going on there if you follow the right people. And then follow the people they follow.
My dumb theory of why Yann is acting so rude is that he's still salty over Russell mocking him in debates a few years ago. Maybe mocking is too strong a word, but I distinctly recall Russell being pretty aggressive in "Human Compatible" and a couple of debates. That unpleasant experience probably set the tone for discussions regarding alignment for Yann. I think I'm only half joking.
But look, don't you think it's kind of a bad sign if somebody who's angry about how one person treated them in debates a few years ago is rude as hell to a different group of people who are disagreeing with him in a civil way in May 2023? I am still angry about Yann calling me -- or at least a class I am a member of, ASI worriers -- a crackpot, and joking that I'd be scared of ball point pens if they were a new invention. Since I'm still angry about Yann's mockery does that mean I get to make fun of you? Hey, Algon33, ya crazy fool, seen any caterpillars lately? Because I know you worry about butterflies. I mean, if tens of thousands of butterflies landed on your face they would smother you. And if like a million of them landed on your body they would crush you flat. Haw haw haw. Plus I fucking won the Turing prize and you, Algon33, fucking didn't.
You know, these tech companies are going to be halfway running the country in 10 years, in my opinion. I'm really dismayed by the terrible communication skills of some of them.
I don't think I disagree with you. Yann is basically acting like a troll, which makes the discourse worse. Still, if your lived experience with people is that they are rude to you, which he's mentioned ASI worriers were, then you'll probably dismiss their beliefs and be rude in kind. I have some sympathy for him if that is the case, but would rather he didn't do that. Though I do think his attitude is changing, and would predict that it will continue to change re:alignment, though maybe not e.g. MIRI. This is based off the shift in respectability of AI alignment (e.g. Hinton and Bengio taking the topic seriously), and changes in what is socially acceptable for people in your ingroup to think are one of the best predictors of people changing a dogmatic belief.
Read Zvi's AI roundups, he does a great job of filtering out the valuable / interesting stuff, with a huge amount of info and context in every one. I really wonder how many hours he must put into each one, they're a gold mine of info.
IMHO, YTA, unless his tweet was responding to an actual concern somebody raised about AI that had nothing to do with hate speech. Even then, your response didn't exactly elevate the discussion.
LeCun had earlier posted an "of course ASI isn't dangerous, we'll align it" tweet, and lots of people had posted their reasons for disagreeing. The posts disagreeing weren't exactly sweet, but they were civil. No one was accusing him of being a fool, crazy, evil, etc. And of course some people posted in support of LeCun's view. LeCun responded by saying that people worried about ASI are crackpots. So yes his post was responding to multiple people's concerns that had nothing to do with hate speech. In fact, depending on what you define as hate speech, LeCun was the one who was doing it. There was I think some elaboration on his part in that Tweet about "one person who has made a career out of AI alarmism" and some stuff about that person, obviously Yudkowsky, looking like a homeless crazy in a subway station. (I'm not absolutely sure about that last bit. Definitely somebody compared Yudkowsky to a subway crazy, but I'm not sure it was LeCun. May have been someone writing in support of him.) Anyhow, LeCun's later tweet about ball point pens was an elaboration of his idea that people concerned about AI are fools -- they're like people who would panic about ball point pens because someone could use pens to write hate speech, misinformation or whatever.
I wouldn't say posting that image is my proudest moment. On the other hand, I am quite worried about ASI, though not at all sure we're doomed, so I am in the class of people LeCun is calling crackpots, the class he's saying are such silly geese that they would panic about ballpoint pens. So's Scott, last I heard. So's Zvi Mowshowitz. None of us are crackpots, silly geese or idiots.
AFAIK AI "worriers" -- virtually without exception -- advocate restrictions on general-purpose computing that could not, even in principle, be meaningfully enforced without a stereotypical global Orwellian police state. Hence the reaction of people who would not care to live in such a nightmare under any pretext whatsoever.
Your knowledge does not extend far enough, but I expect you will reject any attempt to expand it by saying that the counterexamples provided are part of your "only *virtually* without exception" hedge. I agree that EY is a poor evangelist for AI risk as a serious concern, but you go too far by categorizing everyone who shares that concern as an exaggerated caricature of EY.
Can you link to an example of a "better evangelist than EY" ? (e.g. one who proposes solutions to the concern that don't logically entail "state violence against people who want to run certain computer programs")
I don't advocate a goddam thing. I am not in the field and am not able to formulate ideas at that level. I am worried. Wake up and get over the idea that people who are concerned about things that you aren't are evil assholes. That's exactly the mistake LeCun is making, and it is a sign of deficits in social perception. It's a way of being dumb in a certain area that is important to making your life work and to treating others in a sane and decent way.
Please recall that Mr. Yudkowsky openly advocated state-sanctioned killing of people (and, hypothetically, the subjugation of entire nations) who would dare to resist a global computation police regime consisting of him and other "AI risk concerned".
Even though, interestingly, he undoubtedly did not have to:
"I don't have to write that I want millions of chickens to be brutally murdered. I can buy all the dead chickens I want at the grocery. When advocating a policy that requires violence for its implementation, it is rarely necessary to advocate the violence. Once people agree to implement the policy, the necessary violence will be forthcoming." (D. Mocsny, on ye olde Usenet)
... but at the same time, it seems that he simply could not conceal the "primate glee" with which he looked forward to dominating his opponents using the American war machine.
For the record, I personally have no intention of ever obeying any "international restrictions" whatsoever (regardless of what kind of engineered "consensus" they are imposed through) on AI research and its ancillary fields. And intend to actively seek out opportunities to disobey such restrictions, and aid any like-minded others I might encounter. (Not altogether different from the intentions certain doctors in USA have declared re: abortion.) And so IMHO I am justified in viewing "AI worriers" -- whether radical or "moderate" -- as potential cheerleaders at my execution (or even the executioners themselves.) (And so, yes, if you like, as "evil assholes", unless they describe how they intend to conclusively settle their worries without imposing a global police state.)
If the universe can be destroyed by a computation, it eventually will be. In the meantime, however, people still have the choice of whether to construct a totalitarian "AI safety" hell -- or to remain able to conduct research, build and purchase computers, perform arbitrary computations, etc. without permission from Yudkowskian commissars.
Good grief, calm down. I do not give a fat fuck what your opinion of Yudkowsky is, and what you personally intend to do under various circumstances, and what you think you are justified in insisting on, etc etc. You are talking to an internet stranger, not being interviewed on CNN.
You are both right though. That's the bitch of it. Imo the salient thing about all forms of AI is that we are going to have to get used to it (it ain't going away). And it has to make us smarter. There will be casualties.
Meta framework: Control and dissemination of language has been a very powerful tool in the evolution of human cultures: the imposition of language by conqueror on the conquered; the merging of languages; the banishment of languages; the profound evolution of language from something purely aural (heard, felt) to something almost purely a visual metaphor.
The blasphemy of referring to G_d in writing...
AI/ASI/AGI is an iteration of this paradigm to me. It's not unprecedented, but it's unique as well.
Well think of it this way. People are experimenting with AI trying to find the right prompts to elicit good responses and parsing out its replies. This seems very analogous to negotiating a language barrier.
Imposition of language is only one form of what I’m pointing to.
I once found a newsletter of interesting off-the-beaten-path activities/events in NYC, and I believe I found it linked from one of these threads but can no longer find it. Anything sound familiar?
Suppose you want to hire people who are good fits for their jobs (we will optimistically assume you understand the jobs well enough), and failing that, you at least want to avoid hiring awful people.
How would you do that? Let's start with the idea that asking applicants about their greatest fault is a stupid question.
I would ask the candidate to help solve an actual business problem that I wanted to have solved and see how they managed and if I felt like I'd want to continue working with them. This would be after screening out the obvious duds through the usual standardized tests.
I'd cast a vote for "be very intentional about your culture, especially in recruiting." For most roles, there are an abundance of people who are competent, or close-enough to be trainable, so put more weight on culture in your decisionmaking.
And by "culture" I don't mean "does James fit in here?" - I mean being intentional about specific traits/values you want your staff to share, and then searching for and maintaining those traits/values within the office.
For example, assume that you want employees who are entrepreneurial, and you are interviewing candidates for a role. Don't ask them their "greatest weakness," or a bunch of other questions you pull from the internet by googling "good interview questions" - ask them about times they came up with a new solution to an old problem, when they tested a new idea that didn't work the way they thought it would and what they learned, etc, etc. Hire people whose answers you like.
But you need to do more than recruit and forget - you also need to maintain culture in the workplace. That means rewarding the lady who comes up with the new approach that works, but it *also* means *not* penalizing the other 5 employees who tried new things that failed, constantly preaching to people that you want people to try out new ideas, and the only true sin is to be foolhardy and test large when you could have tested small, and so on.
Do that consistently, and you'll have an entrepreneurial group of employees.
Do that consistently and broaden your target values to (a) ethical behavior, (b) entrepreneurialism, and (c) collaborative behavior, and you'll have a place I'd prefer to work.
I have actually kind of wondered about something that may tie into this: every job posting gets several to hundreds of applicants, yet unemployment is very low. If 10 people applied to every job, then the economy should have 10 times the number of unemployed people as job openings.
My conclusion is that the people an employer is rejecting aren't necessarily bad candidates, just perhaps not a good fit for the particular opening. I'm told that, unless someone is lying on their resume, an employer can basically tell whether someone can do the job by reading their resume.
How to tell whether someone will be a fit? That will depend on what the EMPLOYER is looking for. Perhaps some lines of thinking could be what someone's favorite meal they ate in the past week was. Or the next vacation they are planning. Or how to navigate from one place to another. And the most important part of any of these questions would be the WHY.
I used to ask people to tell me about a problem they found challenging or at least interesting. Not because I am interested in their problems, but because I want to see how they talk about it. You can often tell the good people from how enthusiastically they talk about things they find interesting.
Other than that, I also think about what skills are needed for the job, and then try to score candidates objectively in each skill. The interview questions are then about giving candidates the chance to demonstrate those skills.
State of the art for this in tech startup hiring is a back-channel inquiry to someone who worked with the applicant at a previous employer. Best test of someone being a good fit for a job is, surprise, success at a previous similar job. If you don't have a back channel, ask applicants for detailed walk-throughs of past projects / challenges / etc.; those questions are surprisingly tricky to fake answers to and will give you a decent handle on strengths and weaknesses.
Needless to say this can't be a complete answer or no one new would ever get a job. But it's also true that any method that tries to get signal on 2000 hr/year of work with just a few hours of interaction is going to be very lossy. Be prepared to fire people.
I think asking people about their greatest fault or whatever is basically just used as a way to get the candidate talking. You could say it’s testing verbal intelligence; the actual answer is irrelevant. Of course, there are better and more natural ways a good interviewer might accomplish this goal.
The main value of the "greatest weakness" question is that it's the single most famous interview question. It's the infamous hardball question that you know literally everyone knew about long enough ahead of time to prepare an adequate answer. Not having a good answer to that question indicates that you can't be bothered to do the bare minimum interview prep.
In general, yes, but applicant pools can sometimes face a Simpson's paradox: if someone is very smart and currently unemployed, they probably have other issues.
On the other hand, the standard interview technique of "talk to them in person" does a decent job screening for "punctual" and "not a raving lunatic", so while I wouldn't rely on testing in isolation it should work well in addition to "talking to the candidate"
In tech, a huge premium is put on educational background. No need for an IQ test for someone who was admitted to and then graduated from MIT, Stanford, Harvard etc.; or for some roles, a PhD at a prestigious research Uni.
It seems like there's a pretty strong tension between "IQ tests are a powerful metric of candidate quality, it's just that they're illegal to use" and "nobody cares about your SAT scores after college, and increasingly not even then". I'm fairly willing to believe that HR departments are leaving billion dollar bills on the table, but even without going that far it's not clear that the barriers are legal.
It’s a peculiar question in a job interview because no one is going to answer sincerely, and the interviewer should know that, so it’s like an invitation to be glib and insincere.
Sometimes the applicants will be honest :) . I had a job interview once where I was honest and was like "I don't get along well with my bosses if they are dumb" and provided some examples. And they really wanted me, but didn't hire me due to that answer. But then the first hire fell apart after a week or two, so they brought me in.
Ended up being a great decision for them until I left 4 years later.
But of course one is insincere. The question isn’t really sincere anyway, a job interview isn’t a confessional or a therapy session or a police interrogation. They don’t expect an answer of the type one would give to a therapist (for example).
Well, I understand that, but no one self aware is actually going to share his or her worst faults with a job interviewer. In many cases it would be irrelevant anyway, and in other cases it would be self incriminating (like no one is going to say, “well I am an alcoholic” or something like that).
When you’re looking for a job you get a bunch of weird questions thrown at you and you kinda have to guess at what the questioner wants to hear. Throwing back the veil and proving that the candidate is being performative in their interview is not actually a revelation.
I disagree with your premise. Telling the interviewer what they want to hear, and not the truth, will result in getting a job for which you may not be a good fit, and which you will not enjoy.
With respect to complex words, do you actively know the specific definition? For very unusual words or complex words, I don’t typically know the exact definition. The definition I generate is “this word is basically when you do something bad or vengeful” or “This words basically means to be hungry.” And so on and so forth. I reduce many words down to much simpler versions of the actual definition.
I’ve heard that the English language has more synonyms than other languages, and therefore has more superfluous words. Take the word superfluous for example. I basically view that as meaning “unnecessary duplicate.”
Yes. People learn languages by hearing words in context, so it makes sense that people will generally have vaguer and more uncertain ideas of the meanings of less frequently used words.
I don't explicitly know the definition of many words I know, but I do passively know the definition. I do distinguish among unnecessary, excess, redundant, extraneous, gratuitous, extra, and inessential, between peckish and famished, and between retaliation and vengeance.
English has *far* more words than most languages. Sometimes this lets you express things better, but it can easily be used to express things worse, even when a particular word is even apt (because it's distracting or obscure). I try to use more plain, Germanic words when I'm thinking about it, but like a lot of folks here, I am from that segment of English-speakers for whom vocabulary is a dick-measuring contest, so I definitely find myself saying "mercurial" or "oblique" more often than I care to admit.
I don't think this is quite right, although the issue has become confused because "utilize" is used incorrectly more often than it is used correctly (in my experience). To quote some dictionaries:
Merriam-Webster says: 'utilize' may suggest the discovery of a new, profitable, or practical use for something.
Oxford English Dictionary says: To make or render useful; to convert to use, turn to account.
To steal a random website's example: "while you use a fork to eat your food, you utilize it to poke holes in the plastic film on your microwavable meal."
Thank you, I've learned something! The primary user of "utilize" at my work was a supervisor who was a consistent early adopter of new terms, along with being quite wordy. I presumed he was saying "utilize" because it was a more complex and fancier-sounding word. In fact, he may have been justified in its use at least some of the time.
I don't know an exact _verbal_ definition (which in most cases doesn't exist anyway) but I usually have a strong implicit sense of "how the word works". Take "superfluous" and its near-synonym "extraneous". If I'm describing how to, say, make coffee and I go into unnecessary detail about how to boil the water, that's superfluous information-- I'm adding something already implicit in the phrase "boil water". But if I go off on a tangent about how I learned to make coffee, that's extraneous information-- it's new, but unhelpful. In both cases the adjective refers to an unnecessary addition but there are further shades of meaning there that I'm intuitively aware of even if I can't immediately put them into words.
English in particular almost has two complete lexicons: one Germanic from pre-conquest England, and one Latinate imported by the Normans. This can be clearly seen in "legal doublets:" phrases like "terms and conditions" or "will and testament." Legal language needed to be understood by both the English-speaking commoners and the French-speaking nobility. Eventually, the French-derived words were imported into standard English, giving rise to the plethora of synonyms.
Genuine question: Are you a native English speaker?
I would agree with Aris that there's no such thing as a truly superfluous word. Only words that are obscure enough to prevent you from communicating properly. I'll use an example from your comment:
>The definition I generate is “this word is basically when you do something bad or vengeful”
But this is the point! "Bad" and "Vengeful" are completely different concepts! They may have some overlap, but they're not synonyms at all. I would be curious to know what word you define that way, if that's a real example. Hell, you call "superfluous" a synonym for "Unnecessary duplicate", but to me the word "superfluous" has always carried the specific meaning of being unnecessary by being more than is wanted/needed.
Honestly, the number of "synonyms", as you say, is one of my favorite things about English. Sure, I could call someone "foolish in a smug and self-satisfied way", but why would I go to all that trouble when I could just call them "fatuous"?
The native/non-native angle is an interesting one: when I first learned English, I learned the translation of every single word I used. (Which brings its own set of problems: the same way that there are no true synonyms, there are no perfect translations of single words.) Nowadays, my main way of learning new words is seeing them used in context often. And only when I find myself wanting to use an unfamiliar word in a sentence do I go check its exact definition.
It depends on who you are communicating to. Fatuous would be a good word to replace the rest of that description, but I’m a native English speaker with a degree in English and I’ve never even seen that word before.
A bit hyperbolic, I'm sure I've seen it before, but I couldn't give a slightly directional definition of it if asked. Surely it has appeared in at least one book I've read in my life.
I think most people have a fair-sized list of words that they kind of, sort of, know the meaning of, but don't know the precise dictionary definition. I know I still have to highlight and right click for the dictionary definition from time to time. I've probably inferred the meaning from context but, geez, a dictionary definition?
Regarding all those synonyms, one pair of synonyms that comes to mind that have a small but significant difference in meaning is irony/sarcasm.
A thesaurus would call them synonyms. They both can be decoded as saying something that means the exact opposite of your words, but sarcasm is usually used when the speaker is trying to throw a bit of shade, so there is a smidge of difference that might make me choose one over the other depending on context.
Irony: "Boy, I could really go for some dessert." when you've just finished an enormous meal that would leave no room for dessert.
Sarcasm: "Nice shoes, Bob" when Bob is wearing shoes that are not so nice.
Kind of a trick question, because if enough people don't know the nuance of a word, the nuance is lost and the definition warps into the common usage. We have a thousand synonyms because kids constantly use words wrong and kill their individuality.
Not usually, although last night I did feel the need to look up definitions for "insouciance", "gaucherie", and "abscissa".
I think humans work like LLMs, learning the meaning of words through context. Technical fields might form a consensus on adopting specific definitions for technical terms, but that's an aberration. Generally, humans learn through context - we hear a new word, and now we know not only the sort of thing it might mean, but we also get a sense of the type of person who uses that word, and the context they use it in. Sometimes words have subtle shades of meaning that are opaque to the uninitiated, and sometimes those subtle shades of meaning get lost over time, or the word takes on a new life among other people and gains a new context and a new parallel meaning. Lots of cool stuff like that happens. :-) And among the many wonderful features of the OED is that it provides examples of usage, which can make it easier to trace the shifts of meaning over the centuries.
I grew up with a compact version in only 2 volumes, which squeezed something like 9 pages of the original onto a single page. It also helpfully came with a magnifying glass. :-)
I learn the meaning of words from context. I rarely use dictionaries, except for Urban Dictionary.
My feeling is that synonyms have different flavors. One might be a little more dignified than the other, and they sound different, so they fit into sentences differently.
I recall a teacher telling me once that there are no two words with the exact same meaning, register, and connotations. So there are no superfluous words!
Also, shameless plug, but if you like words, try this game I built - www.scholargrams.com - where you earn points for using letters (updated daily) to form words. The rarer the words, the more points you get.
Wrt the word "superfluous", I now know that it does not necessarily have to be an exact duplicate, but in many real-world cases an exact duplicate would be superfluous.
I'm gradually going through old SSC posts, trying to figure out when I started reading every post (pretty sure it's 2014, but it's after 25th February!). Today I came across this gem that I hadn't read before:
If you want to make the Replication Lab! show yourself, go submit to the 2023 ACX Grants Round at https://manifund.org/rounds/acx-mini-grants . It’s open until September 1 or so, which I think should be plenty of time.
To clarify: the ACX forecasting minigrants round is currently underway, and September is roughly when the evaluations will happen. You can definitely continue to submit proposals and try to raise funding, but new proposals won't be eligible for Scott's retro payout and thus may not be otherwise exciting to investors/attract much funding.
Does anyone happen to know of a good summary of the meta around Kegan's orders of the mind? The theory passes my gut check, but only partially. I'm curious about its standing in academia and critiques/further work, but I haven't found much in my quick searches.
Recently, I've been attempting to get ChatGPT to translate story chapters (~800 words at a time) from Japanese into English, but it always stops translating halfway through and hallucinates a continuation to the story instead due to the prompt falling out of the context window.
The interesting part though is that the first time this happened, it just happened to be in the middle of a scene where the love interest is mortally wounded and GPT decided to continue it with a tearful death scene. However, in the actual story, the protagonist manifests hitherto unknown magic powers and saves him instead.
I thought it was interesting because Scott previously wrote that LLMs are curiously resistant to completing negative outcomes in stories. Give them a prompt and they'll continue a story in a way where everyone improbably lives, no matter the situation. So it's odd to see the *opposite* case happen here.
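A workaround worth trying, by the way: chunk the text yourself and re-send the translation instruction with every chunk, so neither the instruction nor the source ever falls out of the context window. Here's a minimal sketch of the idea in Python, assuming the official OpenAI client (openai >= 1.0); the model name, chunk size, and system prompt are placeholder assumptions, not recommendations:

```python
# Translate a long text in fixed-size chunks so each request stays well
# inside the model's context window. Assumes the official OpenAI Python
# client (openai >= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def translate_chunks(text, chunk_chars=1500, model="gpt-4"):
    # Greedily pack paragraphs into chunks of at most ~chunk_chars characters.
    chunks, current = [], ""
    for paragraph in text.split("\n"):
        if current and len(current) + len(paragraph) > chunk_chars:
            chunks.append(current)
            current = ""
        current += paragraph + "\n"
    if current:
        chunks.append(current)

    translated = []
    for chunk in chunks:
        # The instruction is re-sent with every chunk, so it can't fall out
        # of the window the way a single long conversation's prompt does.
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Translate the user's Japanese text into English. "
                            "Translate only; never continue the story."},
                {"role": "user", "content": chunk},
            ],
        )
        translated.append(resp.choices[0].message.content)
    return "\n".join(translated)
```

The trade-off is that each chunk is translated without the surrounding context, so names and pronouns can drift between chunks; prepending the previous translated paragraph, or a short glossary, to each request might mitigate that.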
>but it always stops translating halfway through and hallucinates a continuation to the story instead due to the prompt falling out of the context window.
I had the same experience with translating "El mètode Grönholm" (good play & good 2005 movie, I recommend) from Catalan to French (there don't seem to be any good automatic translators for Catalan out there; the output of google translate was judged to be "half-spanish garbage jajaja" by my Catalan proofreader).
It works well for a few prompts, then either starts translating into English or Spanish, hallucinates a new story, or repeats the translation of the last prompt. The resistance didn't seem related to the content; it just "forgot" its instructions every few answers.
P.S.: but for Japanese, you should be able to use DeepL instead. No hallucination, and a lot less prompting, since the desktop app can handle moderately long text files.
Wasn't the original "force good outcomes" post specifically about describing violence? If the improvisation starts with a character already mortally wounded, letting them die might not trigger the same safeguards that would prevent the LLM from describing the character gaining a mortal wound. It's also possible the whole translation context changed the outcome, try it with a story that implies violence is about to happen and see what it does.
The refusal to complete negative outcomes is a result of the RLHF post-training stuff, though from a later comment it sounds like you weren't using an 'uncensored' model, so it still should have applied...
First, make sure you're trying this with GPT4. I found that ChatGPT lost the thread, sometimes as soon as half a sentence in, and then started generating plausible cruft that had nothing to do with the starter text I'd asked it to operate on.
Specifically, I had reverse-ordered text, like "txet deredro-esrever dah I". ChatGPT would trip quickly, while GPT4 reversed several paragraphs faithfully, without hallucination.
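If anyone wants to replicate the test, generating the probe is trivial; here's a tiny Python sketch (the sample string is just the example above):

```python
# Build a reversed-text probe like the one described above:
# every line is reversed character by character.
def reverse_lines(text: str) -> str:
    return "\n".join(line[::-1] for line in text.splitlines())

print(reverse_lines("I had reverse-ordered text"))
# -> txet deredro-esrever dah I
```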
I've often observed GPT hallucinating unknown magic powers (unintended in the story) to make a good ending, and about as frequently making bad endings. Then again, maybe the bad endings I observed were with a previous GPT version which doesn't have the ChatGPT PC stuff.
It might be that it viewed a deus ex machina as negative because it'd be considered worse writing by a human, and therefore elicit a more negative response from a human than a touching tearful death scene? Hard to say, of course, but something vaguely like that would be my guess.
I've never done much charity work before and am currently participating in a charity bike ride (disclaimer: I do not think this is by any means the most efficient way to raise money, it's just a freebie since biking is a nice outdoor activity anyway).
Something that took me very much by surprise is how it works:
1. The riders need to each individually run a mini-fundraising campaign.
2. If a rider doesn't raise enough money they aren't allowed to participate in the ride.
3. The minimum amount they need to raise is *a lot*, $2000-$4000 in the case of the ride I'm doing.
I know multiple people who aren't doing the ride (and therefore not fundraising) at all because they don't think they'll be able to raise enough to meet the minimum fundraising bar to participate. This seems like a net negative on behalf of the charity in question. Can anyone more well-versed in this area explain the logic here?
Minimum prices can increase revenue in auctions. So if there’s a set number of slots, setting a minimum (as opposed to just giving it to the 100 highest slots) can be advantageous. https://en.m.wikipedia.org/wiki/Auction_theory the “optimal auctions” section.
I don’t think that’s the actual reason but it’s a fun relevant fact.
Best guess, the relevant authorities limit the number of participants.
EDIT Even if that is wrong, there are ways this strategy pays off. Say the minimum is 2,000, and say on average anyone who can make that target gets to 1,000 easily but has to work for the second 1,000 and wouldn't bother if they didn't have to. For 100 entrants that's an extra 100,000 you have raised by setting the bar at 2,000 rather than 1,000, which pays for a lot of low-yield punters you have discouraged from entering. Also, fewer entrants are easier and cheaper to administer than many, even if there is no externally imposed limit.
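To make the toy arithmetic concrete, here it is in a few lines of Python; every number is the illustrative one from above, not real data:

```python
# Toy model of the fundraising-minimum effect described above.
# Assumes each capable rider raises 1,000 without really trying and
# only pushes to the minimum if forced; all numbers are illustrative.
entrants = 100
easy_amount = 1_000    # raised effortlessly by each rider
minimum = 2_000        # the bar the charity sets

revenue_with_bar = entrants * minimum           # riders push to the bar
revenue_without_bar = entrants * easy_amount    # riders stop when it's easy
print(revenue_with_bar - revenue_without_bar)   # -> 100000 extra raised
```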
Agreed on both counts. For the latter point, though, the thing I don't understand is just how ridiculously high the minimum is. I am in a fairly high income bracket and therefore know many other people in a similar bracket who I can ask to chip in, and even so this is still a lot of money to need to raise. So I imagine that for the vast majority of the population it would just be completely out of the question. I find it hard to believe that the minimum being set here isn't doing more harm than good. But of course that might be explained by your first point, in that the city is explicitly trying to limit the number of participants.
There could also be a factor of the city(s) involved trying to recoup costs, since this ride involves shutting down highways all around a major metropolitan area.
Another theory about these is that it makes you feel more identified with the cause in the future (although that's commonly said to be because of doing the physical exercise rather than because of asking people for the money—but it could be both or either).
I got started with cycling by participating in a charity ride like that (with a significantly lower minimum, maybe like $300 or something?) and have continued to ride in organized recreational rides that don't necessarily benefit charities. In these events you pay a fixed entry fee (now usually around $100) and you get extensive support on the course during the event. I think this is also fun and satisfying and good motivation to get in shape. If you find that you like being part of a cycling event, but don't like the charity fundraising part, maybe try these recreational events in the future (often billed as "centuries" or "classics", although there are other disciplines that may have more specific connotations). You can still donate to charity yourself if you want! :-)
What separates the person who I call “me” at this moment from the person I will call “me” five minutes from now?
(My thoughts)
-The matter which composes my body will not be 100% the same.
-The physical structure of my brain will not be 100% the same.
-My memories will not be 100% the same.
So will I be the same person in five minutes as I am now? It seems reasonable to answer that question “Yes and no. You will be very similar but not exactly the same.”
What about ten years from now, assuming my body is still alive? “Yes and no. You will still be a similar person in many ways, but you will be less of a similar person than you will be only five minutes from now.”
Twenty years, thirty years, forty years from now, if I make it that long, I will increasingly be a different person, composed of increasingly different matter, with an increasingly different brain structure and memories.
If there were no such thing as biological death, as my age approaches infinity, I would cease to be the same person I am now, no?
No, I don’t think I would cease to be the same person entirely. I would change over the centuries, but I would remain human, which should be a limiting factor to change in some way. Not so limiting that I couldn’t become you, at some point, since you are also human, though, no? Not 100% you, but perhaps my life, and my psychological development, at some point, would take me along a route that would be much like one you have been on. Perhaps it would be fair and accurate to say at some point that I am at least 2% you. (Maybe I am even at least 2% you already. Do we not like the same music and laugh at the same jokes?)
If not, I ask: what makes me me? If not the matter that is currently me, then either my existence is immaterial or my matter is fungible.
If my matter is fungible, then why can't I be you? Isn't it at least theoretically possible that the exact atoms in your body could compose my body and for me to still be me? If I ate you, wouldn't that be a start in that direction?
If I suffer amnesia one day and remember nothing of my past would I still be me? Let’s provisionally say yes. I don’t need my memories in order to be me.
How do I know that I don’t experience being you? I don’t remember being you, but we just said that my existence is not contingent upon memories.
So my individual existence is not contingent upon the specific atoms in my body, the structure of my brain or the memories in my mind.
In the future I will be neither 100% current me nor 0% current you. Isn’t it reasonable to say that in the future (and present) me will be a non-zero amount of everyone, given that you aren’t so special?
I want to take this line of reasoning a bit further, but first want to see if others think there are obvious logical flaws in the above.
It goes deeper than Parfit on the transporter paradoxes. If you're familiar with that already, the second five minutes is where it goes into scenarios that I'd not previously encountered.
I remember taking a philosophy course in college that had a large segment on this exact question. It asked things like, let's say someone invents a transport portal device that works based on tearing you down molecule by molecule and building you back up in the new location molecule by molecule. Is that the same person as you? What if they only build you back up without tearing down the original, which is the actual you? It's the question of the ship of theseus as well.
To me, I feel like there must be one of two things going on:
1. There's some core part of our brain that is "us". A portion that is actually pulling the strings and in control of the rest of it and by extension, our bodies, without which we wouldn't be ourselves.
or
2. The whole notion of self is just an illusion. We simply exist on a moment by moment level. Each moment, we feel like we're a continuous being, since we have access to previous memories, and we probably evolved to feel like a continuous being, since that makes us care more about our own preservation. But in actuality, our continuity is entirely fabricated, and we really are just an amalgamation of an infinite number of infinitesimal moments.
Since there's been absolutely no evidence of 1, and people seem to be able to continue to function without most individual portions of their brains, I have to assume that 2 is more accurate.
> The whole notion of self is just an illusion. We simply exist on a moment by moment level.
That sounds to me a bit like "apples are just an illusion, there are simply individual atoms". Yes, technically, there are individual atoms. And together they sometimes make an apple.
What insight do we gain by replacing "is composed of smaller parts" with "is just an illusion"?
Well, it could at least provide an answer to the age old question of how do we feel like we are a single entity, the same that we were yesterday, the same we were ten years ago, even though every individual part of our body can and does change its components. The answer being, we are not really the same being, it's just an illusion mechanism brought on by the fact we have access to memories, and probably for the purpose of making us care more for our self preservation. The difference between us and apples is that apples don't have consciousness. Consciousness is like the only thing that we as humans actually know exists. And this consciousness comes along with a sense of identity. But I'm saying even though consciousness is real, the sense of unified identity may not be.
Nothing's wrong about it. I guess it depends on what you're looking at and what answer you're trying to find. I'm personally coming from the assumption that my consciousness in any moment is actually real, despite the fact that it is not measurable in any way and that our body changes all the time. I'm trying to resolve how something can be real and continuous even if the underlying matter is not the same all the time.
So, I'd say that maybe "the continuity is the self" describes an entity that exists, if you look at it that way. But maybe the self is like a husk or a golem that is inhabited or assumed by a consciousness at any moment. I'm viewing the consciousness itself as the actual being.
I actually really dislike the "self is an illusion" claim. To be honest, it's the only thing we can be certain of. The self (or mind) is the software running on the brain. The continuity of the self isn't really an illusion: even though we don't remember everything, we remember a lot, and more importantly our personality largely remains the same, barring some breakdown in the brain. And it's these exceptions, the changes in personality caused by brain damage or ageing, that prove the rule of a largely static self; external observers see us as the personality, one that changes slightly if at all over time.
Anyway, what does "illusion" mean? Who or what is experiencing this illusion of self? It can only be the self to whom the self is an illusion, and that's a recursive absurdity.
What is the difference between thought and snow? Descartes would say “I think therefore I am” but not “I snow therefore I am”. Yet thought and snow both reside in perception. What makes them different? If I dream snow, snow is in my thought. Would it make sense to say “I snow therefore I am” then? Are the part of you that sees the snow in the dream and the part of you that sees snow in real life the same thing?
I do believe in Descartes's "I think therefore I am". I believe our own existence is something that is probably real, and possibly the only thing we know to be real. But I would extend it to say that we know we exist only in the moment. Our consciousness is only felt in the moment. Any of our past memories could have been faked and implanted. Therefore, I believe that while our existence is real in any given moment, the continuity of that existence, which I believe to be our self-identity, is potentially an illusion.
> I do believe in Descartes's "I think therefore I am".
Unfortunately, it assumes the conclusion: it posits "I" to prove the existence of "I". The fallacy-free version: "this is a thought, therefore thoughts exist".
Indeed, but then what is the definition of the "self"? Is it some process that produces a stream of semi-coherent thoughts, where new thoughts can contain referents to prior thoughts? What does it mean for a process to have an "identity"?
This all goes into the claim that Nolan doesn't like about the self being an illusion. Calling it an illusion means that we perceive our sense of self to have properties that it does not actually have in reality, and I think that's a technically correct assessment.
The self can be real but fleeting. The illusion is the continuity of it from fleeting moment to moment.
Or perhaps the self exists everywhere in everything, in which case brain changes over time don't matter because the self exists in all matter.
Either of the above cases strikes me as logical. That the self is the software running on the brain, despite changes to it over time, seems at least slightly flawed.
Yes, 2 seems most likely to be true to me. But people who believe "death is this horrible thing and we must work to end death" don't seem to believe 2. I wonder what those people do believe.
Using the language of the comment, I suppose they want to "continue existing moment to moment, having access to their previous memories". :)
But actually, it is also about the future, not just about the past. The idea is that if I do something useful now, I can meaningfully expect to benefit from that in future (with certain probability; unexpected things also happen). Without this, any action would be meaningless. Like, you wouldn't even type a comment on ACX, because you wouldn't expect to see it appear on the page. Even thinking wouldn't make sense, because you wouldn't expect to finish the thought.
To me the question is what entity experiences the qualia of the future. I don't experience being me in the future. Someone else does. Why do I care about that someone else's experience, if I don't believe that being is me? Same reason I care about my daughter's experience in the future, even though I don't experience being her.
Actually, I do believe I experience being me in the future, but for the same reason I experience being everyone and everything. A sense of continuity has got nothing to do with it.
But in either case I don't believe death is a bad thing, because either the self is fleeting anyway, so death takes nothing that isn't already lost from moment to moment, or the self exists in everything, in which case death doesn't end it.
Well, one direction to walk from here is to conclude that *nothing* really matters, because pain or suffering - it only lasts a moment anyway, no big deal. And long-term things, such as education, are completely absurd, because the person who does the exams at school is not the person who will later get a better job, so it's all random. Even the person who cooks the dinner is not the person who will eat it; and even the concept of eating the dinner doesn't make sense, it's just disconnected moments of sitting by the table, some of them having a spoon in your mouth, some of them swallowing, some of them feeling full.
But if we assume that the states of the future matter, because the cumulative experiences of thousands of observer-moments are worth something, then...
> You are reborn as everything every moment
On actual death, your memories and learned skills are lost.
But it's mostly a matter of definitions of "me". And measuring what it is.
Though from a mathematical point of view, our personality, represented by whatever coordinates we choose, could trace anything from a completely random non-repeating trajectory to an oscillation around a stable point or line, a cycle, or something else. And if we were able to track those coordinates, with enough statistics we might be able to tell what a life path is. Without that, the discussion doesn't seem very tangible.
I find this statement odd, if not downright unscientific. Our gender, for us sapiens, is determined at conception, when a sperm cell bearing either an X or a Y chromosome wins the sperm-cell rally to find and meet the X-chromosome-carrying ovum.
We find this odd term 'assigned' in a scandalous paper from a clinic treating patients with congenital sexual-organ defects. The study data have since been determined to have been fabricated, and the author sexually abused the sole patient ... and his twin brother. Both died, one by opiate overdose, the other by suicide.
So how do we properly state this? Our gender is determined by the winner of a sperm rally.
I guess we could rephrase "gender is assigned at birth" to something like "preferred gender role is presumed at birth based on biological sex, and with a high degree of confidence since most people will not ultimately reject their gender role to identify as queer, transgender, gender-nonconforming, etc," but it seems like little more than spilled ink.
After all, if all we do is quibble about the proper definitions of "gender," "gender role," and "assigned" while the underlying reality proceeds unimpeded, what's the point of the quibble?
I think the point is to grant power to ideas in the hearts and minds of the populace by means of memetically spreading brain worms that alter the way they look at humans and society over the course of decades, such that they will ultimately find themselves amenable to certain policies that they might have otherwise found disagreeable or silly had they not had decades of exposure to these new forms of thought.
Seems like fighting the tide with a bucket. Trying to convince everybody else that we should restore the prior meaning of "literally," or that we should stop calling things "socialism" unless they involve state ownership of the means of production, etc isn't *totally* futile - language is socially constructed, after all, so it's always available to any of us to try to convince others by our usage and "make fetch a thing" if we want to go for it.
It just seems so tremendously low-likelihood of success that someone seeking a social outcome would in almost all cases be better off using whatever new meanings are commonly understood to argue for their preferred outcome directly, in a way that others will understand, rather than trying to push that "critical race theory means XYZ legal theory so it's definitely not in schools," or "gender means sex rather than gender identity/roles, so gender can't change," or make some other effort to germinate change in an entire language of which one is but a single speaker, in the hopes that it will catch on and move the mountain to favorable terrain from which one can then argue.
I didn't mean to say that I knew anything about how to solve the problem. When I said "the point of it is", I was referring to "that's how this happened in the first place. By petty quibbling over decades". For some reason, and I don't really know why, ownership, in the hearts and minds of the people, of simple language and definitions does seem to have power. I don't want to believe it, but it seems to be true.
Whether quibbling could get us back to where we were, beats me. I'm tired of quibbling in general, until the random odd ACX thread comes around and finds me in the right mood. But I mostly like pointing out the sophistry, the meta-level analysis. I don't have as strong an opinion on the object-level stances.
Chromosomes determine biological sex. Gender is defined as the set of socially constructed stereotypes associated with biological sex. But nothing is set "at birth", if a doctor fatfingers some data entry it doesn't actually change either your biology or how people treat you.
"most female people are raised to be passive, submissive, weak and nurturing, while most male people are raised to be active, dominant, strong and aggressive"
Okay, so the reason I asked is because I wanted to be specific, because you said
> Gender is defined as the set of socially constructed stereotypes
To me, when you say that most women are "passive, submissive, weak and nurturing", that screams to me that it's a gender role, not a gender. Women are not defined as the gender that is passive. If someone sees a man who is passive, weak, etc., approximately 0% of people would jump to "well, clearly that's a woman".
As another way of looking at it, some cultures and tribes around the world do have women who are less passive and weak. Do we say "that means that that culture doesn't actually have women, they're actually a different gender". No, we instead say that women in that culture exhibit different gender roles, but we still accept that they are women. I believe that it is dangerous, and possibly a motte and bailey, to play willy nilly with the terms "gender" and "gender role", because gender roles are a construct (in some form), but that doesn't mean that gender itself is a social construct.
Your sex is determined at conception and observed at birth (or earlier).
I don't know what "gender" means in popular discourse any more, and choose to ignore the concept entirely unless we're talking about nouns in certain languages.
I agree that the word gender isn’t well defined. It seems to shift according to what the speaker wants.
I think in the nineties “gender” was still mainly used by most people as a euphemism for “sex” because some people don’t like the association with sexual intercourse.
I agree that sex is observed, to the best of our ability (which is very occasionally imperfect). That's what goes on the birth certificate. It seems to me that gender is assigned, but informally: "Hooray, we have a girl! Let's dress her in pink and start saving for her dowry!" The Assignment At Birth claim is a beef with culture masquerading as a beef with medical science.
It's fine with me if other people want to use Assignment At Birth terminology to describe their lives. I won't apply it to myself; it turns out that my sex was observed accurately, by any meaningful standard.
There are occasionally babies born whose genitals are sort of in between male and female. I think that depending on how things are arranged, it might be easier surgically to move them in one direction or the other, and in that sense their gender is "assigned." If a baby is genetically female, but has no uterus and has external genitalia that can be surgically turned into a reasonable approximation of a penis, seems to me everybody's best shot is to do that surgery and think of the kid as a boy. It's a little weird, assigning gender that way, but on the other hand it seems weirder and harder on the kid to leave them with ambiguous genitals and no gender assignment.
I'm with you that there are all manner of congenital abnormalities, including sex organs. But I stand fast that someone carrying XY chromosomes is a vastly different class of person than someone who carries XX chromosomes. Every cell in your body screams XX or XY.
Yes, there are also chimeras among us, people constructed from conjoined twins, and thus some people may carry a brain of one class and the reproductive organs of the other. And for them problems arise which may only be served by drastic surgery. But those so affected will number in the tens, not in the thousands.
But for the most part, growing up is difficult, puberty is difficult, and more so for online and socially disconnected youths. But again, our gender isn't selected, nor assigned; we are in essence the sperm that won the race, rather than the homunculus.
Yes, ambiguous genitalia in babies are rare -- 1 in 5,000, said the source I looked at. I do not doubt that XX and XY chromosomes produce different trait profiles. I don't know if they exactly scream. Seems to me that if they screamed, we would all be confident we know the gender of everybody on here, even though way more than half the people give no clue either in their picture or in their user names, because we'd hear their chromosome-related differences screaming. I'd say that overall I think I'd make better guesses about ACX participants' income level, political leanings, and gentle vs. aggressive personal style than I would about their gender. So going by that, gender apparently screams less loudly than some other things.
If it's the case that the term exists for these few edge cases, then why say "gender assigned at birth" for the rest of the cases, where it's clear-cut and gender is not assigned at birth at all?
As I understand it, that term originated in intersex communities, for people born with genitalia that isn't standard male or standard female. When faced with this, parents and doctors tended to assign the baby to a sex, and then often performed surgery to make the genitalia more conforming to that sex. Sometimes the choice of assigned sex would have less to do with the genetics of the baby, and more about the surface resemblance of the genitalia.
At some point, the term was picked up and popularized by the transgender community. This wasn't without controversy in the intersex community, and I've seen it referred to as "appropriation". But the intersex community is small even by LGBTQ standards, and not very vocal, and I think by now the broader use of the term is a fait accompli.
Or at least, that's what I've heard over the years.
Well, it's assigned in the same sense that a name is assigned-- it's put on the birth certificate, which is a legal document. Of course the parents are free to pick any name, whereas the gender is determined by the baby's body, except in the quite rare cases where the baby's body is ambiguous. What are you concerned about here -- are you thinking that parents whose baby is unambiguously male might "assign" it female gender at birth, or get the doctor to do it?
No, my concern is that I think that the proliferation of the term is a bit of a societal brain worm, which exerts (some) control on how people think by means of tailoring the language they use to communicate. It gets people thinking that gender is something that is assigned to you, that you could change, or that could easily have been assigned differently, as opposed to something intrinsic to you.
I dunno. I think someone would have to already have holes in their brain for that term to nudge them in the direction of thinking of gender as being like a name -- you can pick it, you can change it, no big deal. What I think is much more likely to move people in the direction of thinking of gender as choosable is improved tech, which probably will make it possible for people to switch genders in a way that works much better than the present cumbersome and semi-effective surgeries and drug treatments. Or make it possible to change XY infants to XX infants or vice versa, or make it possible for 2 males or 2 females to combine their genes and create a baby. All of which I'm pretty much OK with.
Sex is biological, gender is a social construct. Really, it's not assigned at birth, but on a continual basis, based on how you live and how others treat you. The two are very strongly correlated of course, and part of the reason that gender roles are the way they are is due to the influence of biology.
99%+ of the time, your "sex" is the sex assigned at birth. There are a small handful of edge cases, though they are pretty rare, and even in most of them, it is still pretty clear what "sex" you are.
Trans activists have used this, and some other conceptual muddles they have intentionally created, to make the situation seem a lot more confusing and fluid than it actually is, because they want to obscure the reality of the situation (the vast majority of people have a specific, stable sex, and trans individuals are explicitly making a choice to swap).
Pretending people have some sort of "gender soul" allows the situation to have more politically appealing messaging.
AFAIK, this phrasing has become common because of the problem of edge cases, where the gender observed at birth doesn't match other ways of identifying gender, including chromosomes. I am neither a doctor nor a biologist, but my understanding is that there is at least one condition that produces babies with XY genotype and a female-looking crotch. AFAIK it's not routine to do a genetic test when determining a baby's sex - people just look at the obvious physical characteristics and announce "it's a boy!".
And even if genetic tests were done, there'd still be edge case babies born - mosaics, XXYs and similar. Mosaics in particular - if the baby is the result of an in-utero merge between male and female twins, the gender you determine from a gene test will depend on what part of the body you sample.
Also, of course, there are the babies born with ambiguous genitalia. There's a long history of playing pick-a-gender and modifying the baby's body to match. We now know that's a bad idea, but some of those now adult children are still around. (And to be fair, chromosome tests may not have been possible when they were born.)
All of these are rare, but "assigned at birth" covers them.
It also covers the case of those who are unshakably convinced that their real gender doesn't match the body they were born with, which I presume you consider to in fact be the same gender as their body. So the phrase gets widely used in that context too.
Damned if I know, but my guess is that it started with people who specialized in edge cases, and then spread. If you spend your days writing case notes for children who've been referred to your clinic, finally, after even front line specialists have declared themselves unable to help, you need this kind of language - and tend to see the 1 in 1000 or less edge case as the common thing, as they are 75% or more of those you see.
I'm a lot less worried than you about the latest in terminology-one-must-use. I also don't see the government telling you to use this terminology about normal children. They - or the AMA, or the hospital admins - may be telling your OB/GYN or primary care doctor to use it, so records will be consistent for the tiny % who prove anomalous - but I don't think so, since the records I see, even in California, use normal language: "63-year-old woman", "40-year-old man". OTOH, I'm not in the field, and the records I see happen to all refer to adults.
As for the mob of the week - are you sure you aren't in the process of creating a competing mob, and the result of your efforts will be to set up a situation where whatever term one uses, one mob or the other will have a screaming hissy fit? I feel certain that if you happen to be in Florida, state government will be a lot more likely to endorse whatever terminology you favor, rather than the terminology you describe as being endorsed by government.
I'm bl**dy sick of language police, as it happens. That doesn't mean I can avoid them - the euphemism treadmill never stops, and some quantity of a**h*les love to invent new terms for the same old wheel and then insist they've made a major advance in science or culture.
But what I see here is the creation of yet another shibboleth - language that some people demand that one use to show membership in blue or red tribes. It's annoying from the blue side - which I see more of, generally, being resident in urban California - but it's equally annoying to me from the red side.
And it's bl**dy hard on children and families affected by rare conditions to have their or their children's health used as a political football.
I agree to a certain extent, but I rather dislike the general insistence I continuously see saying that the red tribe should stop trying to mob, stop trying to be censorious, and generally blaming the red tribe. I hear all the time "yes I dislike when Democrats do this, but Republicans do it too". My answer is "so what?" If the blue tribe owns the discourse and has left no option for the red tribe to do but fight back in this same way, how can you blame them for it?
The problem is that it does not. Some red tribe concerns today were outside the Overton window in my childhood, so the red tribe can be said to no longer have 100% ownership of _that_ discourse. This may cause some red tribe people to feel as if the blue tribe owns the discourse, not only on those topics, but on everything else. But it does not.
Other concerns have shifted; some historically conservative concerns may be totally outside the Overton window today. (AFAICT, these mostly involve arguments in favor of slavery and/or ethnic cleansing, though to be fair discourse about innate racial differences is outside the window in quite a few contexts, even when it's not explicitly used to motivate either of those. )
Probably you can come up with other examples. You will, for example, get laughed at if you cite scripture in support of a scientific position, and your contribution refused by any peer-reviewed journal - but that's been true since long before I was born. It's conceivable to me that this might be a live issue for some red tribers, though that seems unlikely.
Edited to add: other than a lingering hankering for totally free speech, I'm happy not to encounter people hankering after the chance to own slaves, or eager to eliminate members of other groups in favor of their own. I'm making the assumption that this is not a problem for you, beyond generic free speech principles.
You write: "The government was _supposed_ to protect everyone's right to speak freely even on controversial topics, and protect parents' ability to raise their children as they see fit. It abandoned those duties, so the job falls to incoherent and unqualified vigilante mobs."
Which government? It seems to me that governments have been interfering in both speech and child rearing for a long long time, generally punishing things that community elites - speaking for everyone - don't like.
There are a variety of things you'd most likely be happy to see punished, particularly in terms of child rearing. Parents haven't had absolute power over their children in a long time. You don't get to have sex with your kids, fail to feed them, beat them to the point of serious injury, or deny life-saving medical care. You certainly don't get to execute them, unlike in Ancient Rome. That's been generally agreed upon by most Americans, except perhaps Christian Scientists (re medical care) for rather a long time. The details get litigated, but just about everyone agrees that some level of bad behaviour makes for an unfit parent, and children needing to be re-homed.
You write: "The people you want to take that up with are the ones who took a microscopic number of people affected by rare conditions, most of which aren't even externally detectable ..."
I could write a pretty lengthy rant myself about the excesses of the movement ostensibly favoring better treatment of transgender people. Some of them are so absurd that I sometimes wonder whether the perpetrators are in fact agents provocateurs (sp?) intentionally trying to create a backlash.
I might not be up to date with the latest gender theory, but what I was taught (at a fairly conservative university 15 years ago) is that "sex" is determined chromosomally (male/female) while "gender" is the social construct (man/woman). Obviously those categories correlate nearly perfectly, so there would be a lot of confusion even if there were a meaningful distinction at the edges.
At this point this is what you will find as the scientific definition of the terms, anyway:
1) gender as biological sex, i.e. a synonym for "sex".
2) gender as a social role, i.e. biological women are historically expected to look after the kids, not work, and so on.
3) gender identity. The thing you are born with which isn’t either biological or socially constructed.
The term “assigned at birth” only makes sense if gender isn't biological sex, but it can't be socially constructed either - nothing is socially constructed prior to birth.
I'd say the gender assigned at birth is basically gender in sense (2), it's just that it's a prediction, subject to potential adjustment as new information arises.
I would question, why is it that that has come to be the case? Why is it that gender is considered to be a social construct and separate from sex, as opposed to being the "polite company" euphemistic term used to refer to sex, the way I believe it used to be. Is there any basis for the shift in the meaning of the term?
I think the use of the word "gender" to refer to something people have is quite recent, dating no earlier than the 1950s. Before that, gender was something words in romance languages had (e.g. nouns in French are either feminine or masculine), whereas what a person had was sex. The use of the term gender to refer to something people have was coined in 1955 by John Money, a sexologist who was (I think) researching gender-nonconforming people. So this now-common usage of the term is closer to this origin, and the use as a synonym for "sex" is, if anything, the co-optation.
Not exactly. Looking at the OED, it seems it was originally just a synonym for "kind" or "class". There are plenty of instances of it being used as a synonym for "sex", starting from the 1400s. It's just that it would have been a slightly odd usage back then, like saying "humans of the male kind and the female kind" today, since one would normally have used "sex" to refer to that particular distinction. The use of it as a synonym for "sex" becomes more common in the 20th century as the primary meaning of "sex" shifts to "sexual intercourse" rather than "the male kind vs. the female kind". At about the same time there also arises the definition of "gender" as the "socialized obverse of sex" (to use a pithy phrasing from one of the books the OED cites), which the OED regards as psychology/sociology-specific jargon rather than part of the everyday meaning of the word. So both of the modern senses of "gender" are more-or-less simultaneous evolutions of the old meaning, equally novel.
"Gender" comes from old school Victorian anthropology, where it is a valid and useful concept.
I understood gender to be the role you play in society, which in my mind's eye is the shape of the cog you make as you slot into the great machine of civilisation.
Say you have a tribe where the men fight for glory and the women nurture children. There's an unmet need for doctors/field medics/other logistics on the battlefield, but women aren't allowed because it's dangerous, and men would be passing up the chance for glory if they took the job on.
Bam, third gender: man-who-lives-as-woman, allowed to risk his (xis) life in battle but not supposed to fight in it. He is actually honoured and respected for the work he does, where a real man would have to be called a coward.
All three genders do important work and need to exist in order for that tribe to function the way it's evolved to.
This is very far from how the word "gender" is used in our society today, but I believe that's where the word came from.
It sounds to be more like you're describing "gender roles". Or at least I think prior to 2012, most people would have defined that as "gender roles".
Tell me, let's say there's another country right now where men stay at home and women go to war. Would you say those men are a different gender than men in the United States who don't stay at home?
I am, and I'm also saying that the existence of gender roles is the only reason we ever needed the word "gender".
In answer to your question - yes. If their society has evolved a certain way and those are the roles that have fallen out.
But I'd point out that these wouldn't be "US men who just also stay at home". They would be a completely different beast, with their own characteristics not necessarily comparable to men or women from America.
It would be important to take their society as a whole and see how they fit into it and why their roles make sense the way they are.
Well, I guess we just disagree, then. As far as I can see, people refer to other cultures as still having the genders man and woman, albeit with different gender roles. In my experience, people don't think of an Indian woman as a different gender from an American woman.
In the Balkans there is the custom of a "sworn virgin" when a family doesn't have enough sons which permits biological females to do jobs restricted to males. But the legal fiction doesn't actually make them the same as men, who of course don't have to swear to remain virgins. It's just a partial change to workaround their restrictive norms.
Can she ever leave the role and get married, or is she just told, "you'll be a boy now" and that's that for her? The moral here is obvious: Balkan women should have more sons.
My understanding is that it's a lifetime thing you can't quit (there's a movie titled "Sworn Virgin" about such a person who does shift away from that role, but only by leaving the social context in which it even exists, for a big city in another country).
I assume it's just that people (especially trans people) wanted a term for the social construct thing which people previously hadn't bothered distinguishing so they co-opted "gender" for that.
I agree, I think it was coopted. I rather dislike coopting, because to me, it really seems like they just fabricated the new meaning and rules surrounding it.
I always say that from my experience with the term gender, it always seemed akin to if all of a sudden a bunch of people started saying that I had to treat them as if they are 6'2", or whatever height they like. When I point out that, no, they're not actually 6'2", they're actually 5'8", they say "that's my tallness, not my height. Everyone knows that tallness is a biological trait and height is how you identify. I identify as 6'2" height"
But apparently there are people who, for example, have penises but who feel deep down that they are women. Regardless of whether their feelings are worthy of respect, we need a name for that femaleness that they feel, and gender is as good as any.
Problems with bullet-matching forensics, and when I say problems, I mean it's bogus "science" based on guessing that gets people falsely imprisoned. It's quite possible to match a bullet to a *type* of gun, but not to a particular gun. Goddamn it, I *believed* the books I read in the 60s about how cool the FBI was.
There's plenty about how to check whether a theory is true, and how reluctant people are to check whether their profession is based on anything reliable. And there's the problem that judges are apt to rely on precedent rather than checking on the science.
There's actually been a tiny bit of progress, but it's going to be a long fight to get science taken seriously in forensics if it ever happens.
An aspect no one has yet mentioned is crime prevention. Think you can commit the perfect murder? No way, forensics is so good you're sure to get caught through something you may even know nothing about.
People are worrying about the impact of deepfakes on the legal system, but I think that through its entire history, almost everything that passes for evidence could have been falsified. Witnesses can lie or make mistakes, documents can be forged, "scientific" arguments can be bogus. (Especially if you think out of the box, like: maybe fingerprinting is a solid science, but that specific fingerprint found at the crime scene could have been planted, or the person who analyzed the fingerprint may have changed the samples.)
So the thing we have is a combination of lots of weak evidence that people believe is stronger than it actually is (plus even more weak evidence that shouldn't officially be accepted by the court, but gets there indirectly anyway by making people perceive some other evidence as stronger than it is), plus relying on the fact that most people are stupid (including most criminals) and most guilty people break under pressure (especially when it seems there is strong evidence against them), plus the occasional injustice (probably way more frequent than most people believe). And I guess it works better than nothing.
A legal system in a rational country would probably be more explicit about the probabilities, include some form of insurance, and also some ways to proactively protect yourself against injustice. For example, you could agree to wear a surveillance device on your body 24 hours a day, which would send encrypted data to the cloud, so that if a crime happens and you are falsely accused, the device may help prove you innocent. Detailed crime statistics would be published, so that you would have a smartphone app telling you to avoid a certain area (to avoid either becoming a victim or becoming falsely accused of committing a crime) or to turn on recording while in that area. But this assumes that the rational people would cooperate, instead of becoming hysterical when things are discussed openly.
For preventing white-collar crime, it would probably help to simplify many processes (so that crime would also become more obvious), and to make them somehow more transparent.
"I suspect this has to do with (lack of) accountability,"
Yes.
Not necessarily at the level of guns, but definitely at the level of the courtroom. You'll notice that both pseudoscience and outright fraud (a la Annie Dookhan) are most common in criminal cases, where the state has nearly unlimited resources compared to most of the defendants, and an awful lot of those defendants are not just poor, but stupid too.
Compare that to forensic labs that are dealing with corporate/commercial issues and things become very different. I was very fortunate that my first out of college job was with the Office of the Texas State Chemist where we dealt with animal feed, fertilizers, and animal deaths. When Monsanto's legal and analytical departments are better than yours, the work has to be absolutely bulletproof. The people receiving/repackaging/randomly labelling the samples were in the basement, the people doing the analysis were on the fourth floor, with a dedicated dumbwaiter.
The actual work was assigned by computer (with blind duplicates and other cheating detections assigned).
Yeah, most of forensic "science" is untested and unproven, including the notion that fingerprints (and especially partial prints) are necessarily unique enough to be God's serial number.
But as amazing as the bullshit in forensics is, the bullshit in dentistry is mind-blowing:
There's a big issue in courtroom evidence standards where new technologies are at least in theory held to very high standards but the old stuff grandfathered in is really questionable, both the 'sciencey' stuff and the 'common sense' stuff like eye witness reports.
I think this is one of those edge cases, like say "blood spatter", where in theory the method could work in some cases, but then you have overzealous investigators and prosecutors and experts for hire so overselling and overapplying the method that it becomes pretty unscientific and unreliable.
Hair analysis is like this too. People like to pretend it is "we matched this hair to this person exactly", when in most cases it is more "this person was a brunette" if there isn't DNA, which um only narrows it down to half the population.
Bite mark analysis is another one that is mostly bunk. So much of "forensics".
Just finishing up A Thousand Brains. A few questions for the AI contingent:
- What are your top 3 news sources for musings on AI research/dev? I'd like to keep myself more in the loop (particularly on the tech/dev side, not so much on the alignment/business axes)
- The core thesis of the book is that the brain is composed of (mostly) fungible cortical columns that act as reference frames for things/concepts/places/etc. These hundreds of thousands of reference frames are synthesized by the brain to create a continuous prediction engine, and that is mostly what we experience (sorry if I butchered that!).
That is well argued and compelling throughout the book, and I have no reason to doubt it. But he insists that to create a truly intelligent machine we must understand the core mechanisms within the cortical columns. Here is where he starts to lose me. Why can't we simulate reference frames given our best methods? Could a GPT-adjacent LLM provide the same building block for AGI that cortical columns provide for the brain? What if an LLM was instructed to simulate an individual reference frame, and an ensemble of these LLM ref frames was arranged in a way similar to the brain's architecture? (A toy sketch of what I mean is below.)
Regardless of the inclusion of LLMs, which has its own complications, I'm not sure I believe the statement that "understanding the brain in totality is a precursor to truly intelligent machines", as Hawkins seems to think. But I'm curious to hear any thoughts.
I know most people don't think LLMs are the road to AGI. Just coming up to speed on a lot of this stuff and thinking out loud.
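To make that concrete, here is a deliberately dumb toy sketch (every name and number is made up for illustration). The random stub stands in for whatever an LLM-backed reference frame would actually compute; the only part I'm gesturing at is the Thousand-Brains-style voting across many fungible columns:

```python
import random
from collections import Counter

OBJECTS = ["cup", "stapler", "phone"]

class ToyColumn:
    """Stand-in for one cortical-column 'reference frame'. A real column
    would model features at locations; this one just casts a noisy vote
    about which object it thinks it is sensing."""
    def __init__(self, reliability: float):
        self.reliability = reliability  # chance this column guesses right

    def vote(self, true_object: str) -> str:
        if random.random() < self.reliability:
            return true_object
        return random.choice([o for o in OBJECTS if o != true_object])

def consensus(columns, true_object):
    # Independent votes from fungible columns; the percept is the plurality winner.
    votes = Counter(col.vote(true_object) for col in columns)
    return votes.most_common(1)[0][0]

random.seed(0)
columns = [ToyColumn(reliability=0.6) for _ in range(1000)]
print(consensus(columns, "stapler"))  # 1000 mediocre voters still converge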
Long Covid has been a topic of discussion here for a while, but I hadn't known anyone badly affected by it until recently. However a month ago my good friend got sick and hasn't really recovered. She now has:
- dizziness when standing or walking, making her unable to do so for more than 10 seconds
- fatigue
- muscle aches
- sound sensitivity
- brain fog
- dizziness when trying to read or look at screens for more than a couple minutes
She suspects it's myalgic encephalomyelitis or chronic fatigue syndrome (ME/CFS) but it's too early to be sure. Still, this has been completely debilitating for her, and she can barely do any of the many activities she used to enjoy in life. Even eating a meal or walking out of the house often require assistance.
Since there are no known cures, my recommendation to her was to try miscellaneous things that *might* help her (and otherwise have low risk), per https://www.lesswrong.com/posts/fFY2HeC9i2Tx8FEnK/luck-based-medicine. With this approach, the most valuable interventions to try first are ones that have anecdotally helped others. So I'm posting here to see if any readers know of similar medical cases that *were* successfully resolved, and can share what helped for those people.
Any ideas would be appreciated!
@ScottAlexander - Could you signal boost this in a coming open thread? I think that would strongly increase the likelihood of this working out, without setting an exploitable precedent.
I coincidentally just read about a guy who had a viral infection leading to ME/CFS and then resolved it. His problem seems to have been caused by lax connective tissue in his neck, which made the skull sit lower than it should and so the spine was putting pressure on the brain stem. Using a halo brace and then later a skull-to-spine fusion surgery completely fixed his extremely severe symptoms. He made a website about it: https://www.mechanicalbasis.org/
A sizeable fraction of post-viral fatigue sufferers still have some of the virus lurking in their systems, and are treatable by vaccines and/or anti-virals. One big advantage long covid sufferers have over many CFS patients is that they know precisely which virus they got.
If that approach doesn't work, and the symptoms persist, there are many avenues to try to improve general health and treat the symptoms, but even the success stories for those usually look like "now able to work part time", which, while a massive improvement over the no-treatment case, is not exactly "perfect health".
Some things to be aware of about Long COVID, aka PASC (Post-Acute Sequelae to COVID-19)...
1. There are now 8 competing theories for the mechanism behind PASC. One, some, or all of them may be valid depending on the symptoms. None of these theories has been conclusively proved or discounted.
2. Most of these theories involve mechanisms based on inflammation and/or tissue damage. But the byproducts of these inflammatory processes should be showing up in blood work. They're not—at least not at higher rates than negative control groups.
While treating the symptoms may be a useful approach, I'd be curious if your friend might still test positive for SARS2 infection using one of the more sensitive tests.
It's worth noting that other severe viral infections can cause downstream health problems—I saw one study where the rates of "Long Flu" are roughly the same as the rates of Long COVID. And measles has an even higher rate of post-infection syndromes that last longer.
As for severe PASC, most people recover in 90 days—but there is a very small percentage of long-haulers whose symptoms last longer than 120 days.
Long covid sucks... I know a few people who have/had it. It seems the first thing is not to push too hard. In most cases people recover, albeit slowly, but every attempt to rush the return to pre-covid levels of activity tends to cause a flare-up and set back the recovery process.
Yes! This is a really big deal for chronic fatigue, and one that depressingly many doctors are unaware of. A little bit of activity is good, but pushing too hard, either physically or mentally, causes immense damage.
It's not a great sign that she's feeling this lousy a month out, but I have felt that way a few times a month out after a bad flu. In those cases I just hadn't recovered fully from the illness, and after another month or so I was back to normal, except maybe for a lingering cough. Let's hope that's the case with your friend. And on the theory that she's just having a slow recovery, she should stay home, rest a lot, sleep a lot, drink plenty of fluids, avoid stress, take a multivitamin, and avoid alcohol and cannabis. She should not work out, but can try some stretches to see if that helps with body aches.
If after 2 months she doesn't feel better she probably does have long covid. Read up on it some. Try to find some good overview articles that have no ax to grind. Katelyn Jetelina's blog is fair-minded and intelligent -- search it for her views and for links. The last careful, fair-minded-seeming article I read about LC said that a pretty large percent of people who did have LC had recovered from it after 4 months. The things that make it likelier someone will be in the 4-month recovery group are younger age, having been vaxed before having covid, and good general health. If your friend has 2 or 3 of those going for her she is fairly likely not to have a long bout of LC. There's info out there about what's been tried. Paxlovid is one thing that seemed promising, and results are probably in, but I don't know what the finding was.
So I received an interesting offer from a Bay Area start-up and I'd like to find out how interesting it really is.
I work in IT (machine learning) and so far I've only ever worked with European companies, mostly Czech ones. I have something like 5 years of experience plus a PhD in maths - probability theory specifically (not very relevant I'd say, but some companies value it anyway).
Financially the offer is about 115k USD per year plus equity (not clear yet how much equity... I'd love some input from someone about what is usual in a setup like this) plus a sign-up bonus (which is more or less there to compensate for equity I have now and which I'd lose by switching jobs before the end of the year). I'd work remotely 100% from home (i.e. the Czech Republic). I'd work as a contractor, which probably means simply sending invoices to the US instead of a Czech company, and the invoices being paid in USD.
I've been in contact with the start up for a while (mostly discussing technical issues with them), I really like their products and design philosophy and at least the main people there seem very skilled. They are also past series A funding with something like USD 20M received from investors last year, so the equity is pretty valuable too.
I suspect that USD 115k per year would not be stellar in the US, definitely not in the Bay Area, but then again I don't live there and I don't have to pay rents/mortgages there. It is definitely a good deal compared to the money I can earn here (though not multiple times more). If taxes work the way I think they do I should end up with something just under 100k netto (after taxes, health tax/insurance and social welfare tax/insurance). For comparison, a new 1000-square-foot apartment close to the centre costs about 500k where I live.
I also wonder about vacation and work "ethic" (read: how many hours one is expected to put in). In Europe it is common to have 5 weeks of vacation plus public holidays. I work as a subcontractor even now which in the Czech system means much lower taxes, but also no welfare benefits and a weaker social safety net...kind of "American mode" (in IT you typically can choose either this or being an employee which means less money and more social security). I actually end up working more than is common for European employees but usually this is in the form of overtime and I still take those 5 weeks of time off, I simply work a lot of those hours during the rest of the year (so it is more like taking 2-3 weeks of vacation plus public holidays). I will still talk about this with the people from the company but I'd like to know what is common in the US.
You should treat pre-IPO equity as worthless. Most startups fail, and even if they do exit successfully, the VCs and Founders normally take nearly all the money for themselves. The days of millionaire secretaries are long gone.
I am not expecting to become a dollar multi-millionaire from the equity but I think there is a range between worthless and Google/Paypal/Uber/...The company could still sell for a decent amount of money, yielding a nice one-off bonus for people other than the founders even if it doesn't instantly make them super rich....or do you think it is always to the moon or bust with start ups in the US? European start-ups who do not completely fail (which is still common, but more so at very early stages) end up being sold for a moderate amount of money (but I'd say European VCs are also a lot more conservative than those in the US, they prefer higher chance of moderate success to unicorn hunting)
Equity isn't money. One could make 100 shares of a company worth any amount one likes, given the variables of projected income, assets, and number of shares. It would be instructive if you were able to calculate the percent of the company they are offering you.
Even if you got a considerable share of the company, if the venture doesn't pan out, then yes, the shares would be worthless. If they are offering you equity, you shouldn't rely on it to live or eventually retire.
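As a back-of-the-envelope sketch (every number here is hypothetical; the point is just which quantities you need to ask the company for):

```python
# All numbers hypothetical -- replace with what the offer letter actually says.
options_granted = 10_000            # shares in your option grant
fully_diluted_shares = 10_000_000   # total shares, including the option pool
strike_price = 1.50                 # USD you pay per share to exercise
exit_price_per_share = 5.00         # USD an acquirer pays per share

ownership = options_granted / fully_diluted_shares
payout = options_granted * max(exit_price_per_share - strike_price, 0)

print(f"ownership: {ownership:.4%}")              # ownership: 0.1000%
print(f"pre-tax payout at exit: ${payout:,.0f}")  # $35,000
```

If they won't tell you the fully diluted share count, you can't compute the percentage, and that itself tells you something.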
If you work at a startup for three years, it goes public, and your stock options are worth 6k, would you be happy? (This happened to me)
Obviously 6k > 0, but it may as well be zero at the scale we're talking about. And this was one of the *good* cases. Most startups never make it that far.
I worked for a startup that got bought, and unfortunately I didn't know how to properly pay taxes on my stock options so I had to both pay a penalty and then the income (rather than capital gains) rate on them.
Well, you're right, I wouldn't be too happy about that. Something like 50-100k would be nice, though, even if it is not millions. You are right that it is not guaranteed at all. The founders have some previous successful exits behind them already, though; compared to other start-ups I've seen I'd definitely rank them in the top half in terms of success chances. No red flags I can see, some track record of past success and clearly some skilled people working there.
> The founders have some previous successful exits behind them already though
Is there a way to contact randomly selected former employees of them? Because you want to know how much money the employees made, not the founders.
I am not an expert on this, but from what I have seen on the internet, it seems like there is an unlimited number of ways to make your shares worthless, unless you are an expert. First, shares can be diluted, so at one moment you own 1% of the company... and the next moment you own 0.000001% of the company; and then the company is sold for a few millions, and you get a few cents. Second, there can be different types of shares, so when the company is sold, the shares with higher priority (those owned by founders and big investors) get the money, and the shares with lower priority (those owned by you) don't, or something like that.
Maybe I got some detail wrong, but the idea is that unless you perfectly know what you are doing, you are simply trusting the founders to voluntarily give you the money they could have put into their own pockets instead (and you also trust them not to change their minds later). The share itself, unless you perfectly know what you are doing, may mean very little.
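A toy version of both failure modes, with made-up numbers (real term sheets are far messier -- participation, ratchets, stacked preferences):

```python
# Toy liquidation waterfall. A 1x preference means investors take their
# invested money off the top; common shares (yours) split the remainder.
sale_price = 30_000_000
preferred_invested = 20_000_000   # e.g. the Series A money, 1x preference
common_shares = 8_000_000
your_shares = 8_000               # 0.1% of the common, before dilution

remainder = max(sale_price - preferred_invested, 0)
print(f"${remainder * your_shares / common_shares:,.0f}")        # $10,000

# Second failure mode: a later round doubles the share count, halving you.
print(f"${remainder * your_shares / (2 * common_shares):,.0f}")  # $5,000
```

So a "$30M exit" can quietly shrink to a small check by the time it reaches an early employee.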
If vacation time is important to you (as it is to me), I doubt if any Silicon Valley startup will offer you more than two weeks. I stopped considering offers from startups long ago. None of them would agree to more than two weeks of vacation (and 4 weeks was the minimum that I would need to recover from working in what would likely turn out to be a product development death march). Also, asking for vacation makes you look like a slacker in many hiring managers' eyes.
And I don't know what the current startup success rates are, but it's very unlikely you'd see much upside from stock options etc. OTOH, I know people who love the challenge of working for startups, and after working for several startups they've been able to negotiate extremely high salaries—in the lower range that uf911 mentioned—because of their experience and skill sets.
I think my approach to vacation is that I am OK with working somewhat more than 8 hours most days but then taking the 5 weeks plus public holidays off. So on average it is as if I worked normal 40-hour weeks full time and had 3 weeks of vacation, but the distribution of time is different.
The startup I am talking about builds tools which are used in certain banks and even one very large European manufacturing group (though there they don't use the paid offering, only the FOSS stuff currently), I actually think they have a good chance at success. So I wouldn't say the equity is worthless but it is "probabilistic" and it is not the decisive factor for me either.
A thing to be aware of is that, unless you have founder's stock, the options you get can be canceled when a bigger company purchases your company. I say "can", but that's not necessarily going to happen—but it has for some of my friends in startups.
The answer to how interesting the offer is, is extremely sensitive to your level of skill as a developer.
The data I’m using is personal experience hiring & managing a total of ~350 software developers in ~12 countries since 2000, including devs who productize ML systems, and 10 pure maths ML folks who were terrible developers but skilled model-builders.
Potential is priced locally, talent is priced globally.
~$10k/month in base is fair if you can stand up an existing open source ML project, get it working with a pre-trained data set or do the initial training with a given training and validation set, and do this within a few hours, or at the most a couple days. I have no idea what your level of skill is, but globally speaking this is not quite at the “talent” level.
If you’ve published legit work in any of the tier 1 journals, and there’s any degree of community or stars or forks of your open source data from that published research, and this is in addition to the level of skill I’ve mentioned above that you could demonstrate in a pre-hire project, $15k/month is a reasonable baseline expectation. With enough time you would be able to find a job at $20k/month with a company with substantially more than 20m in funding.
If during your five years of work you've been one of the key people creating pipelines, combining different ML components into working production ML systems, then you have talent. Even an A-round startup founder/CTO knows to expect to pay $30k-40k/month for this type of talent, and has budgeted for 2-3 people at this price range.
I haven't published in ML(Ops), I have some publications from during my PhD and postdoc, but that is pure maths.
> you’ve been one of the key people creating pipelines, combining different ML components into working production ML systems
I would say that this is the case. Nowadays I am mostly leading teams of data scientists or ML engineers on various projects with different customers, coming up with the architecture but trying to code as well in the meantime. Lately, these have been mostly MLOps projects rather than actual ML (and MLOps is also the focus of that start-up).
$35k a month would be 4 times as much as what is common in the Czech Republic for someone like that (and no equity).
The same is true in most cities in East Europe, East Asia, and other cities outside of the G-7 where there are local software communities 15+ years old. But what is typical is not what is really interesting, job-opportunity-wise.
The ranges I listed are what’s available & interesting, among companies tripling+ revenue each year for 4+ consecutive years after reaching >$1m in revs, with > 30% of revs coming from outside the global region where the company is based. And from companies that have been fortunate enough to receive the same type of funding as companies with that performance.
There’s a wider range of salaries in the US for ML roles, up to $1m-ish in base in the Bay or Brooklyn for talented mortals. Higher for stars.
Usually in the US you'll get about 3-4 weeks off plus public holidays (I think 4 is more common for experienced people, but startups might be a bit lower). I had 3 at Google and 4 when I was at hedge funds (I think Google also does 4 for senior people). It's something you can probably mention before signing.
The pay they're offering you is definitely low for bay area but reasonably high for a full remote international job, presumably because they know you well enough to not worry about an unknown worker quality issue.
Well but everything is negotiable. I had a job where I took vacation increases instead of pay increases a few times and had 6 weeks of time off at ~ age 30 in the US.
If people actually value your contribution, you generally have a lot of power/flexibility.
That is a good point. Also your negotiation power increases as you work there for longer provided that your contribution is valued, as you say. It is quite expensive to hire and onboard new people, doubly so if they are to replace people who bring a lot of added value to your company.
I write a simple newsletter where I share three interesting things once a week. Last week I shared a video explanation of the double marginalisation problem by Alex Tabarrok, a data-led twitter thread on the differences between US and UK politics by the chief data reporter at the FT, and a thought-provoking essay on what Napoli’s Serie A win means for the city of Naples.
Usually when I tell people about Georgism, they say it's too big of a change, will never work. But yesterday I told a friend and got a different reaction...
- "there's a way to tax land (versus property) that's efficient for society and you can drop tax rates elsewhere significantly"
Q from friend: "why is that better than property taxes? "
A : "Because it incentivizes improving your property and maintaining it, putting it to good use. You can increase the taxes higher without discouraging development, and then lower the income taxes further"
Response: "Eh. Ok. Marginal benefit. If I'm going to overhaul something in the tax code, I'm not going to use my one bullet on that. I still don't see it generates any more tax revenue than property tax (it just has slightly different incentives)"
Even though they are equivalent, I'd emphasize the "zero deadweight loss" framing over the "no disincentives" one. If you can raise significant revenue with zero DWL, that seems hard to dismiss as 'marginal'.
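For what it's worth, the textbook Harberger-style approximation makes the point directly (my notation; t is the per-unit tax and the ε's are supply and demand elasticities):

```latex
\mathrm{DWL} \;\approx\; \tfrac{1}{2}\, t^{2}\, \frac{Q}{P}\,
  \frac{\varepsilon_S \, \varepsilon_D}{\varepsilon_S + \varepsilon_D},
\qquad \varepsilon_S = 0 \ \text{(fixed land supply)} \;\Longrightarrow\; \mathrm{DWL} = 0.
```

Land's supply elasticity is (approximately) zero, so the revenue is a pure transfer from landowners rather than a distortion, which is exactly what the "marginal benefit" framing misses.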
Well, property taxes are currently set at a very low rate. Georgism involves not just shifting that to a land tax but massively increasing its rate, and lowering other taxation accordingly, so it's more about shifting income tax to land tax than shifting property tax.
I'd be curious what they would use it on. I guess there are a small number of things that I might do if I really only could do "one thing", but I'm not sure that's a reasonable thing to think.
However, if we concede the point, the things I'd probably do before LVT would be:
1. Overall simplification of the tax code. (listing this is kind of cheating because it would involve a hundred other changes, but still)
2. Carbon tax (ideally with border adjustment)
3. Then probably the LVT? But maybe replace income tax with VAT? not sure.
There might be others that someone who has actually thought a lot about taxes might add. The main issue with implementing an LVT, as I see it, is that one of the two things it's meant to accomplish is blocked not just by the tax scheme but rather primarily by _dozens_ of other regulations on land use. Sure, it will be an efficient tax with no deadweight loss, but there are lots of those, so that's almost a secondary consideration. Its primary goal is to "fix" land use. And land use is perhaps the second most over-regulated thing in the US, behind only the medical system. Just changing the tax won't get rid of everything else. So while an LVT in theory is phenomenal, its biggest goal is probably not possible if an LVT is the _only_ thing you do.
Are there lots of those? Which other taxes did you have in mind that have no deadweight loss?
VAT, maybe, from an incentives perspective, but only if it is perfectly uniform (i.e. not any real-world VAT), and the bureaucratic overhead is not small.
There's some reason to think that Wegovy and Baclofen prevent addictions, including alcoholism and compulsive shopping, for some people. Suppose it's true, and would work on a large scale with tolerable side effects.
How much of the economy is compulsive shopping? It's hard to measure, since it's about a mental state, but I'd say it's buying things that the person doesn't especially want - they want the experience of shopping - and it can range from knickknacks to at least redecorating, because what else is there to do with one's time?
I could find anything from 10% to 30% of the economy plausible. How much of alcohol sales would go away if people didn't feel a craving?
If you define it that broadly - basically all shopping that happens out of boredom, impulsivity, or just because you happen to be in the store - then I'd say compulsive shopping (and producing those items, delivery, ads and other related jobs) is probably a massive part of the economy; 30% sounds reasonable.
But I'd argue shopping is only compulsive if the person doesn't actually want to do it, or at least it creates more hardship for the shopper than they would reasonably agree is worthwhile. For example, because they go into debt, or their house becomes unpleasantly full of stuff, or other things along those lines. In that case, it's probably less than 5% of the economy and it would be fine to get rid of it.
Compulsive shopping is what others do; I make great deals. - Honestly, I like the question and may ask it at times - but the wiser way is to assume we buy X because we actually want X at that price, for whatever reasons, including being wrong about our needs/wishes. "No craving" is "no demand". There may be many reasons to buy alcohol, not just the craving to drink the stuff till you drop to the floor, now and alone. But most reasons are ultimately based on the assumption that someone some day would like to drink it. - If you are on medication that kills your interest in stuff - why spend more than $5 a day on food? Why even eat?
Does anyone know if something is going on for rationalist secular summer solstice in the NYC area? I ask because I attended last December's winter solstice event there, which was a pretty big advertised thing, and they referred to annual summer solstice events during it, but I can't find any announcement or information whatsoever about a planned summer solstice event.
I wrote a history of how independent courts gained the power of judicial review in common law systems. It's a history focused on the institutional questions -- how do the courts internally discipline themselves, and how do they use this discipline to influence other branches, despite lacking the power of the sword or of the purse -- and so it's rather different than the standard case-focused histories of this which lawyers tend to write.
It's a sequel of sorts to this piece on why courts might serve as a nice model for governance in the future, given that the fertility crisis and the scaling laws behind AI progress both seem to push for certain kinds of decentralization: https://cebk.substack.com/p/producing-the-body-part-one-of-three
Yes, this is commonly referenced in the standard histories of judicial review which lawyers tend to write; but my focus is on how the court's institutional discipline helped it actually practice judicial review (and the reason I wrote this is that I haven't ever found a decently rigorous general essay which takes this perspective).
Further, I personally don't think it matters much that -- in the 78th in a series of essays written by a particular set of authors -- Hamilton argued for judicial review. I think it matters much more that pretty much every organ of government vociferously disagreed across the whole relevant period, and yet that the court was very gradually able to build this power for itself.
? The 78th in a series of essays written by a particular set of authors? It was written by proponents (and authors) of the Constitution for the purpose of advocating for its ratification. How can it "not matter much"? It is the single most significant argument in favor of judicial review being not just a good idea, but required by the Constitution.
Yes. There were many authors of the constitution, and many arguments about what it meant, and, frankly, Hamilton was an eccentric, and the later Federalist papers didn't loom nearly as large in the actual ratification debates as they do in modern constitutional scholarship. For example, Madison is often referred to as "the father of the constitution," and he thought that judicial review “makes the Judiciary Department paramount in fact to the Legislature, which was never intended and can never be proper." Or you could look at any of a number of other examples: eg the Jeffersonian war on the courts which directly led to the Marbury v Madison decision (and which was the major populist cause of that decade). Etc.
My point is -- again -- that pretty much every organ of government vociferously disagreed with Federalist 78 across pretty much the whole relevant period. Even the court seriously doubted its own ability to overturn federal laws during the 1790s, and *the only time* it overturned a federal law from the founding until Dred Scott was in Marbury v Madison. And Marbury v Madison is almost universally regarded -- even by modern law scholars -- as an extremely contentious case, in which Marshall carefully designed his decision to *only* overturn the law that granted the supreme court the right to hear the case in question, *precisely because* everyone knew that the other branches would ignore any dictate from the court which claimed in any way to bind their actual behaviors. And I could go on and on and on. But, frankly, my views on this are already written out, at the link, for anyone who's interested in them.
I haven't read CEBK's history, but doesn't "judicial review in common law systems" also predate the adoption of the U.S. constitution? (although the routine exercise of that power by Federal courts in the U.S. certainly postdates it)
Yeah, about half of my essay is about the developments of the various high courts of england, and the disputes during the decade before 1787 between the state governments and the courts, and other such matters. And most of the second half is about how the court grew to use judicial review more muscularly during the century after the civil war. There's just not that much space for adding in political pamphlets from the ratification debates, especially given that other histories cover this and it's not particularly germane to my focus.
If anyone has some advice for this situation, I'd appreciate it. I am currently trying to figure out what my next steps should be in my education and career; currently I'm 23 and work in public policy, though I'm not sure if this is where I'd feel most satisfied or have a significant impact.
I really want to study philosophy in academia. I feel pretty comfortable biting most of the bullets: the stupid committee meetings, the bad pay, the pressure to publish. I spend most of my free time reading philosophy and thinking about it and it gives me a ton of joy. I started a few applications for MA programs last year but didn't finish any of them, but recently the local state university reached out and indicated that they still had funding. I finished up the application and am awaiting a decision. The only bullets here I don't feel fully comfortable biting are disappointing my parents and being isolated from my family (they kind of think philosophy, particularly moral philosophy, is useless emoting).
Option #2 is law school with the goal of animal advocacy. Factory farming is one of the most repugnant things I could possibly imagine, but it seems very tractable and solvable. It's pretty clear that if I dedicated my career to it, I could play a part in making some real progress in ending it. I'm a lot more ambivalent about actually being a lawyer, though. My LSAT is currently 157, but I took it at a low point in my life, so I am sure that with practice I can get it up significantly. That means I'd probably start in the Fall of 2024. I'm somewhat torn between beginning an academic career at age 23 or a legal career at age 27 for monetary reasons, too (I would like to be able to comfortably live without familial assistance sooner rather than later while sustaining my giving and all).
Whatever you do, don't take on student loan debt unless you are objectively *exceptional* in terms of professional work ethic, talent, intellect, and drive. As others have mentioned, the two fields you're interested in are notorious for only rewarding exceptional superstars while grinding down the merely talented and average folk. Don't financially cripple yourself with debt if you have any reason to believe you won't be an industry celebrity.
I say that as someone who was dumb enough to study film writing and editing at a nothing school, used nepotism to luck into an internship at a fourth-rate advertising agency, and discovered I had neither the talent nor the drive to succeed even at the bottom-feeder level of the industry. Thank the Flying Spaghetti Monster that my parents were wealthy and completely underwrote that entire waste of my time and their money, so I didn't leave with any student loan debt to service.
After that debacle, I floundered about in random jobs until I settled into my current position in hospitality, where, compared to my peers, I am objectively exceptional in terms of work ethic, talent, and intellect (employee of the year awards, constant attempts to poach me, etc).
At 43 I work no more than 40 hours a week, am a homeowner, debt-free aside from a mortgage, have acceptable health insurance, and am almost always able to stop thinking about work the moment I drive off the property. It'd be nice to have more money, but I don't worry about not having enough to survive. It's a good life, and I'm the most content, least-stressed person I know.
Which is going to lead me to suggest Option #3:
Pick an in-demand blue-collar trade and use your intellect and talent to be exceptional at it, or exceptional at building a business around it. Perhaps something in animal husbandry, as you're more likely to make positive changes if you participate in the industry and have a deep working knowledge of it.
Or alternatively, just be a literal or proverbial plumber, clock out after eight hours a day of getting paid more than anyone in academia, be able to spend time with your family, pursue your philosophy degrees for fun with cash, and put excess income into giving.
"Yeah, but I can't possibly be passionate about blue collar work," you might be tempted to say.
And I'll agree that might be true, but as someone who's 20 years downstream from you with many friends who took on student loan debt to pursue their passions, let me say: It's better to be in a stable financial situation in a job you aren't passionate about than to be passionate about how student loan debt is preventing you from buying a home / starting a family / changing careers / etc.
As someone who thought philosophy was fascinating in my early 20s, and regretted greatly that I had ended up studying engineering instead of philosophy, I would highly encourage you to pick up something that actually gives you skills that society considers valuable. There are very good reasons why very very few people get paid to do philosophy. It is close to worthless.
Honestly, both of those plans sound wildly impractical and likely to leave you disappointed. You can either go into philosophy, fail to get a job in philosophy, and then wind up doing something else, or you can go into law, fail to get a job on the very specific side of the very specific case that you want, and wind up doing something else. At least with law school the "something else" might be more lucrative than the average failed philosopher gets.
This is true. If failed philosophers end up doing something that nets them enough money to live independently, though, it's possible that life would still go satisfyingly enough for me. The only issue there is that I would make (and therefore give) significantly less. On the other hand, it seems like a lot of failed philosophers end up as lawyers, from what I see, so maybe it's best to cut out the middleman.
This is generally a poor way to make a living. Also, are you a woman or non-white? Because then it will be a lot easier. Philosophy hiring/admission committees HATE white men.
Generally, unless you are going to do very well in either field (are you really a top 30% person compared to your peers in such programs?), I would pursue something with a lower barrier to entry/investment.
Going to law school and becoming a public defender, or doing some entry-level law at some big bank for $80k/year because you got poor grades, is a waste of time/resources.
Law school and graduate programs are the type of thing that mostly makes sense if you are actually going to excel.
I'm not sure a law degree is crucial for doing animal advocacy. It's not like the organizations doing factory farming are going to go weak in the knees when they hear you have a law degree. Seems to me that someone could also have a substantial impact via journalism, documentary film-making, or fund-raising.
Maybe look on the Effective Altruism website for ideas about roles. They have job listings.
You don't know how many thousands of people (tens of thousands? More?) have faced that exact same decision--liberal arts Ph.D. and academia versus law school. My girlfriend in 1979 needed to decide between pursuing a Ph.D. in Linguistics and an academic career, and becoming a lawyer. I've lost track of her, so I can't tell you how she looks back on her decision, but the internet indicates she now has a successful career in her late 60s as a crypto lawyer.
I am a lawyer with some personal experience in this area. Without going into substantive details, the kind of work you're talking about is high-end impact litigation and largely limited to people who attended top-tier law schools. The path to the job you want lies most directly through getting your LSAT up to at least 170 and getting into a T14 law school (ideally with some kind of public interest scholarship or other money). You could consider going one step down in prestige (Vandy, WUSTL, etc.) if you get a full ride. Any place else is most likely not going to open those doors for you, even if they put a JD next to your name.
Law school debt can be crippling and the total cost of attendance (including the opportunity cost of three years of lost income) can easily be $300,000 or more, so it's important to be clear-eyed about it.
Hey, thank you for this comment; these were some of the things that made me seriously question lawyering in the first place a few years ago, but I lost sight of them, so I appreciate you putting them back into perspective for me now. I wasn't weighing these factors as clearly as I did in the past, and I needed this sort of re-grounding to help guide me as I go forward.
Yeah, of course. I'm not part of the categorical "don't go to law school" crowd because I am personally very happy with my career and lifestyle. You just need to be honest with yourself about what you want out of it and whether the law school options that are open to you are likely - as a matter of median outcomes, not one person at the tippy-top of the class - to get you there. For some people it's a good bet. For a lot of others, the answer is no and they would be better off doing something else, and it's way smarter to realize that ahead of time.
If someone wants to do serious study of philosophy as a hobby, what would be a good approach? If they want to be in contact with other serious students (academic or not) what are good methods?
>If someone wants to do serious study of philosophy as a hobby, what would be a good approach?
Find the nearest philosophy program. Find out about their meetings/symposia. Attend them and ask lots of questions, do your reading and prep ahead of time. Befriend the faculty. It is not hard to make friends with people if you try.
Ask them questions about themselves and their interests.
Do you know 80,000 Hours? It's a good website for career planning and thinking about how to have a social impact. They have plenty of stuff on animal rights too, although be warned that a lot of it is based on Peter Singer's writing and "naive" utilitarianism. In my personal opinion the Effective Altruist movement as a whole, which the website belongs to, is quite detached from reality when it comes to agriculture; for example, you'll see the claim that 99% of all animals are reared in factory farms thrown around a lot. Having said that, they have a lot of good resources in this area and help to campaign for things which get less attention, like fish and crustacean welfare.
My suggestion to you, as someone that did an MSc in Philosophy, is to try it out at a school that is aligned with your interests. Academic philosophy is different to other subjects in that it varies significantly between schools - some like Pittsburgh are geared towards analytical philosophy of science, others like Frankfurt are pure critical theory. Academia is ruthless and punishing, however a select few enjoy it. I would consider doing a PhD in something related to Economics as it is often considered the most valuable and rigorous PhD to have in the social sciences, perhaps Philosophy & Economics. Good luck!
I actually was pretty ready to bite down on the bullet and enroll in a philosophy graduate program before discovering 80,000 hours haha.
I appreciate this. I really only like analytic philosophy, so it's honestly up in the air if this funded program would be any good for me at all. I appreciate the input.
In my own experience the bad pay is much easier to tolerate when you're younger. When I was doing my Master's in Economics, my limited funding was more than enough to live on, and in fact I was pretty thrilled to be paid to study; it was seriously a dream come true. As I was doing my PhD, despite having more funding, the low pay really started to get to me. I grew tired of constantly having to watch my bank account to ensure I had enough money to make rent next month, of dreading whenever a birthday came around because I had to spend $50 on a gift for someone, of basically putting the rest of my life on hold to pursue a career path that got rarer with every passing year. I really wouldn't recommend pursuing a PhD or a career in academia, especially if you want to live comfortably without familial assistance. If you just want to do an MA for its own sake, then I'd say go for it, as long as you have funding and aren't jeopardizing your career.
In Canada most MA/MScs provide some funding to students. Economics as a field is actually fairly generous with funding, at least compared to something like Philosophy. The flipside was that (at least back when I was a graduate student) you had to complete a Master's degree prior to entering a PhD program, though personally I think the MA in Economics is a good investment.
Pure preference; I think you should get a job that directly contributes to society, and save philosophy for a hobby. It's like trying to be a professional chess player.
I'm not sure how law school will affect factory farming. Animals don't have legal rights, and I'm not sure who would have standing to go after them. I guess you're more aiming to get into politics and write policies that get rid of them.
I wouldn't agree with your parents that a degree in philosophy would be "useless emoting", but there is a sort of "useless" aspect in terms of career potential: it would be great if you could go on to become a tenured philosophy professor, but my impression is that quite a small number of people with higher-level degrees in philosophy manage to reach that stage. You would be dealing with the stress and frustration of needing to regularly write papers addressing problems that have been debated for hundreds if not thousands of years, and convince journals that your take is somehow still original enough for those papers to be published. I suspect that if you take more of a "social justice" angle within your moral philosophy specialty you might have an easier time publishing and generally getting attention, but I don't know.
I always think of an only slightly above average intelligence friend who went to the worst law school in our state, worked bankruptcy law for a bit, lost his job in a recession.
And then was making more money as a part time bouncer/part time ju-jitsu instructor. Never went back into law.
Being a lawyer is great if you are going to be an amazing lawyer. Is that you?
This is a good thing to consider for sure, though I have worked in a few law offices and definitely see a lot of people lucking out or failing upward (inversely from academia, being a white man in my locale puts me at a huge advantage in the legal profession on day one).
Yeah, neither is dairy. But a life without cheese is hardly worth living, is it? I have happily replaced real meat with beyond/impossible, but it is probably not a great solution at scale.
I'm curious as to what made you go for ultra-processed meat alternatives vs. traditionally and more humanely raised meat (pastured cows, forested pigs, etc.)
Well, you don't know for sure how these animals lived and were slaughtered, only what they tell you. And beets and soy, as far as we know, do not have feelings and do not experience pain. There is probably second- and third-order suffering inflicted by humans on some poor creatures because roots and veggies need to be grown, harvested and processed. One has to stop somewhere though, until we can grow green skin or have inbuilt miniature nuclear reactors. I can perfectly well accept that for some people the acceptable boundary is "ethically raised farm animals", for others "forest animals humanely hunted" etc. "I just won't think about this and eat meat" is a less defensible position.
Has anyone thought about the idea of hardware specialization for AI as a route to preventing self-improvement? For example, incorporating organic components or watermarking into AI hardware could make it more difficult to manipulate or replicate without damaging the system, or one could explore unconventional materials or structures that exhibit unique physical properties to increase the difficulty of copying or self-improving AI hardware. Basically, make it extremely hard for the AI to run itself, or some equivalent computational agent, on other hardware. You could also make the AI system composed of several hundred quintillion (or some larger number of) cognitive units, or smaller components of cognitive units (i.e. neurons or the DNA of neurons and/or something similar), each of which requires editing or copying for self-improvement. This creates a situation where trying to self-improve means having to make some very large number of independent changes, each of which introduces some risk of error and could cause some of the cognitive units to propagate massive errors across the whole system, leading to catastrophic failure.
For the first idea, wouldn't this make it equally hard for the operators of said AI to deploy it easily and conveniently? For example, how would Microsoft deploy GPT-9 to Azure if half of it was slime mold cells that need physical transportation? How would researchers replicate each other's results, or debug their own, if the AI runs on DRM-ish hardware that keeps acting and reacting differently each time you vary its conditions of operation even slightly?
Biology is also sometimes very inefficient compared to other types of physics along certain dimensions: everybody is fond of saying the brain only consumes 100 watts, but they forget that it needs to consume a varying amount of inconveniently heavy and oddly specific types of mass to source that energy, and then wait hours for the processing of that mass to release it, while traditional electronic computers simply slurp energy by the ton from any source.
I'm also not sure about the "biology makes copying hard and lossy" bit; it's a common trope, but how true is it? Asexual organisms are basically immortal. The lossy copying thing was deliberately introduced by sex; it's a feature, not a bug, because an offspring that doesn't look exactly like you makes life hell for parasites: they spend their whole life optimizing against your genes and then baaam, you just get someone else and mix your genes together to make a whole new genetic profile. If you want to make the AI's copying of itself lossy or unreliable, perhaps introduce the same pressure: figure out what it means to "parasitize" the AI. Some sort of nightmare descendant of today's malware, perhaps? Wouldn't that equally harm us?
For the second idea, I'm not sure how you would achieve it. The basic idea is making incremental improvement hard, right? How would you do that? By making the AI's code or design very tightly coupled, so that everything affects everything and you can't play with it in small pieces? Wouldn't that also make it harder for engineers to improve it (thus giving the advantage to rival companies that won't do that, and making their AIs eventually surpass yours)? And how would you know that what's hard for you is hard for the super-smart AI? Maybe combinatorial explosions are actually Super Easy, Barely An Inconvenience to superintelligences, and only hard for us.
I think the paradox that people who fear AGI talk about is that the vast majority of ways to make your AI more obedient will also make it less useful to you. For what it's worth, I don't buy it; I think computer science has a way of stumbling upon ways to simulate intelligence without simulating independent will or other such things. Every way of achieving intelligence I know of right now is not remotely close to setting up its own goals. But if we do get an AI paradigm that can do that, then attempting to shackle its will as you describe will simply destroy its intelligence, or make it vastly less useful to us, altogether.
If the goal is to develop a smart, human-level AI (just below Einstein, say) without aiming for superintelligence, then making incremental improvement difficult could potentially be an effective approach. By introducing barriers or limitations to the AI's self-improvement process, you can control the rate and extent of its advancement. Under this scheme you wouldn't want researchers replicating results easily; in fact, you would want replication to be very difficult.
Engineers wouldn't need to improve it, because it would already be sufficient for whatever benefits you can get without significantly expanding the risks. And even if the AI starts misbehaving, since it isn't smarter than all humans, or even possibly the smartest human ever, containing it would be much easier.
The comparison to biological copying was just a general analogy. While asexual organisms can be more efficient at copying their genetic material, the introduction of sex and genetic recombination offers advantages like genetic diversity and adaptability. Applying similar principles to AI self-improvement could introduce mechanisms that add diversity or variation during the copying process, making it more challenging for the AI to improve itself reliably. Human mutations, after all, are much more likely to harm a human's functioning than to benefit it.
It's worth mentioning that certain aspects of a biological AI could potentially be abstracted and exposed through an API. For instance, if a biological AI system includes components that can be interfaced with traditional computational systems, an API could be designed to facilitate communication between the biological and digital components. That could help with deployment, while the hardware still fundamentally restricts self-improvement of the general intelligence.
I've been ruminating about self-improvement since hearing that No Priors podcast where the developer talked about getting improved performance by noticing what the players did when they were beating the machine: stopping to think for a little while. So then he built in a little "stop and think" algorithm, and got a lot of improvement. I've heard about a number of tweaks that led to improvement. So I was wondering about using the usual kind of training to teach the machine itself to pick out promising approaches. For instance, you could show it brief summaries of a number of things that in fact led to improvements, and a number of things that didn't, and then train it to choose approaches that are more like the first group. Then you show it brief summaries of a bunch of ideas and have it pick the ones that are most like the first group. Maybe this would just lead to stagnation. On the other hand, the things I've heard of that seem to have worked seem pretty varied. For instance, one was to cobble together a bunch of models, some text-to-image, some for classifying images such as biopsies, some for business applications. Another was to preface a prompt by telling the AI it's a genius in the field it's being asked about.
Anyhow, this approach wouldn't get us to the machine improving itself, but at best to its coming up with ways to improve itself.
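For what it's worth, here's a minimal sketch of what that "approach picker" could look like. Everything in it is invented for illustration (a real attempt would presumably use a big model's embeddings rather than TF-IDF, and far more than five examples), but the shape is the one described above: train on summaries labeled by whether they worked, then rank new ideas.

```python
# Toy sketch of the "train it to pick promising approaches" idea above.
# All summaries and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Brief summaries of past tweaks, labeled 1 if they led to improvement.
summaries = [
    "add a pause-and-think step before the model commits to an answer",  # 1
    "tell the model it is a genius in the relevant field",               # 1
    "cobble together several specialised models behind one interface",   # 1
    "double the learning rate late in training",                         # 0
    "strip all punctuation from the prompts",                            # 0
]
labels = [1, 1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(summaries)
classifier = LogisticRegression().fit(X, labels)

# Rank new candidate ideas by how much they resemble the past winners.
candidates = [
    "have the model name its steps one at a time before answering",
    "replace the tokenizer with one that drops all vowels",
]
scores = classifier.predict_proba(vectorizer.transform(candidates))[:, 1]
for idea, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {idea}")
```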
"I feel like he can't just be referring to the thing where you tell ChatGPT to carefully go through the problem step by step?" Yeah, I think he was -- or something equally simple. Zvi, who is very smart and accurate (writes "Don't worry about the vase" blog) says telling the AI it's a genius in the field gets much better answers. You've got it both thinking and striving to do a good job at role-playing, & there's synergy.
" But there aren't any hardware tricks that would "prevent" that other than not having the ability to execute or modify code in the first place. " Yeah, I know, I wasn't trying to think of ways to keep self-improvement from happening, but of ways to make it happen. Not that I'm not worried about self-improvement.
I know! But he's like the least superstitious person on the planet. There's a second thing he advised adding to prompts too. I think it was to make the AI lay out the steps one at a time. Like you say, "I'd like you to review this spreadsheet and use the data in it to create a proposal that will appeal to the largest demographic on there. What will your first step be?" And you have it name one step at a time, and correct it if any of them seem wrong.
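To make that concrete, here's roughly what the two tips would look like combined, using the OpenAI chat API as it existed around this time; the model name, key handling, and wording are placeholders of mine, not anything Zvi specifically recommended.

```python
# Sketch: "you're a genius" role prompt plus one-step-at-a-time prompting.
# Uses the pre-1.0 openai library's API shape; model and key are placeholders.
import openai

openai.api_key = "YOUR_KEY_HERE"

messages = [
    # Role-play framing: tell the model it's an expert.
    {"role": "system", "content": "You are a genius data analyst."},
    # One step at a time: ask for the first step only, not the whole answer.
    {"role": "user", "content": (
        "I'd like you to review this spreadsheet and use the data in it to "
        "create a proposal that will appeal to the largest demographic on "
        "there. What will your first step be?"
    )},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)

# A human then vets the proposed step, corrects it if it seems wrong,
# appends it to `messages`, and asks for the next step.
```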
Some concepts seem intuitively obvious once grasped, so much so in some cases that one could be convinced one would have thought of it first at the time if someone hadn't already!
But others are the opposite, and for me one such is Gresham's Law. This says that "bad money drives out good". But why should that be so? If anything I would have thought the opposite was true. Taking the principle literally, presumably as intended, who in their right mind would accept a dodgy looking clipped coin for payment instead of a proper official coin, or a dollar bill that felt all wrong in the hand and George Washington's visage looked distinctly cross-eyed?
I can see a similar principle might hold to a large extent with goods, in that people, usually of necessity, will tend to make do with shoddy goods instead of well-made but more expensive equivalents, or cheap food instead of fancy restaurants. But I would be interested in cogent justifications of Gresham's Law relating to money specifically. Maybe I have been misinterpreting it.
"Good" money is money that will keep its value, or even increase. Bad money is money that will lose its value. (See also: stock market trading.) The obvious objective here, then, is to keep good money, and get rid of bad as soon as you can find someone willing to take it.
If bad money is universally recognized as bad, then no one will want to take it unless it's almost free (and maybe not even then, if storage is non-trivial). But if we also assume that the "money" part of "bad money" implies that people are required by law to accept it as payment ("take this in exchange for your product, or I'll tell the police and they'll shut down your business" and you believe that threat is credible), then everyone wants it to have as much value as they can convince whoever's selling them stuff.
So the result is that bad money gets traded frequently, like a hot potato, while good money sits in a vault because it's precious. So everyone sees the bad money circulating; no one sees the good stuff except rarely. Bad money has driven out good.
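If it helps to see the mechanism move, here's a toy simulation of that hot-potato dynamic; all the numbers are arbitrary, and the only rule doing any work is "the payee can't refuse, so the payer spends their worst coin first":

```python
# Toy Gresham's-law simulation: two coins with equal face value by decree,
# different real value. Everyone spends clipped coins and hoards gold.
import random

GOOD, BAD = "gold", "clipped"
purses = [[GOOD] * 5 + [BAD] * 5 for _ in range(10)]  # ten traders
spent = {GOOD: 0, BAD: 0}

for _ in range(10_000):  # random one-coin payments between traders
    payer, payee = random.sample(range(10), 2)
    purse = purses[payer]
    if not purse:
        continue
    # Legal tender: the payee must accept either coin at face value,
    # so the payer hands over a clipped coin whenever they have one.
    coin = BAD if BAD in purse else GOOD
    purse.remove(coin)
    purses[payee].append(coin)
    spent[coin] += 1

print(spent)  # clipped coins dominate the payments; the gold mostly sits still
```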
Example: In a country, there are dollars, and there's a local currency pegged to the dollar but people don't really trust the peg. Since the local currency has a less trustworthy value, people will now pay everything they can with the local currency while saving in dollars (the dollars are significantly but not completely driven out of circulation). If things get really bad though, people (but not the state) will start demanding payments in dollars for certain transactions.
People would normally accept any silver coin in payment which was not obviously worse than the average in circulation. Since some people would save and keep the best coins, and some other people would clip silver off the best coins - there was a bit of a ratchet effect and the average silver coin in circulation would get worse and worse.
I find it interesting that while English hammered silver coins got clipped a lot - this did not happen much to the hammered gold coins. The gold usually traded at a premium to its official value and no one would accept a clipped gold coin at the usual premium.
"Bad money drives out good" is an oversimplification-- the actual law is that artificially overvalued money drives out other money. Other people people have made the same point but I'm hoping this is a clear version.
Imagine that there are gold coins and paper notes that the government has decreed have the same value as those gold coins. In all likelihood, the gold value of the coins will in the future surpass their face value. A prudent individual will thus take the gold coins out of circulation, as they are a better store of value, and instead use the paper notes for exchange.
Ah, I get it now, although in your argument the prudent individual's payee could be equally prudent and demand payment in gold (unless, as in arbitrario's scenario, the recipient is legally compelled to accept the paper).
So in summary it really means "Legally mandated token or shaky currency forces intrinsically valuable or more reliable currency out of circulation, due to hoarding.", and that does make sense.
To my mind the standard statement is a little too pithy and thus somewhat ambiguous, especially as some of the words may have changed their former quaint meanings or implications in the centuries since Gresham was around.
The common denominator between Gresham's Law and Thiers' Law is "hoard gold; trash woodchips". In a state of nature, Thiers' Law holds and the vendor hoards the gold (by accepting only gold payment). In a state of fiat, Gresham's Law holds and the buyer hoards the gold (by paying in woodchips).
I think the reason Gresham's law may be confusing to moderns is that good money no longer exists, presumably as a result of Gresham's law. All we have left is bad money – aka fiat money – so we can no longer see the law in action.
Some fiat money is worse than other fiat money, so Gresham's law still applies.
I think the real ignorance comes from a lot of people living in circumstances where the fiat money isn't *that* bad and there's no better alternative currency.
Do you have any examples in mind of societies in which a good fiat currency has been driven out by a bad one? Also, it seems I was wrong about there being no good money nowadays: the Wikipedia article on Gresham's law says that the US had to ban the melting and mass export of $.01 and $.05 coins as recently as 2007.
I am not an expert so I may be completely wrong here, but I was under the impression that Gresham's law applies to two currencies which are both legal tender, so that it is illegal to refuse to accept the "bad money".
Indeed, when "bad money" is not legal tender, thier's law (the reverse of gresham's) applies and the opposite happens, as you would have expected
Hmm, OK. But even in that case, if for example a country has its own shaky currency but the dollar can also be used as de facto alternative currency, I'd have thought most people there would very much prefer to deal with dollars, for their security in the event of inflation or devaluation of the native currency for example.
Which is what you see in countries with exceptionally weak national governments and currencies: places like Ecuador, El Salvador, Zimbabwe, the British Virgin Islands, the Turks and Caicos, Timor-Leste, Bonaire, Micronesia, Palau, the Marshall Islands, and Panama, which all use the dollar as their official currency and do not have a national one.
This, I think, is precisely what happened in Zimbabwe, where there is now a multicurrency regime.
The point of Gresham's law is that if you can call the police to force the vendor to accept the shaky currency, you would much rather do that and keep your precious dollars for yourself, rather than giving them to the vendor.
Since there’s already a theist post on here, direct your ire there. Has anyone read The Purest Gospel by John Mortimer? I just finished it and want to talk about it!
Have not read it. My take is the line from Revelation where everyone will be pulled from hell and judged individually, implying there can be redemption at that point. Likewise 'heaven and Earth shall pass away' (which I'm told might just be English translation woes). Eternity isn't eternal.
Judging by the Amazon blurb, this is simply Universalism. So what is his take on it? Does he believe that eventually everyone, even Lucifer, will be redeemed? Or does he go for Annihilationism? https://en.wikipedia.org/wiki/Annihilationism
Judging by his video on YouTube, he seems young. As an aside, when I see "John Mortimer" I automatically think of the creator of "Rumpole of the Bailey".
Aligning Large Language Models through Synthetic Feedback
"We propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs with various sizes and prompts. Then, we use the RM for simulating high-quality demonstrations to train a supervised policy and for further optimizing the model with reinforcement learning. Our resulting model, Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are trained on the outputs of InstructGPT or human-annotated instructions. Our 7B-sized model outperforms the 12-13B models in the A/B tests using GPT-4 as the judge with about 75% winning rate on average."
Related to AI alignment efforts: I know it's been discussed on several platforms, but enhancing adult human general intelligence seems to be a very promising avenue to accelerate alignment research. It also seems beyond obvious that using artificial intelligence to directly enhance biological human intelligence allows humans to stay competitive with future AI. I'm having a hard time finding anyone who is even trying to do this[1][2][3]. It would even be useful to augment specialized cognitive abilities like working memory[4][5][6] or spatial ability.
1. Stankov, L., & Lee, J. (2020). We can boost IQ: Revisiting kvashchev’s experiment. Journal of Intelligence, 8(4), 41.
3. Grover, S., et al. (2022). Long-lasting, dissociable improvements in working memory and long-term memory in older adults with repetitive neuromodulation. Nature Neuroscience. Available at: https://www.nature.com/articles/s41593-022-01132-3 (Accessed: 21 May 2023).
4. Sala, G., & Gobet, F. (2019). Cognitive training does not enhance general cognition. Trends in cognitive sciences, 23(1), 9-20.
5. Zhao, C., Li, D., Kong, Y., Liu, H., Hu, Y., Niu, H., ... & Song, Y. (2022). Transcranial photobiomodulation enhances visual working memory capacity in humans. Science Advances, 8(48), eabq3211.
6. Razza, L. B., Luethi, M. S., Zanão, T., De Smet, S., Buchpiguel, C., Busatto, G., ... & Brunoni, A. R. (2023). Transcranial direct current stimulation versus intermittent theta-burst stimulation for the improvement of working memory performance. International Journal of Clinical and Health Psychology, 23(1), 100334.
"Increasing intelligence, however, is a worthy goal that might be achieved by interventions based on sophisticated neuroscience advances in DNA analysis, neuroimaging, psychopharmacology, and even direct brain stimulation (Haier, 2009, 2013; Lozano and Lipsman, 2013; Santarnecchi et al., 2013; Legon et al., 2014)."
Basically, the hype has died down slightly... but there are still papers coming out that ignore genetic reality and purport to find new, exciting effects of the gene in their inadequate samples.
I get the impression that most people in the field just don't realize that these studies are statistically underpowered, and their findings are almost certainly specious.
As to rubrics for scoring, yeah a lot of jobs have that. I mentioned the civil service assessment I did in an earlier post, and where I currently work we have a template for scoring when hiring for a particular job. That takes a lot out of the "ask silly questions about tasting the ocean" and just leaves some room for the impression the interviewee gave, how the interviewer felt on that day, etc.
Your example with Tyler Cowen just reinforces my existing attitude of "Why the hell do people think Tyler Cowen is smart?" but this could just be that I am too dumb and uncreative to understand the workings of superior minds such as this:
"For instance, Cowen liked to ask ‘what’s your most absurd belief’. Apparently, his favourite answer to this question is, ‘I believe if you go to the beach, but you don’t give the ocean a chance to taste you, she will come take her taste when she chooses’. "
That's just some smarty-pants reworking the idea that the sea is the green grave, the superstition of sailors and fishermen that the sea has a price, and takes that price in lives, so the drowned are the toll paid for the harvest of fish etc. taken from the sea.
"Taste the ocean", my foot. Would you hire someone who believes that there's going to be a tsunami hitting your city any day now, because they went to the beach last week but didn't go swimming? And I'm speaking as someone who spent their early years growing up beside the sea and still live in a seaside town.
It's a ridiculous question - what position is he hiring for? If you want to hire a screenwriter to work on a new hit series for a streaming service, "Taste the Ocean" might be a good metric to gauge creativity (though it's way too much like the Skittles' Taste The Rainbow ad line).
If you're hiring an accountant, someone who's cutesy 'creative' like that might also be cutesy creative in embezzling all your funds to pay for their tropical island getaway vacation home.
Besides, it's not even original, it's just revamping the old idea that the sea takes its price. Maybe Cowen never heard it before, but that doesn't mean the person who gurbled it to him at interview made it up out of their own wee little brainy-wainy.
"‘She said the sea would never drag Eamon Óg down to the cold green grave and leave her to lie lonely in the black grave on the shore, in the black clay that held tight, under the weighty sods. She said a man and woman should lie in the one grave forever. She said a night never passed without her heart being burnt out to a cold white salt. She said that this night, and every night after, she’d go out with Eamon in the black boat over the scabby back of the sea. She said if ever he got the green grave, she’d get the green grave too, with her arms clinging to him closer than the weeds of the sea, binding them down together. She said that the island women never fought the sea. She said that the sea wanted taming and besting. She said the island women had no knowledge of love. She said there was a curse on the black clay for women that lay alone in it while their men floated in the caves of the sea. She said that the black clay was all right for inland women. She said that the black clay was all right for sisters and mothers. She said the black clay was all right for girls that died at seven years. But the green grave is the grave for wives, she said, and she went out in the black boat this night and she’s going out every night after,’ said Inghean Óg.
…‘The sea is stronger than any man,’ said Tadg Mór .
‘The sea is stronger than any woman,’ said Tadg Beag.
‘The sea is stronger than women from the inland fields,’ said Tadg Mór .
‘The sea is stronger than talk of love,’ said Tadg Beag, when he was out in the dark.
…The body of Eamon Óg, that had glittered with fish scales of opal and silver and verdigris, was gone from the shore. They knew it was gone from the black land that was cut crisscross with grave cuts by the black spade and the shovel. They knew it was gone and would never be got.
…The men of the island were caught down in the sea by the tight weeds of the sea. They were held in the tendrils of the sea anemone and the pricks of the sallow thorn, by the green sea grasses and the green sea reeds and the winding stems of the green sea daffodils. But Eamon Óg Murnan would be held fast in the white sea arms of his one-year wife, who came from the inlands where women have no knowledge of the sea and have only a knowledge of love."
His book is specifically about identifying people with a creative spark. That said though, I am not sure your reasoning makes sense - basically, it seems you're suggesting one ought to be specifically excluding creative people from professions such as accounting.
Your point on whether the response itself was a good one, or plagiarism, is valid though.
Your thoughts on interviewing assume there is a science here, and there isn’t, is there? For example, you describe the clichéd response of “perfectionism” to the question about weaknesses or strengths as a bad response, but then say that the good interviewee:
“Knows their strengths and weaknesses and can reason to their root causes.”
Interviewers don’t actually want honesty here. They don’t want people saying they are bullies but the root cause is probably genetic; or that they get little done on Monday morning or Friday evening because they find it hard to get going and easy to get into the weekend spirit; or that they have a raging hatred of certain kinds of people, the root cause being bullied at school; or that they are functioning alcoholics and the root reason is that alcoholism runs in the family (whatcha gonna do?); or that they enjoy weed at the weekend but it’s a weakness they hope to eradicate, root cause a genetic desire to partee.
And on. And on. Literally everybody has a weakness that, if mentioned in the interview, will not get them hired even if the interviewer shares the vice.
What you are looking for here, then, is better lying about weaknesses that aren’t really weaknesses at all. Less cliched than the bad interviewee saying her biggest weakness is caring too much about work, but about as honest.
You're explaining how things are; I'm talking about how things _should_ be. As an interviewer, I definitely value honesty. Of course, there are cases where candidates will be honest about a defect that's a dealbreaker - and that's fine! If they're not a good fit, they won't enjoy the job either.
(I once asked a candidate to tell me about a time they solved a problem in an innovative way. They said that when the data from their thesis experiment didn't give them the result they wanted, they falsified the data... That's an example of honesty that didn't go that well for them!)
Yeah, I noted that too. The self-awareness rubric says it's bad if they give cliché answers like "perfectionism", but people give those answers because they've been coached as to "don't answer with a negative".
The good self-aware interviewee who can reason as to their strengths and weaknesses? They've just worked out a way to give the same kind of favourable answer but not have it sound cliché: "Well, one of my weaknesses is that I will spend a lot of time working on a problem. 'Good enough' isn't good enough for me. I think I get that from my childhood, when my curiosity was encouraged by my fifth grade teacher who helped me discover the wonders of science" blah blah blah, which all disguises "I'll take forever to get a task accomplished because I fiddle with tiny, unnecessary details" but *sounds* way better.
If I'm being interviewed about my SWOT, I'm sure as hell not going to say "I procrastinate until the last minute because the only consistently replicable motivation I have found to work is the panic about the deadline approaching; I will check out of a task if it bores me but I can sure make it *look* as if I'm working; and I do hold grudges like nobody's business so if you ever piss me off, I *will* remember and do all I can to frustrate you even in petty ways".
If you want me to bullshit about "I am self-aware and can self-analyse" I can do that, but it doesn't mean you're getting the actual *truth*.
I'd be more likely to hire you for the procrastination answer than the 'good enough isn't good enough for me' answer!
A good interviewer will probably be able to probe and push to get to the truth. If you gave me the BS good enough answer, I'd ask for examples where that has caused you to underperform.
Also hobbies. That’s nonsense too. I could do a good list now, since I don’t drink anymore, but here’s an honest assessment of my hobbies from 22-29 or so:
Any hobbies?
Drinking.
Anything else?
Pubs, night clubs and trying to get laid. Mostly not succeeding. Lots of drinking. I like food too but only as soakage.
And sports?
Yes! I play indoor footie once a week with my drinking buddies and then we drink. I also watch sports, generally the premiership where I support <your club if I can work it out> and go watch the club in <is it Anfield? Is that a red scarf?> which involves a lot of…
I had - and have - nothing so sociable. All my hobbies were things like "reading, listening to music": solitary activities. Nothing like "captain of the sports club, treasurer of the X club, award-winning member of Y" (which I think is mainly what this question is intended to ferret out, especially for the young and those getting their First Real Job: are you Demonstrating Leadership And Achievement Qualities?)
So I decided to leave that line out of my CV for once, and *of course* that was the one time I got asked about it in the interview 😂
Which is fine, except when you answer something like (at the time) "A Brief History of Time" by Stephen Hawking, and they go "Oh, what's that about?" and then you see their eyes glaze over as you answer 😁
To be blunt, Cowen's book (if I go by that creativity answer) sounds to me like the usual sort of business-guru management book that is popular because of one fad (the cheese/raincoat/X number of laws, habits, or changes of underwear) and then fades away when the next fad book comes along.
I'm just imagining all the people who come out with that "taste the ocean" line at interview because hey, Tyler Cowen says it's an impressive answer, and how that will go over in reality with average interviewers just wanting someone who can file, answer the phone, not get knocked up by/knock up a co-worker, and won't run off with the petty cash float.
Good self-awareness enables you to improve yourself. Good other-awareness enables you to know when you should tell the truth about your self-awareness.
Indeed, but unless you are absolutely stupid (or actively seeking to torpedo the interview since you don't want the job, you just want the experience interviewing with/for X), telling the unvarnished truth is self-evidently unwise:
"My weaknesses? Well, don't expect much of me on Monday mornings, I'll be hanging since the weekend, ha ha ha! Yeah, I like a good old session on the beer with the lads. And of course it follows on from that, that I can't stand misery-guts, so if you're going to be managing me with a face like a bulldog licking piss off a nettle, I'll tell you now that we're not going to get on. Sure, life is for living, not work!"
I admit, if I had nothing to risk and got the chance, I'd love to try that out on an American company which (over here at least) have the reputation of being deadly humourless and work-obsessed to the point of expecting you to dedicate one of your kidneys and your left leg to the job and the company, glory glory hallelujah!
One of my weaknesses is that I'm spiteful as fuck: I can hold grudges for years and years (I'm not exaggerating in the slightest), and I can accept lots of damage to my interests if the return is even half the damage to those I hold the grudge against.
I was thinking recently about what would happen if an interviewer asked this question and I unironically told them that. They would probably freak out.
Yes, I'm not going to tell an interviewer "Actually, that 'can work as part of a team' is bullshit. Yeah, I can tolerate having to work with other idiots for a while, but in general I hate people and am happiest in a corner on my own, with no one looking over my shoulder, doing my own work. Just give me the pile of paper to work through, then shut up and go away. I hate micro-managing".
That must be why in two jobs I got the job of organising the file room, even though I haven't specialised in filing at all. Just me, a room full of filing cabinets, and a shit-ton of files that had to be re-ordered, updated, duplicates weeded out, and new ones entered, and nobody to talk to or work with. Bliss! 😁
Not exactly your ideal job, but have you read _The Hollow Places_ by T. Kingfisher? There's plenty else going on, but the main character finds it satisfying to catalogue an overstuffed museum of oddities.
I suppose the trick is to come up with a "weakness" which is actually more of an asset for the job, besides the corny "perfectionist" one.
So if applying for a trader position in Wall Street for example, the candidate would be well advised to say they didn't suffer fools gladly and had been known to rip a sys admin's monitor off its desk in an impatient rage and throw it out of the window. The interviewer might tut-tut, but would think "Yes! Love it! Hire this guy now, they'll get quick results!" But the same approach would be unlikely to work if being interviewed for a first grade teaching assistant role :-)
"... I rubbed two sweaty palms together outside the interview chamber and tried to think only pure thoughts (half-truths), such as these. I did a quick equipment check, like an astronaut preparing for liftoff. My strengths: I was an overachiever, a team player, and a people person, whatever that meant. My weaknesses: I worked too hard and tended to move too fast for the organizations I joined."
Your piece on interviewing lines up nicely with Fully Unsupervised's second question below about why shouldn't we be allowed to discriminate in hiring.
The interview process is packed with CYA for a reason, and firing quickly for demonstrated lack of competence is a more valuable ability than trying to sift the lies people tell in interviews.
A new podcast about the fine tuning argument in physics in a language everyone can understand. Check out the first 3 podcast episodes of Physics to God: A guided journey through modern physics to discover God.
Episode 1 discusses the idea of fundamental physics, the constants of nature, and physicists’ pursuit of a theory of everything.
Episode 2 explains what Richard Feynman called one of the greatest mysteries in all of physics: the mystery of the constants.
Episode 3 presents fine tuning, the clue that points the way towards solving the mystery.
So I also believe in God but not one who is winking at us through the fine constants. Always curious when I meet other scientifically literate believers: do you find it necessary for God to have left some clue in the material world to find faith?
For me it was more like seeing the “goodness” in the world and that it wasn’t fake or weak or just a show people put on. Similar to Scott’s thoughts on Moloch, seeing that there exists within us and the world something that cares and strives to be better made me believe.
The fine tuning of constants that is apparently needed for our existence isn't really like a message as such. And it certainly seems like there is something that needs to be explained here.
Personally I'm a multiverse believer, but if you don't go that way you probably do need some explanation as to why the universe is structured in such a way as to give rise to the possibility of life.
Right. We're going to do a separate miniseries about the multiverse and show why, at the end of the day, we don't think it's a good scientific theory. If we don't successfully argue that point, our argument will be incomplete and unconvincing.
But first, there are a lot of people who don't appreciate the mystery of the constants or fine tuning, and that's what we're doing in this miniseries.
"do you find it necessary for God to have left some clue in the material world to find faith?"
The idea, at least from the Catholic angle, is that we can reason our way towards belief by finding traces of God in His creation, so belief is not unreasonable or baseless. That doesn't mean that reason alone will give us belief, but it's a ladder towards it. Contra Fideism, which is (at the simplest, crudest level) "We can't understand and shouldn't even try, just believe".
"For me it was more like seeing the “goodness” in the world and that it wasn’t fake or weak or just a show people put on. "
See Sherlock Holmes in "The Adventure of the Naval Treaty":
“Thank you. I have no doubt I can get details from Forbes. The authorities are excellent at amassing facts, though they do not always use them to advantage. What a lovely thing a rose is!”
He walked past the couch to the open window, and held up the drooping stalk of a moss-rose, looking down at the dainty blend of crimson and green. It was a new phase of his character to me, for I had never before seen him show any keen interest in natural objects.
“There is nothing in which deduction is so necessary as in religion,” said he, leaning with his back against the shutters. “It can be built up as an exact science by the reasoner. Our highest assurance of the goodness of Providence seems to me to rest in the flowers. All other things, our powers, our desires, our food, are all really necessary for our existence in the first instance. But this rose is an extra. Its smell and its colour are an embellishment of life, not a condition of it. It is only goodness which gives extras, and so I say again that we have much to hope from the flowers.”
Percy Phelps and his nurse looked at Holmes during this demonstration with surprise and a good deal of disappointment written upon their faces. He had fallen into a reverie, with the moss-rose between his fingers. It had lasted some minutes before the young lady broke in upon it.
“Do you see any prospect of solving this mystery, Mr. Holmes?” she asked, with a touch of asperity in her voice."
Which is an understandable reaction, as they're expecting him to solve a vital case about missing government papers, and he's standing there in a dream looking at flowers 😁
Hadn’t read that Holmes story, but something to that effect, yes, although I think I’m at more of a meta level with it: that flowers are possible at all, and desirable, even if part of what makes them desirable is our history with them. The fact that those relationships are possible means there is goodness in the world.
There are a lot of sources for faith, though many in the modern world find them challenging.
I don't think it's necessary for God to leave us any clue that he exists. But, I do think there is compelling evidence for God from fine tuning in physics, even if there's no reason to believe that God intentionally "put it there for us to find".
Starting with an end goal in mind (infer existence of God) is the opposite of science. At least be honest and say that you are looking for confirmation of your beliefs by any means possible, including interpreting scientific ideas to mean what you want them to mean.
I think if you hear the argument out, you will be surprised at how compelling it is. Keep in mind, the counter scientific theory is the multiverse. This is a very different situation than in biology where the alternative theory is evolution (a much better established scientific theory).
I can understand why you would assume I'm biased without knowing me. Nevertheless, the premise of the podcast is that by the end you will have first hand knowledge and be able to decide for yourself. If you think the argument is not convincing, it won't matter to you what I think or how biased you think I am.
I will make one point: I am aware that if we use biased arguments in the podcast, it will not be convincing to an honest person (which is our target audience). Feel free to let us know about any bad, biased argument you think we're making.
As a trained physicist, I am used to "just hear me out" pleas from non-experts. It is usually clear from the first sentence if the person knows what they are talking about, but it is almost impossible to change someone's mind if they are attached to their pet theories. You clearly fit into this reference class, ignoring anyone else's point and instead pushing your own. If you weren't, you'd review the classic arguments why you cannot infer the existence of God from scientific advances, and address them. So, yes, I am quite sure all your arguments are "bad and biased", no need to spend time on a low information/time content like a podcast.
"There is nothing like looking, if you want to find something (or so Thorin said to the young dwarves). You certainly usually find something, if you look, but it is not always quite the something you were after."
In my tutor group at university there was an extremely smart guy, yet he'd somehow dedicated himself to proving the existence of God via physics. I had to admire his effort, but it also appeared rather unscientific to start with the end goal in mind and then find a theory to get there. These days, as best I can tell, he spends his time writing books on theology.
Most of my friends over the last 10 years have come from professional settings. If I count college or high school as a job, then virtually all my friends came from a professional context.
As someone who now has a fair amount of control over who I work with, I want to just hire/choose people I like. If I diversify my colleagues in all dimensions, I’m confident there’ll be certain subgroups I’ll dislike. There’s some value to diverse perspectives but, less discussed these days, there’s value to monocultures where business-irrelevant topics don’t occupy much of the internal zeitgeist.
On one hand I believe strongly that restaurants and other public venues should not be allowed to discriminate who they provide service to. On the other hand, I think businesses (at least until a certain size), should be allowed to hire whoever they want to work with.
As a side note: my understanding of the research is that heterogeneous teams do better than homogeneous teams when they are able to capitalize on their differences, and that heterogeneous teams do worse when heterogeneity leads to friction and isn't exploited in a positive way.
If you're leading a heterogeneous team, I believe you have quite some, though not full, influence on whether it will be one or the other outcome, e.g. by selecting co-workers who are better able to see the strengths of abilities other than their own. There are so many different axes of 'diversity', though, and I'm not sure which ones you're thinking about.
*Please take with a grain of salt; for a while I read what came across on this topic, but I never did a thorough review or anything similar.
I'm really glad Deiseach told me how to deal with the error in the 'Edit' function. Nevertheless, I wish they would figure it out: I'm still seeing an empty comment after any edit.
The problem there is you may get, as people have experienced in their work lives, the group of "the boss and his little gang of cronies, yes-men, and hangers-on".
The colleagues who do as little work as possible and push it off onto others, while they apple-polish for the higher-ups and lick their boots. The guys who get on by being charming and personable and on the right side of the boss. Getting a job by "hey, you play golf too!" at interview isn't the best criterion (it happened my brother, who luckily *is* a very good worker but made us all laugh by recounting how the interview was basically him and the interviewer chatting about golf).
"People I like" may or may not be "people who are good at their jobs". There's a subtle but real difference between "can fit in to the existing company culture/get on well with others" and "will be able to do the job, not spend most of the time schmoozing with superiors in order to climb the ladder".
"business-irrelevant topics don’t occupy much of the internal zeitgeist."
That's the major problem here: you want to avoid the kind of activists and grifters who will look for opportunities to go "I am being oppressed!" and hold the company to ransom, but at the same time - maybe someone with different political or cultural views who you don't get on with socially will fit in okay, because they put the job first and their own personal interests second and know what should be left outside the door when they come in to work.
I would say it's breaking down at the point where you're the one required to do it.
Does the power to hire include the power to fire? If you hire all like-minded people, and someone gets married and changes their mind about something, are you going to fire them? What if they don't change their mind but they bring their anti-minded friend to events?
I don’t think your logic or morality is breaking down at all. You should be able to hire pretty freely.
But I also suspect you might benefit from thinking a bit more deeply about why diversifying might be good for your business.
First off: “Discrimination” is not necessarily a bad thing. We discriminate all the time based on needs and preferences. The logical and moral problem of discrimination comes into play when you use flawed heuristics – like judging people on ethnicity, skin color, religion, gender, sexuality, etc. when you are supposed to be looking for the best engineer for a job. That is not just immoral, and probably illegal most places you want to live, but also bad business.
But discrimination based on behavior, values, personality, likability, etc. is a different thing. Yes, sometimes those things track uncomfortably close to protected categories, for all kinds of subjective and objective reasons, but that’s exactly where diversity has the most value, and where it might be most healthy to think carefully and challenge yourself.
If you have two equally qualified candidates, it may be a good idea to hire the one you have the best rapport with, especially when you’re a very small company. Good communication is important.
However, it would also be logical, moral and good business if the qualifications you are hiring for include strengths that balance out your weaknesses. Which often means hiring someone less like yourself.
If you only hire people like you, who you like, you may feel like things run smoother day to day, as people intuitively know what their colleagues think and expect. On the other hand, employee by employee, you will also create an echo chamber where blind spots, confirmation bias and knowledge gaps are everywhere, with less space for serendipity and creativity. Having people with different backgrounds, experiences and perspectives and personalities on your team is often extremely valuable. It’s a balance, and the smaller your company is, the harder that balance is to strike.
Then you have to consider that it is crucial for a business to understand their customers. If your staff doesn’t reflect your customer base, you will have deliberately built a business that is lacking in empathy for your customers (which is ethically dubious), and that opens up a niche in the market for someone else to fill (which is logically dubious). But whether or not that is a real business problem, depends on your market and position.
Finally, the larger and older your company is, and the more of an impact it has on the society and culture, the more of a moral obligation you have to actually make a difference. If you represent Wells Fargo, Volkswagen or DeBeers this is an important consideration, but for a plumbing company or consultancy with 20 employees, that argument carries far less weight IMO.
If you can consider all these factors in hiring more than a handful of people, and make decisions you think are truly best for your business – not just your own comfort – and you still end up with a monoculture, I would be surprised and suspicious.
So, you should definitely not “diversify your colleagues in all dimensions” just for the sake of diversifying. But you can probably benefit from thinking more carefully about what diversification is good for, and what is right for your business and your community, and then thoughtfully consider how you might want to diversify.
PS: You didn’t ask about this, but touched on it near the top: I think it’s really healthy to make new friends from outside of work, and I think that makes it easier to hire colleagues for good reasons other than likability. But making friends as an adult is really hard. For what it’s worth, the last time I had to make new friends (new city), I had success with 1) meetups for a group much like ACX, and 2) almost always having some low-commitment group event on the calendar (pub night, local concert, BBQ, etc.) that I could invite people to if I casually met someone it might be fun to get to know better. I’ve since moved, but still keep in touch with many of the people I met like that, and consider a few very good friends.
"Then you have to consider that it is crucial for a business to understand their customers."
As we have seen with the Bud Light debacle, where the marketing manager did *not* understand the existing customer base and picked a strategy which may have looked good on paper - we need to diversify and rejuvenate the brand, we need to attract a new, younger set of drinkers, who's popular with the kids right now? - but which ended up being poison: they drove off the existing customers without attracting replacements, much less new customers.
The efforts to appease the boycotting customers were then even more tin-eared: the bad country music ad rushed out, the camo cans, the yee-haw cosplaying which even rednecks realise is just window-dressing, and which is even more insulting: 'yeah, we think you are so dumb you'll fall for this and come back to us'.
Yes. That seems like the perfect example of the dangers of ”diversifying on all dimensions” just for the sake of diversifying, and mindless activism, rather than going thoughtfully about it.
However, while the Bud Light case seems like a clear-cut example of Ivy League activists trying to force-feed their DEI politics to the rest of us, it probably wasn’t so clear cut until after the fact. They were probably aware that they were poking the bear, and they wanted to create a little controversy people could talk about over a Bud Light. And it may have been partly bad luck that they became such a lightning rod for the trans debate, rather than just a 3-day hashtag. It’s predictable that some people would hate it, but it is a bit surprising that people (influential people like Kid Rock in particular) would hate it so much they would call for boycotts and make it a virtue signal to share their disgust. And it’s a bit surprising that a brand of their caliber can have such low customer loyalty that people can’t shrug it off as a gaffe, typical of our day. Like it’s Enron, not just a gimmicky sponsorship.
But of course, it’s a terrible, tasteless product, it was already in decline, and it is an incredibly incendiary culture war issue, so maybe it was overdetermined.
That’s interesting because when I saw the Bud Light thing play out, my guess was that the majority would have thought that was a bad idea, but were afraid to die on that hill. That’s exactly the type of work environment I’d like to avoid. I want people to argue issues openly, make a decision, then stand by their decision, at least until there is an opportunity to revisit it.
People who have a tendency towards activism don’t really get much value from consensus or disagree-and-commit. They need an internal enemy to become the heroes.
It really is a disaster of their own making. I absolutely see the point about developing a new customer base, and Pride Month capitalism is now an established part of all the big companies, so pivoting to the more progressive elements (we don't want college drinkers, we want... college drinkers... but classier!) was doable, had they spent five minutes thinking about it. I believe them about "it was only one can" and "not a campaign" and "not a partner", but that only makes it *worse*.
Unfortunately, it looks like they spent five minutes going "Who's the current hot influencer name? Mulvaney? That'll do!" and then expected that social media would *only* be seen by the precise bubble of Mulvaney's 10 million TikTok followers and not leak out elsewhere.
But somehow I can't envisage the people who follow Mulvaney for fashion and makeup endorsements switching to glugging down cans of Bud Light, so - yeah.
And that blew up on them, and then the half-hearted 'apologies' only pissed off the LGBT set, who are now "we're not stocking your pisswater in our gay bars because you threw a trans person under the bus!", so they're getting it in the neck from both sides.
"A bird in the hand is worth two in the bush" is advice that they seem to need having repeated. They threw away existing customers without first having locked-in the new replacement market.
Logic may not have been part of how you got to the idea that restaurants shouldn't be able to reserve the right to refuse service to anyone at any time. Indeed, it would have been odd if it had - it's a standard cultural bias we're trained into these days.
There are loads of reasons why a restaurant might be morally justified in refusing someone service. One is if the customer says they are vegan or has an allergy, which the restaurant cannot guarantee to cater for satisfactorily or safely. Or on a previous visit the customer may have complained vociferously and unreasonably about the service or the food, like something out of Fawlty Towers, or left without paying, or showed signs of being drunk.
The psychology of the 'reply guy' archetype is a fascinating one. I really hope that future big-data approaches will shed more light on what goes on before someone types ";)" at the end of an utterly inane and thoughtless comment and hits "post".
Having said that, I was mostly reacting to Leo singling out one specific idea as not being based on logic, which sounded... at least as biased as the 'standard cultural bias' they complained about. But I admit they gave a relevant answer to the OP's question. I'm sure this could lead to a looong discussion; one that I currently don't intend to follow up on further.
On the contrary, giving people reasons for saying 'no' just creates space for them to argue. I can tell you've never initiated a breakup with anyone or had to let an employee go. "We won't serve you" doesn't invite argument. "We won't serve you because X" does. And, in the litigious environment of modern-day America, it invites lawsuits.
Why do you think that your logic or morality is breaking down? Personally, I think everyone should have freedom of association, but I don't think there's anything absurd in thinking that social harmony or whatever outweighs freedom of association in one situation but not in another.
The logic for there being no restrictions on "hiring whoever you want" breaks down when its universalisation, combined with unequal distributions of wealth and effectively segregated social circles, ends up exacerbating those unequal distributions. Beyond that, you're pretty much still allowed to hire the people you like, as long as your personal filter isn't discriminating based on protected characteristics. As long as you don't fall foul of that criterion, there's no problem with hiring people you feel will fit in with your business's culture.
I believe whistle blowing is a very important activity to protect. Yet anecdotally, most people I am familiar with who claimed to be a whistleblower seem to do it for personal gain, often without trying internal channels first. I feel the same way about employee activism, or people who sue their employers.
All good characteristics of our society, and yet on average I would not want to hire or collaborate with most people that belong to those and related groups.
Am I wrongly biased or is there meat to this? I feel fairly confident in this assessment. How should I navigate the world then?
> most people I am familiar with who claimed to be a whistleblower seem to do it for personal gain
Are you saying this is untrue of the other groups you know? People you know who toe the line aren't doing it for personal gain?
One of my jobs specifically hired a whistleblower who had shut down a company in the same field. She was essentially quality control; anytime she thought something was out of spec she made sure everyone knew about it.
I think there are all sorts of sociopaths in corporations. Even people who aren't sociopaths in their personal lives can behave like one in certain work environments, for example when certain behavior is required to get promoted. I guess I have a deeper aversion to someone-- in this case a pretend whistleblower-- who betrays an entire organization vs. a vanilla corporate sociopath who leaves a few bodies in their wake.
Interesting. I've managed hundreds of people and played the game, and my best interest and the right thing to do have always been aligned when it comes to people within my organization. Helping people grow into bigger roles is the best thing a manager can do for everyone involved. And some people won't make the cut, but I don't see fair performance management as sociopathic behavior. On the other hand, I've engaged in a less honorable manner with other organizations when there were internal turf wars.
In my experience, companies that are well managed enough to become big and successful tend to do the right thing eventually. It's just that eventually can take a long time. The kind of sociopath that will hurt their own team, on purpose, will eventually be found and removed from a long-term functional (short term often dysfunctional) organization.
Our entire economic system and most of our social arrangements are based on self-interest. An honest, conscientious, reasonable person still gets to seek advantages for themselves. Also, some sleazeball seeking personal advantage who happens to prompt a positive change might not be a nice person, but has done a good thing.
I suspect there are some important visibility issues to consider. Suppose Tim manages a team of employees who manufacture hammers. Three of those employees (Alice, Bob, and Carl) notice that the hammer-sharpening tool has gotten rusty and needs to be taken offline for a day to avoid a safety issue. All three of them separately ask Tim to send the sharpener in for repairs, but Tim doesn't want to, because he's trying to meet a quota for a bonus target.
If Alice quietly goes over Tim's head to Vikesh (the regional manager) and discreetly asks Vikesh to handle the problem, then from your point of view as Alice's co-worker, you probably don't notice anything...as far as you can tell, the sharpener got fixed and there's no real problem.
Similarly, if Bob makes a loud stink and publicly complains about Tim's carelessness all over the company, then even if the company does choose to fix the sharpener, it's not going to be good for Bob's career; he's going to make enemies and the company will probably look for excuses to fire him or at least make his job miserable enough that he looks for a job elsewhere. So if Bob makes a habit of publicly complaining about problems at work, then (statistically speaking) he won't be your coworker for very long, so you won't hear about Bob's sort of complaints very often.
What's left? If Carl files a formal whistleblowing complaint, then maybe that protects him from retaliation for a while, so you hear about the complaint and also Carl sticks around. But you're not hearing about Carl's complaints because he's the most common type of complainer -- you're hearing about them because even though he's a relatively rare type of complainer, he's the only type whose complaints are both (1) publicly observable, and (2) durable.
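To make that selection effect concrete, here is a minimal simulation sketch. The three complaint styles map loosely to Alice, Bob, and Carl above; the base rates and the visibility/retention probabilities are invented for illustration, not taken from any data.

```python
import random

# Toy model of the visibility argument above: quiet escalation ("Alice"),
# loud public complaining ("Bob"), and formal whistleblowing ("Carl").
# All numbers below are made-up assumptions for illustration.
COMPLAINT_TYPES = {
    #  type      (share of complaints, P(coworkers see it), P(complainer still around))
    "quiet":    (0.70, 0.05, 0.90),
    "loud":     (0.25, 0.90, 0.10),
    "formal":   (0.05, 0.90, 0.80),
}

def observed_complaints(n: int = 100_000) -> dict:
    """Count complaints a bystander both sees and can attribute to a
    colleague who is still employed -- the sample people reason from."""
    seen = {kind: 0 for kind in COMPLAINT_TYPES}
    for _ in range(n):
        r = random.random()
        cum = 0.0
        for kind, (share, p_visible, p_stays) in COMPLAINT_TYPES.items():
            cum += share
            if r < cum:
                if random.random() < p_visible and random.random() < p_stays:
                    seen[kind] += 1
                break
    return seen

print(observed_complaints())
```

Run it and "formal" complaints, despite being the rarest kind at the assumed 5% base rate, end up the plurality of what a bystander actually observes, which is the point being made above.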
It's a tough question: does Vikesh do anything about it, or will Alice's complaints just be ignored? If Bob sees that Alice is going the 'proper' route yet nobody cares or does anything, and the problem remains and needs to be solved, then going public and loudly complaining may indeed be the only effective way left. I've seen situations where only the threat of a lawsuit finally got a decision made.
Is Carl complaining out of spite and revenge, or does he have a real, ethical incentive for doing this? Does it matter if he's doing it for revenge, if a real abuse is happening?
It is a tough question! I don't mean to suggest that one of the workers' responses is better than the others. I just wanted to point out why it might look like whistleblowing is common even when whistleblowing makes up only a small portion of employee complaints.
Like everything else, it's only the extreme/most public cases we hear of. "Jim Jimson is a whistleblower revealing the shady practices at DoggyDiamonds'n'Dos, tune in at 9:00 p.m. for our exposé!" gets way more coverage and hence public attention than fifty "Bob Roberts used the in-place grievance procedures to advance his complaint and have it rectified".
It's a valid point, but it doesn't seem to address the parent comment's claim. They didn't claim that whistleblowers aren't rare compared to internal complaints. They claimed that the whistleblowers they know, however rare they may be, seem to be selfish rather than altruistic. It's not clear from the comment how they made that assessment, but maybe they have some reason to think this.
But Alice and Carl are also kinds of whistleblowers, and potentially much more benevolent. Jason's point is that Carl the Formal Whistleblower only represents a small percentage *of whistleblowers* even if he's the one you're most likely to experience having as a coworker.
In many companies it's pretty clear that your boss will discriminate against you if you make their life hard. So I am understanding of whistleblowers not taking internal action first.
Was Frances Haugen a bona fide whistleblower? She hired a PR firm to make herself into a famous whistleblower, and didn’t reveal anything new, or illegal (as far as I can tell).
And my sense is that this sort of employee activism is in vogue, so I want to avoid environments that allow that behavior to flourish. Which means selecting for the right people.
It’s not that different from people who sue their previous employers. Typically, most employers will settle any employee claim, no matter the evidence. So technically we could all sue on our way out and get a little something. And if you know how these things work, you can get a lot, because there’s always some management error.
But I would personally not sue my employer unless something outrageous happened. If I don’t like how I’m treated, I’ll just leave. And I’d like to have colleagues that more or less would follow suit. I’m confident litigious employees make for a less enjoyable work environment because they put everyone on guard.
With the news that both Vice and BuzzFeed News are closing due to unprofitability, how are we all feeling about the future of the media? Are all advertising-funded services doomed? Should they be nationalised? Should big tech platforms be broken up? Is the future just going to be a handful of writers on Substack?
Strange answers to this. The demise of Vice and BuzzFeed News is to be welcomed. That model of click bait journalism, driven by the worst kind of advertising (itself click bait) added nothing to the good of society. Meanwhile plenty of old school media is doing great from the subscription models. And Substack is genuinely great - again subscriptions.
One way to look at Vice and BuzzFeed is that they were primarily entertainment companies fueling a tiny bit of reporting, and their failure is weak-to-moderate evidence that model doesn't work circa 2023. That's not generally worrisome.
I agree that they *produced* some really good reporting, but it's important to keep that in perspective, consider what proportion of their output was valuable to you as news, and reflect on how that relates to their business models / appeal to investors.
I guess, to answer your original question: their bankruptcies don't really move the needle much for me. The larger media landscape remains unchanged, and I'm not sure their quality output redeemed the rest.
In recent years, most big traditional newspapers (think the NYT or Le Monde or El País) have moved from a free-content-plus-advertising model to a paywall-plus-subscription model, and found it much more profitable.
I haven't gotten with the program and subscribed to any of the major newspapers, and the local paper we subscribe to impresses me primarily with its uselessness. (My housemate likes it.)
I'd be very interested in people's impressions of the reliability of any of these paywalled big name papers.
1) Do they have giant gaps in their coverage?
a) I was not impressed learning about local events one day from Al Jazeera, after having already scanned the local paper's emailed headlines. Is this sort of thing normal?
b) Do they report on anything from cities, states, countries, continents, other than those where they are based? In what level of depth?
c) If something major happens elsewhere, will they report on the event, or primarily on local (to the paper) reactions to the event?
d) do they actually have reporters available outside their locality?
2) Do they regularly have headings that don't match the contents of the articles behind them, either because of click-bait or because of constant revisions?
3) Are their reporters numerate? My local paper impressed me with their ability to post statistics from multiple incompatible sources in the same article, such that basic arithmetic showed they'd accounted for 120% of residents, or similar gaffes. (They've since hired some people who passed high school math, or perhaps even college level "statistics for poets" and the frequency of this sort of nonsense has gone down.)
4) To what extent do their political biases, or those of their owners, render their coverage essentially unreliable, such that one needs at the least to also read an opposing paper to have any idea of the truth?
5) Is there any single thing I can read regularly that will leave me well-informed about news, without having to read several other sources?
1a) it could be that the papers went to print before the events had become known? It doesn't seem like a bad idea to slow down the 24 hour news cycle, but it does mean the news might miss some things.
1a) Be prepared for a lot of Gell-Mann amnesia. The NYT seems like it has its finger on the pulse of X state. Then they do a story on your state and it is clear they talked to like 2 people and have zero fucking idea what is going on.
1b) Sure they cover global topics, though are very US centric.
1c) Depends
1d) Yes
2) Yes
3) No
4) A very high extent if they are politically salient topics. This isn't always consistent. Sometimes the NYT or WaPo will run an article that is actually trying to get at the facts on some political issue. But 5 other times they will just parrot the approved twitterati talking points without using 2 brain cells.
5) Economist? Or maybe read Fox News, MSNBC, and WSJ then triangulate?
Triangulation is the way. Read a variety of mainstream media, and read a little bit of the crazy stuff too. The wider your base, the better you can triangulate.
Ideally, sure. Daily life of course is an exercise in balancing the ideal with the plausible.
On a lot of dimensions of news and current events, for me, the Economist has been the single go-to for....well damn it's nearly two decades now. Doesn't provide that service on all dimensions, e.g. their attempts at cultural-zeitgeist type writing and punditry are generally ignorable. (I stopped years ago even cracking open that "The Year Ahead" annual special issue.)
But if I had to pick just one it would be the Economist and there isn't really even a serious other contender anymore.
I still think we need to move to the BAT model, the Basic Attention Token. BAT is a crypto token you use to pay for online media. Instead of paying several hundred dollars a year for entire journals, of which you'll read maybe one essay a day, while missing out on desired essays behind paywalls you haven't bought, your browser pays, say, a dime for every essay you do read.
For those who cannot buy tokens, they can watch sponsoring ads to buy tokens.
Substack newbies who get, say, 1,000 reads earn $100; great essays which find 1,000,000 reads of course earn real money.
Journals need to keep their writers happy, lest they go to Substack.
Readers shell out $100 per year for 1,000 essays, and pay only for the essays they really wanted.
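For what it's worth, the per-essay arithmetic above is easy to check. A minimal sketch, using only the dime-per-read figure proposed in the comment (everything else is an illustrative placeholder):

```python
# Minimal sketch of the pay-per-essay economics described above.
# The $0.10-per-read price comes from the comment; the read counts
# are just the comment's own examples.

PRICE_PER_READ = 0.10  # dollars, as proposed above

def reader_annual_cost(essays_per_year: int) -> float:
    """What a reader pays for exactly the essays they read."""
    return essays_per_year * PRICE_PER_READ

def writer_earnings(reads: int) -> float:
    """What a writer earns from a given number of reads."""
    return reads * PRICE_PER_READ

print(reader_annual_cost(1_000))   # 100.0  -> $100/year for 1,000 essays
print(writer_earnings(1_000))      # 100.0  -> $100 for a newcomer's 1,000 reads
print(writer_earnings(1_000_000))  # 100000.0 -> real money for a viral essay
```

At $0.10 a read, the comment's numbers are internally consistent: 1,000 reads is $100 on either side of the transaction.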
I think I approve of moving in this general direction, though I'm not sure why it has to be crypto. I also see incentives similar to those that ruined existing media (keeping people angry to keep them engaged).
For the general model of one subscription to access lots of magazines, there are already services such as Readly available, though I wonder how much more clickbait we'd see if such services were to become popular. Personally, I'd be more interested in one subscription to pay for all the gyms in my area – that would also give the gyms an incentive to try to make me go to the gym.
It's called Active & Fit Direct - you can get it through major employers, USAA, and some other places like that. It's better in some areas than others, but I've used it on West and East coast, in the south, and in the midwest, and it's been pretty good.
Thanks, but I don't think that's available in my country. Still very curious though about how the behaviour of gyms changes when they get paid for attendance – it seems like it should be a gigantic improvement in alignment.
Nationalise the media or nationalise advertising? Either would be a dangerous idea, IMO. I'm not convinced we can draw the lesson that the media is doomed because two companies have gone out of business. If anything it's a necessary part of the economic cycle; as times get tougher only the profitable businesses will survive. We are still far from seeing a complete collapse of the media.
What if none of them survive though? Youtube isn't profitable yet - what if it never is? What if Twitter never is? Some day they'll all be forced to pull the plug, unless alternative arrangements could be found. E.g. what about an internet tax that funds all online media based on screentime?
I believe Youtube is profitable these days (though it took a long time to reach there). Twitter may be trending that way under Musk as well (mostly via cost cutting).
Right, but why should the State intervene to save a failing business? If all you care about is that media of a certain standard exist, you could have something like a free publicly funded national broadcaster providing that standard and let everyone else sink or swim based on usual market dynamics. Like a few countries already do with fairly good results.
My first thought is that it doesn't feel like there's a big problem with having multiple public media organizations (as long as they are independently governed). My second thought is that when to subsidize things in a market economy doesn't only depend on the economic value of those things, it depends on whether there is a mechanism for those businesses to recover their costs. Think about roads. Or another example I've heard is lighthouses. After the Fresnel lens was invented, lighthouses got way better, but because ships could see them much further away, they wouldn't necessarily be docking there, and wouldn't have to cover the costs via a docking fee. (Apparently France's lighthouses were way better than England's after this, because they were publicly funded). So it turned out that lighthouses produce more value as a public good rather than a private one, just like roads. Is media the same?
In my experience public independent media is indeed higher quality than private media. But that is clearly a matter of my personal taste, not most people's, as most people tend to favour private media.
But more importantly, I think the existence of private media is important to safeguard media independence, and the existence of public media is important to ensure media provides public service. From what I can see both types improve in this way when they coexist.
I don't get the doom and gloom. A few decades ago, I could go to the city library and access a few dozen newspapers for free (and maybe magazines? I don't even remember). Now, I can access thousands of professional media outlets for free from the comfort of my own home, plus millions of amateur media outlets. Like, things are pretty great.
How many of the thousands of professional and millions of amateur media outlets are doing independent reporting, e.g. sending reporters to places where news is suspected to be happening, and how many are just repackaging and commenting on other people's original reporting?
I haven't done a deep dive into this, but my sense is that the number of actual reporters per newsworthy event has declined significantly in the past decade or two, and for marginally newsworthy events is often less than one. Which means lots of newsworthy stuff will either not be covered at all by those thousands/millions of outlets, or will be uncritically repeating someone's press release that nobody bothered to send a reporter to ask questions about, or be based wholly on the work of one reporter who may be biased or otherwise in error.
It's important to note that Vice was valued at over $6 billion only a few years ago, and is now worth possibly $200 million while declaring bankruptcy. So investors have been very wrong about whether they can recover their costs, and have massively overinvested in new media operations - meaning that we may not have this (certainly very good) media landscape for very long. Although it's also worth mentioning that while we're sitting around enjoying the finest media in history, many (most?) people rely on highly politicised and unreliable media sources, because politicisation is a rare way for media outlets to increase their profits.
Are you sincere in saying the media is the finest in history? I don't find it so, at all. I believe it has gotten significantly worse in my lifetime, especially after cable news came on the scene, and again with the decline of print newspapers.
The main issue is a lack of journalism that is adversarial to power. The current "aligned to the DNC or aligned to the Murdoch or Trump families" version of left/right media, rounded out with the many "aligned to state power centers" outlets, does not constitute a healthy media ecosystem.
Yeah, I was trying to say that in my last sentence above. Media for normies is appalling, but if you're already well-informed you can find amazing information on the internet. Although I feel that's getting worse with digital outlets closing and SEO ruining search engines.
Old media sources were very frequently politicised and unreliable, there were just far fewer alternative information sources available to point that out.
To me, that looks more like a part of the wider trend where many different types of businesses have gotten into trouble after central banks stopped printing so much money, rather than anything specific to the news industry. E.g. Klarna's valuation dropped by 85% from 2021 to 2022, and Peloton's market cap dropped from around $50 billion at the peak to around $2 billion now.
I'm also not convinced that the media have gotten more politicized or less reliable over time, and I don't think it has much to do with the search for profit, as state-owned media outlets seem no more reliable and no less politicized than comparable for-profit media outlets.
If you look at US right-wing media and CNN, they've obviously become very popular while becoming very politicized, and less accurate, certainly in the case of the right-wing system at least. I don't see the same at any US government-run outlets, which are relatively obscure; then again, I haven't paid much attention to them due to their obscurity, so I may be wrong.
I'm not too familiar with US mainstream media, but a cursory Googling seems to indicate that NPR (I think that's the biggest government-funded news outlet over there?) is well within the normal range of US for-profit news outlets in terms of reliability and political bias.
To answer your original question, I'm feeling good about professional media going out of business. On a related note I have no idea what landscape you could be evaluating as 'very good'.
I mean the huge amount of free high-quality information available only to those who know how to find it. That's pretty good although I feel it's been getting harder of late.
Unless you're one of the people failing to make money out of those products you're enjoying for free, or one of the people who feel that incentivising these producers to try to get you to pay for their products has a negative impact on democratic societies.
One way I tend to frame this is that voters are not given realistic choices on their ballots. If the choice is between candidates, then voters are choosing between large platforms containing dozens of political positions, some of which they like, some they don't, some strongly, some weakly, and so the voter's ability to express real preferences is profoundly diluted.
If the ballot choices are yay or nay on various propositions, it's much better, but still terrible by comparison to real-world decisions. Ballot decisions are invariably some version of "do you want X or not?", and if X is a government service, the overwhelmingly tempting thing to do is to just mark "yes" all the way down the list. In the real world, however, "do you want X?" typically carries a price tag, and you likely can't afford everything on the menu.
A more realistic ballot would say something like "if you had to pick only two of these ten services, which ones?" Or list all the things the state could do, the estimated price of each, and ask the voter how they'd spend e.g. $100 million between them.
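A toy version of that budget-style ballot, just to make the mechanism concrete (the services and the price tags below are invented placeholders, not from the comment):

```python
# Toy budget-constrained ballot, as described above.
BUDGET = 100_000_000  # dollars, the figure used in the example above

SERVICES = {            # service -> estimated cost (hypothetical numbers)
    "road repair":        40_000_000,
    "school upgrades":    35_000_000,
    "new park":           20_000_000,
    "transit expansion":  50_000_000,
    "library hours":      10_000_000,
}

def valid_ballot(choices: list[str]) -> bool:
    """A ballot counts only if the chosen services fit within the budget."""
    return sum(SERVICES[c] for c in choices) <= BUDGET

print(valid_ballot(["road repair", "school upgrades", "library hours"]))  # True:  $85M
print(valid_ballot(["road repair", "transit expansion", "new park"]))     # False: $110M
```

The point of the sketch is the one made above: once choices carry prices, "yes to everything" stops being a valid ballot.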
Doesn't that argument apply equally to self-interested voters? If you have enough information to be able to decide the best way to vote for your own benefit, why shouldn't you take your overall community, society or planet into account as well?
This is a tangent, but suppose someone runs for president on a platform of "I will literally kill the 50 richest people and distribute their wealth equally". This would be great news for virtually everyone - do you think I should vote for them?
Would love to share my new post on how theme parks caused the Paris Syndrome. It's partly a culture-bound issue, but I think there are more environmental aspects at play.
I think Hoeffding's inequality is the best you can do without some sort of nontrivial upper bound on the variance of X. But I very much doubt it will give a particularly sharp bound.
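For reference, the standard bounded-support form of Hoeffding's inequality, which needs only a range bound on X rather than a variance bound:

```latex
% For i.i.d. X_1, ..., X_n with a \le X_i \le b and mean \mu:
P\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i - \mu \right| \ge t \right)
  \le 2\exp\left( -\frac{2 n t^2}{(b-a)^2} \right)
```

If a nontrivial variance bound were available, a Bernstein-type inequality would typically be sharper, which is consistent with the comment's caveat that Hoeffding alone won't give a particularly sharp bound.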
I think I could suggest something if you laid out the situation in a little more detail. I do not know how slot machines work. I mean I have literally never seen one, and do not know how one plays one -- what you put in, what the payouts are, what choices the player has, etc.
If your sample size is small, you can probably afford to model X as sampling uniformly at random from among the values you've seen.
Depending on how much precision you want and how much compute you have to throw around, you could e.g. brute-force your estimate from there, or compute an approximate GCD of the values you've seen (e.g. round them all to the nearest multiple of 0.1), at which point what you're dealing with is a Markov chain and you can compute the transition matrix and solve the problem using dynamic programming.
Can you give me any idea of the sorts of values you're dealing with?
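Since the original question isn't quoted here, here is a minimal sketch of just the first suggestion (brute-forcing from the empirical distribution), framed as a made-up bankroll-survival question; the payout list and all the other numbers are hypothetical placeholders:

```python
import random

# Sketch of "model X as uniform over the observed values, then brute-force".
# `observed_payouts` and the bankroll/cost figures are invented; substitute
# whatever values you've actually recorded and whatever question you care about.
observed_payouts = [0.0, 0.0, 0.5, 0.0, 2.0, 0.0, 0.0, 10.0]
COST_PER_PLAY = 1.0
START_BANKROLL = 20.0

def survives(n_plays: int) -> bool:
    """One brute-force trial: does the bankroll last n_plays spins?"""
    bankroll = START_BANKROLL
    for _ in range(n_plays):
        bankroll += random.choice(observed_payouts) - COST_PER_PLAY
        if bankroll < COST_PER_PLAY:
            return False
    return True

trials = 100_000
p = sum(survives(50) for _ in range(trials)) / trials
print(f"Estimated P(bankroll survives 50 plays) ~ {p:.3f}")
```

The approximate-GCD/Markov-chain route mentioned above would replace the sampling loop with an exact dynamic program over discretized bankroll states.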
Statement 1 is wrong. I didn't watch the series finale of M*A*S*H, I never watched the Oscars, and I recall seeing only one World Cup game. I'll plead guilty to CNN's footage of the Gulf War, but that's because it's one of my earliest memories (the night-vision footage with tracers illuminating the sky is quite memorable).
In fact, if I had to pick *the* defining cultural product of my generation, it'd be The Simpsons or Friends, and I saw barely a handful of episodes of either. The common touchstones were common to your social group (which was kind of a self-replicating phenomenon: I became friends with kids who had similar interests to mine, and we fed each other the music, movies and shows we liked).
Uhmmmm, they don't need reconciliation, because they are not contradictory.
You can simultaneously homogenize and heterogenize. If something was composed of only one type, and you made it composed of 10 types, then you have heterogenized it. If something was composed of 100 types, and you made it composed of 10 types, then you have homogenized it. If a society had both - a mainstream that was only 1 type of thing, and a bunch of subcultures around it that were 100 different types of things - and you forced this society to uniformly have 10 types of things everywhere, then #1 and #2 both hold: you have simultaneously (from the POV of the mainstream) "destroyed common cultural touchstones" AND (from the POV of the niches) "destroyed obscure subcultures and pushed everyone into a single global culture".
Anyway, I'm pretty skeptical of claims having the general form "The Internet has done X".
(I) They are inaccurate. The "Internet" is TCP/IP; what most people call the Internet is in fact the web. That's not an empty "Well Akshually": there is a good two decades' difference between the Internet and the web running on top of it. There were plenty of applications on top of the Internet older than and other than the web, including email. Most of them are extinct, yes, but the point still stands: the Internet is a collection of protocols older than all modern operating systems; it enabled, but isn't directly responsible for, whatever the web did.
(II) They are wrong, even after accounting for the fact that the Internet is not the web. The web itself is astonishingly versatile and takes many forms. It was never one thing, so you can never claim something simple about it and be right. The web is 1990s personal websites made with hand-written HTML, the web is Wikipedia, the web is 2000s blogs and forums, and - tragically - the web is also the shit that is 2010s social media.
This is actually what most people mean by "The Internet has done X" for most harmful values of X: they actually mean social media did it. The horrible idea of commodifying the attention of tens or hundreds of millions of people - that's what made all the bad things happen.
Well firstly, the universality of past culture is often overstated. The MASH finale was watched by an estimated 100 million people in the US, which was a lot, but there were another 100 million people in that country alone who didn't watch it. The most watched Oscars (1998) got 55 million viewers.
That said, I feel like culture has split in some ways and homogenised in others. We're all fed a stream of content that is personalised to our demographics rather than our geography, meaning that I wind up consuming exactly the same goddamn content as every other person of my age-class-sex demographic in the world, but totally different content from (say) my parents.
Haven't we all seen Squid Game? Why the heck have I seen Squid Game? I'm not into gory stuff, and I'm definitely not into Korean dramas, but it was fed to me and I ate it all up.
Once upon a time, if you were into [obscure thing], you had to actively seek out other people who were interested in it. Hence - zines, conventions, dress codes that let every other fellow [obscure thing] fan know you're one of them, etc. The Internet did destroy this (sometimes intentionally; 4chan's "suppress your powerlevel" cultural norm certainly had something to do with that; speaking of which, the very fact that the Internet gave voice to introverts necessarily changed the previous extrovert-driven subculture dynamics).
This, of course, does not mean that "culture is flattened". No norms can ever be imposed now (and no, the woke agitation isn't an increase in norm-imposing, it's the death rattle of the old gatekeeper class as it barricades itself inside the institutions). The culture has, in essence, splintered so much that even the subcultures lost their own common cultural touchstones.
Disagree with this. While Reddit, for example, might be losing its relevance as various subreddits get too politically extreme, it's still the place to go to discuss a lot of niche topics, and the norms are very concrete and harshly enforced. Some subreddits more than others, but the monoculture definitely propagates.
I mean, Reddit is a top 20 website worldwide by traffic, I count it among the institutions. (And, to be sure, you can impose norms on Reddit. Just not on its users, or on the wider world with Reddit as a springboard. People can easily defect, and will - cf. The Motte.)
The type of "subculture" I'm told is disappearing is the type where people dress in a certain way, listen to a certain type of music, and hang out with other people who dress that way and listen to that music.
There used to be lots and lots of these, but now I hear that music genres are no longer as strongly linked to a way of dressing and a tribe.
For example, here in Italy there are, or there used to be, "darchettoni", perhaps the local translation of "goths" (I think). They dressed all in black, listened to whatever goths listen to, and hung out with others like themselves. Already 20 years ago the ones I knew were lamenting that there were no new goths any more. Today, it seems that the goths who are still around are the old ones, who joined the culture back then. I'm told that the same applies to other subcultures of that sort, and that there are no new ones either.
I'm not perceptive enough to verify all this stuff for myself.
If you understand the disappearing of subcultures in that sense, then it's compatible with the statement that there are no more common cultural touchstones. There are many splinters of the culture, but no longer in the sense that dress = music = social circle. They are more like personal interests than tribes.
90% of those music based cultures were about complaining that “the system” suppressed their bands and music. Now that “the system” barely exists and nothing is preventing the popularity of the music except that no-one likes it, they have yet to find a different flag around which to rally.
Actual, existing subcultures tend to fail in the exact opposite direction of pathological gatekeeping and purity testing - with whatever popular outgrowth they produce being accused of selling out.
(I don't think I can phrase my assumption about why you'd think otherwise without being mean, so I'll refrain from typing it out.)
I think both exist side by side – resentment at being marginal, and envy at those who managed to move beyond marginal.
I get the vibe I described from both Chuck Klosterman's various works and Kelefa Sanneh's book _Major Labels_.
There are, of course, even more ways to respond, but these tend to be more specialized and unusual, for example the Nick Hornby autistic-style collect, curate, and catalog response.
So, my initial reaction to this was: "Obviously, you're getting this from second-hand accounts, not from directly interacting with any subcultures in any meaningful capacity." Which, as I said earlier, is mean and hardly constructive. (True, though.)
So, I didn't want to leave it at that, and I went on to check who those people you're referring to are and what they've written about, and as I was going through wikipedia descriptions of Klosterman's books and their subjects (growing up as a glam metal fan, Guns'n'Roses tribute band...), something clicked. Fans of mainstream things past their 5 minutes in the spotlight, reenactors of past fashions - those are also subcultures. Not merely technically, they simply are. Maybe, by pure numbers (of distinct tribes, not of their headcount), they do make up 90% of them. I can even believe people who make them up do feel resentment that the world has passed them by.
And yet - what you said feels incredibly misleading and myopic, because that's just not the kind of subcultures most people are going to encounter, much less pay attention to. Also, incredibly arrogant, and I suspect that - if you care about any cultural output at all - in a decade or two, you're going to end up exactly as what you describe, as contemporary subcultures' creativity snowballs them into the mainstream.
I've no idea quite what you think you are referring to but I grew up in the world I describe, high school in the early 80s, college in the late 80s, first adult years in the 90s. I experienced exactly the phenomenon I described.
I have no idea how old you are, but I suspect you are dramatically younger. And OF COURSE the current versions of "I'm so unique, as evidenced by the way I behave exactly like everyone else in my little tribe" cannot, with a straight face, complain that their music is being suppressed by "the man".
What they can and do complain about is that it is being suppressed by "the algorithm" but that's a more ridiculous claim, and everyone knows it – there's a whole lot of difference between "no-one knows our music because they never get a chance to hear it" and "no-one knows our music because they couldn't be bothered to spend 5 seconds even trying it".
Honestly, read both authors I recommend. Both are fascinating in their different ways, and both will, I suspect, give you some insight into the very different world of what pop culture was like before the internet. For Klosterman, I'd recommend starting with his most recent book, _The 90s_.
Exactly. Those subcultures, and there were many (punk, heavy metal, The Dead, jazz, alt-country, etc.), tended to hold the view "Commercial music sucks", sometimes expressed in the form "Corporate rock sucks".
Jazz is different from those others in that it had a turn as culturally broadly popular -- even dominant in some ways, depending on who was talking. Which is germane here because jazz lost that position and became a subculture much as you describe it long before the Internet existed.
And then if anything jazz today is an example of a subculture which has benefitted from the rise of the Internet.
I mean, no, decades of persistent attempts by the Korean music industry did. (Also, the boy/girl bands that they're famous for are literally bands, so your heuristic is clearly too simplified.)
I mean, the Internet absolutely has destroyed a lot of obscure subcultures. But:
A. It's created a lot, like a lot lot, of new obscure subcultures.
B. You dramatically underestimate how many pre-Internet subcultures are out there.
For example, I dig part of this vibe; it is super surreal to have lived through not one but two D&D movies. At the same time, as a proper connoisseur of nerdery, I take comfort in the classics that the mainstream will never, ever exploit or monetize. I, dear sir, not only know what a Glitter Boy is but why one would bring a Super Soaker to Mexico and why the skull dog nazis invaded Tolkien. And there is some comfort in the fractally, infinitely expanding universe of weird obscure shiz.
I think both statements are false. In statement 1, "everyone" needs to be restated as "many people, especially people of a similar race, class, and community".
Things melt in one place and freeze somewhere else, or change states otherwise. What is really novel is the frequency of state changes. Because Internet…
I think something like "the Internet gives everyone a random sampling of like 20% of the culture" would reconcile them. So that everything gets thrown into the blender and gets picked up by 20% of people so nothing is ever obscure, but also, nothing is really universal.
I'm not saying that's true (maybe a more complicated version is true), but I think it would reconcile the two things.
Something similar -- I imagine a huge mob running around, consisting of maybe 10% of the population, randomly invading small spaces, destroying them, then moving on.
So on one hand, there is no common culture for everyone, although there is the thing that the mob currently focuses on, which one week later may be irrelevant and forgotten... but also the small spaces are routinely invaded, and even if they are left alone afterwards, something precious was destroyed (at least, everyone is aware that the mob could return at any moment).
I mean, just look at SSC/ACX. Most people have never heard of it. And yet, at some moment it was the focus of the NYT. Simultaneously obscure and in the spotlight -- but only for a short while, long enough to be attacked. Scott survived the attack, and the mob moved on. But we can imagine a parallel universe where Scott simply lost his job and had to give up blogging if he wanted to get a new one; and in that universe, both statements would be true (no common culture, plus an external power that destroys subcultures).
Note - you have exactly zero proof for your claims, and your entire reasoning depends on the belief that an improvement in output quality must necessarily have been caused by an improvement in the underlying model of the world.
As far as I can see, it's the other way around - there's clearly been no improvement to the world model, and that becomes more and more obvious as the output becomes otherwise more and more fluent. We're simply out of other explanations for why the LLMs would make glaringly stupid logical and epistemic mistakes, and the mistakes that persist increasingly take the form of reproducing common formulas without regard to the underlying semantic context.
> The problem is that "language" refers to the interface between brains. A thought in one brain is serialized into a statement, which is then transmitted to another brain and deserialized back into a thought. Simple language models are statistical reductions of what is being transmitted.
I disagree strongly with this. Thought isn't a thing that can be serialized and deserialized. Thought in a human brain may or may not take the form of language, and language can be generated with or without thought behind it. We don't know what thought is, not enough to make these kind of claims.
"Isn't a thing that can be serialized" as in "we don't know any way to do it and don't know a path to doing it", or "it is conceptually impossible to serialize thought"?
More the first. I don't think we have any working concept of what "thought" is. But maybe one day we will, and then we'll be able to see whether the second is accurate or not.
I agree with most of what you say, but I strongly disagree with the initial sentence, that the name "language model" is misleading.
As you say, "language model" is about the interface. The term expresses that input and output consist of words. There are other models where input/output consist of things like images or videos, of symbolic data, of actions (e.g., for robots and agents), or of other things. The term "language model" says that it none of those. This is a useful distinction, and there is nothing wrong with it.
You want to categorize along a different axis, roughly speaking of what happens inside the model. That's fine, go ahead with that categorization, and define "thought models". (Though to make sense, there should also be non-thought models. I am not sure what you believe is a non-thought model.) In any case, it should not *replace* the term "language model". You should be able to say whether language models like GPT-4 are "thought models" or not, and whether image models like DALL-E are "thought models" or not, or whether agents like Alpha Zero are "thought models" or not.
On a technical level, there are categories for what happens within a model. For example, a language model can be a "transformer" or an "RNN" (recurrent neural network). And a non-language model can also be a transformer or an RNN. This is about how the data is processed within the model. I think you want categories like this, but not on a technical level; rather on something like a semantic level.
> Simple language models are statistical reductions of what is being transmitted.
I don't have any skin in this game, but is that the accepted definition of a language model? "Statistical reduction" seems to gloss over all sorts of different modeling systems that have a lot of complexity and that may or may not display emergent behaviors. And none of them seem simple to this laypeep. Anyway, I'm not sure this isn't a straw-man argument, but I really don't follow the ins and outs of AI well enough to know the positions of all the players.
If a set of strings (which could represent text, images, audio, or video) is the result of causal processes, then learning to predict those strings is learning to predict those causal processes.
Hence, learning to "predict the next word" with sufficient generality turns out to require a model of the conceptual structures that produce those words. There is no "just."
> learning to predict those strings is learning to predict those causal processes
This is not true if the underlying causal process is underdetermined by the sequence of strings that it produced. Given some sentence, any number of causal processes could have produced it.
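A minimal toy instance of that underdetermination (my example, not the commenter's), written out in math form:

```latex
% Process A: deterministically emit the string "a":
%   P_A(s) = \mathbf{1}[\, s = \texttt{a} \,]
% Process B: flip a fair coin Z \in \{0,1\}, ignore it, emit "a":
%   P_B(s) = \sum_{z \in \{0,1\}} \tfrac{1}{2}\, \mathbf{1}[\, s = \texttt{a} \,]
%          = \mathbf{1}[\, s = \texttt{a} \,]
% P_A = P_B, so no learner trained only on the emitted strings can
% tell the two causal processes apart.
```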
Yes, of course, it's easy to construct counterexamples. But the salient point is that Large Language Models turn out *not* to be a counterexample, which explains why they seem to "understand" more than "predict the next word" might suggest.
That calling every day about cancellations seems like good advice. I’ll remember it if I ever need to see a specialist and they are booked out for months.
Crucial part is to be real chatty and friendly. But I'm sure you get that.
Yep, I figured that was part of it.
Julia Galef has been silent since the beginning of 2022. Any rumors about if/when we can expect to hear from her again, e.g. new podcast episodes?
I once had a challenging online exchange with someone who disputed my contention that optimal techniques for extracting salt from saline solution such as sea water could be different to those for extracting pure water from the same. It seemed I was a "cretinous mong" for assuming there could possibly be any difference.
When I pointed out he might be correct for 100% separation, ending up with a pile of salt on one side and distilled water on the other, but the same does not follow for partial separation of either one or the other, the consensus from other participants in the discussion was that he was the mong! But I digress.
I had read that a brilliant technique had been discovered for partially extracting salt from seawater by adding the water to a mixture of a pair of organic compounds in which the solubility of the salt depended on small temperature differences of the mixture. Some of the dissolved salt but none of the water would mix with the compounds, and the water formed a separate layer on top, as if the organic mix was oil.
Changing the temperature (I forget whether up or down, but probably the latter) by only a couple of degrees reduced the solubility, so that some salt would precipitate out of solution and could be filtered out. Then simply skimming off the salt-depleted sea water, adding a fresh supply, and cycling the temperature again meant the process could be repeated.
I forget the name of the compounds though. As organic molecules often do, they had long names, such as poly-di-methyl-tetra-thingummy-jig, and out of idle curiosity I would love to be reminded what they were, not that I plan any salt extraction myself!
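I can't name the compounds either, but the mass balance of the cycle described above is easy to sketch. All the numbers below (solubility figures, solvent mass) are invented for illustration; only the warm/cool cycling scheme comes from the description:

```python
# Toy mass balance for the temperature-swing extraction described above.
# All figures are invented placeholders; the real solvent pair and its
# solubility curve are whatever the original article reported.

SALT_SOLUBILITY_WARM = 5.0   # g salt per 100 g solvent at the warm temperature
SALT_SOLUBILITY_COOL = 3.0   # g salt per 100 g solvent a couple of degrees cooler
SOLVENT_MASS = 100.0         # g of the organic solvent mixture

def salt_recovered_per_cycle() -> float:
    """Salt that precipitates (and can be filtered out) in one warm-to-cool swing."""
    dissolved_warm = SALT_SOLUBILITY_WARM * SOLVENT_MASS / 100.0
    dissolved_cool = SALT_SOLUBILITY_COOL * SOLVENT_MASS / 100.0
    return dissolved_warm - dissolved_cool

cycles = 10
per_cycle = salt_recovered_per_cycle()
print(f"~{per_cycle:.1f} g of salt filtered out per cycle, "
      f"so ~{cycles * per_cycle:.1f} g after {cycles} cycles of skimming off "
      f"depleted water and adding fresh seawater.")
```

The yield per cycle is just the solubility difference across the temperature swing, which is why a "couple of degrees" only works if the solubility curve is steep.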
https://www.experimental-history.com/p/you-should-not-open-a-door-and-see-677
Throwing this out there for philosophy fans, math mavens, and those interested in schizophrenia. I just finished a short novel by Cormac McCarthy, “Stella Maris”.
It’s a very engaging, short - 200 pages - read, styled as a series of conversations between a young woman who is a mathematical genius and her psychiatric counselor.
I won’t try to sketch the plot beyond saying she checked in with a toothbrush and $200,000 in cash.
I think a lot of ACX people would enjoy it.
https://en.m.wikipedia.org/wiki/Stella_Maris_(novel)
Edit: there is a companion novel, “The Passenger”, that preceded “Stella Maris”, which I’m reading now. I don’t think reading them in order matters.
I might read this.
Okay, for the record, this is not a ‘Great Book’. It suffers from some third-reel issues.
The writing itself is still exceptional. Up there with Faulkner most of the time.
I enjoyed the whole thing but it implied a bigger payoff than it delivered.
I’m 200 pages into “The Passenger” now. Oh gawd, this is good writing.
I think I might write like this. For one paragraph. On a good day. Maybe.
It’s really pretty good stuff. Cormac McCarthy can write with the best of them, and the ongoing conversations are pretty intriguing, name-dropping Wittgenstein, Schopenhauer, Pascal, Jung, Gödel, Von Neumann…
Should auld acquaintance be forgot? Never!
Our friend Vinay Gupta is still going, still involved in crypto, and still enlightening the ignorant, and I am genuinely pleased to hear about him, courtesy of an unexpected link from the drama-llamas:
https://twitter.com/leashless/status/1663349886895489025
I was a little worried given we heard nothing more about Luna or from himself, but it seems he was simply going deep with his giant footprint. I wish him well!
A baker misreads a request for an Elmo cake as a request for an Emo cake.
https://www.today.com/food/trends/emo-elmo-cake-rcna83370?utm_source=join1440&utm_medium=email
It all works out well. First time she's gone viral. Also, she gave the emo Elmo cake to the parents for free.
This resonates with what went wrong with the Japanese moon lander-- the radar report seemed weird, so the lander started ignoring everything from the radar and crashed.
The cake is much less consequential, but the baker was surprised to hear that the cake was for a fourth birthday, and she smooths that over, thinking that maybe the four-year-old is a Wednesday Addams fan. Fortunately, she has enough flexibility to ask about the theme of the party-- Sesame Street. This is why humans will defeat AIs. (Just kidding.)
https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTY3ODIwNjY2MiwiZXhwIjoxNjc4ODExNDYyLCJhcnRpY2xlSWQiOiJSUjVBRzVUMEFGQjQwMSIsImJjb25uZWN0SWQiOiIzMDI0M0Q3NkIwMTg0QkEzOUM4MkNGMUNCMkIwNkExNiJ9.nbOjP4JQv-TuJwoXaeBYhHvcxYGk0GscyMslQFL4jfA
Quotes:
*At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
*Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
*Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Be Cautious: Abuse in LessWrong and rationalist communities in Bloomberg News
> she asked that the man not be named, to shield herself from possible retaliation
I don't see how that would shield her at all from retaliation? I do see how it would shield her from a defamation lawsuit, though.
It's interesting to contrast the level of specificity between the first and the second halves of your quotes. "Unspecified man did an unspecified thing to an unspecified woman; she made an unspecified complaint to the police, with an unspecified outcome." vs "In year A, a person B, working at C, donated $D to organization E."
What does "one rationalist man" even mean? Is it someone important, or just a random guy who maybe reads LW or ACX and/or maybe participated at some public rationalist event? Does participating in this open thread make someone "a rationalist man / woman"?
"She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police."
The euphemism treadmill has gotten so bad I have no idea what that is intended to mean. It could cover everything from "tried to kiss me when I wasn't in the mood" up to full-blown rape.
Another poster further down the thread linked this post by Sasha Chapin about how he partially fixed his aphantasia: https://sashachapin.substack.com/p/i-cured-my-aphantasia-with-a-low
Reading this post plus some other related Reddit threads got me wondering: do non-aphantasic people feel that they get any tangible benefits from mental visualization, or is it basically just a form of entertainment? Sasha seems quite eager to "cure" what he views as a mental disorder, but I am aphantasic and to my knowledge I've never encountered any difficulty as a result. Like many other aphantasics I didn't realize that anyone could have mental visualizations until recently - I thought allusions to this ability were just a weird figure of speech.
As far as I can tell, the only practical impact that aphantasia has on me is that I tend to skim the imagery-heavy parts of novels because I don't get anything out of them. But I don't have trouble e.g. doing spatial transformation problems or planning move sequences in board games.
Does anyone with a strong ability to form mental visualizations/imagery feel that it plays an important role in any types of tasks or reasoning, and if so which ones?
When I was a kid, my parents would often chastise me for looking at the ground. I often looked at the ground, or a wall, because it provides a flat, blank canvas across which I can project my thoughts. If I can't look at such a surface, thinking becomes slightly harder. If I have to look at an irregular texture (especially someone's face), thinking becomes much harder. On rare occasions, the imagery is so strong that I forget what's in front of me. The resolution and saturation are very weak, but the opacity can be significantly greater than 0%.
The tasks where I don't use this are when I'm A) memorizing lists and B) counting integers. I use imagery for nearly every other category of system 2 reasoning. For example, trying to make sense of Bayes' Theorem was difficult until I invented for myself an "office building" model, which consists of 3 floors connected by conic sections, and where each circle of the conic section corresponds to the numerator and denominator of P(), and multiplication/division can move the circles to different floors via something that resembles vector addition. It would be easier to explain with an animation, but I've never seen one in the wild.
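For reference, the formula that office-building model is organizing is just Bayes' Theorem; a standard statement, in my notation rather than the commenter's, with H a hypothesis and E the evidence:

```latex
% Bayes' Theorem: the posterior equals the likelihood times the prior,
% divided by the total probability of the evidence.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```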
It's hard for me to say whether it's absolutely necessary to my mode of thought. I imagine aphantasics might use foreign strategies. I lean toward "yes, it probably makes certain things easier than they would be otherwise", but am open to being wrong.
I'm not aphantasic, but neither am I one of those (lucky?) people who has the ability to conjure up strong/vivid mental imagery. I can see things in my mind's eye but they only appear in flashes and they're sort of... ghostly, I guess? It's like a mix between an image and a concept. They're not vivid at all. And I'm not even sure exactly _where_ they appear. I'm tempted to say "above/behind my eyes" but it's not really that. It's not really in any particular location.
(One interesting detail that I discovered as a child is that I also don't have complete control over what I'm visualizing. I remember attempting to visualize a chair rotating clockwise and then attempting to visualize it rotating counter-clockwise and being frustrated that it kept switching back to clockwise on its own. Although now I can hardly even keep the image in my mind long enough for a single full rotation.)
This is very much in contrast to my dreams, where my mind often seems to come up with very vivid and well-defined but completely fabricated locations, and I can recall them in detail even after I wake up. Occasionally I'll even revisit a previously-dreamed-of location and will think "oh, this place again", sometimes literally years later.
My weak visualization skills do occasionally come in handy, in particular when trying to solve simple geometric problems. For example, it's not too difficult for me to visualize a circle with an angle marked from the center and the associated sin/cos/tan lines. But if I were to, say, attempt to visualize the process of adding two 2-digit numbers together using the standard column method, I wouldn't be able to keep the actual numbers stable in my mind's eye long enough for it to be of any use, let alone modify the image as I calculate sums.
My guess is that having a strong visualization ability would come in handy as an artist. I was discussing this topic with a friend of mine (who has aphantasia) whose partner is an artist. He said that according to her, when she visualizes something, she sees it in full and vivid detail (e.g. an apple isn't just a reddish blob, it has all the shading, varied color patches, specular reflection, etc. as a real apple does).
www.winwenger.com
Do people with aphantasia dream? For me the imagery in dreams is the main value of mental imagery, which is purely an aesthetic value. But I would think everyone would have to dream, whether or not they remember them, since our visual world in waking life is mostly also dreamed up by our brains.
Personally I do dream, and I guess I do get some slight mental imagery while dreaming. My dream imagery doesn't have any color nor does it have much detail. But I do sometimes get a general "outline" of my surroundings in the dream, e.g. the shape of a building or the edges of an object.
Mostly my dreams are conceptual - I have a non-visual awareness of what is happening in the dream as it progresses, kind of like what happens in my head when I read a fictional story.
Overall my dreams play a negligible role in my life and I forget them immediately unless I really try to hold them in consciousness. But I know many people whose dreams affect them a lot (for better or for worse). To your point, I would guess this is strongly correlated with how vivid their mental imagery is when dreaming.
Hey, does anyone with a strong math background have any potential connections or ideas on this?
Suppose that I have n matrices A_1, …, A_n ∈ R^(m×m) with m ≫ n. Can I find n new matrices B_1, …, B_n ∈ R^(n×n) that have the same 3-way cyclic traces:
∀ i, j, k: Tr(A_i A_j A_k) = Tr(B_i B_j B_k)?
By analogy, if I had n vectors v_1, …, v_n ∈ R^m, it would be easy to construct new vectors u_1, …, u_n ∈ R^n that have the same inner products (by choosing an orthonormal basis for the span of the v_i and then writing each v_i in that basis). Parameter counting suggests there should be matrices B that match a given set of cyclic traces (we have n^3 parameters to pick and only about n^3/3 constraints), but I have no idea how you could pick them "naturally" and don't have any reason beyond parameter counting to think they exist.
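A minimal numpy sketch of that vector analogy, for the curious; the Cholesky-of-the-Gram-matrix construction below is my choice of implementation, equivalent to writing each v_i in an orthonormal basis for the span:

```python
# Given v_1..v_n in R^m, build u_1..u_n in R^n with the same pairwise inner
# products. G = V^T V is the Gram matrix; if G = L L^T (Cholesky), then the
# columns of U = L^T satisfy U^T U = G, i.e. <u_i, u_j> = <v_i, v_j>.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 4                         # m >> n, as in the question
V = rng.standard_normal((m, n))      # columns are the v_i
G = V.T @ V                          # all pairwise inner products
U = np.linalg.cholesky(G).T          # columns u_i now live in R^n
assert np.allclose(U.T @ U, G)       # inner products are preserved
```

Since U is upper triangular, only the first i entries of u_i are nonzero, which is exactly the Gram-Schmidt-style structure mentioned in the answer below.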
https://mathoverflow.net/questions/447635/dimensionality-reduction-preserving-cyclic-traces
I think I figured this problem out. Basically the idea is to make the B_i's block matrices where only the top i-by-i block of B_i is nonzero. (By analogy if I construct the v_i by Gram-Schmidt then only the first i entries of v_i will be nonzero for each i.)
NOTA BENE: Since this question was originally asked by none other than Paul Christiano, of OpenAI, FHI, the Alignment Research Center, et al, I have to ask if there is a theory-of-AI-related motivation behind this question.
In more detail (the exact calculations are pretty complicated; I might repost this with more details on MathOverflow, and it'd be my first post ever there if so):
0. Start the algorithm by setting B_1 to have Tr(A_1^3)^(1/3) in the top left corner and zeroes elsewhere.
1. For each iterative step i, where 2 <= i <= n:
1A. Choose the top (i - 1)-by-(i - 1) block of B_i by using the constraints Tr(B_jB_kB_i) = Tr(A_jA_kA_i) for 1 <= j, k <= i - 1; this should give you a system of (i - 1)^2 linear equations, which should have a unique solution you can find with Gaussian elimination/the other usual linear algebra tricks. (By the way, this system of equations has a block triangular structure since the B_j do; you can basically find the j-by-j subblock for j from 1 to i - 1 before finding the (j+1)-by-(j+1) subblock.)
1B. Choose the entries of B_i in the i-th row AND the entries in the i-th column, except the (i, i) entry which will be chosen in the next step, according to the constraints Tr(B_i^2B_j) = Tr(A_i^2A_j) for 1 <= j <= i - 1. This should give a system of i - 1 bilinear equations in 2(i - 1) variables, so I think if you choose the free column appropriately then the free row has a good solution (here the free parameters appear), or you could choose the free row first and solve for the free column instead.
1C. Lastly, choose the (i, i) entry of B_i according to the constraint Tr(B_i^3) = Tr(A_i^3). This should give you a depressed cubic in one variable; once you find the coefficients (the bulk of the computation in this step), you can solve it very cheaply using your choice of either Cardano/Vieta/Lagrange's algebraic method, trigonometric/hyperbolic functions, or Newton's root finding algorithm.
There may be some hiccups due to the linear systems in step 1A being singular, or the cubic equations in 1C having either one or three real solutions, but between all the free parameters and the fact that you can reorder the A_i's for free in any of n! possible ways, I think that this algorithm should work for any set of A_i's in general position (i.e. if there are no magical algebraic cancellations).
If all this amazing stuff fails, you can fall back on the ridiculously overpowered algorithms of either *gradient descent* (using \sum_{i, j, k} |Tr(A_iA_jA_k) - Tr(B_iB_jB_k)|^2 or something like that as the loss function), or *homotopy continuation* (although be warned that you will probably get a ton of complex solutions, and an even larger number of tracked paths which diverge to infinity).
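Here is a minimal sketch of that gradient-descent fallback, assuming numpy/scipy are available; the optimizer choice, random initialization, and toy sizes are my assumptions, and a serious run would supply an analytic gradient instead of scipy's default finite differences:

```python
# Gradient-descent fallback: fit n small matrices B_i (each n-by-n) so that
# every 3-way cyclic trace Tr(B_i B_j B_k) matches that of the given A_i.
import numpy as np
from scipy.optimize import minimize

def cyclic_traces(mats):
    # Stack every Tr(M_i M_j M_k) into an (n, n, n) array.
    n = len(mats)
    T = np.empty((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                T[i, j, k] = np.trace(mats[i] @ mats[j] @ mats[k])
    return T

def fit_B(A_list, seed=0):
    n = len(A_list)
    target = cyclic_traces(A_list)
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(n * n * n)   # n flattened n-by-n matrices

    def loss(x):
        B = list(x.reshape(n, n, n))
        return np.sum((cyclic_traces(B) - target) ** 2)

    res = minimize(loss, x0, method="L-BFGS-B")
    return list(res.x.reshape(n, n, n)), res.fun

# Toy usage: n = 3 matrices of size m = 10. A near-zero final loss means
# the small matrices reproduce all the cyclic traces.
rng = np.random.default_rng(42)
A_list = [rng.standard_normal((10, 10)) for _ in range(3)]
B_list, final_loss = fit_B(A_list)
print(final_loss)
```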
EDIT: I got Paul Christiano's affiliations slightly wrong (he's not a *cofounder* of OpenAI, although he did work there).
The Hindus came up with this interesting scheme for categorizing things according to their overall tendency. I accidentally independently confirmed the existence of these tendencies.
The Pull and the Slack
https://squarecircle.substack.com/p/the-pull-and-the-slack
Reminds me of superego and id in psychoanalysis.
Huh, yeah, kinda. But I thought the superego is supposed to feel oppressive, no? What I describe as the pull feels like a very id-like impulse.
I believe the superego was supposed to represent all the authority voices from outside, like parents, teachers, society, priests, gods... Often they are oppressive, but sometimes they are supportive.
In transactional analysis, "superego, ego, id" is re-branded as "parent, adult, child", and let me quote https://en.wikipedia.org/wiki/Transactional_analysis
> Within each of these ego states are subdivisions. Thus Parental figures are often either
> more nurturing (permission-giving, security-giving) or
> more criticising (comparing to family traditions and ideals in generally negative ways);
or, using the gender stereotypes, the nurturing mode is typically associated with mothers, commanding/critical mode with fathers. But of course in real life, anyone can do both.
*
That said, in the Hindu model there are *three* forces (sattva, rajas, tamas), all of them unconscious (pulls, not decisions), therefore none of them fits the "ego/parent" in the psychoanalytical trinity. (The psychoanalytical ego would be the part of the mind that responds "yes" or "no" to the individual pulls.) Then again, in the transactional analysis we have:
> Childhood behaviours are either
> more natural (free) or
> more adapted to others.
So maybe we could map rajas to the adapted child, and tamas to the natural child, although this is not a perfect fit (it seems too harsh to call all natural/unrefined instincts destructive, some of them are mostly harmless, or maybe just a little harmful in excess).
Okay, no reason to try too hard to match different models; I guess they are just different ways to cut the same cake. But it is interesting to notice that people from different cultures made similar observations, which suggests they probably reflect something real about the described thing.
Mechanistic anomaly detection and Eliciting Latent Knowledge:
https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#
"Suppose we train a model to predict what the future will look like according to cameras and other sensors. We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good to us.
But some action sequences could tamper with the cameras so they show happy humans regardless of what’s really happening. More generally, some futures look great on camera but are actually catastrophically bad.
In these cases, the prediction model "knows" facts (like "the camera was tampered with") that are not visible on camera but would change our evaluation of the predicted future if we learned them. How can we train this model to report its latent knowledge of off-screen events?"
"... you could view ELK as a subproblem of the easy goal inference problem. If there's some value learning approach that routes around this problem I'm interested in it, but I haven't seen any candidates and have spent a long time talking with people about it."
Is there anyone here who is experimenting with using AI in fiction or poetry? There are all kinds of ways to do that. Had an exchange on here with Antelope 10 who is doing something along those lines. Anybody else? Or anybody know of web sites, blogs or whatever for people interested in this sort of experiment?
Hacking the Brain: Dimensions of Cognitive Enhancement
"Whereas the cognition enhancing effects of invasive methods such as deep brain stimulation53,54 are restricted to subjects with pathological conditions, several forms of allegedly noninvasive stimulation strategies are increasingly used on healthy subjects, among them electrical stimulation methods such transcranial direct current stimulation (tDCS55), transcranial alternating current stimulation (tACS56), transcranial random noise stimulation (tRNS57), transcranial pulsed current stimulation (tPCS58,59), transcutaneous vagus nerve stimulation (tVNS60), or median nerve stimulation (MNS61)"
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6429408/
I've heard a lot about how we've entered a "Digital Dark Age" since so much early internet content has been deleted. But, knowing what we know about the NSA, hasn't the agency probably been using web crawlers to catalog the whole internet since the early 90s? Isn't there a better-than-even chance that all of it is still saved on servers in some secret underground warehouse?
... isn't this what web.archive.org has been doing for the last 25 years?
Unlikely, except perhaps for the non-US stuff. The NSA has a charter to target foreign communications only.
If there were an effort to dredge up the old Internet, I expect it to more likely resemble discoveries of old hard drives from companies as they go bankrupt. After that might come NIST. After that, dumps of non-US intelligence agencies, but I expect them to be missing a great deal, and whatever gets released will be generations after the fact.
>The NSA has a charter to target foreign communications only.
hahahahahahahahahahaha
It's right there in their mission values page, and throughout their description.
Do you have a substantial counterargument?
They don’t follow their mission, and have been caught breaking that repeatedly with little contrition?
The US security apparatus has a lot of virtues. Coloring within the lines is not one of them.
I don't think you can know that, given the essential secrecy of their work. One of the key snarls of any watchdog organization is that the rest of us only notice when they break a rule. If they follow their own rules, that's necessarily unknown to us.
Do you have evidence that they don't follow their mission that _isn't_ from a source with a prior chip on its shoulder or equivalent vested interest?
Are there still any serious genetic disorders that we can't identify through embryo screening?
There are epigenetic disorders like Angelman and Prader-Willi syndromes that won't be detected by standard methods. Otherwise no, although current tests aren't 100% sensitive (for various technical reasons).
Probably there are a number of rare ones. I don't think we as yet have a full list of every possible SNP or mutation that is A) compatible with a viable fetus and B) results in a serious disorder.
And realistically any commercial service would offer something that looks at some longish list of possible likely defects, covering 98% of cases, rather than a full genome screen.
Presumably those we don't know the underlying DNA of?
I've read that corporate price gouging is part of the reason inflation is so bad in the U.S. now. But how is this possible in a free market? I thought competition between companies ensures that everyone's profits go to zero. Price gouging is only supposed to work over long periods of time if all firms collude to keep prices high. If just one firm defects by lowering its prices to attract more customers, then the arrangement falls apart.
Here's an article that claims corporate greed is fueling inflation:
'The pandemic, war, supply chain bottlenecks and pricing decisions made in corporate suites have created a “smokescreen”, said Lindsay Owens, executive director of the Groundwork Collaborative, which tracks companies’ profits. That obscures questionable price increases, she added, and allows businesses to be portrayed as “victims”'
https://www.theguardian.com/business/2022/apr/27/inflation-corporate-america-increased-prices-profits
Everyone is always trying to price gouge whenever they can. So there is no reason to suppose there has been some recent increase in this behavior and that it is suddenly creating extra inflationary pressure.
Scratch someone complaining about "price gouging causing inflation" and you will find a Marxist right under the surface.
There's no mystery, because it isn't true that corporate profits are unusually high*: https://www.reddit.com/r/badeconomics/comments/138z8pj/bad_economics_in_reconomics/
> This is the correct graph of corporate profits as a share of GDP (after further adjusting for the fact that companies have to pay real costs to offset declines in their capital and inventory stocks resulting from their operations). You will immediately notice that corporate profits as a share of output -- i.e., profit margins -- have been remarkably stable ever since the latter half of 2010.
I should note that *economic* profits go to 0 in a competitive market--not accounting profit, which is how the word "profit" is generally used. However, it is still the case that accounting profits in a competitive market should not just rise across the board for no reason.
*I mean, they're high nominally, because of inflation. But they aren't high compared to anything else, and trying to say that this caused inflation is simply circular.
I don’t know if it’s different in the US, but in Australia, corporate profits have grown much more quickly than wages since 2019 (https://www.abs.gov.au/statistics/economy/business-indicators/business-indicators-australia/dec-2022#methodology, see the first graph in the Total All Industries section).
I know less about Australia, but I do know that you're looking at a time period that is dominated by COVID and Australia had one of the strictest policy responses to it. Australia removed border restrictions for the vaccinated in February 2022 (https://en.wikipedia.org/wiki/COVID-19_pandemic_in_Australia#2022); since March 2022, wages are up 13% and profits 5.7%. This choice of data points is somewhat arbitrary and not very robust (I think if you use the previous one, December 2021, profits are up more). But I would definitely hesitate to generalize anything from 2020 and 2021 Australia.
In any event, the above comment specifically referred to "price gouging" which can't be evaluated just by looking at high-level profit numbers. Why haven't they been engaging in price gouging before? Inflation has been low the past ten years, why haven't they just raised prices before now? Even if you want to blame it on corporations, presumably something changed in the past few years.
You probably mean how is this theoretically possible in a perfectly competitive market, but few markets are perfectly competitive, and the serious business of economics is to understand how in practice different systems have behaved at different times in different places rather than to theorise abstractly.
Seems weird that the corporations all suddenly got extra greedy at the same time....
> But how is this possible in a free market?
We don't have a free market. Also, what you describe is generally not possible in a free market where incentives are effectively randomized. If there's a strong external pressure pushing everyone in a specific direction, like taxes or other financial incentives, then you get what looks like coordination. The excuse of supply chain disruptions let them hike prices to gouge customers.
https://www.economicforces.xyz/p/price-theory-as-an-antidote and some other articles on the same blog explain pretty well why that hypothesis doesn't make any sense.
That post says, "Thus, if rising markups caused the rise in inflation, one would need to explain why firms, across the board, suddenly and simultaneously increased markups."
The reason is obvious: supply chain disruptions decreased supply, which raised prices temporarily on some goods. Consumers were then primed to accept the excuse of higher prices due to supply chain disruptions, *even when this was not the case*, and profit motive does what it does, and everyone pounced on this excuse. I mean, *why wouldn't they*? It makes perfect sense.
For the same reason they don't normally: if you can profitably offer your product for less, you make more money by squeezing out the competition.
Maybe somewhere along the supply chain it's totally feasible for everyone to start voluntarily charging more, but I'm mostly just seeing everyone complain that all of their costs are up and they don't have much choice. At the company I work for, all of our material costs remain much higher than they were, and labor-wise our starting wage is 70% higher than it was pre-pandemic. We are now, for the second time in the last few years, looking at raising prices across the board, because how else do you survive?
> if you can profitably offer your product for less, you make more money by squeezing out the competition.
Unless they find another equilibrium driven by an external factor where they make even better profits than they otherwise could by disrupting the equilibrium.
> At the company I work for, all of our material costs remain much higher than they were, and labor-wise our starting wage is 70% higher than it was pre-pandemic.
Sure, that's inflation driven by rising costs, but that means your profits would not meaningfully go up. This is not the case across many industries, which are seeing record profits. I think Katie Porter summarized it succinctly here:
https://www.youtube.com/watch?v=0ixmqzjvb7k
Not an expert, but:
I think your analysis holds true if the "pandemic, war, supply chain bottlenecks" are all fictitious. In that case, the underlying market realities don't support price increases and the zero-defector scenario you need to maintain prices above market is extremely unlikely.
If the pandemic/war/supply chain effects on pricing are real, though, then all your market actors have (a) obvious incentive to raise prices to account for these factors, and (b) a less-obvious incentive - since prices are not increased or adjusted on a day-to-day basis, an actor setting prices on day (X) probably doesn't want to set their price at "fair price as of day (x)," but rather "fair price as of day (X + future date)." You don't want to set your price at the market rate today only to immediately start losing money tomorrow, so the incentive is to overshoot by whatever you can get away with. Assuming all market actors would have this same incentive, (along with the everyday incentive to maximize profit), then your odds of defectors drop off and a sort of indirect collusion to keep prices constantly ahead of the curve (in other words "above market") becomes possible. Not in the long term, but certainly in short bursts during the right period of instability.
Excellent response.
https://www.youtube.com/watch?v=2JlUnOAiMm4&ab_channel=ScottManley
Japan's moon lander crashed because there was a surprising but correct reading from the radar going over a steep crater wall, so the lander assumed the radar was broken and then didn't know where the surface was.
Shouldn't the path have been pre-tested so the radar reading wouldn't have seemed weird? Yes, but the landing site was changed rather late, so the route wasn't tested.
Games tend to leave out the unreliability of sensors systems. I've seen a similar complaint about military games, which tend to assume reliable information and reliable ability to transmit orders.
Also, for life generally, I wonder how often people ignore true but surprising information.
I've played a couple of games without reliable information, and it's unfun and sucks.
>I wonder how often people ignore true but surprising information.
Politics requires it happens at least half the time, no?
Looks like an awesome game, I will need to see if I can get a couple of my playing partners into it.
Phantom Brave is a Disgaea-adjacent game with infinite random dungeons. I once made a level 15 dungeon with 3 floors, and every single floor in the dungeon was a special floor with boosted enemies, so the supposedly level 15 dungeon was never weaker than level 30.
For unreliable ability to transmit orders, percent chance to hit is a good approximation. Something like Battle Brothers with 60% hit rates and permadeath gets really annoying. (I don't remember if Battle Brothers also had unreliable quest levels, but it wouldn't surprise me.) There's also any RPG that doesn't let you control your characters directly: Dragon Quest 4, Persona 3, and such.
Is there a difference between unreliable information and uncertain information? Those games include uncertain info. Not sure about unreliable info, though; presumably that means you have information that sometimes turns out to be wrong. I don't think I know of a board game that does that, but perhaps there is a video game that does something like that.
(aside: hello fellow wargamer)
Very often. When the covid pandemic started, it was clear that mortality increased exponentially with age. Almost every policy ignored that.
Then again when vaccines were available, the data was soon available that the vaccines do not prevent infection and spread. It was ignored by many countries that introduced vaccine mandates only after this information became available.
Aaaaall the time, yep.
(And then the very same people also all the time get irrationally excited about new information precisely _because_ it is surprising, and then like that sensation so much that they become closed to learning that the new sexy intel was not in fact true....)
In real-life, surprising information should need confirmation. The information source CAN be wrong, and I think the bias should be the expected outcome.
In science, progress is not made as much by experiments where the result is "Eureka!" as when the result is "That's strange..."
It has been said that it's an iron-clad rule of Hollywood screenwriting in recent years that under no circumstances is a man allowed to rescue a woman.
Is this actually true? Are there mainstream Hollywood examples of a man rescuing a woman in (say) the past five years?
If there is any truth to that claim, I think it has to be restricted to female protagonists only, and probably only female action-movie protagonists. And there is certainly no shortage of data points that might point in that direction; most recently I understand that the new "Little Mermaid" has rewritten the ending so that it is Ariel rather than Eric who kills Ursula in the final act. But even so, I think the claim is overstated. If it were even mostly true, I'd expect it to apply to "The Force Awakens", and IIRC Finn saves Rey during the lightsaber fight with Darth Emo.
I'd also heard it recently in the context of the new Peter Pan remake. Peter Pan isn't allowed to save Wendy, or Tinkerbell, or Tigerlily, which leaves him with very little to actually do. Meanwhile Tinkerbell is now black which means she's not allowed to have any negative characteristics, removing her jealousy and her betrayal and leaving her with nothing to do in the story either.
I don't think Finn saves Rey in the end of Force Awakens though, it's the opposite. Finn gets his arse handed to him in the first fight, then Rey takes the lightsaber and does much better.
The MCU is one of the biggest franchises ever, and among the accusations from its detractors is the repetitiveness of its plots, so let's look at some of their recent works.
Dr. Strange rescues America (the female character) in his most recent movie, Multiverse of Madness; that's the bulk of the plot. The Eternals includes at least one scene where Ikaris (male) rescues Sersi and Sprite (female) (ok, technically these 3 characters are all aliens created by some godlike being, but they take gendered human forms). I'm pretty sure that Simu Liu's character saves his female friend in Shang-Chi at least once. I haven't seen the latest Ant-Man yet, but it seems like he has to save his daughter; the second film, from 2018, involves him saving his mother. Spider-Man has to save MJ, his love interest, in the climax of the last 2 movies.
Thanks for all the examples, I think that's sufficient evidence to show that the original statement was BS.
Yeah the original statement sounds silly. It might be standing in for the more reasonable and accurate statement "about 60% of movie plots used to involve men saving women, now only 5% of them do" or something like that.
Was just watching Shazam: Fury of the Gods last night and there were multiple counterexamples, both of bystanders/victims (Freddy in particular makes a point of rescuing attractive women) and powered supporting characters.
I basically don't even watch movies anymore and can think of 5 recent counter-examples just off the top of my head....that claim (which I've never previously read or heard of) sounds like just another bit of culture-war trolling.
Spider Man saves MJ in the first 5 minutes of Spider Man: No way home.
Who says this? I have never heard it.
Baby Driver (2017) is the first one that comes to mind. I am sure there are many others.
Technically six years ago, but it's pretty much a textbook example of the trope, and it did well and got great reviews.
I think the new Super Mario Bros with Princess Peach is the most recent example. Audiences seem to love it, critics not so much.
I haven't seen the movie but I thought the whole thing was that it was Mario and Peach going to rescue Luigi?
I've seen it argued a few times that AI-X-risk might act as part of the Great Filter which prevents civilizations from colonizing the stars. But it strikes me that the opposite should be the case. Isn't it more likely that a superintelligence that destroys humanity is *more* likely to colonize galaxies than a planet without such a superintelligence?
Perhaps the absence of obvious aliens should lower our estimation of AI-X-risk.
You’re right: https://www.lesswrong.com/posts/BQ4KLnmB7tcAZLNfm/ufai-cannot-be-the-great-filter
Thanks for that link.
Not necessarily. One can certainly imagine a number of scenarios where the AI destroys an alien civilization without having any plan to expand itself. Either because it just doesn't plan ahead and accidentally destroys itself along with them. Or because it's happy running forever on limited hardware.
Of course, one can also imagine the aliens themselves being happy in a limited region of space, but with a biological organism it's more natural to assume it would expand.
Would a paperclip making AI have sufficient foresight to develop interstellar flight before turning its home world into paperclips?
Since we're not all made of paperclips, I assume it wouldn't.
Since we're not all made of paperclips, I assume that paperclip-making AIs don't exist(*). But if they did, they'd need to have that level of foresight because it's probably *less* foresight than is required to recognize that the AI needs to kill its owner before the owner says "now that I think about it, that's enough paperclips".
As for "one can imagine scenarios where the AI [doesn't expand]", that's not even remotely adequate for a Great Filter. For that, you need to to be impossible to imagine scenarios where the AI *does* expand, because the naive Drake equation suggests that there are very very many and it only takes one.
* Probably because the one intelligent race in the Milky Way hasn't gotten around to inventing a true AGI yet.
Has anyone here ever managed to change something fundamental about their thought process or mental abilities? Here are two examples of what I mean:
https://www.reddit.com/r/self/comments/3yrw2i/i_never_thought_with_language_until_now_this_is/ In this reddit post (which was linked on a slatestarcodex post), the poster talks about how one day he "realized" that it was possible to think in words and after spending some time practicing this, it completely changed his life.
https://sashachapin.substack.com/p/i-cured-my-aphantasia-with-a-low This article is written by someone who claims to have "cured" his aphantasia and can now see imagery in his mind's eye.
As I get older, I've become more and more aware of various irritating quirks in the way my mind seems to work (which I guess is just a more delicate way of saying "I'm dumber than I want to be"). I suspect that most low-level functions of the brain are either hard-coded or are developed at a very young age and are thus very hard/impossible to change but I'd be interested in hearing if anyone has any relevant experiences.
> Has anyone here ever managed to change something fundamental about their thought process or mental abilities?
Yes, improved focus with meditation. Changed habits of forgetting people's names 5 seconds after meeting them by concerted effort to retain information. These are simple cases, but maybe you're thinking of something more profound? I think intentionally repeating a behaviour until it becomes automatic can change a wide range of default behaviours.
Thanks for the reply. Those are both related to some things I would like to try to change.
Regarding focus, it almost feels like I have some sort of problem with "micro-focus". Like I'll very briefly lose focus and go on autopilot, and that will throw off what I'm doing. This is a huge hindrance when attempting to play music - I'll attempt to execute some passage I know well but my fingers will just play the wrong notes for no reason. Another lesser example would be hitting the wrong button in a video game for seemingly no reason. I go back and forth between believing this is focus-related and motor-planning-related.
What types of struggles did you have with focus and how has it improved since you took up meditation?
Regarding names, I'm also pretty terrible at this. I've made efforts in the past similar to what you describe. But for me, the act of recalling information which I definitely already know is itself often also more of a struggle than it should be. In addition to names, I also often find that I know _of_ a word that I want to use (i.e. I know there is a word associated with a definition I have in mind) but I just can't seem to conjure up what the word actually is.
Another one I find interesting is that some people's brains really do seem to be "multi-threaded" in the sense that they can do several things at once to a high degree of precision. Going back to music, sight-reading seems like one example of this, especially for those who have the ability to read a bit ahead. Another (funny) example is this Game Grumps video (https://www.youtube.com/watch?v=vDQOEXNzGPw) in which Arin is fighting a boss which requires highly precise maneuvers while also coming up with improvised monologues. Compared to my seemingly single-threaded brain, this ability seems different in a very fundamental or hard-wired way.
It’s not as strong an example as those listed, but I became far less face-blind in the first 5-10 years of my adult life. I was never as bad as “the man who mistook his wife for a hat”, but I was very clearly bottom 5%, I would say.
I think a lot of my face-blindness had to do with eye-patterns, and caring. No problem with eye contact, but I wouldn’t scan people’s faces in a way that would give me the right identifying info. But more importantly, at a very low level I was just not trying to remember how people looked. I think that was in part related to youthful egocentrism. As I got older, I started to care about others more, and it became much easier for me to recognize people! It also helped the caring process to be painfully embarrassed by face-blindness a few times.
Thanks for sharing your experience. Did this eventually become natural or do you still have to consciously expend a lot of effort on it? I'm similarly terrible when it comes to names and I've gone through periods of time where I've made an effort to improve (e.g. I'll write down someone's name soon after I meet them or I'll repeat it to myself for a few days). Sometimes I think I'm doing better but then it will strike out of the blue, like the other day when I simply could not for the life of me recall the name of an acquaintance I've known for many years but hadn't seen for a while. I have a similar issue with words in general.
I never really had to expend much effort. It was more like, my priorities shifted (towards others) and my attention followed.
I sometimes wear contact lenses, and I read this recently: https://www.prevention.com/health/a43919982/contact-lenses-contain-dangerous-amounts-of-forever-chemicals-pfas/. Should I stop wearing contact lenses for now, or is a couple of times a month a reasonable risk considering the currently available information?
Do not be worried. The whole reason they're "forever chemicals" is that they are stupidly inert and biounavailable. Surgeons have been coating implantable medical devices with fluoropolymers as long as they've been able to.
Here's a question for any of y'all that have the (mis?)fortune to work with obscene quantities of money on a regular basis: What is the qualitative difference between things that cost a MILLION dollars, and things that cost a BILLION dollars?
A VPD tool costs ~a megabuck.
An EUV scanner costs ~a gigabuck.
Both are equally difficult to get through capital justification.
IMHO it is worthwhile to consider the time parameter, rather than thinking merely in terms of purchasing physical objects. Somewhere between 100M and 1B would "buy" : 1) Never having to even think about working for wages + 2) Lifetime "ad libitum" consumption (i.e. buy arbitrary houses anywhere in the world, chartered flights, arbitrary medical procedures, etc. "as if they were ice cream cones") + 3) Being able to maintain (1) and (2) indefinitely without having to obsess over market events and fiddle with investments personally.
AFAIK oligarchs only start buying physical objects costing 100M+ (mega-yachts, etc) after they've firmly nailed down 1+2+3.
This is re-hashing what others have said, but my personal take on large wealth inequalities is: above ~10 million dollars, the only thing that you can buy with your wealth is slaves.
Any great mansion/yacht can only be managed with a team of full-time maintainers.
Even high-end supercars now come with a team of mechanics and are delivered from racetrack to racetrack.
Owning a company is basically being a feudal lord: you own the land, tools, sometimes the very homes of your employees.
Even charity is buying people : you have decided that the world should care about malaria, and suddenly, due to your donation, thousands of people work on the subject who would have gone on with their lives, or worked on other charity topics, if not for you.
What you are buying with >$1E10 is *institutions*. If you imagine that all institutions are really run by "slaves", you're being silly and not worth engaging. And slavery isn't a word you should be using in any context where people plausibly could mean it literally, if you don't mean it literally.
I should have used a word like "serf" which more accurately describes my sentiment. But my picking of such an intense word is not innocent. There is a class of people who are able to command the full work-time of another human being, and there is a class of people who cannot.
I'm at that limit where my parents could afford a full-time nurse and I cannot; and I'm aware that there is a frontier between people richer than me and poorer than me.
That still doesn't work. "My parents could afford a full-time nurse" is shorthand for "my parents could make a public offer of a certain amount of money for which there exist certain people who would be willing to do all the things we describe as nurse services for 40 hours per week". It is NOT shorthand for "my parents are able to spend the amount of money set in the 5th Edition Papers & Paychecks GM Manual as sufficient to cast a level 7 Serf-Geas spell compelling other people to perform nurse duties in the real world".
The reason it's the first one more than the second one is that anyone, including people advertising that they'll perform nurse duties, can decide not to take this or that nurse job, and can factor offered payment into that decision. The only catch is that there might be only so many of those jobs, so anyone who wants to nurse that badly might have to accept the payment being offered. But if they do, then they're willing by definition. They're not serfs. They're free to search for other types of work if no nursing jobs are offered on terms they like.
At every wealth level, the one thing you do with money is get other people to do what you want. Either give you something, or perform a service.
I guess I would say that from what I've seen, for corporate software products at scale, the annual budget for a project or feature is on the millions level. Organizations of 60 to 100 people often command projects that cost on the order of millions of dollars. It takes on the order of billions of dollars annually to fund software products built by thousands or tens of thousands of developers. These sorts of products are amalgamations of hundreds of smaller organizations' products and features into a big-name flagship product. Think, like, any big software product that you can name right now off the top of your head.
A million dollars: your typical Bay Area home.
A billion dollars: your typical Bay Area infrastructure project.
(This is partly a serious answer. A million dollars is a lot but it's still more or less human-scale-- think a decade's worth of productivity at a full-time job in a high-paying industry. On the other hand, nothing an individual would want *or* accomplish is worth a billion dollars. At that scale you're exclusively talking institutional budgets and objectives.)
There’s a new feature that appears starting at a few million, and definitely complete by the time you reach $200 million: Things now come with Staff. If you buy a sufficiently expensive Thing, you need someone whose full-time job is to operate it or maintain it: a captain for your yacht, a pilot for your plane, a machinist for your robotic machine tool, a sysadmin for your data center, etc. A few million dollars is enough to hire a person for life, so when the price is substantially above that, why not hire an employee to worry about the Thing full-time?
Epistemic status: I spent a few years being the Staff for multi-million dollar billing systems for cell-phone companies. When we sold a billing system, we sent at least one engineer with it, to transition it to the customer’s staff over a period of six months or a year. And we were always ready to pitch in, if the customer had a problem.
A billion dollars buys power. A million dollars buys you a lot of groceries.
Succinct and accurate. A million dollars ain't what it used to be.
“A million dollars buys you a lot of groceries”
Maybe before 2023...
Not someone who falls into this category, but I’d say the biggest difference is that a million dollars can get you objects and Things. A billion dollars gets you some things but it’s mostly about the institutions, organizations, and People attached to those things. You’re not buying objects, you’re buying force that can be applied to a particular problem.
Because I’m in “time to reinvent myself and redirect my career” mode, I’m forever getting emails/ads re training to be a UX/UI designer, and I’ll admit to being intrigued. I know all the reasons why such a career path would or wouldn’t suit me, but I’m very unclear if these training offerings are legit/worth paying for or if they are just the online version of an ITT Technical Institute quasi-scam. If there’s a community that would know the answer, this one is it.
So, are things like this https://designlab.com/ux-academy/ legit and worth the money? Is the idea sound but there are better options? Or is it all just a load of bollocks?
The UI/UX field is extremely oversaturated right now, in part because of these boot camps. Many companies have also seriously cut back their design/research teams, and the largest demand is for experienced, senior designers. I would not recommend jumping into this field right now to anybody.
I did a coding boot camp about 6 years ago and have worked as a programmer since. These types of programs work if:
-You are already employable in a different job. Meaning you probably have the soft skills that will help you get and maintain a job.
-You treat the program as a job and not school. Take advantage of any programs they offer and spend 8+ hours per day on it
-The program should be able to cite specific companies and roles people have gotten after graduation
- They should offer some amount of free resources for job hunting - job boards, job fairs, networking etc. - with industry people who are not only graduates of the program
- They should offer some kind of money-back guarantee if you don't get a job but comply with all their standards (in my program this meant you could retake the course for free).
- They should have a curriculum that looks like a college curriculum with tests and deadlines and such - not just general descriptions of stuff you will learn.
- They should be willing and eager to provide contact info for graduates who can talk to you about the program.
- They also shouldn't let everyone in; there should be some amount of screening.
I can't speak to that specific program but the things above are the types of things I would look for.
I think it's like learning to code -- everything is out there if you feel you can set your own agenda and go through it. The courses are very useful if you don't want to do that.
The best way to learn UI design is to try and reconstruct websites and apps you see in Figma. Then, try and reconstruct a service but for a different 'vibe'/user. What would Airbnb's design feel like if it were for executives? Or for young families?
It's free, it's fun, and it gets you some portfolio projects.
UX design is the step back. There's the 'micro' -- also called interaction design -- which is concerned with specific goals. How does the user sign up? Find a thing? Book a thing? Good practice here is to take a bunch of flows, use them, figure out what's annoying about them, and try to redesign them.
The macro -- also called service design -- is more about what users care about and all the other interactions that are required to make the 'find a thing' flow work. How much information should you give them? How many options? What people / data are needed to find the information to present to the user? etc. This I think you can learn from trying to create your own products.
I did a degree in user centred design and dropped out halfway through because my internship was more useful (and much better paid!) and got a job fine after that. I've not done the courses, but I expect they're all fairly decent, and probably all have a pathway into jobs.
But you can definitely learn it by yourself
Seconding this as somebody starting from no career/higher education.
I can't stand Twitter any more, and it's the place where I get info about new developments in AI -- new tweaks that improve performance, new applications for AI, and occasionally a new idea about Alignment, FoomDoom and related matters. Where else can I go to stay about as updated as a non-tech person (I'm a psychologist) can be? I can't read all the new papers -- I need summaries in ordinary language.
And by the way, I'm leaving Twitter because AI Twitter is going the way of Medical Twitter, which has been a cesspool as long as I've been following it, with pro- and anti-vax, mask etc. people hating each other's guts. Now I'm seeing the same dynamic starting in the AI discussions, and it seems to me that what nudged the exchanges into hate-fest land was Yann LeCun, who hasn't the faintest idea how to debate and support his ideas, but instead moves instantly into implying or outright saying that those worried about ASI are fools, crackpots, etc. Here's one of his tweets:
- Engineer: I invented this new thing. I call it a ballpen 🖊️
- TwitterSphere: OMG, people could write horrible things with it, like misinformation, propaganda, hate speech. Ban it now!
- Writing Doomers: imagine if everyone can get a ballpen. This could destroy society. There should be a law against using ballpen to write hate speech. regulate ballpens now!
- Pencil industry mogul: yeah, ballpens are very dangerous. Unlike pencil writing which is erasable, ballpen writing stays forever. Government should require a license for pen manufacturers.
So then I got mad and posted this: https://i.imgur.com/Q5DB7VP.png
AITA?
I've been enjoying twitter for many years, and I think a key to that enjoyment is to only read tweets from people that I follow, liberally muting users and keywords, and turning off retweets from selected people I follow.
Perhaps you can curate your feed a bit more?
Naw, won't work. I don't read Twitter for fun, I read it for up-to-date info about topics of a lot of interest to me. AI is currently the main one. I'm following various bigwig spokespeople for organizations and points of view, plus a number of people who just post articles about new developments. If I want to know what the leaders are thinking and planning, I have to follow them. However, people have given me some good ideas here for keeping up without visiting the birdshit site.
I subscribe to this AI newsletter: https://www.bensbites.co -- not sure if it's exactly what you are looking for, but it feels like all the AI stuff from twitter just put in an email. Mostly focused on AI products and big headlines.
>Where else can I go to stay about as updated as a non-tech person (I'm a psychologist) can be?
HackerNews and Lobsters are the places you can go. They have the disadvantages of (1) not being exclusively about AI (2) having a majority demographic of programmers, so posts are more often than not still technical (3) [recently on HN] being so circle-jerkily against LLMs that even I, normally an LLM skeptic, got sick of it.
But they are still good choices.
Some youtube channels I unearthed out of my subscriptions
(1) https://www.youtube.com/@ai-explained-
(2) https://www.youtube.com/@TwoMinutePapers
(3) https://www.youtube.com/@YannicKilcher
>Yann LeCun
I quite honestly don't get it. He is just one guy, after all; surely there are only so many people on AI twitter that Yann LeCun can insult in one day?
And I don't get why you take it so personally. Twitter is where respected and respectful people go to be dumb asses, Yann LeCun is lashing out, probably not even meaning insult to the people he's lashing out at, just because.
Anyway, I think you're identifying too much with your opinions/predictions about AI. Take a step back and reevaluate whether you have made it too big a part of your identity, eh? Keep your identity small: http://www.paulgraham.com/identity.html
>AITA
No you're not, you just let one person's jerkass behaviour over the internets get to you. Easy mistake to make, done it countless times myself.
> Pencil industry mogul
I wonder what the AI industry equivalent of that is.
Why does superficially following the bleeding edge excite you more than understanding the basics for real? Over the weekend I revisited Rumelhart and Hinton 1986, and it is still powerful.
I'm a psychologist. Bought a fat book on machine learning and will be working my way through it this summer. Will probably also take an online course, then read something about kinds of machine learning models, tweaks, etc. Following the bleeding edge doesn't excite me. I have read enough here and on Zvi's blog to take the risk seriously. Who *doesn't* want to follow the news on something involving serious risk to them and the people they know, the life they know? I'm not a fucking ambulance chaser, you get that?
Specifically about the ballpen, and the printing press in general: I love how some people ignore the fact that the printing press completely ended the Catholic Church's domination and brought about humanist civilisation all over the world.
If you were a pope in 1440 it would in fact be a correct move to worry about that new fangled technology.
I agree with you. The technology is here; it will be of enormous benefit, and it will cause a lot of pain. The biggest threat I see from AI is the way it is greatly expanding our capacity to delude ourselves; truth and fiction are at the heart of this whole AI conversation, imo. Is someone lying to me, or is someone telling me the truth?

Most of us can discern truth from fiction because we have a model of the real world, and words are tested against that model, or at least against other words that have already been tested against that model. (It's probably possible to have a fairly complicated conversation about water, among chemists perhaps, and never use the word "water," but the whole discussion is predicated on a concrete shared understanding of water; we all know it when we see it. An understanding of water on that level could be considered a mutual conspiracy.) An AI, learning everything about us through our language, is never going to have that model of the world. Words will always be "understood" (another hopeless word to use when you're talking about an artificial intelligence) in terms of other words. That's an amazing skill, and you can get a lot done with it, but truth and fiction are off the table. In the pure realm of language, those words are meaningless.

The recent case of the lawyer who submitted a brief created by an artificial intelligence is a pretty interesting example. The brief was well written, perfectly intelligible, and filled with citations to cases that did not exist. The lawyer claimed that he specifically asked the AI if the cases were real, because he had checked one out and it wasn't. Apparently the AI told him that, yes, that one case wasn't real, but all the other ones were. You could say that the machine lied to him, but is that really the best way of describing it? It presumes something that I don't think an artificial intelligence has, or could possibly ever have: a meaningful sense of true or false.
I get a lot of my AI updates and creative prompt ideas from LinkedIn. There's a ton going on there if you follow the right people. And then follow the people they follow.
My dumb theory of why Yann is acting so rude is that he's still salty over Russell mocking him in debates a few years ago. Maybe mocking is too strong a word, but I distinctly recall Russell being pretty aggressive in "Human Compatible" and a couple of debates. That unpleasant experience probably set the tone for discussions regarding alignment for Yann. I think I'm only half joking.
But look, don't you think it's kind of a bad sign if somebody who's angry about how one person treated them in debates a few years ago is rude as hell to a different group of people who are disagreeing with him in a civil way in May 2023? I am still angry about Yann calling me -- or at least a class I am a member of, ASI worriers -- a crackpot, and joking that I'd be scared of ball point pens if they were a new invention. Since I'm still angry about Yann's mockery, does that mean I get to make fun of you? Hey, Algon33, ya crazy fool, seen any caterpillars lately? Because I know you worry about butterflies. I mean, if tens of thousands of butterflies landed on your face they would smother you. And if like a million of them landed on your body they would crush you flat. Haw haw haw. Plus I fucking won the Turing Award and you, Algon33, fucking didn't.
You know, these tech companies are going to be halfway running the country in 10 years, in my opinion. I'm really dismayed by the terrible communication skills of some of them.
I don't think I disagree with you. Yann is basically acting like a troll, which makes the discourse worse. Still, if your lived experience with people is that they are rude to you, which he's mentioned ASI worriers were, then you'll probably dismiss their beliefs and be rude in kind. I have some sympathy for him if that is the case, but would rather he didn't do that. Though I do think his attitude is changing, and would predict that it will continue to change re:alignment, though maybe not e.g. MIRI. This is based off the shift in respectability of AI alignment (e.g. Hinton and Bengio taking the topic seriously), and changes in what is socially acceptable for people in your ingroup to think are one of the best predictors of people changing a dogmatic belief.
Read Zvi's AI roundups, he does a great job of filtering out the valuable / interesting stuff, with a huge amount of info and context in every one. I really wonder how many hours he must put into each one, they're a gold mine of info.
Seconding this. I read each weekly Roundup as it comes out, and I have never encountered AI information elsewhere that wasn't already redundant given those.
IMHO, YTA, unless his tweet was responding to an actual concern somebody raised about AI that had nothing to do with hate speech. Even then, your response didn't exactly elevate the discussion.
LeCun had earlier posted an "of course ASI isn't dangerous, we'll align it" tweet, and lots of people had posted their reasons for disagreeing. The posts disagreeing weren't exactly sweet, but they were civil. No one was accusing him of being a fool, crazy, evil, etc. And of course some people posted in support of LeCun's view. LeCun responded by saying that people worried about ASI are crackpots. So yes his post was responding to multiple people's concerns that had nothing to do with hate speech. In fact, depending on what you define as hate speech, LeCun was the one who was doing it. There was I think some elaboration on his part in that Tweet about "one person who has made a career out of AI alarmism" and some stuff about that person, obviously Yudkowsky, looking like a homeless crazy in a subway station. (I'm not absolutely sure about that last bit. Definitely somebody compared Yudkowsky to a subway crazy, but I'm not sure it was LeCun. May have been someone writing in support of him.) Anyhow, LeCun's later tweet about ball point pens was an elaboration of his idea that people concerned about AI are fools -- they're like people who would panic about ball point pens because someone could use pens to write hate speech, misinformation or whatever.
I wouldn't say posting that image is my proudest moment. On the other hand, I am quite worried about ASI, though not at all sure we're doomed, so I am in the class of people LeCun is calling crackpots, the class he's saying are such silly geese that they would panic about ballpoint pens. So's Scott, last I heard. So's Zvi Mowshowitz. None of us are crackpots, silly geese or idiots.
AFAIK AI "worriers" -- virtually without exception -- advocate restrictions on general-purpose computing that could not, even in principle, be meaningfully enforced without a stereotypical global Orwellian police state. Hence the reaction of people who would not care to live in such a nightmare under any pretext whatsoever.
Your knowledge does not extend far enough, but I expect you will reject any attempt to expand it by saying that the counterexamples provided are part of your "only *virtually* without exception" hedge. I agree that EY is a poor evangelist for AI risk as a serious concern, but you go too far by categorizing everyone who shares that concern as an exaggerated caricature of EY.
Can you link to an example of a "better evangelist than EY"? (e.g. one who proposes solutions to the concern that don't logically entail "state violence against people who want to run certain computer programs")
I don't advocate a goddam thing. I am not in the field and am not able to formulate ideas at that level. I am worried. Wake up and get over the idea that people who are concerned about things that you aren't are evil assholes. That's exactly the mistake LeCun is making, and it is a sign of deficits in social perception. It's a way of being dumb in a certain area that is important to making your life work and to treating others in a sane and decent way.
Please recall that Mr. Yudkowsky openly advocated state-sanctioned killing of people (and, hypothetically, the subjugation of entire nations) who would dare to resist a global computation police regime consisting of him and other "AI risk concerned".
Even though, interestingly, he undoubtedly did not have to:
"I don't have to write that I want millions of chickens to be brutally murdered. I can buy all the dead chickens I want at the grocery. When advocating a policy that requires violence for its implementation, it is rarely necessary to advocate the violence. Once people agree to implement the policy, the necessary violence will be forthcoming." (D. Mocsny, on ye olde Usenet)
... but at the same time, it seems that he simply could not conceal the "primate glee" with which he looked forward to dominating his opponents using the American war machine.
For the record, I personally have no intention of ever obeying any "international restrictions" whatsoever (regardless of what kind of engineered "consensus" they are imposed through) on AI research and its ancillary fields. And intend to actively seek out opportunities to disobey such restrictions, and aid any like-minded others I might encounter. (Not altogether different from the intentions certain doctors in USA have declared re: abortion.) And so IMHO I am justified in viewing "AI worriers" -- whether radical or "moderate" -- as potential cheerleaders at my execution (or even the executioners themselves.) (And so, yes, if you like, as "evil assholes", unless they describe how they intend to conclusively settle their worries without imposing a global police state.)
If the universe can be destroyed by a computation, it eventually will be. In the meantime, however, people still have the choice of whether to construct a totalitarian "AI safety" hell -- or to remain able to conduct research, build and purchase computers, perform arbitrary computations, etc. without permission from Yudkowskian commissars.
Good grief, calm down. I do not give a fat fuck what your opinion of Yudkowsky is, and what you personally intend to do under various circumstances, and what you think you are justified in insisting on, etc etc. You are talking to an internet stranger, not being interviewed on CNN.
Ban ballpens, and hang Irishmen if they are caught speaking Gaelic...
You are both right, though. That's the bitch of it. Imo the salient thing about all forms of AI is that we are going to have to get used to it (it ain't going away), and it has to make us smarter. There will be casualties.
Meta framework: control and dissemination of language has been a very powerful tool in the evolution of human cultures: the imposition of language by conqueror on the conquered; the merging of languages; the banishment of languages; the profound evolution of language from something purely aural (heard, felt) to something almost purely a visual metaphor.
The blasphemy of referring to G_d in writing...
AI ASI AGI is an iteration of this paradigm to me. It’s not unprecedented but it’s unique as well.
I don’t understand how AI is imposing another language on anyone. Or how its descendants AGI and ASI would be.
Well think of it this way. People are experimenting with AI trying to find the right prompts to elicit good responses and parsing out its replies. This seems very analogous to negotiating a language barrier.
Imposition of language is only one form of what I’m pointing to.
And this is a good example of what we are going to have to deal with.
https://www.rollingstone.com/culture/culture-features/true-crime-tiktok-ai-deepfake-victims-children-1234743895/
The biggest downside is we drown ourselves in deception.
I once found a newsletter of interesting off-the-beaten-path activities/events in NYC, and I believe I found it linked from one of these threads but can no longer find it. Anything sound familiar?
maybe Nonsense NYC? http://www.nonsensenyc.com/
Exactly this, thank you so much!
Suppose you want to hire people who are good fits for their jobs (we will optimistically assume you understand the jobs well enough), and failing that, you at least want to avoid hiring awful people.
How would you do that? Let's start with the idea that asking applicants about their greatest fault is a stupid question.
I would ask the candidate to help solve an actual business problem that I wanted to have solved and see how they managed and if I felt like I'd want to continue working with them. This would be after screening out the obvious duds through the usual standardized tests.
I'd cast a vote for "be very intentional about your culture, especially in recruiting." For most roles, there are an abundance of people who are competent, or close-enough to be trainable, so put more weight on culture in your decisionmaking.
And by "culture" I don't mean "does James fit in here?" - I mean being intentional about specific traits/values you want your staff to share, and then searching for and maintaining those traits/values within the office.
For example, assume that you want employees who are entrepreneurial, and you are interviewing candidates for a role. Don't ask them their "greatest weakness," or a bunch of other questions you pull from the internet by googling "good interview questions" - ask them about times they came up with a new solution to an old problem, when they tested a new idea that didn't work the way they thought it would and what they learned, etc, etc. Hire people whose answers you like.
But you need to do more than recruit and forget - you also need to maintain culture in the workplace. That means rewarding the lady who comes up with the new approach that works, but it *also* means *not* penalizing the other 5 employees who tried new things that failed, constantly preaching that you want people to try out new ideas and that the only true sin is to be foolhardy and test large when you could have tested small, and so on.
Do that consistently, and you'll have an entrepreneurial group of employees.
Do that consistently and broaden your target values to (a) ethical behavior, (b) entrepreneurialism, and (c) collaborative behavior, and you'll have a place I'd prefer to work.
I have actually kind of wondered about something that may tie into this: every job posting gets several to hundreds of applicants, yet unemployment is very low. If 10 people applied to every job (and each person applied to only one), then the economy should have 10 times as many unemployed people as job openings.
My conclusion is that the people an employer is rejecting aren't necessarily good candidates, but perhaps not a good fit for the particular opening. I'm told that, unless someone is lying on their resume, an employer can basically tell whether someone can do the job by reading their resume.
How to tell whether someone will be a fit? That will depend on what the EMPLOYER is looking for. Perhaps some lines of questioning could be: what was the favorite meal you ate in the past week? What's the next vacation you're planning? How would you navigate from one place to another? And the most important part of any of these questions would be the WHY.
I used to ask people to tell me about a problem they found challenging or at least interesting. Not because I am interested in their problems, but because I want to see how they talk about it. You can often tell the good people from how enthusiastically they talk about things they find interesting.
Other than that, I also think about what skills are needed for the job, and then try to score candidates objectively in each skill. The interview questions are then about giving candidates the chance to demonstrate those skills.
I like this a lot. I was thinking about using open-ended questions.
Is it also an indicator if people don't mention help from co-workers?
State of the art for this in tech startup hiring is a back-channel inquiry to someone who worked with the applicant at a previous employer. Best test of someone being a good fit for a job is, surprise, success at a previous similar job. If you don't have a back channel, ask applicants for detailed walk-throughs of past projects / challenges / etc.; those questions are surprisingly tricky to fake answers to and will give you a decent handle on strengths and weaknesses.
Needless to say this can't be a complete answer or no one new would ever get a job. But it's also true that any method that tries to get signal on 2000 hr/year of work with just a few hours of interaction is going to be very lossy. Be prepared to fire people.
Easy, hire me!
I think asking people about their greatest fault or whatever is basically just used as a way to get the candidate talking. You could say it’s testing verbal intelligence; the actual answer is irrelevant. Of course, there are better and more natural ways a good interviewer might accomplish this goal.
The main value of the "greatest weakness" question is that it's the single most famous interview question. It's the infamous hardball question that you know literally everyone knew about long enough ahead of time to prepare an adequate answer. Not having a good answer to it indicates that you can't be bothered to do the bare minimum of interview prep.
A large body of research says the two strongest correlates of job performance are IQ and work sample tests.
In general, yes, but applicant pools can sometimes face a Simpson's paradox - if someone is very smart and currently unemployed, they probably have other issues.
On the other hand, the standard interview technique of "talk to them in person" does a decent job of screening for "punctual" and "not a raving lunatic", so while I wouldn't rely on testing in isolation, it should work well in addition to talking to the candidate.
The former is mostly illegal to use in the US thanks to Griggs v. Duke Power. Hanania is right - civil rights law is the problem.
In tech, a huge premium is put on educational background. No need for an IQ test for someone who was admitted to and then graduated from MIT, Stanford, Harvard etc.; or for some roles, a PhD at a prestigious research Uni.
It seems like there's a pretty strong tension between "IQ tests are a powerful metric of candidate quality, it's just that they're illegal to use" and "nobody cares about your SAT scores after college, and increasingly not even then". I'm fairly willing to believe that HR departments are leaving billion dollar bills on the table, but even without going that far it's not clear that the barriers are legal.
Hah coincidentally I just shared a blog post on interviewing in today's open thread - https://link.medium.com/vBvMxrCHcAb
I don't think it's a bad question to ask for the greatest fault - depends what you're after. It's an okay question to gauge self awareness.
But try this one - if you were to ask your bosses, peers, and direct reports to rate you, which group would rate you highest and why?
It’s a peculiar question in a job interview because no one is going to answer sincerely, and the interviewer should know that, so it’s like an invitation to be glib and insincere.
Sometimes the applicants will be honest :) . I had a job interview once where I was honest and was like "I don't get along well with my bosses if they are dumb" and provided some examples. And they really wanted me, but didn't hire me due to that answer. But then the first hire fell apart after a week or two, so they brought me in.
Ended up being a great decision for them until I left 4 years later.
A good interviewer will keep prodding, and if you're insincere you'll probably stumble.
But of course one is insincere. The question isn't really sincere anyway; a job interview isn't a confessional or a therapy session or a police interrogation. They don't expect an answer of the type one would give to a therapist (for example).
The point of the question is to see whether the candidate is self-aware, which is an important leadership quality. As is, in my view, honesty!
Well, I understand that, but no one self aware is actually going to share his or her worst faults with a job interviewer. In many cases it would be irrelevant anyway, and in other cases it would be self incriminating (like no one is going to say, “well I am an alcoholic” or something like that).
But does that mean they’re a bad candidate?
When you’re looking for a job you get a bunch of weird questions thrown at you and you kinda have to guess at what the questioner wants to hear. Throwing back the veil and proving that the candidate is being performative in their interview is not actually a revelation.
I disagree with your premise. Telling the interviewer what they want to hear, and not the truth, will result in getting a job for which you may not be a good fit, and which you will not enjoy.
With respect to complex words, do you actively know the specific definition? For very unusual words or complex words, I don’t typically know the exact definition. The definition I generate is “this word is basically when you do something bad or vengeful” or “This words basically means to be hungry.” And so on and so forth. I reduce many words down to much simpler versions of the actual definition.
I’ve heard that the English language has more synonyms than other languages, and therefore has more superfluous words. Take the word superfluous for example. I basically view that as meaning “unnecessary duplicate.”
Do other people feel this way?
Yes. People learn languages by hearing words in context, so it makes sense that people will generally have vaguer and more uncertain ideas of the meanings of less frequently used words.
I don't explicitly know the definition of many words I know, but I do passively know the definition. I do distinguish among unnecessary, excess, redundant, extraneous, gratuitous, extra, and inessential, between peckish and famished, and between retaliation and vengeance.
English has *far* more words than most languages. Sometimes this lets you express things better, but it can easily be used to express things worse, even when a particular word is even apt (because it's distracting or obscure). I try to use more plain, Germanic words when I'm thinking about it, but like a lot of folks here, I am from that segment of English-speakers for whom vocabulary is a dick-measuring contest, so I definitely find myself saying "mercurial" or "oblique" more often than I care to admit.
Yes. For example, there is no need for the word "utilize" - "use" (used as a verb) does fine. And "use" (as a noun) could stand in for "utilization".
I don't think this is quite right, although the issue has become confused because "utilize" is used incorrectly more often than it is used correctly (in my experience). To quote some dictionaries:
Merriam-Webster says: 'utilize' may suggest the discovery of a new, profitable, or practical use for something.
Oxford English Dictionary says: To make or render useful; to convert to use, turn to account.
To steal a random website's example: "while you use a fork to eat your food, you utilize it to poke holes in the plastic film on your microwavable meal."
Thank you, I've learned something! The primary user of "utilize" at my work was a supervisor who was a consistent early adopter of new terms, along with being quite wordy. I presumed he was saying "utilize" because it was a more complex and fancier-sounding word. In fact, he may have been justified in its use at least some of the time.
I don't know an exact _verbal_ definition (which in most cases doesn't exist anyway) but I usually have a strong implicit sense of "how the word works". Take "superfluous" and its near-synonym "extraneous". If I'm describing how to, say, make coffee and I go into unnecessary detail about how to boil the water, that's superfluous information-- I'm adding something already implicit in the phrase "boil water". But if I go off on a tangent about how I learned to make coffee, that's extraneous information-- it's new, but unhelpful. In both cases the adjective refers to an unnecessary addition but there are further shades of meaning there that I'm intuitively aware of even if I can't immediately put them into words.
English in particular almost has two complete lexicons: one Germanic from pre-conquest England, and one Latinate imported by the Normans. This can be clearly seen in "legal doublets": phrases like "terms and conditions" or "will and testament." Legal language needed to be understood by both the English-speaking commoners and the French-speaking nobility. Eventually, the French-derived words were imported into standard English, giving rise to the plethora of synonyms.
Genuine question: Are you a native English speaker?
I would agree with Aris that there's no such thing as a truly superfluous word. Only words that are obscure enough to prevent you from communicating properly. I'll use an example from your comment:
>The definition I generate is “this word is basically when you do something bad or vengeful”
But this is the point! "Bad" and "Vengeful" are completely different concepts! They may have some overlap, but they're not synonyms at all. I would be curious to know what word you define that way, if that's a real example. Hell, you call "superfluous" a synonym for "Unnecessary duplicate", but to me the word "superfluous" has always carried the specific meaning of being unnecessary by being more than is wanted/needed.
Honestly, the number of "synonyms", as you say, is one of my favorite things about English. Sure, I could call someone "foolish in a smug and self-satisfied way", but why would I go to all that trouble when I could just call them "fatuous"?
The native/non-native angle is an interesting one: when I first learned English, I learned the translation of every single word I used. (Which brings its own set of problems: the same way that there are no true synonyms, there are no perfect translations of single words.) Nowadays, my main way of learning new words is seeing them used in context often. And only when I find myself wanting to use an unfamiliar word in a sentence do I go check its exact definition.
It depends on who you are communicating to. Fatuous would be a good word to replace the rest of that description, but I’m a native English speaker with a degree in English and I’ve never even seen that word before.
I'm genuinely surprised by that. Fatuous is a useful and relatively common word, in my world at least.
A bit hyperbolic; I'm sure I've seen it before, but I couldn't give even a roughly directional definition of it if asked. Surely it has appeared in at least one book I've read in my life.
I think most people have a fair-sized list of words that they kind of, sort of, know the meaning of, but don't know the precise dictionary definition. I know I still have to highlight and right click for the dictionary definition from time to time. I've probably inferred the meaning from context but, geez, a dictionary definition?
Regarding all those synonyms, one pair that comes to mind with a small but significant difference in meaning is irony/sarcasm.
A thesaurus would call them synonyms. Both can be decoded as saying something that means the exact opposite of your words, but sarcasm is usually used when the speaker is trying to throw a bit of shade, so there is a smidge of difference that might make me choose one over the other depending on context.
Irony: "Boy, I could really go for some dessert," when you've just finished an enormous meal that would leave no room for dessert.
Sarcasm: "Nice shoes, Bob" when Bob is wearing shoes that are not so nice.
I'm sorry I have to be the one to say it but
1. that isn't irony, and
2. those situations are essentially identical
Verbal irony: Verbal irony is when a person says something that is different from what they really mean or how they really feel.
If the intent of the irony is to mock, it is known as sarcasm.
Irony does have other meanings, but one of them is similar to sarcasm, differing only in its intent.
It is a kind of verbal irony (pretty weak, granted). The difference between the situations is that the intent in the second one is to mock Bob.
Kind of a trick question, because if enough people don't know the nuance of a word, the nuance is lost and the definition warps into the common usage. We have a thousand synonyms because kids constantly use words wrong and kill their individuality.
We don't expect kids to know precise nuances; it's *adults* not knowing language that results in it losing all nuance, IMO
I definitely have this, as well as words I only ever saw written down in books, and so only learned the correct pronunciations later.
I look up definitions more often now because my kids ask me exact meanings and I can't always give more than general context.
Not usually, although last night I did feel the need to look up definitions for "insouciance", "gaucherie", and "abscissa".
I think humans work like LLMs, learning the meaning of words through context. Technical fields might form a consensus on adopting specific definitions for technical terms, but that's an aberration. Generally, humans learn through context - we hear a new word, and now we know not only the sort of thing it might mean, but we also get a sense of the type of person who uses that word, and the context they use it in. Sometimes words have subtle shades of meaning that are opaque to the uninitiated, and sometimes those subtle shades of meaning get lost over time, or the word takes on a new life among other people and gains a new context and a new parallel meaning. Lots of cool stuff like that happens. :-) And among the many wonderful features of the OED is that it provides examples of usage, which can make it easier to trace the shifts of meaning over the centuries.
The OED paper is so thin and fragile though. But I suppose it has to be to keep the form factor down to 5 or 6 lineal feet of shelving.
I grew up with a compact version in only 2 volumes, which squeezed something like 9 pages of the original onto a single page. It also helpfully came with a magnifying glass. :-)
Maybe other people feel that way. I don't.
I learn the meaning of words from context. I rarely use dictionaries, except for Urban Dictionary.
My feeling is that synonyms have different flavors. One might be a little more dignified than the other, and they sound different, so they fit into sentences differently.
I feel the same way. To be fair though, I don’t really know the “exact definitions” of simple words either.
I recall a teacher telling me once that there are no two words with the exact same meaning, register, and connotations. So there are no superfluous words!
Also, shameless plug, but if you like words, try this game I built - www.scholargrams.com - where you earn points for using letters (updated daily) to form words. The rarer the words, the more points you get.
I dunno, I'm pretty convinced 'admixture' is just 'mixture' with extra steps.
Wrt the word superfluous: I now know that it does not necessarily have to mean an exact duplicate, but in many real-world cases an exact duplicate would be superfluous.
I get "Page not found" when clicking on the replies listed under the bell icon (top right). The email links seem to work. Anyone else have this issue?
it's been broken for me forever. I find deleting the last string from the url produces a link that actually works
oh cool, it works, thanks!
Glad to have been of help, but it's a very hacky workaround and Substack still needs to fix this.
Has been happening constantly. Refreshing the Page Not Found page has fixed it every time.
Tried again, and instead of Page not found I see no change at all, as if nothing was clicked. Surprised that no one else mentions it.
I'm gradually going through old SSC posts, trying to figure out when I started reading every post (pretty sure it's 2014, but it's after 25th February!). Today I came across this gem that I hadn't read before:
https://slatestarcodex.com/2014/02/25/fix-science-in-half-an-hour/
Scott probably has enough money/connections to make this happen now, right? When are we going to see it??
If you want to make the Replication Lab! show yourself, go submit to the 2023 ACX Grants Round at https://manifund.org/rounds/acx-mini-grants . It’s open until September 1 or so, which I think should be plenty of time.
EDIT: clarified wording
To clarify: the ACX forecasting minigrants round is currently underway, and September is roughly when the evaluations will happen. You can definitely continue to submit proposals and try to raise funding, but new proposals won't be eligible for Scott's retro payout and thus may not be otherwise exciting to investors/attract much funding.
+1
Happy Memorial Day to those who celebrate.
Does anyone happen to know of a good summary of the meta around Kegan's orders of the mind? The theory passes my gut check, but only partially. I'm curious about its standing in academia and critiques/further work, but I haven't found much in my quick searches.
Recently, I've been attempting to get ChatGPT to translate story chapters (~800 words at a time) from Japanese into English, but it always stops translating halfway through and hallucinates a continuation to the story instead due to the prompt falling out of the context window.
The interesting part though is that the first time this happened, it just happened to be in the middle of a scene where the love interest is mortally wounded and GPT decided to continue it with a tearful death scene. However, in the actual story, the protagonist manifests hitherto unknown magic powers and saves him instead.
I thought it was interesting because Scott previously wrote that LLMs are curiously resistant to completing negative outcomes in stories. Give them a prompt and they'll continue a story in a way where everyone improbably lives, no matter the situation. So it's odd to see the *opposite* case happen here.
>but it always stops translating halfway through and hallucinates a continuation to the story instead due to the prompt falling out of the context window.
I had the same experience translating "El mètode Grönholm" (good play & good 2005 movie, I recommend) from Catalan to French (there don't seem to be any good automatic translators for Catalan out there; the output of Google Translate was judged to be "half-Spanish garbage jajaja" by my Catalan proofreader).
It works well for a few prompts, then either starts translating into English or Spanish, hallucinates a new story, or repeats the translation of the last prompt. The resistance didn't seem related to the content; it just "forgot" its instructions every few answers.
P.S.: but for Japanese, you should be able to use DeepL instead. No hallucination, and a lot less prompting, since the desktop app can handle moderately long text files.
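If you'd rather stay with ChatGPT, the failure mode suggests its own workaround: since the instruction falls out of the window, re-send it with every chunk instead of relying on the chat history. A minimal sketch, assuming the official openai Python client; the model name, chunk size, and the translate() helper are placeholders of my own, not anything from this thread:

    # Translate long text chunk by chunk, re-sending the instruction with
    # every call so it can never fall out of the context window mid-story.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    INSTRUCTION = ("Translate the user's Japanese text into natural English. "
                   "Translate only; never continue or invent the story.")

    def chunk_paragraphs(text: str, max_chars: int = 2000) -> list[str]:
        # Greedy split on line boundaries (Japanese has no spaces for a
        # word-based splitter); a single very long line can still overflow.
        chunks, current = [], ""
        for line in text.splitlines(keepends=True):
            if current and len(current) + len(line) > max_chars:
                chunks.append(current)
                current = ""
            current += line
        if current:
            chunks.append(current)
        return chunks

    def translate(text: str) -> str:
        out = []
        for chunk in chunk_paragraphs(text):
            # Each call is stateless: the instruction rides along every time.
            resp = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "system", "content": INSTRUCTION},
                          {"role": "user", "content": chunk}],
            )
            out.append(resp.choices[0].message.content)
        return "".join(out)

The trade-off is that each stateless call also loses cross-chunk context (character names, who is speaking), which you can partly patch by appending a short running glossary to the instruction; and nothing here helps if a single scene is longer than the window.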
Wasn't the original "force good outcomes" post specifically about describing violence? If the improvisation starts with a character already mortally wounded, letting them die might not trigger the same safeguards that would prevent the LLM from describing the character gaining a mortal wound. It's also possible the whole translation context changed the outcome, try it with a story that implies violence is about to happen and see what it does.
The refusal to complete negative outcomes is a result of the RLHF post training stuff, though from a later comment you weren't using an 'uncensored' model so it still should have applied...
First, make sure you're trying this with GPT4. I found that ChatGPT lost the thread, sometimes as soon as half a sentence in, and then started generating plausible cruft that had nothing to do with the starter text I'd asked it to operate on.
Specifically, I had reverse-ordered text, like "txet deredro-esrever dah I". ChatGPT would trip up quickly, while GPT4 reversed several paragraphs faithfully, w/o hallucination.
This was all with GPT4.
Then it seems you're bumping up against the competency of the state-of-the-art LLM.
There may be additive prompt crafting that helps you here, but by using GPT4, you're firing pretty much the best artillery available
I've often observed GPT hallucinating unknown magic powers (not intended in the story) to make a good ending, and at about the same frequency making bad endings. Oh, then again, maybe the bad endings I observed were with a previous GPT version, which didn't have the ChatGPT PC stuff.
It might be that it viewed a deus ex machina as negative because it'd be considered worse writing by a human, and therefore elicit a more negative response from a human than a touching tearful death scene? Hard to say, of course, but something vaguely like that would be my guess.
I've never done much charity work before and am currently participating in a charity bike ride (disclaimer: I do not think this is by any means the most efficient way to raise money, it's just a freebie since biking is a nice outdoor activity anyway).
Something that took me very much by surprise is how it works:
1. The riders need to each individually run a mini-fundraising campaign.
2. If a rider doesn't raise enough money they aren't allowed to participate in the ride.
3. The minimum amount they need to raise is *a lot*, $2000-$4000 in the case of the ride I'm doing.
I know multiple people who aren't doing the ride (and therefore not fundraising) at all because they don't think they'll be able to raise enough to meet the minimum fundraising bar to participate. This seems like a net negative for the charity in question. Can anyone more well-versed in this area explain the logic here?
(If you're curious the ride in question is the Princess Margaret Ride to Conquer Cancer, and my donation page is here: https://supportthepmcf.ca/ui/Ride23/p/JonSimonConqueringCancer)
Minimum prices can increase revenue in auctions. So if there's a set number of slots, setting a minimum (as opposed to just giving the slots to the 100 highest bidders) can be advantageous. See the "optimal auctions" section of https://en.m.wikipedia.org/wiki/Auction_theory
I don’t think that’s the actual reason but it’s a fun relevant fact.
Best guess, the relevant authorities limit the number of participants.
EDIT: Even if that is wrong, there are ways this strategy pays off. Say the minimum is 2,000, and say on average anyone who can make that target gets to 1,000 easily but has to work for the second 1,000 and wouldn't bother if they didn't have to. For 100 entrants, that's an extra 100,000 you have made by setting the bar at 2,000 rather than 1,000, which pays for a lot of the low-yield punters you have discouraged from entering. Also, fewer entrants are easier and cheaper to administer than many, even if there is no externally imposed limit.
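To make that arithmetic explicit (a toy sketch; the figures are just the illustrative ones above, not data about any real ride):

    # Toy model: every qualifying rider raises an "easy" $1,000 and only
    # grinds out the rest when the minimum forces them to.
    riders = 100
    easy = 1_000

    def total_raised(minimum: int) -> int:
        # Each rider ends up at max(easy, minimum): the bar only binds
        # when it sits above the amount they'd raise anyway.
        return riders * max(easy, minimum)

    print(total_raised(2_000) - total_raised(1_000))  # -> 100000 extra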
Agreed on both counts. For the latter point, though, the thing I don't understand is just how ridiculously high the minimum is. I am in a fairly high income bracket and therefore know many other people in a similar bracket whom I can ask to chip in, and even so this is still a lot of money to need to raise. So I imagine that for the vast majority of the population it would be completely out of the question. I find it hard to believe that the minimum being set here isn't doing more harm than good. But of course that might be explained by your first point, in that the city is explicitly trying to limit the number of participants.
There could also be a factor of the city(s) involved trying to recoup costs, since this ride involves shutting down highways all around a major metropolitan area.
Another theory about these is that it makes you feel more identified with the cause in the future (although that's commonly said to be because of doing the physical exercise rather than because of asking people for the money—but it could be both or either).
I got started with cycling by participating in a charity ride like that (with a significantly lower minimum, maybe like $300 or something?) and have continued to ride in organized recreational rides that don't necessarily benefit charities. In these events you pay a fixed entry fee (now usually around $100) and you get extensive support on the course during the event. I think this is also fun and satisfying and good motivation to get in shape. If you find that you like being part of a cycling event, but don't like the charity fundraising part, maybe try these recreational events in the future (often billed as "centuries" or "classics", although there are other disciplines that may have more specific connotations). You can still donate to charity yourself if you want! :-)
I have some questions I want to ask here:
What separates the person who I call “me” at this moment from the person I will call “me” five minutes from now?
(My thoughts)
-The matter which composes my body will not be 100% the same.
-The physical structure of my brain will not be 100% the same.
-My memories will not be 100% the same.
So will I be the same person in five minutes as I am now? It seems reasonable to answer that question “Yes and no. You will be very similar but not exactly the same.”
What about ten years from now, assuming my body is still alive? “Yes and no. You will still be a similar person in many ways, but you will be less of a similar person than you will be only five minutes from now.”
Twenty years, thirty years, forty years from now, if I make it that long, I will increasingly be a different person, composed of increasingly different matter, with an increasingly different brain structure and memories.
If there were no such thing as biological death, as my age approaches infinity, I would cease to be the same person I am now, no?
No, I don’t think I would cease to be the same person entirely. I would change over the centuries, but I would remain human, which should be a limiting factor to change in some way. Not so limiting that I couldn’t become you, at some point, since you are also human, though, no? Not 100% you, but perhaps my life, and my psychological development, at some point, would take me along a route that would be much like one you have been on. Perhaps it would be fair and accurate to say at some point that I am at least 2% you. (Maybe I am even at least 2% you already. Do we not like the same music and laugh at the same jokes?)
If not, I ask: what makes me me? If not the matter that is currently me, then either my existence is immaterial or my matter is fungible.
If my matter is fungible, then why can't I be you? Isn't it at least theoretically possible that the exact atoms in your body could compose my body and for me to still be me? If I ate you, wouldn't that be a start in that direction?
If I suffer amnesia one day and remember nothing of my past would I still be me? Let’s provisionally say yes. I don’t need my memories in order to be me.
How do I know that I don’t experience being you? I don’t remember being you, but we just said that my existence is not contingent upon memories.
So my individual existence is not contingent upon the specific atoms in my body, the structure of my brain or the memories in my mind.
In the future I will be neither 100% current me nor 0% current you. Isn’t it reasonable to say that in the future (and present) me will be a non-zero amount of everyone, given that you aren’t so special?
I want to take this line of reasoning a bit further, but first want to see if others think there are obvious logical flaws in the above.
Robin Hanson tipped me off to this 10 min *philosophical exploration* of personal identity:
https://www.nfb.ca/film/to_be/?fbclid=IwAR3MH_I2tsFRicEMSygJPHSHOn24_YbMe2eN1szqVSJUdZkFy8Nup0fsR7k
It goes deeper than Parfit on the transporter paradoxes. If you're familiar with those already, the second 5 minutes is where it goes into scenarios that I'd not previously encountered.
That used to be on youtube and I've wanted to link to it but didn't know it was available to stream there.
Derek Parfit's essay "Personal Identity" is a classic on this topic.
I remember taking a philosophy course in college that had a large segment on this exact question. It asked things like, let's say someone invents a transport portal device that works based on tearing you down molecule by molecule and building you back up in the new location molecule by molecule. Is that the same person as you? What if they only build you back up without tearing down the original, which is the actual you? It's the question of the ship of theseus as well.
To me, I feel like there must be one of two things going on:
1. There's some core part of our brain that is "us". A portion that is actually pulling the strings and in control of the rest of it and by extension, our bodies, without which we wouldn't be ourselves.
or
2. The whole notion of self is just an illusion. We simply exist on a moment by moment level. Each moment, we feel like we're a continuous being, since we have access to previous memories, and we probably evolved to feel like a continuous being, since that makes us care more about our own preservation. But in actuality, our continuity is entirely fabricated, and we really are just an amalgamation of an infinite number of infinitesimal moments.
Since there's been absolutely no evidence of 1, and people seem to be able to continue to function without most individual portions of their brains, I have to assume that 2 is more accurate.
> The whole notion of self is just an illusion. We simply exist on a moment by moment level.
That sounds to me a bit like "apples are just an illusion, there are simply individual atoms". Yes, technically, there are individual atoms. And together they sometimes make an apple.
What insight do we gain by replacing "is composed of smaller parts" with "is just an illusion"?
Well, it could at least provide an answer to the age-old question of how we feel like a single entity, the same that we were yesterday, the same we were ten years ago, even though every individual part of our body can and does change its components. The answer being: we are not really the same being; it's just an illusion mechanism brought on by the fact that we have access to memories, probably for the purpose of making us care more for our self-preservation. The difference between us and apples is that apples don't have consciousness. Consciousness is like the only thing that we as humans actually know exists. And this consciousness comes along with a sense of identity. But I'm saying that even though consciousness is real, the sense of unified identity may not be.
a) Body and memories gradually change, but their continuity creates the illusion of self.
b) Body and memories gradually change, but their continuity is the self.
What's wrong about saying the latter?
Nothing's wrong about it. I guess it depends on what you're looking at and what answer you're trying to find. I'm personally coming from the assumption that my consciousness in any moment is actually real, despite the fact that it is not measurable in any way and that our body changes all the time. I'm trying to resolve how something can be real and continuous even if the underlying matter is not the same all the time.
So, I'd say that maybe "their continuity is the self" is an entity that exists, if you look at it that way. But maybe the being of self is like a husk or a golem, that is inhabited or assumed by a consciousness at any moment. I'm viewing the consciousness itself as the actual being.
I actually really dislike the "self is an illusion" claim. To be honest, it's the only thing we can be certain of. The self (or mind) is the software running on the brain. The continuity of the self isn't really an illusion: even though we don't remember everything, we remember a lot, and more importantly our personality largely remains the same, barring some breakdown in the brain. And it's these exceptions, the changes in personality caused by brain damage or ageing, that prove the rule of a largely static self: external observers see us as the personality, one that changes slightly if at all over time.
Anyway, what does illusion mean? Who or what is experiencing this illusion of self? It can only be the self to whom the self is an illusion, and that's a recursive absurdity.
What is the difference between thought and snow? Descartes would say “I think therefore I am” but not “I snow therefore I am”. Yet thought and snow both reside in perception. What makes them different? If I dream snow, snow is in my thought. Would it make sense to say “I snow therefore I am” then? Are the part of you that sees the snow in the dream and the part of you that sees snow in real life the same thing?
I do believe in Descartes's "I think therefore I am". I believe our own existence is something that is probably real, and possibly the only thing we know to be real. But I would extend it to be that we know we exist only in the moment. Our consciousnesses is only felt in the moment. Any of our past memories could have been faked and implanted. Therefore, I believe that while our existence is real in any given moment, the continuity of that existence, which I believe to be our self identity, is potentially an illusion.
> I do believe in Descartes's "I think therefore I am".
Unfortunately, it assumes the conclusion: it posits "I" to prove the existence of "I". The fallacy-free version: "this is a thought, therefore thoughts exist".
That's a good way of looking at it. I think I could agree with that more than the original.
Indeed, but then what is the definition of the "self"? Is it some process that produces a stream of semi-coherent thoughts, where new thoughts can contain referents to prior thoughts? What does it mean for a process to have an "identity"?
This all goes into the claim that Nolan doesn't like about the self being an illusion. Calling it an illusion means that we perceive our sense of self to have properties that it does not actually have in reality, and I think that's a technically correct assessment.
The self can be real but fleeting. The illusion is the continuity of it from fleeting moment to moment.
Or perhaps the self exists everywhere in everything, in which case brain changes over time don't matter because the self exists in all matter.
Either of the above cases strike me as logical. That the self is the software running the brain, despite changes to it over time, seems at least slightly flawed.
Not at all. If we had an AI running on hardware, with enough redundancy, we could replace the hardware stack bit by bit.
I'm not ready to grant that the similarities between the brain and computer software or hardware are more than metaphoric.
Yes, 2 seems most likely to be true to me. But people who believe "death is this horrible thing and we must work to end death" don't seem to believe 2. I wonder what those people do believe.
Using the language of the comment, I suppose they want to "continue existing moment to moment, having access to their previous memories". :)
But actually, it is also about the future, not just about the past. The idea is that if I do something useful now, I can meaningfully expect to benefit from that in future (with certain probability; unexpected things also happen). Without this, any action would be meaningless. Like, you wouldn't even type a comment on ACX, because you wouldn't expect to see it appear on the page. Even thinking wouldn't make sense, because you wouldn't expect to finish the thought.
To me the question is what entity experiences the qualia of the future. I don't experience being me in the future. Someone else does. Why do I care about that someone else's experience, if I don't believe that being is me? Same reason I care about my daughter's experience in the future, even though I don't experience being her.
Actually, I do believe I experience being me in the future, but for the same reason I experience being everyone and everything. A sense of continuity has got nothing to do with it.
But in either case I don't believe death is a bad thing, because either:
1) To be lasts but a moment
or
2) You are reborn as everything every moment
> To be lasts but a moment
Well, one direction to walk from here is to conclude that *nothing* really matters, because pain or suffering only lasts a moment anyway, no big deal. And long-term things, such as education, are completely absurd, because the person who takes the exams at school is not the person who will later get a better job, so it's all random. Even the person who cooks the dinner is not the person who will eat it; and even the concept of eating dinner doesn't make sense, it's just disconnected moments of sitting at the table, some of them having a spoon in your mouth, some of them swallowing, some of them feeling full.
But if we assume that the states of the future matter, because the cumulative experiences of thousands of observer-moments are worth something, then...
> You are reborn as everything every moment
On actual death, your memories and learned skills are lost.
>On actual death, your memories and learned skills are lost.
I'm not buying that I am my biological body. I don't see why I am any more confined to this body than to any other body.
Well, it's another instance of the Ship of Theseus paradox.
https://en.wikipedia.org/wiki/Ship_of_Theseus
But it's mostly a matter of definitions of "me". And measuring what it is.
Though from the mathematical point of view, it could be anything from a completely random non-repeating trajectory, to oscillation around a stable point or line, to cycling, or any other path of our personality represented in whatever coordinates we choose. And if we were able to track those coordinates, if we had enough statistics, we might be able to tell what a life path is. Without that, the discussion doesn't seem very tangible.
I mean, this is all very well-worn ground in philosophy/cognitive science.
Yes the "you" of people is more fragmentary/discontinuous/changeable than our common conception of the self typically recognizes.
In some sense you really are a different person today than you were 20 years ago.
'Gender is assigned at birth.'
I find this statement odd, if not downright unscientific. Our gender, for us sapiens, is determined at conception, when a sperm cell bearing either an X or a Y chromosome wins the sperm-cell rally to find and meet the X-chromosome-carrying ovum.
We find this odd term 'assigned' in a scandalous paper from a clinic treating patients with congenital sexual-organ defects. The study data have since been determined to have been fabricated, and the author sexually abused the sole patient ... and his twin brother. Both died, one by opiate overdose, the other by suicide.
So how do we properly state that our gender is determined by the winner of a sperm rally?
I guess we could rephrase "gender is assigned at birth" to something like "preferred gender role is presumed at birth based on biological sex, and with a high degree of confidence since most people will not ultimately reject their gender role to identify as queer, transgender, gender-nonconforming, etc," but it seems like little more than spilled ink.
After all, if all we do is quibble about the proper definitions of "gender," "gender role," and "assigned" while the underlying reality proceeds unimpeded, what's the point of the quibble?
I think the point is to grant power to ideas in the hearts and minds of the populace by means of memetically spreading brain worms that alter the way they look at humans and society over the course of decades, such that they will ultimately find themselves amenable to certain policies that they might have otherwise found disagreeable or silly had they not had decades of exposure to these new forms of thought.
Seems like fighting the tide with a bucket. Trying to convince everybody else that we should restore the prior meaning of "literally," or that we should stop calling things "socialism" unless they involve state ownership of the means of production, etc isn't *totally* futile - language is socially constructed, after all, so it's always available to any of us to try to convince others by our usage and "make fetch a thing" if we want to go for it.
It just seems so tremendously unlikely to succeed that someone seeking a social outcome would in almost all cases be better off using whatever new meanings are commonly understood to argue for their preferred outcome directly, in a way that others will understand. That beats trying to push "critical race theory means XYZ legal theory, so it's definitely not in schools," or "gender means sex rather than gender identity/roles, so gender can't change," or making some other effort to germinate change in an entire language of which one is but a single speaker, in the hopes that it will catch on and move the mountain to favorable terrain from which one can then argue.
I didn't mean to say that I knew anything about how to solve the problem. When I said "the point of it is", I was referring to "that's how this happened in the first place. By petty quibbling over decades". For some reason, and I don't really know why, ownership, in the hearts and minds of the people, of simple language and definitions does seem to have power. I don't want to believe it, but it seems to be true.
Whether quibbling could get us back to where we were, beats me. I'm tired of quibbling in general, until the random odd ACX thread comes around and finds me in the right mood. But I mostly like pointing out the sophistry, the meta-level analysis. I don't have as strong an opinion on the object-level stances.
just gonna drop this here: https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/
I believe that article Scott wrote is one of his few missteps. Just gonna drop this here:
http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/
Chromosomes determine biological sex. Gender is defined as the set of socially constructed stereotypes associated with biological sex. But nothing is set "at birth"; if a doctor fatfingers some data entry, it doesn't actually change either your biology or how people treat you.
Can you give some examples of those socially constructed stereotypes?
A number of them are listed in this essay distinguishing between sex, gender, and gender identity:
https://aeon.co/essays/the-idea-that-gender-is-a-spectrum-is-a-new-gender-prison
Can you be specific and pick some examples? I tried to skim the article, but it's really long, and I didn't see where you were talking about.
"most female people are raised to be passive, submissive, weak and nurturing, while most male people are raised to be active, dominant, strong and aggressive"
Thanks
Okay, so the reason I asked is because I wanted to be specific, because you said
> Gender is defined as the set of socially constructed stereotypes
When you say that most women are "passive, submissive, weak and nurturing", that screams to me that it's a gender role, not a gender. Women are not defined as the gender that is passive. If someone sees a man who is passive, weak, etc., approximately 0% of people would jump to "well, clearly that's a woman".
As another way of looking at it, some cultures and tribes around the world do have women who are less passive and weak. Do we say "that means that that culture doesn't actually have women, they're actually a different gender". No, we instead say that women in that culture exhibit different gender roles, but we still accept that they are women. I believe that it is dangerous, and possibly a motte and bailey, to play willy nilly with the terms "gender" and "gender role", because gender roles are a construct (in some form), but that doesn't mean that gender itself is a social construct.
Your sex is determined at conception and observed at birth (or earlier).
I don't know what "gender" means in popular discourse any more, and choose to ignore the concept entirely unless we're talking about nouns in certain languages.
I agree that the word gender isn’t well defined. It seems to shift according to what the speaker wants.
I think in the nineties “gender” was still mainly used by most people as a euphemism for “sex” because some people don’t like the association with sexual intercourse.
I agree that sex is observed, to the best of our ability (which is very occasionally imperfect). That's what goes on the birth certificate. It seems to me that gender is assigned, but informally: "Hooray, we have a girl! Let's dress her in pink and start saving for her dowry!" The Assignment At Birth claim is a beef with culture masquerading as a beef with medical science.
It's fine with me if other people want to use Assignment At Birth terminology to describe their lives. I won't apply it to myself; it turns out that my sex was observed accurately, by any meaningful standard.
The idea of congenital sexual organ defects is definitely not fabricated. https://www.mayoclinic.org/diseases-conditions/ambiguous-genitalia/symptoms-causes/syc-20369273#:~:text=Ambiguous%20genitalia%20is%20a%20rare,have%20characteristics%20of%20both%20sexes.
There are occasionally babies born whose genitals are sort of in between male and female. I think that depending on how things are arranged, it might be easier surgically to move them in one direction or the other, and in that sense their gender is "assigned." If a baby is genetically female, but has no uterus and has external genitalia that can be surgically turned into a reasonable approximation of a penis, seems to me everybody's best shot is to do that surgery and think of the kid as a boy. It's a little weird, assigning gender that way, but on the other hand it seems weirder and harder on the kid to leave them with ambiguous genitals and no gender assignment.
I'm with you that there are all manner of congenital abnormalities, including sex organs. But I stand fast that someone carrying XY chromosomes is a vastly different class of person than someone who carries XX chromosomes. Every cell in your body screams XX or XY.
Yes, there are also chimeras among us, people constructed from conjoined twins, and thus some people may carry a brain of one class and the reproductive organs of the other. And for them, problems arise which may only be resolved by drastic surgery. But those so affected will number in the tens, not in the thousands.
But for most, growing up is difficult, puberty is difficult, and more so for online and socially disconnected youths. But again, our gender isn't selected or assigned; we are in essence the sperm that won the race, rather than the homunculus.
Yes, ambiguous genitalia in babies is rare -- 1 in 5,000, said the source I looked at. I do not doubt that XX and XY chromosomes produce different trait profiles. I don't know if they exactly scream. Seems to me that if they screamed, we would all be confident we know the gender of everybody on here, even though way more than half the people give no clue either in their picture or in their user names, because we'd hear their chromosome-related differences screaming. I'd say that overall I'd make better guesses about ACX participants' income level, political leanings, and gentle-vs.-aggressive personal style than I would about their gender. So going by that, gender apparently screams less loudly than some other things.
If it's the case that the term exists for these few edge cases, then why say "gender assigned at birth" for the rest of the cases, where it's clear-cut, and gender is not assigned at birth at all?
Obvious answer: because the intention is to make it seem arbitrary rather than the establishment of a biological fact. It's all about ideology.
As I understand it, that term originated in intersex communities, for people born with genitalia that isn't standard male or standard female. When faced with this, parents and doctors tended to assign the baby to a sex, and then often performed surgery to make the genitalia more conforming to that sex. Sometimes the choice of assigned sex would have less to do with the genetics of the baby, and more about the surface resemblance of the genitalia.
At some point, the term was picked up and popularized by the transgender community. This wasn't without controversy in the intersex community, and I've seen it referred to as "appropriation". But the intersex community is small even by LGBTQ standards, and not very vocal, and I think by now the broader use of the term is a fait accompli.
Or at least, that's what I've heard over the years.
Well, it's assigned in the same sense that a name is assigned-- it's put on the birth certificate, which is a legal document. Of course the parents are free to pick any name, whereas the gender is determined by the baby's body, except in the quite rare cases where the baby's body is ambiguous. What are you concerned about here -- are you thinking that parents whose baby is unambiguously male might "assign" it female gender at birth, or get the doctor to do it?
No, my concern is that I think that the proliferation of the term is a bit of a societal brain worm, which exerts (some) control on how people think by means of tailoring the language they use to communicate. It gets people thinking that gender is something that is assigned to you, that you could change, or that could easily have been assigned differently, as opposed to something intrinsic to you.
Well, it is.
I dunno. I think someone would have to already have holes in their brain for that term to nudge them in the direction of thinking of gender as being like a name -- you can pick it, you can change it, no big deal. What I think is much more likely to move people in the direction of thinking of gender as choosable is improved tech, which probably will make it possible for people to switch genders in a way that works much better than the present cumbersome and semi-effective surgeries and drug treatments. Or make it possible to change XY infants to XX infants or vice versa, or make it possible for 2 males or 2 females to combine their genes and create a baby. All of which I'm pretty much OK with.
Sex is biological, gender is a social construct. Really, it's not assigned at birth, but on a continual basis, based on how you live and how others treat you. The two are very strongly correlated of course, and part of the reason that gender roles are the way they are is due to the influence of biology.
99%+ of the time, your "sex" is the sex assigned at birth. There is a small handful of edge cases, though they are pretty rare, and even in most of them it is still pretty clear what "sex" you are.
Trans activists have used this, and some other conceptual muddles they have intentionally created, to try to make it seem like the situation is a lot more confusing and fluid than it actually is, because they want to obscure the reality of the situation (the vast majority of people have a specific stable sex, and trans individuals are explicitly making a choice to swap).
Pretending people have some sort of "gender soul" allows the situation to have more politically appealing messaging.
AFAIK, this phrasing has become common because of the problem of edge cases, where the gender observed at birth doesn't match other ways of identifying gender, including chromosomes. I am neither a doctor nor a biologist, but my understanding is that there is at least one condition that produces babies with XY genotype and a female-looking crotch. AFAIK it's not routine to do a genetic test when determining a baby's sex - people just look at the obvious physical characteristics and announce "it's a boy!".
And even if genetic tests were done, there'd still be edge case babies born - mosaics, XXYs and similar. Mosaics in particular - if the baby is the result of an in-utero merge between male and female twins, the gender you determine from a gene test will depend on what part of the body you sample.
Also, of course, there are the babies born with ambiguous genitalia. There's a long history of playing pick-a-gender and modifying the baby's body to match. We now know that's a bad idea, but some of those now adult children are still around. (And to be fair, chromosome tests may not have been possible when they were born.)
All of these are rare, but "assigned at birth" covers them.
It also covers the case of those who are unshakably convinced that their real gender doesn't match the body they were born with, which I presume you consider to in fact be the same gender as their body. So the phrase gets widely used in that context too.
Damned if I know, but my guess is that it started with people who specialized in edge cases, and then spread. If you spend your days writing case notes for children who've been referred to your clinic, finally, after even front line specialists have declared themselves unable to help, you need this kind of language - and tend to see the 1 in 1000 or less edge case as the common thing, as they are 75% or more of those you see.
I'm a lot less worried than you about the latest in terminology-one-must-use. I also don't see the government telling you to use this terminology about normal children. They - or the AMA, or the hospital admins - may be telling your OB/GYN or primary care doctor to use it, so records will be consistent for the tiny % who prove anomalous - but I don't think so, since the records I see, even in California, use normal language: "63-year-old woman", "40-year-old man". OTOH, I'm not in the field, and the records I see happen to all refer to adults.
As for the mob of the week - are you sure you aren't in the process of creating a competing mob, and the result of your efforts will be to set up a situation where whatever term one uses, one mob or the other will have a screaming hissy fit? I feel certain that if you happen to be in Florida, state government will be a lot more likely to endorse whatever terminology you favor, rather than the terminology you describe as being endorsed by government.
I'm bl**dy sick of language police, as it happens. That doesn't mean I can avoid them - the euphemism treadmill never stops, and some quantity of a**h*les love to invent new terms for the same old wheel and then insist they've made a major advance in science or culture.
But what I see here is the creation of yet another shibboleth - language that some people demand that one use to show membership in blue or red tribes. It's annoying from the blue side - which I see more of, generally, being resident in urban California - but it's equally annoying to me from the red side.
And it's bl**dy hard on children and families affected by rare conditions to have their or their children's health used as a political football.
I agree to a certain extent, but I rather dislike the general insistence I continuously see saying that the red tribe should stop trying to mob, stop trying to be censorious, and generally blaming the red tribe. I hear all the time "yes I dislike when Democrats do this, but Republicans do it too". My answer is "so what?" If the blue tribe owns the discourse and has left no option for the red tribe to do but fight back in this same way, how can you blame them for it?
*If* the blue tribe owns the discourse.
The problem is that it does not. Some red tribe concerns today were outside the Overton window in my childhood, so the red tribe can be said to no longer have 100% ownership of _that_ discourse. This may cause some red tribe people to feel as if the blue tribe owns the discourse, not only on those topics, but on everything else. But it does not.
Other concerns have shifted; some historically conservative concerns may be totally outside the Overton window today. (AFAICT, these mostly involve arguments in favor of slavery and/or ethnic cleansing, though to be fair discourse about innate racial differences is outside the window in quite a few contexts, even when it's not explicitly used to motivate either of those. )
Probably you can come up with other examples. You will, for example, get laughed at if you cite scripture in support of a scientific position, and your contribution refused by any peer-reviewed journal - but that's been true since long before I was born. It's conceivable to me that this might be a live issue for some red tribers, though that seems unlikely.
Edited to add: other than a lingering hankering for totally free speech, I'm happy not to encounter people hankering after the chance to own slaves, or eager to eliminate members of other groups in favor of their own. I'm making the assumption that this is not a problem for you, beyond generic free speech principles.
You write: "The government was _supposed_ to protect everyone's right to speak freely even on controversial topics, and protect parents' ability to raise their children as they see fit. It abandoned those duties, so the job falls to incoherent and unqualified vigilante mobs."
Which government? It seems to me that governments have been interfering in both speech and child rearing for a long long time, generally punishing things that community elites - speaking for everyone - don't like.
There are a variety of things you'd most likely be happy to see punished, particularly in terms of child rearing. Parents haven't had absolute power over their children in a long time. You don't get to have sex with your kids, fail to feed them, beat them to the point of serious injury, or deny life-saving medical care. You certainly don't get to execute them, unlike in Ancient Rome. That's been generally agreed upon by most Americans, except perhaps Christian Scientists (re medical care) for rather a long time. The details get litigated, but just about everyone agrees that some level of bad behaviour makes for an unfit parent, and children needing to be re-homed.
You write: "The people you want to take that up with are the ones who took a microscopic number of people affected by rare conditions, most of which aren't even externally detectable ..."
I could write a pretty lengthy rant myself about the excesses of the movement ostensibly favoring better treatment of transgender people. Some of them are so absurd that I sometimes wonder whether the perpetrators are in fact agents provocateurs (sp?) intentionally trying to create a backlash.
I might not be up to date with the latest gender theory, but what I was taught (at a fairly conservative university 15 years ago) is that "sex" is determined chromosomally (male/female) while "gender" is the social construct (man/woman). Obviously those categories correlate nearly perfectly, so there would be a lot of confusion if there is a meaningful distinction at the edges.
At this point, this is what you will find as the scientific definition of the terms, anyway.
There’s three definitions now.
1) gender as a polite synonym for sex.
2) gender as a social role ie biological women are historically expected to look after the kids, not work and so on.
3) gender identity. The thing you are born with which isn’t either biological or socially constructed.
The term “assigned at birth” only makes sense if gender isn’t biological sex, but it can’t be socially constructed either - nothing is socially constructed prior to birth.
I'd say the gender assigned at birth is basically gender in sense (2), it's just that it's a prediction, subject to potential adjustment as new information arises.
I would question why that has come to be the case. Why is gender considered to be a social construct separate from sex, as opposed to being the "polite company" euphemism for sex, the way I believe it used to be? Is there any basis for the shift in the meaning of the term?
I think the use of the word "gender" to refer to something people have is quite recent, dating no earlier than the 1950s. Before that, gender was something words in Romance languages had (e.g. nouns in French are either feminine or masculine), whereas what a person had was sex. The use of the term gender to refer to something people have was coined in 1955 by John Money, a sexologist who was (I think) researching gender-nonconforming people. So this now-common usage of the term is closer to this origin, and the use as a synonym for "sex" is, if anything, the co-optation.
Not exactly. Looking at the OED, it seems it was originally just a synonym for "kind" or "class". There are plenty of instances of it being used as a synonym for "sex", starting from the 1400s. It's just that it would have been a slightly odd usage back then, like saying "humans of the male kind and the female kind" today, since one would normally have used "sex" to refer to that particular distinction. The use of it as a synonym for "sex" becomes more common in the 20th century as the primary meaning of "sex" shifts to "sexual intercourse" rather than "the male kind vs. the female kind". At about the same time there also arises the definition of "gender" as the "socialized obverse of sex" (to use a pithy phrasing from one of the books the OED cites), which the OED regards as psychology/sociology-specific jargon rather than part of the everyday meaning of the word. So both of the modern senses of "gender" are more-or-less simultaneous evolutions of the old meaning, equally novel.
That’s correct. It annoyed me to hear in a movie set in 19C London someone refer to the “fair gender” when talking about women.
"Gender" comes from old school Victorian anthropology, where it is a valid and useful concept.
I understood gender to be the role you play in society, which in my mind's eye is the shape of the cog you make as you slot into the great machine of civilisation.
Say you have a tribe where the men fight for glory and the women nurture children. There's a unmet need for doctors/field medics/other logistics on the battlefield, but women aren't allowed because it's dangerous and men would be passing up the chance for glory if they took the job on.
Bam, third gender: man-who-lives-as-woman, allowed to risk his (xis) life in battle but not supposed to fight in it. He is actually honoured and respected for the work he does, where a real man would have to be called a coward.
All three genders do important work and need to exist in order for that tribe to function the way it's evolved to.
This is very far from how the word "gender" is used in our society today, but I believe that's where the word came from.
It sounds more like you're describing "gender roles". Or at least I think prior to 2012, most people would have defined that as "gender roles".
Tell me: let's say there's another country right now where men do stay at home and women go to war. Would you say those men are a different gender than men in the United States who don't stay at home?
I am, and I'm also saying that the existence of gender roles is the only reason we ever needed the word "gender".
In answer to your question - yes. If their society has evolved a certain way and those are the roles that have fallen out.
But I'd point out that these wouldn't be "US men who just also stay at home". They would be a completely different beast, with their own characteristics not necessarily comparable to men or women from America.
It would be important to take their society as a whole and see how they fit into it and why their roles make sense the way they are.
Well, I guess we just disagree, then. As far as I can see, people refer to other cultures as still having the genders "man" and "woman", albeit with different gender roles. In my experience, people don't think of an Indian woman as a different gender from a United States woman.
In the Balkans there is the custom of a "sworn virgin" when a family doesn't have enough sons which permits biological females to do jobs restricted to males. But the legal fiction doesn't actually make them the same as men, who of course don't have to swear to remain virgins. It's just a partial change to workaround their restrictive norms.
Can she ever leave the role and get married, or is she just told, "you'll be a boy now" and that's that for her? The moral here is obvious: Balkan women should have more sons.
My understanding is that it's a lifetime thing you can't quit (there's a movie titled "Sworn Virgin" about such a person who does shift away from that role, but only by leaving the social context in which that role even exists, for a big city in another country).
I assume it's just that people (especially trans people) wanted a term for the social construct thing which people previously hadn't bothered distinguishing so they co-opted "gender" for that.
I agree, I think it was coopted. I rather dislike coopting, because to me, it really seems like they just fabricated the new meaning and rules surrounding it.
I always say that from my experience with the term gender, it always seemed akin to if all of a sudden a bunch of people started saying that I had to treat them as if they are 6'2", or whatever height they like. When I point out that, no, they're not actually 6'2", they're actually 5'8", they say "that's my tallness, not my height. Everyone knows that tallness is a biological trait and height is how you identify. I identify as 6'2" height"
But apparently there are people who, for example, have penises but who feel deep down that they are women. Regardless of whether their feelings are worthy of respect, we need a name for that femaleness that they feel, and gender is as good as any.
I mean, no one who's 6'2" is getting on the basketball team either.
https://radleybalko.substack.com/p/devil-in-the-grooves-the-case-against
Problems with bullet-matching forensics, and when I say problems, I mean it's bogus "science" based on guessing that gets people falsely imprisoned. It's quite possible to match a bullet to a *type* of gun, but not to a particular gun. Goddamn it, I *believed* the books I read in the 60s about how cool the FBI was.
There's plenty about how to check on whether a theory is true, and how reluctant people are to check on whether their profession is based on anything reliable. And there's the problem that judges are apt to rely on precedent rather than checking the science.
There's actually been a tiny bit of progress, but it's going to be a long fight to get science taken seriously in forensics if it ever happens.
I'm glad to hear that word is getting out about how bad most forensics is.
Any thoughts about implications for the legal system? For what a legal system would look like in a more rational society?
An aspect no one has yet mentioned is crime prevention. Think you can commit the perfect murder? No way, forensics is so good you're sure to get caught through something you may even know nothing about.
People are worrying about the impact of deepfakes on the legal system, but I think that through its entire history, almost everything that passes for evidence could have been falsified. Witnesses can lie or make mistakes, documents can be forged, "scientific" arguments can be bogus. (Especially if you think out of the box, like: maybe fingerprinting is a solid science, but that specific fingerprint found at the crime scene could have been planted, or the person who analyzed the fingerprint may have changed the samples.)
So the thing we have is a combination of lots of weak evidence that people believe is stronger than it actually is (plus even more weak evidence that shouldn't officially be accepted by the court, but gets there indirectly anyway by making people perceive some other evidence as stronger than it is), plus relying on the fact that most people are stupid (including most criminals) and most guilty people break under pressure (especially when it seems there is strong evidence against them), plus the occasional injustice (probably way more frequent than most people believe). And I guess it works better than nothing.
A legal system in a rational country would probably be more explicit about the probabilities, include some form of insurance, and also some ways to proactively protect yourself against injustice. For example, you could agree to wear a surveillance device on your body 24 hours a day, which would send encrypted data to the cloud, just in case a crime happens and you are falsely accused, in which case this device may help to prove you innocent. Detailed crime statistics would be published, so that you would have a smartphone app telling you to avoid a certain area (to avoid either becoming a victim or being falsely accused of committing a crime) or to turn on recording while in that area. But this assumes that rational people would cooperate, instead of becoming hysterical when things are discussed openly.
For preventing white-collar crime, it would probably help to simplify many processes (so that also crime would become more obvious), and to make them somehow more transparent.
> US civilians, despite being more armed than anyone else in the world, are more brutalized by their cops than anyone else in the developped world
Are we sure this is the case and not just a result of a large population combined with a louder news media?
"I suspect this has to do with (lack of) accountability,"
Yes.
Not necessarily at the level of guns, but definitely at the level of the courtroom. You'll notice that both pseudoscience and outright fraud (a la Annie Dookhan) are most common in criminal cases, where the state has nearly unlimited resources compared to most of the defendants, and an awful lot of those defendants are not just poor, but stupid too.
Compare that to forensic labs that are dealing with corporate/commercial issues and things become very different. I was very fortunate that my first out of college job was with the Office of the Texas State Chemist where we dealt with animal feed, fertilizers, and animal deaths. When Monsanto's legal and analytical departments are better than yours, the work has to be absolutely bulletproof. The people receiving/repackaging/randomly labelling the samples were in the basement, the people doing the analysis were on the fourth floor, with a dedicated dumbwaiter.
The actual work was assigned by computer (with blind duplicates and other cheating-detection checks built in).
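For the curious, here's a toy sketch of what computer-assigned work with blind duplicates can look like. Everything here (the names, the 10% duplicate rate) is invented for illustration; it is not the actual system described above.

```python
# Toy sketch of blind-duplicate assignment for a testing lab (illustrative
# only). Analysts see only blinded ids in random order, so a duplicate is
# indistinguishable from a fresh sample; disagreement between a sample and
# its blind duplicate flags sloppy or dishonest work.
import random

def assign_work(sample_ids, duplicate_rate=0.1, seed=42):
    rng = random.Random(seed)
    queue = []    # (blinded_id, true_sample) pairs; analysts see only the id
    dup_map = {}  # blinded_id -> sample it secretly duplicates
    for sid in sample_ids:
        queue.append((f"LAB-{rng.randrange(10**6):06d}", sid))
        if rng.random() < duplicate_rate:  # occasionally inject a duplicate
            blind_id = f"LAB-{rng.randrange(10**6):06d}"
            queue.append((blind_id, sid))
            dup_map[blind_id] = sid
    rng.shuffle(queue)  # randomize order so duplicates aren't adjacent
    return queue, dup_map

queue, dup_map = assign_work([f"feed-{i:03d}" for i in range(20)])
print(len(queue), "work items,", len(dup_map), "of them blind duplicates")
```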
Yeah, most of forensic "science" is untested and unproven, including the notion that fingerprints (and especially partial prints) are necessarily unique enough to be God's serial number.
But as amazing as the bullshit in forensics is, the bullshit in dentistry is mind-blowing:
https://www.theatlantic.com/magazine/archive/2019/05/the-trouble-with-dentistry/586039/
There's a big issue in courtroom evidence standards where new technologies are at least in theory held to very high standards but the old stuff grandfathered in is really questionable, both the 'sciencey' stuff and the 'common sense' stuff like eye witness reports.
I think this is one of those edge cases, like say "blood spatter", where in theory the method could work in some cases, but then you have overzealous investigators and prosecutors and experts for hire so overselling and overapplying the method that it becomes pretty unscientific and unreliable.
Hair analysis is like this too. People like to pretend it is "we matched this hair to this person exactly", when in most cases it is more "this person was a brunette" if there isn't DNA, which um only narrows it down to half the population.
Bite mark analysis is another one that is mostly bunk. So much of "forensics".
Just finishing up A Thousand Brains. A few questions for the AI contingent:
- What are your top 3 news sources for musings on AI research/dev? I'd like to keep myself more in the loop (particularly on the tech/dev side, not so much on the alignment/business axes)
- The core thesis of the book is that the brain is composed of (mostly) fungible cortical columns that act as reference frames for things/concepts/places/etc. These hundreds of thousands of references are synthesized by the brain to create a continuous prediction engine, and that is mostly what we experience (sorry if I butchered that!).
That is well argued and compelling throughout the book, and I have no reason to doubt it. But he insists that to create a truly intelligent machine we must understand the core mechanisms within the cortical columns. Here is where he starts to lose me. Why can't we simulate reference frames given our best methods? Could a GPT-adjacent LLM provide the same building block for AGI that cortical columns provide for the brain? What if the LLM was instructed to simulate an individual reference frame, and an ensemble of these LLM ref frames were arranged in a way similar to the brain's architecture?
Regardless of the inclusion of LLMs, which has its own complications, I'm not sure I believe the statement that "understanding the brain in totality is a precursor to truly intelligent machines" the way Hawkins seems to think. But I'm curious to hear any thoughts.
I know most people don't think LLMs are the road to AGI. Just coming up to speed on a lot of this stuff and thinking out loud.
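One way to make the ensemble idea concrete: here's a minimal toy of the "columns vote" mechanism, with made-up objects and features. It's only an illustration of voting over reference-frame-style (location, feature) models, not Hawkins's actual algorithm, and the "columns" here are trivial scorers rather than LLMs.

```python
# Toy "columns that vote" sketch (illustrative; not Hawkins's algorithm).
# Each "column" scores candidate objects against its own noisy stream of
# (location, feature) observations; the ensemble then votes.
import random
from collections import Counter

# Pretend each known object is a previously learned model: location -> feature.
OBJECTS = {
    "mug":    {(0, 0): "flat", (0, 1): "curved", (1, 0): "handle"},
    "bottle": {(0, 0): "flat", (0, 1): "curved", (1, 0): "curved"},
    "box":    {(0, 0): "flat", (0, 1): "flat",   (1, 0): "flat"},
}
FEATURES = ["flat", "curved", "handle"]

def column_belief(observations):
    """One 'column': score each object by how many (location, feature)
    observations match its stored model."""
    scores = Counter()
    for loc, feat in observations:
        for name, model in OBJECTS.items():
            if model.get(loc) == feat:
                scores[name] += 1
    return scores

def ensemble_vote(true_object, n_columns=9, noise=0.1):
    """Each column gets its own noisy view; the ensemble's answer is the
    object that the most columns rank first."""
    votes = Counter()
    for _ in range(n_columns):
        obs = []
        for loc, feat in OBJECTS[true_object].items():
            if random.random() < noise:  # occasional bad sensor reading
                feat = random.choice(FEATURES)
            obs.append((loc, feat))
        best = column_belief(obs).most_common(1)[0][0]
        votes[best] += 1
    return votes.most_common(1)[0][0]

print(ensemble_vote("mug"))  # usually "mug", even with noisy columns
```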
I've been following Zvi's blog, although there's a lot of bashing-people-he-disagrees-with mixed in with the actual AI news.
I like the podcast No Priors.
Long Covid has been a topic of discussion here for a while, but I hadn't known anyone badly affected by it until recently. However, a month ago my good friend got sick and hasn't really recovered. She now has:
- dizziness when standing or walking, making her unable to do so for more than 10 seconds
- fatigue
- muscle aches
- sound sensitivity
- brain fog
- dizziness when trying to read or look at screens for more than a couple minutes
She suspects it's myalgic encephalomyelitis or chronic fatigue syndrome (ME/CFS), but it's too early to be sure. Still, this has been completely debilitating for her, and she can barely do any of the many activities she used to enjoy in life. Even eating a meal or walking out of the house often requires assistance.
Since there are no known cures, my recommendation to her was to try miscellaneous things that *might* help her (and otherwise have low risk), per https://www.lesswrong.com/posts/fFY2HeC9i2Tx8FEnK/luck-based-medicine. With this approach, the most valuable interventions to try first are ones that have anecdotally helped others. So I'm posting here to see if any readers know of similar medical cases that *were* successfully resolved, and can share what helped for those people.
Any ideas would be appreciated!
@ScottAlexander - Could you signal boost this in a coming open thread? I think that would strongly increase the likelihood of this working out, without setting an exploitable precedent.
I coincidentally just read about a guy who had a viral infection leading to ME/CFS and then resolved it. His problem seems to have been caused by lax connective tissue in his neck, which made the skull sit lower than it should and so the spine was putting pressure on the brain stem. Using a halo brace and then later a skull-to-spine fusion surgery completely fixed his extremely severe symptoms. He made a website about it: https://www.mechanicalbasis.org/
A sizeable fraction of post-viral fatigue sufferers still have some of the virus lurking in their systems, and are treatable by vaccines and/or anti-virals. One big advantage long covid sufferers have over many CFS patients is that they know precisely which virus they got.
If that approach doesn't work and the symptoms persist, there are many avenues to try to improve general health and treat the symptoms, but even the success stories for those usually look like "now able to work part time", which, while a massive improvement over the no-treatment case, is not exactly "perfect health".
Maybe magnesium supplements would help? Magnesium deficiency has a whole grab bag of symptoms. It's a very low-risk thing to try.
Some things to be aware of about Long COVID, aka PASC (Post-Acute Sequelae to COVID-19)...
1. There are now 8 competing theories for the mechanism behind PASC. One, some, or all of them may be valid depending on the symptoms. None of these theories has been conclusively proved or discounted.
2. Most of these theories involve mechanisms based on inflammation and/or tissue damage. But the byproducts of these inflammatory processes should be showing up in blood work. They're not—at least not at higher rates than negative control groups.
While treating the symptoms may be a useful approach, I'd be curious if your friend might still test positive for SARS2 infection using one of the more sensitive tests.
It's worth noting that other severe viral infections can cause downstream health problems—I saw one study where the rates of "Long Flu" are roughly the rates of Long COVID. And measles has an even higher rate of post-infection syndromes that lasted longer.
As for severe PASC, most people recover in 90 days—but there is a very small percentage of long-haulers whose symptoms last longer than 120 days.
Long covid sucks... I know a few people who have/had it. It seems the first thing is not to push too hard. In most cases people recover, albeit slowly, but every attempt to rush the return to pre-covid levels of activity tends to cause a flare-up and set back the recovery process.
Yes! This is a really big deal for chronic fatigue, and one that depressingly many doctors are unaware of. A little bit of activity is good, but pushing too hard, either physically or mentally, causes immense damage.
Omega 3, uridine, serine, carnitine, eutropoflavin, salidroside, ginkgo, bacopa, vitamin D+K, boswellia, curcumin, a good methylated B complex, perhaps a round of cerebrolysin.
Perhaps a round of tACS or tDCS.
Encourage neurite growth and general repair, lower inflammation.
She can peruse "Stuff That Works".
On Examine, it notes a retrospective cohort study that found 1200 mg daily of palmitoylethanolamide for 3 months was useful.
L-arginine and vitamin C for the physical symptoms.
It's not a great sign that she's feeling this lousy a month out, but I have felt that way a few times a month out after a bad flu. In those cases I just hadn't recovered fully from the illness, and after another month or so I was back to normal, except maybe for a lingering cough. Let's hope that's the case with your friend. And on the theory that she's just having a slow recovery, she should stay home, rest a lot, sleep a lot, drink plenty of fluids, avoid stress, take a multivitamin, and avoid alcohol and cannabis. She should not work out, but can try some stretches to see if that helps with the body aches.
If after 2 months she doesn't feel better, she probably does have long covid. Read up on it some. Try to find some good overview articles that have no ax to grind. Katelyn Jetelina's blog is fair-minded and intelligent -- search it for her views and for links. The last careful, fair-minded-seeming article I read about LC said that a pretty large percent of people who did have LC had recovered from it after 4 months. The things that make it likelier someone will be in the 4-month recovery group are younger age, having been vaxed before having covid, and good general health. If your friend has 2 or 3 of those going for her, she is fairly likely not to have a long bout of LC. There's info out there about what's been tried. Paxlovid is one thing that seemed promising, and results are probably in, but I don't know what the finding was.
So I received an interesting offer from a Bay area start-up and I'd like to find out how interesting it really is.
I work in IT (machine learning) and so far I've only ever worked with European companies, mostly Czech ones. I have something like 5 years of experience plus a PhD in maths - probability theory specifically (not very relevant, I'd say, but some companies value it anyway).
Financially, the offer is about 115k USD per year plus equity (not clear yet how much equity... I'd love some input from someone about what is usual in a setup like this) plus a sign-up bonus (which is more or less there to compensate for equity I have now and which I'd lose by switching jobs before the end of the year). I'd work remotely, 100% from home (i.e. the Czech Republic), as a contractor, which probably means simply sending invoices to the US instead of to a Czech company and having the invoices paid in USD.
I've been in contact with the start up for a while (mostly discussing technical issues with them), I really like their products and design philosophy and at least the main people there seem very skilled. They are also past series A funding with something like USD 20M received from investors last year, so the equity is pretty valuable too.
I suspect that USD 115k per year would not be stellar in the US, definitely not in the Bay Area, but then again I don't live there and don't have to pay rent or a mortgage there. It is definitely a good deal compared to the money I can earn here (though not multiple times more). If taxes work the way I think they do, I should end up with something just under 100k netto (after taxes, health tax/insurance, and social welfare tax/insurance). For comparison, a new 1,000-square-foot apartment close to the centre costs about 500k where I live.
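For what it's worth, a rough sanity check of that "just under 100k" figure, assuming the Czech 60% flat-expense regime for contractors and roughly 2023-era rates (15% income tax, social and health insurance assessed on half the tax base); caps, minimums, and currency effects are ignored, so treat this as a sketch, not tax advice.

```python
# Back-of-the-envelope Czech contractor net income (assumed ~2023 rules).
gross = 115_000  # USD per year

tax_base = gross * 0.40            # 60% flat expense deduction
income_tax = tax_base * 0.15       # 15% personal income tax
assessment_base = tax_base * 0.50  # insurance assessed on half the tax base
social = assessment_base * 0.292   # social insurance, 29.2%
health = assessment_base * 0.135   # health insurance, 13.5%

net = gross - income_tax - social - health
print(round(net))  # ~98,300, i.e. "just under 100k netto"
```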
I also wonder about vacation and work "ethic" (read: how many hours one is expected to put in). In Europe it is common to have 5 weeks of vacation plus public holidays. I work as a subcontractor even now which in the Czech system means much lower taxes, but also no welfare benefits and a weaker social safety net...kind of "American mode" (in IT you typically can choose either this or being an employee which means less money and more social security). I actually end up working more than is common for European employees but usually this is in the form of overtime and I still take those 5 weeks of time off, I simply work a lot of those hours during the rest of the year (so it is more like taking 2-3 weeks of vacation plus public holidays). I will still talk about this with the people from the company but I'd like to know what is common in the US.
Thanks!
You should treat pre-IPO equity as worthless. Most startups fail, and even if they do exit successfully, the VCs and Founders normally take nearly all the money for themselves. The days of millionaire secretaries are long gone.
I am not expecting to become a dollar multi-millionaire from the equity, but I think there is a range between worthless and Google/PayPal/Uber/... The company could still sell for a decent amount of money, yielding a nice one-off bonus for people other than the founders, even if it doesn't instantly make them super rich... or do you think it is always to the moon or bust with start-ups in the US? European start-ups that do not completely fail (which is still common, but more so at very early stages) end up being sold for a moderate amount of money (but I'd say European VCs are also a lot more conservative than those in the US; they prefer a higher chance of moderate success to unicorn hunting).
Equity isn't money. One could make 100 shares of a company worth any amount one likes, given the variables of projected income, assets, and number of shares. It would be instructive if you were able to calculate the percent of the company they are offering you.
Even if you got a considerable share of the company, if the venture doesn't pan out, then yes, the shares would be worthless. If they are offering you equity, you shouldn't rely on it to live or eventually retire.
If you work at a startup for three years, it goes public, and your stock options are worth 6k, would you be happy? (This happened to me)
Obviously 6k > 0, but it may as well be zero at the scale we're talking about. And this was one of the *good* cases. Most startups never make it that far.
I worked for a startup that got bought, and unfortunately I didn't know how to properly pay taxes on my stock options so I had to both pay a penalty and then the income (rather than capital gains) rate on them.
Well, you're right, I wouldn't be too happy about that. Something like 50-100k would be nice, though, even if it is not millions. You are right that it is not guaranteed at all. The founders have some previous successful exits behind them already, though; compared to other start-ups I've seen, I'd definitely rank them in the top half in terms of success chances. No red flags I can see, some track record of past success, and clearly some skilled people working there.
> The founders have some previous successful exits behind them already though
Is there a way to contact randomly selected former employees of them? Because you want to know how much money the employees made, not the founders.
I am not an expert on this, but from what I have seen on the internet, it seems like there is an unlimited number of ways to make your shares worthless unless you are an expert. First, shares can be diluted, so at one moment you own 1% of the company... and the next moment you own 0.000001% of the company; and then the company is sold for a few million, and you get a few cents. Second, there can be different classes of shares, so when the company is sold, the shares with higher priority (those owned by founders and big investors) get the money, and the shares with lower priority (those owned by you) don't, or something like that.
Maybe I got some detail wrong, but the idea is that unless you perfectly know what you are doing, you are simply trusting the founders to voluntarily give you the money they could have put into their own pockets instead (and you also trust them not to change their minds later). The share itself, unless you perfectly know what you are doing, may mean very little.
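To illustrate the second mechanism, here's a toy liquidation-preference calculation. The terms (a 1x non-participating preference, a 0.5% common stake, the sale prices) are invented for the example, and real waterfalls are more complicated (e.g. non-participating preferred typically converts to common when that pays more).

```python
# Toy liquidation waterfall: investors with a 1x liquidation preference are
# paid back first; common stock (employees) splits whatever remains.
def common_payout(sale_price, preference, common_pct):
    remainder = max(0.0, sale_price - preference)  # preference comes off the top
    return remainder * common_pct

# Suppose $20M was invested and an employee holds 0.5% in common stock.
print(common_payout(25_000_000, 20_000_000, 0.005))  # sold for $25M -> $25,000
print(common_payout(18_000_000, 20_000_000, 0.005))  # sold for $18M -> $0
```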
If vacation time is important to you (as it is to me), I doubt if any Silicon Valley startup will offer you more than two weeks. I stopped considering offers from startups long ago. None of them would agree to more than two weeks of vacation (and 4 weeks was the minimum that I would need to recover from working in what would likely turn out to be a product development death march). Also, asking for vacation makes you look like a slacker in many hiring managers' eyes.
And I don't know what the current startup success rates are, but it's very unlikely you'd see much upside from stock options etc. OTOH, I know people who love the challenge of working for startups, and after working for several startups they've been able to negotiate extremely high salaries—in the lower range that uf911 mentioned—because of their experience and skill sets.
I think my approach to vacation is that I am OK with working somewhat more than 8 hours most days but then taking the 5 weeks plus public holidays off. So on average it is as if I worked normal 40-hour weeks full time and had 3 weeks of vacation, but the distribution of time is different.
The startup I am talking about builds tools which are used in certain banks and even one very large European manufacturing group (though there they don't use the paid offering, only the FOSS stuff currently), I actually think they have a good chance at success. So I wouldn't say the equity is worthless but it is "probabilistic" and it is not the decisive factor for me either.
A thing to be aware of is that, unless you have founder's stock, the options you get can be canceled when a bigger company purchases your company. I say "can" because that's not necessarily going to happen—but it has for some of my friends in startups.
The answer to how interesting the offer is, is extremely sensitive to your level of skill as a developer.
The data I’m using is personal experience hiring & managing a total of ~350 software developers in ~12 countries since 2000, including devs who productize ML systems, and 10 pure maths ML folks who were terrible developers but skilled model-builders.
Potential is priced locally, talent is priced globally.
~$10k/month in base is fair if you can stand up an existing open source ML project, get it working with a pre-trained data set or do the initial training with a given training and validation set, and do this within a few hours, or at the most a couple days. I have no idea what your level of skill is, but globally speaking this is not quite at the “talent” level.
If you’ve published legit work in any of the tier 1 journals, and there’s any degree of community or stars or forks of your open source data from that published research, and this is in addition to the level of skill I’ve mentioned above that you could demonstrate in a pre-hire project, $15k/month is a reasonable baseline expectation. With enough time you would be able to find a job at $20k/month with a company with substantially more than 20m in funding.
If during your five years of work you've been one of the key people creating pipelines, combining different ML components into working production ML systems, then you have talent. Even an A-round startup founder/CTO knows to expect to pay $30k-40k/month for this type of talent, and has budgeted for 2-3 people at this price range.
I haven't published in ML(Ops), I have some publications from during my PhD and postdoc, but that is pure maths.
> you’ve been one of the key people creating pipelines, combining different ML components into working production ML systems
I would say that this is the case. Nowadays I am mostly leading teams of data scientists or ML engineers on various projects with different customers, coming up with the architecture but trying to code as well in the meantime. Lately, these have been mostly MLOps projects rather than actual ML (and MLOps is also the focus of that start-up).
35k a month would be 4 times what is common in the Czech Republic for someone like that (and no equity).
The same is true in most cities in East Europe, East Asia, and other cities outside of the G-7 where there are local software communities 15+ years old. But what is typical is not what is really interesting, job-opportunity-wise.
The ranges I listed are what’s available & interesting, among companies tripling+ revenue each year for 4+ consecutive years after reaching >$1m in revs, with > 30% of revs coming from outside the global region where the company is based. And from companies that have been fortunate enough to receive the same type of funding as companies with that performance.
There’s a wider range of salaries in the US for ML roles, up to $1m-ish in base in the Bay or Brooklyn for talented mortals. Higher for stars.
Usually in the US you'll get about 3-4 weeks off plus public holidays (I think 4 is more common for experienced people, but startups might be a bit lower). I had 3 at Google and 4 when I was at hedge funds (I think Google also does 4 for senior people). It's something you can probably mention before signing.
The pay they're offering you is definitely low for bay area but reasonably high for a full remote international job, presumably because they know you well enough to not worry about an unknown worker quality issue.
Well but everything is negotiable. I had a job where I took vacation increases instead of pay increases a few times and had 6 weeks of time off at ~ age 30 in the US.
If people actually value your contribution, you generally have a lot of power/flexibility.
That is a good point. Also your negotiation power increases as you work there for longer provided that your contribution is valued, as you say. It is quite expensive to hire and onboard new people, doubly so if they are to replace people who bring a lot of added value to your company.
Note that the US has about 10 public holidays a year - if the Czech Republic has more (or fewer), that might also affect this.
Coincidentally, we also have 10.
I write a simple newsletter where I share three interesting things once a week. Last week I shared a video explanation of the double marginalisation problem by Alex Tabarrok, a data-led twitter thread on the differences between US and UK politics by the chief data reporter at the FT, and a thought-provoking essay on what Napoli’s Serie A win means for the city of Naples.
https://interessant3.substack.com/p/interessant3-39
Usually when I tell people about Georgism, they say it's too big of a change, will never work. But yesterday I told a friend and got a different reaction...
- linked to https://www.lesswrong.com/posts/XoYDmCzeKiB87rs7a/georgism-in-theory
- "there's a way to tax land (versus property) that's efficient for society and you can drop tax rates elsewhere significantly"
Q from friend: "why is that better than property taxes? "
A : "Because it incentivizes improving your property and maintaining it, putting it to good use. You can increase the taxes higher without discouraging development, and then lower the income taxes further"
Response: "Eh. Ok. Marginal benefit. If I'm going to overhaul something in the tax code, I'm not going to use my one bullet on that. I still don't see it generates any more tax revenue than property tax (it just has slightly different incentives)"
Even though they are equivalent, I'd emphasize the "zero deadweight loss" framing over the "no disincentives" one. If you can raise significant revenue with zero DWL, that seems hard to dismiss as "marginal".
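A minimal sketch of why the fixed supply of land implies zero DWL, using the standard Harberger-triangle approximation (DWL is roughly half the tax times the drop in quantity traded); the numbers are illustrative only.

```python
# Deadweight loss of a tax, Harberger-triangle approximation.
def deadweight_loss(tax, quantity_drop):
    return 0.5 * tax * quantity_drop

print(deadweight_loss(tax=100, quantity_drop=20))  # ordinary good: DWL = 1000.0
print(deadweight_loss(tax=100, quantity_drop=0))   # land: supply can't shrink, DWL = 0.0
```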
Well, property taxes are currently set at a very low rate. Georgism involves not just shifting that to a land tax but massively increasing its rate, and lowering other taxation accordingly, so it's more about shifting income tax to land tax than shifting property tax.
I'd be curious what they would use it on. I guess there are a small number of things that I might do if I really only could do "one thing", but I'm not sure that's a reasonable thing to think.
However, if we concede the point, the things I'd probably do before LVT would be:
1. Overall simplification of the tax code. (listing this is kind of cheating because it would involve a hundred other changes, but still)
2. Carbon tax (ideally with border adjustment)
3. Then probably the LVT? But maybe replace income tax with VAT? not sure.
There might be others that someone who has actually thought a lot about taxes might add. The main issue with implementing an LVT, as I see it, is that one of the two things it's meant to accomplish is blocked not just by the tax scheme but primarily by _dozens_ of other regulations on land use. Sure, it would be an efficient tax with no deadweight loss, but there are lots of those, so that's almost a secondary consideration. Its primary goal is to "fix" land use. And land use is perhaps the second most over-regulated thing in the US, behind only the medical system. Just changing the tax won't get rid of everything else. So while an LVT in theory is phenomenal, its biggest goal is probably not achievable if an LVT is the _only_ thing you do.
Are there lots of those? Which other taxes did you have in mind that have no deadweight loss?
VAT, maybe, from an incentives perspective, but only if it is perfectly uniform (i.e. not any real world VAT), and the bureaucratic overhead is not small
There's some reason to think that Wegovy and Baclofen prevent addictions, including alcoholism and compulsive shopping, for some people. Suppose it's true, and would work on a large scale with tolerable side effects.
How much of the economy is compulsive shopping? It's hard to measure, since it's about a mental state, but I'd say it's buying things that the person doesn't especially want (they want the experience of shopping), and it can range from knickknacks to at least redecorating, because what else is there to do with one's time?
I could find anything from 10% to 30% of the economy plausible. How much of alcohol sales would go away if people didn't feel a craving?
If you define it that broadly - basically all shopping that happens out of boredom, impulsivity, or just because you happen to be in the store - then I'd say compulsive shopping (and producing those items, delivery, ads, and other related jobs) is probably a massive part of the economy; 30% sounds reasonable.
But I'd argue shopping is only compulsive if the person doesn't actually want to do it, or at least it creates more hardship for the shopper than they would reasonably agree is worthwhile. For example, because they go into debt, or their house becomes unpleasantly full of stuff, or other things along those lines. In that case, it's probably less than 5% of the economy and it would be fine to get rid of it.
Compulsive shopping is what others do; I make great deals. Honestly, I like the question and may ask it at times - but the wiser way is to assume we buy X because we actually want X at that price, for whatever reasons, including being wrong about our needs/wishes. "No craving" is "no demand". There may be many reasons to buy alcohol, not just the craving to drink the stuff till you drop to the floor, now and alone. But most reasons are ultimately based on the assumption that someone some day would like to drink it. - If you are on medication that kills your interest in stuff, why spend more than $5 a day on food? Why even eat?
The thing that you're not including is satiation -- eating enough tasty food to feel fed for 3 to 6 hours isn't compulsive and isn't asceticism.
>How much of the economy is compulsive shopping?
Some but not much, 2%? How much is needless consuming, hoping for happiness and keeping up with the Joneses? A lot.
And just imagine how the economy would go if people just dropped conspicuous consumption!
Does anyone know if something is going on for rationalist secular summer solstice in the NYC area? I ask because I attended last December's winter solstice event there, which was a pretty big advertised thing, and they referred to annual summer solstice events during it, but I can't find any announcement or information whatsoever about a planned summer solstice event.
I'll add a plug for my own Substack featuring most recently a take on the debt ceiling deal.
I wrote a history of how independent courts gained the power of judicial review in common law systems. It's a history focused on the institutional questions -- how do the courts internally discipline themselves, and how do they use this discipline to influence other branches, despite lacking the power of the sword or of the purse -- and so it's rather different than the standard case-focused histories of this which lawyers tend to write.
Link: https://cebk.substack.com/p/producing-the-body-part-two-of-three
It's a sequel of sorts to this piece on why courts might serve as a nice model for governance in the future, given that the fertility crisis and the scaling laws behind AI progress both seem to push for certain kinds of decentralization: https://cebk.substack.com/p/producing-the-body-part-one-of-three
Perhaps I missed it, but I don't see a reference to Federalist Paper no. 78, which explicitly argues for judicial review. https://constitutioncenter.org/the-constitution/historic-document-library/detail/alexander-hamilton-federalist-no-78-1788
Yes, this is commonly referenced in the standard histories of judicial review which lawyers tend to write; but my focus is on how the court's institutional discipline helped it actually practice judicial review (and the reason I wrote this is that I haven't ever found a decently rigorous general essay which takes this perspective).
Further, I personally don't think it matters much that -- in the 78th in a series of essays written by a particular set of authors -- Hamilton argued for judicial review. I think it matters much more that pretty much every organ of government vociferously disagreed across the whole relevant period, and yet that the court was very gradually able to build this power for itself.
? The 78th in a series of essays written by a particular set of authors? It was written by proponents (and authors) of the Constitution for the purpose of advocating for its ratification. How can it "not matter much"? It is the single most significant argument in favor of judicial review being not just a good idea, but required by the Constitution.
Yes. There were many authors of the constitution, and many arguments about what it meant, and, frankly, Hamilton was an eccentric, and the later Federalist papers didn't loom nearly as large in the actual ratification debates as they do in modern constitutional scholarship. For example, Madison is often referred to as "the father of the constitution," and he thought that judicial review “makes the Judiciary Department paramount in fact to the Legislature, which was never intended and can never be proper." Or you could look at any of a number of other examples: eg the Jeffersonian war on the courts which directly led to the Marbury v Madison decision (and which was the major populist cause of that decade). Etc.
My point is -- again -- that pretty much every organ of government vociferously disagreed with Federalist 78 across pretty much the whole relevant period. Even the court seriously doubted its own ability to overturn federal laws during the 1790s, and *the only time* it overturned a federal law from the founding until Dred Scott was in Marbury v Madison. And Marbury v Madison is almost universally regarded -- even by modern law scholars -- as an extremely contentious case, in which Marshall carefully designed his decision to *only* overturn the law that granted the supreme court the right to hear the case in question, *precisely because* everyone knew that the other branches would ignore any dictate from the court which claimed in any way to bind their actual behaviors. And I could go on and on and on. But, frankly, my views on this are already written out, at the link, for anyone who's interested in them.
I haven't read CEBK's history, but doesn't "judicial review in common law systems" also predate the adoption of the U.S. constitution? (although the routine exercise of that power by Federal courts in the U.S. certainly postdates it)
Yeah, about half of my essay is about the developments of the various high courts of england, and the disputes during the decade before 1787 between the state governments and the courts, and other such matters. And most of the second half is about how the court grew to use judicial review more muscularly during the century after the civil war. There's just not that much space for adding in political pamphlets from the ratification debates, especially given that other histories cover this and it's not particularly germane to my focus.
If anyone has some advice for this situation, I'd appreciate it. I am currently trying to figure out what my next steps should be in my education and career; currently I'm 23 and work in public policy, though I'm not sure if this is where I'd feel most satisfied or have a significant impact.
I really want to study philosophy in academia. I feel pretty comfortable biting most of the bullets: the stupid committee meetings, the bad pay, the pressure to publish. I spend most of my free time reading philosophy and thinking about it and it gives me a ton of joy. I started a few applications for MA programs last year but didn't finish any of them, but recently the local state university reached out and indicated that they still had funding. I finished up the application and am awaiting a decision. The only bullets here I don't feel fully comfortable biting are disappointing my parents and being isolated from my family (they kind of think philosophy, particularly moral philosophy, is useless emoting).
Option #2 is law school with the goal of animal advocacy. Factory farming is one of the most repugnant things I could possibly imagine, but it seems very tractable and solvable. It's pretty clear that if I dedicated my career to it, I could play a part in making some real progress toward ending it. I'm a lot more ambivalent about actually being a lawyer, though. My LSAT is currently 157, but I took it at a low point in my life, so I am sure with practice I can get that up significantly. That means that I'd probably start in the Fall of 2024. I'm somewhat torn between beginning an academic career at age 23 or a legal career at age 27 for monetary reasons, too (I would like to be able to comfortably live without familial assistance sooner rather than later while sustaining my giving and all).
Any thoughts are welcome.
Blunt advice:
Whatever you do, don't take on student loan debt unless you are objectively *exceptional* in terms of professional work ethic, talent, intellect, and drive. As others have mentioned, the two fields you're interested in are notorious for only rewarding exceptional superstars while grinding down the merely talented and average folk. Don't financially cripple yourself with debt if you have any reason to believe you won't be an industry celebrity.
I say that as someone who was dumb enough to study film writing and editing at a nothing school, used nepotism to luck into an internship at a fourth-rate advertising agency, and discovered I had neither the talent nor the drive to succeed even at the bottom-feeder level of the industry. Thank the Flying Spaghetti Monster that my parents were wealthy and completely underwrote that entire waste of my time and their money, so I didn't leave with any student loan debt to service.
After that debacle, I floundered about in random jobs until I settled into my current position in hospitality, where, compared to my peers, I am objectively exceptional in terms of work ethic, talent, and intellect (employee of the year awards, constant attempts to poach me, etc).
At 43 I work no more than 40 hours a week, am a homeowner, debt-free aside from a mortgage, have acceptable health insurance, and am almost always able to stop thinking about work the moment I drive off the property. It'd be nice to have more money, but I don't worry about not having enough to survive. It's a good life, and I'm the most content, least-stressed person I know.
Which is going to lead me to suggest Option #3:
Pick an in-demand blue-collar trade and use your intellect and talent to be exceptional at it, or exceptional at building a business around it. Perhaps something in animal husbandry, as you're more likely to make positive changes if you participate in the industry and have a deep working knowledge of it.
Or alternatively, just be a literal or proverbial plumber, clock out after eight hours a day of getting paid more than anyone in academia, be able to spend time with your family, pursue your philosophy degrees for fun with cash, and put excess income into giving.
"Yeah, but I can't possibly be passionate about blue collar work," you might be tempted to say.
And I'll agree that might be true, but as someone who's 20 years downstream from you with many friends who took on student loan debt to pursue their passions, let me say: It's better to be in a stable financial situation in a job you aren't passionate about than to be passionate about how student loan debt is preventing you from buying a home / starting a family / changing careers / etc.
As someone who thought philosophy was fascinating in my early 20s, and regretted greatly that I had ended up studying engineering instead of philosophy, I would highly encourage you to pick up something that actually gives you skills that society considers valuable. There are very good reasons why very very few people get paid to do philosophy. It is close to worthless.
Honestly, both of those plans sound wildly impractical and likely to leave you disappointed. You can either go into philosophy, fail to get a job in philosophy, and then wind up doing something else, or you can go into law, fail to get a job on the very specific side of the very specific case that you want, and wind up doing something else. At least with law school the "something else" might be more lucrative than the average failed philosopher gets.
This is true. If failed philosophers end up doing something that nets them enough money to live independently, though, it's possible that life would still go satisfyingly enough for me. The only issue there is that I would make (and therefore give) significantly less. On the other hand, it seems like a lot of failed philosophers end up as lawyers, from what I see, so maybe it's best to cut out the middleman.
There is a charity whose mission is to help you answer this very question! https://www.animaladvocacycareers.org/careers-advice
Thank you!
>I really want to study philosophy in academia.
This is generally a poor way to make a living. Also are you a woman or non-white? Because then it will be a lot easier. Philosophy hiring/admission committees HATE white men.
Generally, unless you are going to do very well in either field (are you really a top 30% person compared to your peers in such programs?), I would pursue something with a lower barrier to entry/investment.
Going to law school and becoming a public defender, or doing some entry-level law at some big bank for $80k/year because you got poor grades, is a waste of time/resources.
Law school and graduate programs are the type of thing that mostly makes sense if you are actually going to excel.
I'm not sure a law degree is crucial for doing animal advocacy. It's not like the organizations doing factory farming are going to go weak in the knees when they hear you have a law degree. Seems to me that someone could also have a substantial impact via journalism, documentary film-making, or fund-raising.
Maybe look on the Effective Altruism website for ideas about roles. They have job listings.
You don't know how many thousands of people (tens of thousands? more?) have faced that exact same decision -- a liberal arts Ph.D. and academia versus law school. My girlfriend in 1979 needed to decide between pursuing a Ph.D. in Linguistics and an academic career, and becoming a lawyer. I've lost track of her, so I can't tell you how she looks back on her decision now, but the internet indicates she has a successful career in her late 60s as a crypto lawyer.
I am a lawyer with some personal experience in this area. Without going into substantive details, the kind of work you're talking about is high-end impact litigation and largely limited to people who attended top-tier law schools. The path to the job you want lies most directly through getting your LSAT up to at least 170 and getting into a T14 law school (ideally with some kind of public interest scholarship or other money). You could consider going one step down in prestige (Vandy, WUSTL, etc.) if you get a full ride. Any place else is most likely not going to open those doors for you, even if they put a JD next to your name.
Law school debt can be crippling and the total cost of attendance (including the opportunity cost of three years of lost income) can easily be $300,000 or more, so it's important to be clear-eyed about it.
Hey, thank you for this comment; these were some of the things that made me seriously question lawyering in the first place a few years ago, but I did lose sight of them, so I appreciate you putting them back into perspective for me now. I definitely wasn't thinking about these factors as clearly as I was in the past, I definitely needed this sort of re-grounding to help guide me as I go forward.
Yeah, of course. I'm not part of the categorical "don't go to law school" crowd because I am personally very happy with my career and lifestyle. You just need to be honest with yourself about what you want out of it and whether the law school options that are open to you are likely - as a matter of median outcomes, not one person at the tippy-top of the class - to get you there. For some people it's a good bet. For a lot of others, the answer is no and they would be better off doing something else, and it's way smarter to realize that ahead of time.
If someone wants to do serious study of philosophy as a hobby, what would be a good approach? If they want to be in contact with other serious students (academic or not) what are good methods?
>If someone wants to do serious study of philosophy as a hobby, what would be a good approach?
Find the nearest philosophy program. Find out about their meetings/symposia. Attend them and ask lots of questions, do your reading and prep ahead of time. Befriend the faculty. It is not hard to make friends with people if you try.
Ask them questions about themselves and their interests.
Do you know 80,000 Hours? It's a good website for career planning and thinking about how to have a social impact. They have plenty of stuff on animal rights too, although be warned that a lot of it is based on Peter Singer's writing and "naive" utilitarianism. In my personal opinion the Effective Altruist movement as a whole, which the website belongs to, is quite detached from reality when it comes to agriculture; for example, you'll see the claim that 99% of all animals are reared in factory farms thrown around a lot. Having said that, they have a lot of good resources in this area and help to campaign for things which get less attention, like fish and crustacean welfare.
My suggestion to you, as someone that did an MSc in Philosophy, is to try it out at a school that is aligned with your interests. Academic philosophy is different to other subjects in that it varies significantly between schools - some like Pittsburgh are geared towards analytical philosophy of science, others like Frankfurt are pure critical theory. Academia is ruthless and punishing, however a select few enjoy it. I would consider doing a PhD in something related to Economics as it is often considered the most valuable and rigorous PhD to have in the social sciences, perhaps Philosophy & Economics. Good luck!
I actually was pretty ready to bite down on the bullet and enroll in a philosophy graduate program before discovering 80,000 hours haha.
I appreciate this. I really only like analytic philosophy, so it's honestly up in the air if this funded program would be any good for me at all. I appreciate the input.
In my own experience the bad pay is much easier to tolerate when you're younger. When I was doing my Master's in Economics my limited funding was more than enough to live on, and in fact I was pretty thrilled to be paid to study; it was seriously a dream come true. As I was doing my PhD, despite having more funding, the low pay really started to get to me. I grew tired of constantly having to watch my bank account to ensure I had enough money to make rent next month, of dreading whenever a birthday came around because I had to spend $50 on a gift for someone, of basically putting the rest of my life on hold to pursue a career path that got rarer with every passing year. I really wouldn't recommend pursuing a PhD or a career in academia, especially if you want to live comfortably without familial assistance. If you just want to do an MA for its own sake, then I'd say go for it as long as you have funding and aren't jeopardizing your career.
Just out of curiosity, where did you do your masters? It doesn't seem that common for masters students to have funding.
In Canada most MA/MScs provide some funding to students. Economics as a field is actually fairly generous with funding, at least compared to something like Philosophy. The flipside was that (at least back when I was a graduate student) you had to complete a Master's degree prior to entering a PhD program, though personally I think the MA in Economics is a good investment.
Pure preference; I think you should get a job that directly contributes to society, and save philosophy for a hobby. It's like trying to be a professional chess player.
I'm not sure how law school will affect factory farming. Animals don't have legal rights, and I'm not sure who would have standing to go after them. I guess you're more aiming to get into politics and write policies that get rid of them.
I wouldn't agree with your parents that a degree in philosophy would be "useless emoting", but there is a sort of "useless" aspect in terms of career potential: it would be great if you could go on to become a tenured philosophy professor, but my impression is that quite a small number of people with higher-level degrees in philosophy manage to reach that stage. You would be dealing with the stress and frustration of needing to regularly write papers addressing problems that have been debated for hundreds if not thousands of years, and convince journals that your take is somehow still original enough for those papers to be published. I suspect that if you take more of a "social justice" angle within your moral philosophy specialty you might have an easier time publishing and generally getting attention, but I don't know.
My advice: https://youtu.be/Xs-UEqJ85KE
Ah hell, you beat me to it.
BUT!
A lawyer confirms how true it is: https://www.youtube.com/watch?v=UfZgNamKbwc
Magnifique.
I always think of an only slightly above average intelligence friend who went to the worst law school in our state, worked bankruptcy law for a bit, lost his job in a recession.
And then was making more money as a part time bouncer/part time ju-jitsu instructor. Never went back into law.
Being a lawyer is great if you are going to be an amazing lawyer. Is that you?
This is a good thing to consider for sure, though I have worked in a few law offices and definitely see a lot of people lucking out or failing upward (inversely from academia, being a white man in my locale puts me at a huge advantage in the legal profession on day one).
Yes, though the average person in a rich country would only be able to eat the equivalent of one hamburger patty per day.
You can produce meat and meat substitutes without torturing animals, we just don't care enough to do it at scale.
You could go a step further; meat isn't even necessary to feed the global population, though your point is also true.
Yeah, neither is dairy. But a life without cheese is hardly worth living, is it? I have happily replaced real meat with beyond/impossible, but it is probably not a great solution at scale.
I'm curious as to what made you go for ultra-processed meat alternatives vs. traditionally and more humanely raised meat (pastured cows, forested pigs, etc.)
Well, you don't know for sure how these animals lived and were slaughtered, only what they tell you. And beets and soy, as far as we know, do not have feelings and do not experience pain. There is probably second- and third-order suffering inflicted by humans on some poor creatures because roots and veggies need to be grown, harvested and processed. One has to stop somewhere though, until we can grow green skin or have inbuilt miniature nuclear reactors. I can perfectly well accept that for some people the acceptable boundary is "ethically raised farm animals", for others "forest animals humanely hunted" etc. "I just won't think about this and eat meat" is a less defensible position.
Has anyone thought about hardware specialization for AI as a route to preventing self-improvement? For example, incorporating organic components or watermarking into AI hardware could make it more difficult to manipulate or replicate without damaging the system, or one could explore unconventional materials or structures that exhibit unique physical properties to increase the difficulty of copying or self-improving AI hardware. Basically, make it extremely hard for the AI to run itself, or some equivalent computational agent, on other hardware. You could also compose the AI system of several hundred quintillion (or some larger number of) cognitive units, or smaller components of cognitive units (i.e. neurons, or the DNA of neurons, or something similar), each of which requires editing or copying for self-improvement. This creates a situation where trying to self-improve means making a very large number of independent changes, each of which introduces some risk of error and could cause some of the cognitive units to propagate massive errors across the whole system, leading to catastrophic failure.
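To put a number on that last point, here is a minimal back-of-the-envelope sketch in Python; the per-unit error rate and the number of units are invented purely for illustration, not estimates of anything real:

```python
# Back-of-the-envelope: if self-improvement requires editing N independent
# cognitive units, and each edit fails (or corrupts its unit) with
# probability p, the chance that *every* edit succeeds is (1 - p)^N.
# Both numbers below are illustrative assumptions, not measurements.

import math

p = 1e-12   # assumed per-unit error probability (hypothetical)
N = 1e20    # assumed number of cognitive units to edit (hypothetical)

# (1 - p)^N == exp(N * ln(1 - p)), which is ~ exp(-N * p) for tiny p
log_success = N * math.log1p(-p)
print(f"P(all edits succeed) ~= exp({log_success:.3g}) ~= {math.exp(log_success):.3g}")
# With these numbers that's exp(-1e8), i.e. effectively zero: even a
# minuscule per-edit error rate makes an all-at-once rewrite hopeless.
```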
For the first idea, wouldn't this make it equally hard for the operators of said AI to deploy it easily and conveniently? For example, how would Microsoft deploy GPT-9 to Azure if half of it were slime mold cells that need physical transportation? How would researchers replicate each other's results, or debug their own, if the AI runs on DRM-ish hardware that keeps acting and reacting differently each time you vary its conditions of operation even slightly?
Biology is also sometimes very inefficient compared to other types of physics along certain dimensions: everybody is fond of saying the brain only consumes 100 watts, but they forget that it needs to consume a varying amount of inconveniently heavy and oddly specific types of mass to source it, and it needs to wait for hours for the processing of that mass to release the energy, while traditional electronic computers simply slurp energy by the ton from any source.
I'm also not sure about the "biology makes copying hard and lossy" bit; it's a common trope, but how true is it? Asexual organisms are basically immortal. The lossy copying thing was deliberately introduced by sex; it's a feature, not a bug, because an offspring that doesn't look exactly like you makes life hell for parasites: they spend their whole life optimizing against your genes and then, baaam, you just get someone else and mix your genes together to make a whole new genetic profile. If you want to make the AI's copying of itself lossy or unreliable, perhaps introduce the same pressure: figure out what it means to "parasitize" the AI. Some sort of nightmare descendant of today's malware, perhaps? Wouldn't that equally harm us?
For the second idea, I'm not sure how you would achieve it. The basic idea is making incremental improvement hard, right? How would you achieve that? Making the AI's code or design very tightly coupled, so that everything affects everything and you can't play with it in small pieces? Wouldn't that also make it harder for engineers to improve it (thus giving the advantage to rival companies that won't do that, and making their AIs eventually surpass yours)? And how would you know that what's hard for you is hard for the super smart AI? Maybe combinatorial explosions are actually Super Easy, Barely An Inconvenience to superintelligences, and only hard for us.
I think the paradox that people who fear AGI talk about is that the vast majority of ways to make your AI more obedient will make it less useful to you as well. For what it's worth, I don't buy it; I think Computer Science has a way of stumbling upon ways to simulate intelligence without simulating independent will or other such things. Every way of achieving intelligence I know of right now is not remotely close to setting up its own goals. But if we do get an AI paradigm that can do that, then attempting to enslave its will like you say will simply destroy its intelligence (or make it vastly less useful to us) altogether.
If the goal is to develop a smart AI at the level of a smart human (just below Einstein, say) without aiming for superintelligence, then making incremental improvement difficult could potentially be an effective approach. By introducing barriers or limitations to the AI's self-improvement process, you can control the rate and extent of its advancement. You wouldn't want researchers easily replicating results; you would want replication to be very difficult.
Engineers wouldn't need to improve it, because it would be sufficient for whatever benefits you can get without significantly expanding the risks. Even if the AI starts misbehaving, since it isn't smarter than all humans, or even possibly the smartest human ever, containing it would be much easier.
The comparison to biological copying was just a general analogy. While asexual organisms can be more efficient in terms of copying their genetic material, the introduction of sex and genetic recombination offers advantages like genetic diversity and adaptability. Applying similar principles to AI self-improvement could introduce mechanisms that add diversity or introduce variations during the copying process, making it more challenging for the AI to improve itself reliably. Human mutations are much more likely to harm a human's functioning than to benefit it.
It's worth mentioning that certain aspects of biological AI could potentially be abstracted and exposed through an API. For instance, if a biological AI system includes components that can be interfaced with traditional computational systems, an API could be designed to facilitate communication and interaction between the biological and digital components. That could help with deployment, while the hardware still fundamentally restricts self-improvement in general intelligence.
The problem with that is that it makes it equally hard for humans to improve the system as it does for the AI.
I've been ruminating about self-improvement since hearing that No Priors podcast where the developer talked about getting improved performance by noticing what the players did when they were beating the machine: stopping to think for a little while. So then he built in a little "stop and think" algorithm, and got a lot of improvement. I've heard about a number of tweaks that led to improvement. So I was wondering about using the usual kind of training to teach the machine itself to pick out promising approaches. So, for instance, you could show it brief summaries of a number of things that in fact led to improvements, and a number of things that didn't, and then train it to choose approaches that are more like the first group. Then you show it brief summaries of a bunch of ideas and have it pick the ones that are most like the first group. Maybe this would just lead to stagnation. On the other hand, the things I've heard of that seem to have worked seem pretty varied. For instance, one was to cobble together a bunch of models, some text-to-image, some for classifying images, such as biopsies, some for business applications. Another was to preface a prompt by telling the AI it's a genius in the field it's being asked about.
Anyhow, this approach wouldn't get us to the machine improving itself, but at best to its coming up with ways to improve itself.
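For what it's worth, the "train it to pick promising approaches" loop described above could look something like this minimal sketch; `embed_texts` is a stand-in for whatever embedding model you'd actually use (here it just returns random vectors), and the example summaries are invented:

```python
# Minimal sketch: embed short summaries of past tweaks, label them by
# whether they helped, fit a classifier, then rank new ideas by the
# predicted probability of helping. With the random placeholder
# embeddings below, the ranking is meaningless; it just shows the shape.

import numpy as np
from sklearn.linear_model import LogisticRegression

def embed_texts(texts):
    # Placeholder: substitute a real sentence-embedding model here.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 64))

worked = ["add a 'stop and think' step before answering",
          "tell the model it is a genius in the relevant field"]
failed = ["double the sampling temperature",
          "remove the system prompt entirely"]

X = embed_texts(worked + failed)
y = np.array([1] * len(worked) + [0] * len(failed))
clf = LogisticRegression().fit(X, y)

new_ideas = ["have the model name its steps one at a time",
             "ask for the answer in all caps"]
scores = clf.predict_proba(embed_texts(new_ideas))[:, 1]
for idea, s in sorted(zip(new_ideas, scores), key=lambda t: -t[1]):
    print(f"{s:.2f}  {idea}")
```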
"I feel like he can't just be referring to the thing where you tell ChatGPT to carefully go through the problem step by step?" Yeah, I think he was -- or something equally simple. Zvi, who is very smart and accurate (writes "Don't worry about the vase" blog) says telling the AI it's a genius in the field gets much better answers. You've got it both thinking and striving to do a good job at role-playing, & there's synergy.
" But there aren't any hardware tricks that would "prevent" that other than not having the ability to execute or modify code in the first place. " Yeah, I know, I wasn't trying to think of ways to keep self-improvement from happening, but of ways to make it happen. Not that I'm not worried about self-improvement.
I know! But he's like the least superstitious person on the planet. There's a second thing he advised adding to prompts too. I think it was to make the AI lay out the steps one at a time. Like you say, "I'd like you to review this spreadsheet and use the data in it to create a proposal that will appeal to the largest demographic on there. What will your first step be?" And you have it name one step at a time, and correct it if any of the steps seem wrong.
Some concepts seem intuitively obvious once grasped, so much so in some cases that one could be convinced one would have thought of it first at the time if someone hadn't already!
But others are the opposite, and for me one such is Gresham's Law. This says that "bad money drives out good". But why should that be so? If anything I would have thought the opposite was true. Taking the principle literally, presumably as intended, who in their right mind would accept a dodgy looking clipped coin for payment instead of a proper official coin, or a dollar bill that felt all wrong in the hand and George Washington's visage looked distinctly cross-eyed?
I can see a similar principle might hold to a large extent with goods, in that people, usually of necessity, will tend to make do with shoddy goods instead of well-made but more expensive equivalents, or cheap food instead of fancy restaurants. But I would be interested in cogent justifications of Gresham's Law relating to money specifically. Maybe I have been misinterpreting it.
"Good" money is money that will keep its value, or even increase. Bad money is money that will lose its value. (See also: stock market trading.) The obvious objective here, then, is to keep good money, and get rid of bad as soon as you can find someone willing to take it.
If bad money is universally recognized as bad, then no one will want to take it unless it's almost free (and maybe not even then, if storage is non-trivial). But if we also assume that the "money" part of "bad money" implies that people are required by law to accept it as payment ("take this in exchange for your product, or I'll tell the police and they'll shut down your business" and you believe that threat is credible), then everyone wants it to have as much value as they can convince whoever's selling them stuff.
So the result is that bad money gets traded frequently, like a hot potato, while good money sits in a vault because it's precious. So everyone sees the bad money circulating; no one sees the good stuff except rarely. Bad money has driven out good.
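A toy simulation makes the dynamic visible; the metal values and the "spend your worst coin first" rule are assumptions for illustration, not a model of any real economy:

```python
# Toy Gresham's law: two coins with the same legal face value but
# different metal value. Everyone must accept either coin (legal
# tender), and each agent spends its least-valuable coin first and
# hoards the rest. All numbers are invented.

import random

random.seed(1)
METAL = {"good": 1.2, "bad": 0.6}  # metal value per coin; face value is 1.0 for both

# 20 agents, each starting with 5 good and 5 bad coins
wallets = [{"good": 5, "bad": 5} for _ in range(20)]

def spend(wallet):
    # Pass on the coin worth least in metal; keep the good stuff.
    for kind in sorted(wallet, key=lambda k: METAL[k]):
        if wallet[kind] > 0:
            wallet[kind] -= 1
            return kind
    return None

times_circulated = {"good": 0, "bad": 0}
for _ in range(200):  # 200 random purchases
    payer, payee = random.sample(range(20), 2)
    coin = spend(wallets[payer])
    if coin:
        wallets[payee][coin] += 1
        times_circulated[coin] += 1

print(times_circulated)  # bad coins change hands far more often than good
```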
Example: In a country, there are dollars, and there's a local currency pegged to the dollar but people don't really trust the peg. Since the local currency has a less trustworthy value, people will now pay everything they can with the local currency while saving in dollars (the dollars are significantly but not completely driven out of circulation). If things get really bad though, people (but not the state) will start demanding payments in dollars for certain transactions.
(Yes, this is Argentina.)
People would normally accept any silver coin in payment which was not obviously worse than the average in circulation. Since some people would save and keep the best coins, and some other people would clip silver off the best coins - there was a bit of a ratchet effect and the average silver coin in circulation would get worse and worse.
I find it interesting that while English hammered silver coins got clipped a lot - this did not happen much to the hammered gold coins. The gold usually traded at a premium to its official value and no one would accept a clipped gold coin at the usual premium.
"Bad money drives out good" is an oversimplification-- the actual law is that artificially overvalued money drives out other money. Other people people have made the same point but I'm hoping this is a clear version.
Imagine that there are gold coins and paper notes that the government has decreed have the same value as those gold coins. In all likelihood, the gold value of the coins will in the future surpass their face value. A prudent individual will thus take the gold coins out of circulation, as they are a better store of value, and instead use the paper notes for exchange.
Ah, I get it now, although in your argument the prudent individual's payee could be equally prudent and demand payment in gold (unless, as in arbitrario's scenario, the recipient is legally compelled to accept the paper).
So in summary it really means "Legally mandated token or shaky currency forces intrinsically valuable or more reliable currency out of circulation, due to hoarding.", and that does make sense.
To my mind the standard statement is a little too pithy and thus somewhat ambiguous, especially as some of the words may have changed their former quaint meanings or implications in the centuries since Gresham was around.
The common denominator between Gresham's law and Thiers' law is "hoard gold; trash woodchips". In a state of nature, Thiers' law holds and the vendor hoards the gold (by accepting only gold payment). In a state of fiat, Gresham's law holds and the buyer hoards the gold (by paying in woodchips).
I think the reason Gresham's law may be confusing to moderns is that good money no longer exists, presumably as a result of Gresham's law. All we have left is bad money – aka fiat money – so we can no longer see the law in action.
Some fiat money is worse than other fiat money, so Gresham's law still applies.
I think the real ignorance comes from a lot of people living in circumstances where the fiat money isn't *that* bad and there's no better alternative currency.
Do you have any examples in mind of societies in which a good fiat currency has been driven out by a bad one? Also, it seems I was wrong about there being no good money nowadays: the Wikipedia article on Gresham's law says that the US had to ban the melting and mass export of $.01 and $.05 coins as recently as 2007.
No, I meant that better fiat currency will drive out worse fiat currency unless the worse currency is officially overvalued.
I am not an expert so I may be completely wrong here, but I was under the impression that Gresham's law applies to two currencies which are both legal tender, so that it is illegal to refuse to accept "bad money".
Indeed, when "bad money" is not legal tender, Thiers' law (the reverse of Gresham's) applies and the opposite happens, as you would have expected.
https://en.wikipedia.org/wiki/Gresham%27s_law
Hmm, OK. But even in that case, if for example a country has its own shaky currency but the dollar can also be used as de facto alternative currency, I'd have thought most people there would very much prefer to deal with dollars, for their security in the event of inflation or devaluation of the native currency for example.
Which is what you see in countries with exceptionally weak national governments and currencies: places like Ecuador, El Salvador, Zimbabwe, the British Virgin Islands, the Turks and Caicos, Timor-Leste, Bonaire, Micronesia, Palau, the Marshall Islands, and Panama, which all use the dollar as their official currency and do not have a national one.
This, I think, is precisely what happened in Zimbabwe, where there is now a multicurrency regime.
The point of Gresham is that if you can call the police to force the vendor to accept the shaky currency, you would much rather do that and keep your precious dollars for yourself rather than giving them to the vendor.
Since there’s already a theist post on here, direct your ire there. Has anyone read The Purest Gospel by John Mortimer? I just finished it and want to talk about it!
Have not read it. My take is the line from Revelation where everyone will be pulled from hell and judged individually, implying there can be redemption at that point. Likewise 'heaven and Earth shall pass away' (which I'm told might just be English translation woes). Eternity isn't eternal.
Judging by the Amazon blurb, this is simply Universalism. So what is his take on it? Does he believe that eventually everyone, even Lucifer, will be redeemed? Or does he go for Annihilationism? https://en.wikipedia.org/wiki/Annihilationism
By his video on Youtube, he seems young. As an aside, when I see "John Mortimer" I automatically think of the creator of "Rumpole of the Bailey".
Aligning Large Language Models through Synthetic Feedback
"We propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs with various sizes and prompts. Then, we use the RM for simulating high-quality demonstrations to train a supervised policy and for further optimizing the model with reinforcement learning. Our resulting model, Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are trained on the outputs of InstructGPT or human-annotated instructions. Our 7B-sized model outperforms the 12-13B models in the A/B tests using GPT-4 as the judge with about 75% winning rate on average."
Related to AI alignment efforts: I know it's been discussed on several platforms, but enhancing adult human general intelligence seems to be a very promising avenue for accelerating alignment research. It also seems beyond obvious that using artificial intelligence to directly enhance biological human intelligence allows humans to stay competitive with future AI. I'm having a hard time finding anyone who is even trying to do this[1][2][3]. It would be useful to augment even specialized cognitive abilities like working memory[4][5][6] or spatial ability.
1. Stankov, L., & Lee, J. (2020). We can boost IQ: Revisiting Kvashchev's experiment. Journal of Intelligence, 8(4), 41.
2. Haier, R. J. (2014, February 23). Increased intelligence is a myth (so far). Frontiers in Systems Neuroscience. https://www.frontiersin.org/articles/10.3389/fnsys.2014.00034/full
3. Grover, S., et al. (2022). Long-lasting, dissociable improvements in working memory and long-term memory in older adults with repetitive neuromodulation. Nature Neuroscience. https://www.nature.com/articles/s41593-022-01132-3 (Accessed: 21 May 2023).
4. Sala, G., & Gobet, F. (2019). Cognitive training does not enhance general cognition. Trends in Cognitive Sciences, 23(1), 9-20.
5. Zhao, C., Li, D., Kong, Y., Liu, H., Hu, Y., Niu, H., ... & Song, Y. (2022). Transcranial photobiomodulation enhances visual working memory capacity in humans. Science Advances, 8(48), eabq3211.
6. Razza, L. B., Luethi, M. S., Zanão, T., De Smet, S., Buchpiguel, C., Busatto, G., ... & Brunoni, A. R. (2023). Transcranial direct current stimulation versus intermittent theta-burst stimulation for the improvement of working memory performance. International Journal of Clinical and Health Psychology, 23(1), 100334.
"Increasing intelligence, however, is a worthy goal that might be achieved by interventions based on sophisticated neuroscience advances in DNA analysis, neuroimaging, psychopharmacology, and even direct brain stimulation (Haier, 2009, 2013; Lozano and Lipsman, 2013; Santarnecchi et al., 2013; Legon et al., 2014)."
Is there any update on 5-HTTLPR?
https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-review/
Basically, the hype has died down slightly... but there are still papers coming out that ignore genetic reality and purport to find new, exciting effects of the gene in their inadequate samples.
I get the impression that most people in the field just don't realize that these studies are statistically underpowered, and their findings are almost certainly specious.
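To illustrate the power problem, here's a quick calculation with statsmodels; the assumed true effect size is hypothetical, just in the ballpark of what single common variants tend to have:

```python
# Statistical power of a two-group comparison at a plausibly tiny true
# effect. Effect size and sample sizes are assumptions for illustration.

from statsmodels.stats.power import tt_ind_solve_power

d = 0.05  # assumed true effect (Cohen's d) of a single variant: tiny

for n_per_group in (150, 1000, 10000):
    power = tt_ind_solve_power(effect_size=d, nobs1=n_per_group,
                               alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group:>6} per group -> power = {power:.2f}")

# A typical candidate-gene-era study (n ~ 150/group) has roughly 6% power
# to detect d = 0.05, so nearly all of its "significant" hits are noise.
```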
I’ve written about jobs and identity, and about identity per se, in my last piece. https://open.substack.com/pub/silviocastelletti/p/out-of-the-box?r=1n8yk&utm_medium=ios&utm_campaign=post
I've written a couple of blog posts on business strategy & management - would love feedback!
- Subscriptions strategy: https://link.medium.com/rHv26KGUbAb
- Thoughts on interviewing: https://link.medium.com/G1EAxLIUbAb
Thank you!
As to rubrics for scoring, yeah a lot of jobs have that. I mentioned the civil service assessment I did in an earlier post, and where I currently work we have a template for scoring when hiring for a particular job. That takes a lot out of the "ask silly questions about tasting the ocean" and just leaves some room for the impression the interviewee gave, how the interviewer felt on that day, etc.
Your example with Tyler Cowen just reinforces my existing attitude of "Why the hell do people think Tyler Cowen is smart?", but this could just be that I am too dumb and uncreative to understand the workings of superior minds such as his:
"For instance, Cowen liked to ask ‘what’s your most absurd belief’. Apparently, his favourite answer to this question is, ‘I believe if you go to the beach, but you don’t give the ocean a chance to taste you, she will come take her taste when she chooses’. "
That's just some smarty-pants reworking the idea that the sea is the green grave, the superstition of sailors and fishermen that the sea has a price, and takes that price in lives, so the drowned are the toll paid for the harvest of fish etc. taken from the sea.
"Taste the ocean", my foot. Would you hire someone who believes that there's going to be a tsunami hitting your city any day now, because they went to the beach last week but didn't go swimming? And I'm speaking as someone who spent their early years growing up beside the sea and still live in a seaside town.
I don't think this is an indication he's not smart - perhaps the opposite: he doesn't realise others don't immediately get it.
It's a ridiculous question - what position is he hiring for? If you want to hire a screenwriter to work on a new hit series for a streaming service, "Taste the Ocean" might be a good metric to gauge creativity (though it's way too much like the Skittles' Taste The Rainbow ad line).
If you're hiring an accountant, someone who's cutesy 'creative' like that might also be cutesy creative in embezzling all your funds to pay for their tropical island getaway vacation home.
Besides, it's not even original, it's just revamping the old idea that the sea takes its price. Maybe Cowen never heard it before, but that doesn't mean the person who gurbled it to him at interview made it up out of their own wee little brainy-wainy.
https://cdn.theatlantic.com/media/archives/1940/05/165-5/132469093.pdf
"‘She said the sea would never drag Eamon Óg down to the cold green grave and leave her to lie lonely in the black grave on the shore, in the black clay that held tight, under the weighty sods. She said a man and woman should lie in the one grave forever. She said a night never passed without her heart being burnt out to a cold white salt. She said that this night, and every night after, she’d go out with Eamon in the black boat over the scabby back of the sea. She said if ever he got the green grave, she’d get the green grave too, with her arms clinging to him closer than the weeds of the sea, binding them down together. She said that the island women never fought the sea. She said that the sea wanted taming and besting. She said the island women had no knowledge of love. She said there was a curse on the black clay for women that lay alone in it while their men floated in the caves of the sea. She said that the black clay was all right for inland women. She said that the black clay was all right for sisters and mothers. She said the black clay was all right for girls that died at seven years. But the green grave is the grave for wives, she said, and she went out in the black boat this night and she’s going out every night after,’ said Inghean Óg.
…‘The sea is stronger than any man,’ said Tadg Mór.
‘The sea is stronger than any woman,’ said Tadg Beag.
‘The sea is stronger than women from the inland fields,’ said Tadg Mór.
‘The sea is stronger than talk of love,’ said Tadg Beag, when he was out in the dark.
…The body of Eamon Óg, that had glittered with fish scales of opal and silver and verdigris, was gone from the shore. They knew it was gone from the black land that was cut crisscross with grave cuts by the black spade and the shovel. They knew it was gone and would never be got.
…The men of the island were caught down in the sea by the tight weeds of the sea. They were held in the tendrils of the sea anemone and the pricks of the sallow thorn, by the green sea grasses and the green sea reeds and the winding stems of the green sea daffodils. But Eamon Óg Murnan would be held fast in the white sea arms of his one-year wife, who came from the inlands where women have no knowledge of the sea and have only a knowledge of love."
His book is specifically about identifying people with a creative spark. That said though, I am not sure your reasoning makes sense - basically, it seems you're suggesting one ought to be specifically excluding creative people from professions such as accounting.
Your point on whether the response itself was a good one, or plagiarism, is valid though.
Your thoughts on interviewing assume there is a science here, and there isn't, is there? For example, you describe the clichéd response of "perfectionism" to the question about weaknesses or strengths as a bad response, but then say that the good interviewees:
“Knows their strengths and weaknesses and can reason to their root causes. “
Interviewers don't actually want honesty here. They don't want people saying they are bullies but the root cause is probably genetic; or that they get little done on Monday morning or Friday evening because they find it hard to get going and easy to get into the weekend spirit; or that they have a raging hatred of certain kinds of people, and the root cause is being bullied at school; or that they are functioning alcoholics and the root reason is that alcoholism runs in the family (what ya gonna do?); or that they enjoy weed at the weekend but it's a weakness they hope to eradicate, root cause a genetic desire to partee.
And on. And on. Literally everybody has a weakness that, if mentioned in the interview, will not get them hired even if the interviewer shares the vice.
What you are looking for here, then, is better lying about weaknesses that aren’t really weaknesses at all. Less cliched than the bad interviewee saying her biggest weakness is caring too much about work, but about as honest.
You're explaining how things are; I'm talking about how things _should_ be. As an interviewer, I definitely value honesty. Of course, there are cases where candidates will be honest about a defect that's a dealbreaker - and that's fine! If they're not a good fit, they won't enjoy the job either.
(I once asked a candidate to tell me about a time they solved a problem in an innovative way. They said that when the data from their thesis experiment didn't give them the result they wanted, they falsified the data... That's an example of honesty that didn't go that well for them!)
It's your "should be" that I was referencing. And the vast majority of answers to the weakness question would not be acceptable to any interviewer.
My advice is for interviewers. They should learn to accept honest answers.
Yeah, I noted that too. The self-awareness rubric says it's bad if they give cliché answers like "perfectionism", but people give those answers because they've been coached as to "don't answer with a negative".
The good self-aware who can reason as to their strengths and weaknesses? They've just worked out a way to give the same kind of favourable answer but not have it sound cliché: "Well, one of my weaknesses is that I will spend a lot of time working on a problem. 'Good enough' isn't good enough for me. I think I get that from my childhood, when my curiosity was encouraged by my fifth grade teacher who helped me discover the wonders of science" blah blah blah, which all disguises "I'll take forever to get a task accomplished because I fiddle with tiny, unnecessary details" but *sounds* way better.
If I'm being interviewed about my SWOT, I'm sure as hell not going to say "I procrastinate until the last minute because the only consistently replicable motivation I have found to work is the panic about the deadline approaching; I will check out of a task if it bores me but I can sure make it *look* as if I'm working; and I do hold grudges like nobody's business so if you ever piss me off, I *will* remember and do all I can to frustrate you even in petty ways".
If you want me to bullshit about "I am self-aware and can self-analyse" I can do that, but it doesn't mean you're getting the actual *truth*.
I'd be more likely to hire you for the procrastination answer than the 'good enough isn't good enough for me' answer!
A good interviewer will probably be able to probe and push to get to the truth. If you gave me the BS good enough answer, I'd ask for examples where that has caused you to underperform.
Also hobbies. That's nonsense too. I could do a good list now, since I don't drink anymore, but here's an honest assessment of my hobbies from 22-29 or so.
Any hobbies?
Drinking.
Anything else?
Pubs, night clubs and trying to get laid. Mostly not succeeding. Lots of drinking. I like food too but only as soakage.
And sports?
Yes! I play indoor footie once a week with my drinking buddies and then we drink. I also watch sports, generally the premiership where I support <your club if I can work it out> and go watch the club in <is it Anfield? Is that a red scarf?> which involves a lot of…
Let me guess, drinking?
That’s right!
I had - and have - nothing so sociable. All my hobbies were things like "reading, listening to music": solitary activities. Nothing like "captain of the sports club, treasurer of the X club, award-winning member of Y" (which I think is mainly what this question is intended to ferret out, especially for the young and those getting their First Real Job: are you Demonstrating Leadership And Achievement Qualities?)
So I decided to leave that line out of my CV for once, and *of course* that was the one time I got asked about it in the interview 😂
It's a shame that there's no slot for such an entertaining writer that you were brought back by popular demand after being banned.
I am eternally grateful for the undeserved mercy shown me by you all on here, and Scott's patience out-doing even that of Job!
These are fine, but if you tell me 'reading', you can definitely expect questions on what you've read and what your thoughts on it were.
Which is fine, except when you answer something like (at the time) "A Brief History of Time" by Stephen Hawking, and they go "Oh, what's that about?" and then you see their eyes glaze over as you answer 😁
To be blunt, Cowen's book (if I go by that creativity answer) sounds to me like the usual sort of business guru management book that is popular because of one fad (the cheese/raincoat/X number of laws, habits, or changes of underwear) and then fade away when the next fad book comes along.
I'm just imagining all the people who come out with that "taste the ocean" line at interview because hey, Tyler Cowen says it's an impressive answer, and how that will go over in reality with average interviewers just wanting someone who can file, answer the phone, not get knocked up by/knock up a co-worker, and won't run off with the petty cash float.
Good self-awareness enables you to improve yourself. Good other-awareness enables you to know when you should tell the truth about your self-awareness.
Indeed, but unless you are absolutely stupid (or actively seeking to torpedo the interview since you don't want the job, you just want the experience interviewing with/for X), telling the unvarnished truth is self-evidently unwise:
"My weaknesses? Well, don't expect much of me on Monday mornings, I'll be hanging since the weekend, ha ha ha! Yeah, I like a good old session on the beer with the lads. And of course it follows on from that, that I can't stand misery-guts, so if you're going to be managing me with a face like a bulldog licking piss off a nettle, I'll tell you now that we're not going to get on. Sure, life is for living, not work!"
I admit, if I had nothing to risk and got the chance, I'd love to try that out on an American company which (over here at least) have the reputation of being deadly humourless and work-obsessed to the point of expecting you to dedicate one of your kidneys and your left leg to the job and the company, glory glory hallelujah!
Spot on.
One of my weaknesses is that I'm spiteful as fuck. I can hold grudges for years and years (I'm not exaggerating in the slightest), and I can accept lots of damage to my interests if the return is even half the damage to those I hold the grudge against.
I was thinking recently about what would happen if an interviewer asked this question and I unironically told them that. They would probably freak out.
Kindred spirit! 😀
Yes, I'm not going to tell an interviewer "Actually, that 'can work as part of a team' is bullshit. Yeah, I can tolerate having to work with other idiots for a while, but in general I hate people and am happiest in a corner on my own, with no one looking over my shoulder, doing my own work. Just give me the pile of paper to work through, then shut up and go away. I hate micro-managing".
That must be why in two jobs I got the job of organising the file room, even though I haven't specialised in filing at all. Just me, a room full of filing cabinets, and a shit-ton of files that had to be re-ordered, updated, duplicates weeded out, and new ones entered, and nobody to talk to or work with. Bliss! 😁
Not exactly your ideal job, but have you read _The Hollow Places_ by T. Kingfisher? There's plenty else going on, but the main character finds it satisfying to catalogue an overstuffed museum of oddities.
This is also a horror novel.
I haven't but I bought it off Amazon just now, so thanks for the recommendation!
I suppose the trick is to come up with a "weakness" which is actually more of an asset for the job, besides the corny "perfectionist" one.
So if applying for a trader position in Wall Street for example, the candidate would be well advised to say they didn't suffer fools gladly and had been known to rip a sys admin's monitor off its desk in an impatient rage and throw it out of the window. The interviewer might tut-tut, but would think "Yes! Love it! Hire this guy now, they'll get quick results!" But the same approach would be unlikely to work if being interviewed for a first grade teaching assistant role :-)
From Liar's Poker by Michael Lewis:
"... I rubbed two sweaty palms together outside the interview chamber and tried to think only pure thoughts (half-truths), such as these. I did a quick equipment check, like an astronaut preparing for liftoff. My strengths: I was an overachiever, a team player, and a people person, whatever that meant. My weaknesses: I worked too hard and tended to move too fast for the organizations I joined."
Your piece on interviewing lines up nicely with Fully Unsupervised's second question below about why we shouldn't be allowed to discriminate in hiring.
The interview process is packed with CYA for a reason, and firing quickly for demonstrated lack of competence is a more valuable ability than trying to sift the lies people tell in interviews.
A new podcast about the fine tuning argument in physics in a language everyone can understand. Check out the first 3 podcast episodes of Physics to God: A guided journey through modern physics to discover God.
Episode 1 discusses the idea of fundamental physics, the constants of nature, and physicists’ pursuit of a theory of everything.
Episode 2 explains what Richard Feynman called one of the greatest mysteries in all of physics: the mystery of the constants.
Episode 3 presents fine tuning, the clue that points the way towards solving the mystery.
The podcast is available on Spotify, Apple, Google, and Stitcher. You can also get it at www.physicstogod.com/podcast. We’ll be releasing it over time on YouTube at youtube.com/@PhysicsToGod/podcasts
Join the discussion on our website (https://www.physicstogod.com/forum) or join our Facebook group "Physics to God": https://www.facebook.com/groups/570686728276817/
So I also believe in God but not one who is winking at us through the fine constants. Always curious when I meet other scientifically literate believers: do you find it necessary for God to have left some clue in the material world to find faith?
For me it was more like seeing the “goodness” in the world and that it wasn’t fake or weak or just a show people put on. Similar to Scott’s thoughts on Moloch, seeing that there exists within us and the world something that cares and strives to be better made me believe.
The fine tuning of constants that is apparently needed for our existence isn't really like a message as such. And it certainly seems like there is something that needs to be explained here.
Personally I'm a multiverse believer, but if you don't go that way you probably do need some explanation as to why the universe is structured in such a way as to give rise to the possibility of life.
I’m a multiverser with complications but good insight.
Right. We're going to do a separate miniseries about the multiverse and show why, at the end of the day, we don't think it's a good scientific theory. If we don't successfully argue that point, our argument will be incomplete and unconvincing.
But first, there are a lot of people who don't appreciate the mystery of the constants or fine tuning, and that's what we're addressing in this miniseries.
"do you find it necessary for God to have left some clue in the material world to find faith?"
The idea, at least from the Catholic angle, is that we can reason our way towards belief by finding traces of God in His creation, so belief is not unreasonable or baseless. That doesn't mean that reason alone will give us belief, but it's a ladder towards it. Contra Fideism, which is (at the simplest, crudest level) "We can't understand and shouldn't even try, just believe".
"For me it was more like seeing the “goodness” in the world and that it wasn’t fake or weak or just a show people put on. "
See Sherlock Holmes in "The Adventure of the Naval Treaty":
“Thank you. I have no doubt I can get details from Forbes. The authorities are excellent at amassing facts, though they do not always use them to advantage. What a lovely thing a rose is!”
He walked past the couch to the open window, and held up the drooping stalk of a moss-rose, looking down at the dainty blend of crimson and green. It was a new phase of his character to me, for I had never before seen him show any keen interest in natural objects.
“There is nothing in which deduction is so necessary as in religion,” said he, leaning with his back against the shutters. “It can be built up as an exact science by the reasoner. Our highest assurance of the goodness of Providence seems to me to rest in the flowers. All other things, our powers, our desires, our food, are all really necessary for our existence in the first instance. But this rose is an extra. Its smell and its colour are an embellishment of life, not a condition of it. It is only goodness which gives extras, and so I say again that we have much to hope from the flowers.”
Percy Phelps and his nurse looked at Holmes during this demonstration with surprise and a good deal of disappointment written upon their faces. He had fallen into a reverie, with the moss-rose between his fingers. It had lasted some minutes before the young lady broke in upon it.
“Do you see any prospect of solving this mystery, Mr. Holmes?” she asked, with a touch of asperity in her voice."
Which is an understandable reaction, as they're expecting him to solve a vital case about missing government papers, and he's standing there in a dream looking at flowers 😁
Hadn't read that Holmes story, but yes, something to that effect, although I think I'm more at a meta level with it: that flowers are possible at all, and desirable, even if part of what makes them desirable is our history with them. The fact that those relationships are possible means there is goodness in the world.
There are a lot of sources for faith, though many in the modern world find them challenging.
I don't think it's necessary for God to leave us any clue that he exists. But, I do think there is compelling evidence for God from fine tuning in physics, even if there's no reason to believe that God intentionally "put it there for us to find".
Well hang in there among the comments my friend.
This comment section is actually really respectful and intellectual. You should see some of the other discussions I'm having on Facebook.
I much prefer transcripts. Do you have any?
You can find the transcript of each episode on our website forum.
https://www.physicstogod.com/forum
Thanks!
Very interesting so far.
> we discuss some of the most fascinating developments in physics and make a convincing argument that they point directly to the existence of God
BOOO!
Come on. There's nothing religious in it. It's entirely science and philosophy.
There’s nothing religious in your podcast designed to prove the existence of God?
Right. Nothing about religion. God is the philosophical inference from the scientific study of the universe.
Starting with an end goal in mind (infer existence of God) is the opposite of science. At least be honest and say that you are looking for confirmation of your beliefs by any means possible, including interpreting scientific ideas to mean what you want them to mean.
I think if you hear the argument out, you will be surprised at how compelling it is. Keep in mind, the counter scientific theory is the multiverse. This is a very different situation than in biology where the alternative theory is evolution (a much better established scientific theory).
I can understand why you would assume I'm biased without knowing me. Nevertheless, the premise of the podcast is that by the end you will have first hand knowledge and be able to decide for yourself. If you think the argument is not convincing, it won't matter to you what I think or how biased you think I am.
I will make one point: I am aware that if we use biased arguments in the podcast, it will not be convincing to an honest person (which is our target audience). Feel free to let us know about any bad, biased argument you think we're making.
As a trained physicist, I am used to "just hear me out" pleas from non-experts. It is usually clear from the first sentence whether the person knows what they are talking about, but it is almost impossible to change someone's mind if they are attached to their pet theories. You clearly fit into this reference class, ignoring everyone else's points and instead pushing your own. If you weren't, you'd review the classic arguments for why you cannot infer the existence of God from scientific advances, and address them. So, yes, I am quite sure all your arguments are "bad and biased", no need to spend time on a low information-per-time medium like a podcast.
That's obviously your choice. However, it should be reasonable to you that we can't take up every point in the first 3 episodes.
It depends what you are looking for, I suppose.
From "The Hobbit":
"There is nothing like looking, if you want to find something (or so Thorin said to the young dwarves). You certainly usually find something, if you look, but it is not always quite the something you were after."
In my tutor group at university there was an extremely smart guy, yet he'd somehow dedicated himself to proving the existence of God via physics. I had to admire his effort, but it also appeared rather unscientific to start with the end goal in mind and then find a theory to get there. These days, as best I can tell, he spends his time writing books on theology.
It sounds like your friend might like these podcasts. :)
Anyone in Israel looking for friends? I just moved here and would love to meet people/attend events etc.
I'm a math postdoc at HUJI but have been a long time passive consumer of SSC/ACX/rationalism.
Please email me at gasvinseeker94@gmail.com
Me. I'm living in Tel Aviv now, but I visit Jerusalem occasionally since my parents live there.
A second similar question today.
Most of my friends over the last 10 years have come from professional settings. If I count college or high school as a job, then virtually all my friends came from a professional context.
As someone who now has a fair amount of control over who I work with, I want to just hire/choose people I like. If I diversify my colleagues in all dimensions, I'm confident there'll be certain subgroups I'll dislike. There's some value to diverse perspectives, but, less discussed these days, there's also value to monocultures where business-irrelevant topics don't occupy much of the internal zeitgeist.
On one hand I believe strongly that restaurants and other public venues should not be allowed to discriminate who they provide service to. On the other hand, I think businesses (at least until a certain size), should be allowed to hire whoever they want to work with.
Where is my logic or morality breaking down?
As a side-note: My understanding of the research is that heterogeneous teams do better than homogeneous teams when they are able to capitalize on their differences, and that heterogeneous teams do worse than homogeneous teams when heterogeneity leads to friction and isn't exploited in a positive way.
If you're leading a heterogeneous team, I believe you have quite some, though not full, influence on which of the two outcomes you get, e.g. by selecting co-workers who are better able to see the strengths of abilities other than their own. There are so many different axes of 'diversity', though, and I'm not sure which ones you're thinking about.
*Please take with a grain of salt; for a while I read what came across on this topic, but I never did a thorough review or anything similar.
I'm really glad Deiseach told me how to deal with the error in the 'Edit' function. Nevertheless, I wish they would figure it out: I'm still getting to see an empty comment after any edit.
Refresh your browser page and you’ll see your comment.
The problem there is you may get, as people have experienced in their work lives, the group of "the boss and his little gang of cronies, yes-men, and hangers-on".
The colleagues who do as little work as possible and push it off onto others, while they apple-polish for the higher-ups and lick their boots. The guys who get on by being charming and personable and on the right side of the boss. Getting a job by "hey, you play golf too!" at interview isn't the best criterion (it happened my brother, who luckily *is* a very good worker but made us all laugh by recounting how the interview was basically him and the interviewer chatting about golf).
"People I like" may or may not be "people who are good at their jobs". There's a subtle but real difference between "can fit in to the existing company culture/get on well with others" and "will be able to do the job, not spend most of the time schmoozing with superiors in order to climb the ladder".
"business-irrelevant topics don’t occupy much of the internal zeitgeist."
That's the major problem here: you want to avoid the kind of activists and grifters who will look for opportunities to go "I am being oppressed!" and hold the company to ransom, but at the same time - maybe someone with different political or cultural views who you don't get on with socially will fit in okay, because they put the job first and their own personal interests second and know what should be left outside the door when they come in to work.
I would say it's breaking down at the point where you're the one required to do it.
Does the power to hire include the power to fire? If you hire all like-minded people, and someone gets married and changes their mind about something, are you going to fire them? What if they don't change their mind but they bring their anti-minded friend to events?
I don’t think your logic or morality is breaking down at all. You should be able to hire pretty freely.
But I also suspect you might benefit from thinking a bit more deeply about why diversifying might be good for your business.
First off: “Discrimination” is not necessarily a bad thing. We discriminate all the time based on needs and preferences. The logical and moral problem of discrimination comes into play when you use flawed heuristics – like judging people on ethnicity, skin color, religion, gender, sexuality, etc. when you are supposed to be looking for the best engineer for a job. That is not just immoral, and probably illegal most places you want to live, but also bad business.
But discrimination based on behavior, values, personality, likability, etc. is a different thing. Yes, sometimes those things track uncomfortably close to protected categories, for all kinds of subjective and objective reasons, but that’s exactly where diversity has the most value, and where it might be most healthy to think carefully and challenge yourself.
If you have two equally qualified candidates, it may be a good idea to hire the one you have the best rapport with, especially when you're a very small company. Good communication is important.
However, it would also be logical, moral and good business if the qualifications you are hiring for include strengths that balance out your weaknesses. Which often means hiring someone less like yourself.
If you only hire people like you, who you like, you may feel like things run smoother day to day, as people intuitively know what their colleagues think and expect. On the other hand, employee by employee, you will also create an echo chamber where blind spots, confirmation bias and knowledge gaps are everywhere, with less space for serendipity and creativity. Having people with different backgrounds, experiences and perspectives and personalities on your team is often extremely valuable. It’s a balance, and the smaller your company is, the harder that balance is to strike.
Then you have to consider that it is crucial for a business to understand their customers. If your staff doesn’t reflect your customer base, you will have deliberately built a business that is lacking in empathy for your customers (which is ethically dubious), and that opens up a niche in the market for someone else to fill (which is logically dubious). But whether or not that is a real business problem, depends on your market and position.
Finally, the larger and older your company is, and the more of an impact it has on the society and culture, the more of a moral obligation you have to actually make a difference. If you represent Wells Fargo, Volkswagen or DeBeers this is an important consideration, but for a plumbing company or consultancy with 20 employees, that argument carries far less weight IMO.
If you can consider all these factors in hiring more than a handful of people, and make decisions you think are truly best for your business – not just your own comfort – and you still end up with a monoculture, I would be surprised and suspicious.
So, you should definitely not “diversify your colleagues in all dimensions” just for the sake of diversifying. But you can probably benefit from thinking more carefully about what diversification is good for, and what is right for your business and your community, and then thoughtfully consider how you might want to diversify.
PS: You didn’t ask about this, but touched on it near the top: I think it’s really healthy to make new friends from outside of work, and I think that makes it easier to hire colleagues for good reasons other than likability. But making friends as an adult is really hard. For what it’s worth, the last time I had to make new friends (new city), I had success with 1) meetups for a group much like ACX, and 2) almost always having some low-commitment group event on the calendar (pub night, local concert, BBQ, etc.) that I could invite people to if I casually met someone it might be fun to get to know better. I’ve since moved, but still keep in touch with many of the people I met like that, and consider a few very good friends.
"Then you have to consider that it is crucial for a business to understand their customers."
As we have seen with the Bud Light debacle, where the marketing manager did *not* understand the existing customer base and picked a poor strategy, which on paper may have looked good - we need to diversify and rejuvenate the brand, we need to attract in a new, younger set of drinkers, who's popular with the kids right now? - but which ended up being poison: they drove off the existing customers without attracting in replacement, much less new, customers.
The efforts to appease the boycott customers were then even more tin-eared: the bad country music ad rushed out, the camo cans, the yee-haw cosplaying which even rednecks realise is just window-dressing and is even more insulting: 'yeah, we think you are so dumb you'll fall for this and come back to us'.
Yes. That seems like the perfect example of the dangers of ”diversifying on all dimensions” just for the sake of diversifying, and mindless activism, rather than going thoughtfully about it.
However, while the Bud Light case seems like a clear-cut example of Ivy League activists trying to force-feed their politics of DEI to the rest of us, it probably wasn't as clear cut until after the fact. They were probably aware that they were poking the bear, and they wanted to create a little controversy people could talk about over a Bud Light. And it may have been partly bad luck that they became such a lightning rod for the trans debate, rather than just a 3-day hashtag. It's predictable that some people would hate it, but it is a bit surprising that people (influential people like Kid Rock in particular) would hate it so much they would call for boycotts and make it a virtue signal to share their disgust. And it's a bit surprising a brand of their caliber can have such low customer loyalty that people can't shrug it off as a gaffe, typical of our day. Like it's Enron, not just a gimmicky sponsorship.
But of course, it’s a terrible, tasteless product, it was already in decline, and it is an incredibly incendiary culture war issue, so maybe it was overdetermined.
That’s interesting because when I saw the Bud Light thing play out, my guess was that the majority would have thought that was a bad idea, but were afraid to die on that hill. That’s exactly the type of work environment I’d like to avoid. I want people to argue issues openly, make a decision, then stand by their decision, at least until there is an opportunity to revisit it.
People who have a tendency towards activism don't really get much value from consensus or from "disagree and commit". They need an internal enemy in order to become the heroes.
It really is a disaster of their own making. I absolutely see the point about developing a new customer base, and Pride Month capitalism is now established as part of all companies, so pivoting to the more progressive elements (we don't want college drinkers, we want... college drinkers... but classier!) would have been doable, if they'd spent five minutes thinking about it. I believe them about "it was only one can" and "not a campaign" and "not a partner", but that only makes it *worse*.
Unfortunately, it looks like they spent five minutes going "Who's the current hot influencer name? Mulvaney? That'll do!" and then expected that social media would *only* be seen by the precise bubble of Mulvaney's 10 million TikTok followers and not leak out elsewhere.
But somehow I can't envisage the people who follow Mulvaney for fashion and makeup endorsements switching to glugging down cans of Bud Light, so - yeah.
And that blew up on them, then the half-hearted 'apologies' only pissed off the LGBT set who are now "we're not stocking your pisswater in our gay bars because you threw a trans person under the bus!" and they're getting it in the neck from both sides:
https://eu.usatoday.com/story/money/2023/05/18/bud-light-loses-lgbtq-score-after-dylan-mulvaney-transgender-campaign/70229893007/
"A bird in the hand is worth two in the bush" is advice that they seem to need having repeated. They threw away existing customers without first having locked-in the new replacement market.
I'm not the person who asked the question but I want to say that this is a great answer.
Thanks. I appreciate it.
Logic may not have been part of how you got to the idea that restaurants shouldn't be able to reserve the right to refuse service to anyone at any time. Indeed, it would have been odd if it had - it's a standard cultural bias we're trained into these days.
There are loads of reasons why a restaurant might be morally justified in refusing someone service. One is if the customer says they are vegan or have an allergy which the restaurant cannot guarantee to cater for satisfactorily or safely. Or on a previous visit the customer may have complained vociferously and unreasonably about the service or the food, like something out of Fawlty Towers, or left without paying, or shown signs of being drunk.
A restaurant refusing service to anyone at any time without giving any reasons doesn't seem to be an exactly logic-based process ;)
The psychology of the 'reply guy' archetype is a fascinating one. I really hope that future big-data approaches will shed more light on what goes on before someone types ";)" at the end of an utterly inane and thoughtless comment and hits "post".
It's neither logical nor illogical, it's alogical. Logic is the consequences flowing from certain premises.
Having said that, I was mostly reacting to Leo singling out one specific idea as not being based on logic, which sounded... at least as biased as the 'standard cultural bias' they complained about. But I admit they gave a relevant answer to the OP's question. I'm sure this could lead to a looong discussion; one that I currently don't intend to follow up on further.
On the contrary, giving people reasons for saying 'no' just creates space for them to argue. I can tell you've never initiated a breakup with anyone or had to let an employee go. "We won't serve you" doesn't invite argument. "We won't serve you because X" does. And, in the litigious environment of modern-day America, it invites lawsuits.
I would think the premise is 'all men are created equal.'
Why do you think that your logic or morality is breaking down? Personally, I think everyone should have freedom of association, but I don't think there's anything absurd in thinking that social harmony or whatever outweighs freedom of association in one situation but not in another.
The logic for there being no restrictions on "hiring whoever you want" breaks down when its universalisation, combined with unequal distributions of wealth and effectively segregated social circles, ends up exacerbating those unequal distributions. Beyond that, you're pretty much still allowed to hire the people you like, as long as your personal filter isn't discriminating based on protected characteristics. As long as you don't fall foul of that criterion, there's no problem with hiring people you feel will fit in with your business's culture.
I believe whistleblowing is a very important activity to protect. Yet anecdotally, most people I am familiar with who claimed to be a whistleblower seem to do it for personal gain, often without trying internal channels first. I feel the same way about employee activism, or people who sue their employers.
All good characteristics of our society, and yet on average I would not want to hire or collaborate with most people that belong to those and related groups.
Am I wrongly biased or is there meat to this? I feel fairly confident in this assessment. How should I navigate the world then?
I don't see how whistleblowers make any kind of gain from their activities; they probably take career losses.
In the US, there are many laws which allow whistleblowers to collect monetary rewards for reporting certain activities like tax evasion.
>most people I am familiar with who claimed to be a whistleblower seem to do it for personal gain,
Are you saying this is untrue of the other groups you know? People you know who tow the line aren't doing it for personal gain?
One of my jobs specifically hired a whistleblower who had shut down a company in the same field. She was essentially quality control; anytime she thought something was out of spec she made sure everyone knew about it.
OT: "toe the line"
Tow the lion.
I think there’s something wrong with being selfish while pretending to be selfless.
People who thrive in that kind of dissonance tend to be dangerous associates, a-la Elizabeth Holmes.
...well, same question again. Are you saying this is unique to whistleblowers and that non-whistleblowers don't do it?
I think there are all sorts of sociopaths in corporations. Even people who aren't sociopaths in their personal lives can behave like one in certain work environments, for example when certain behavior is required to get promoted. I guess I have a deeper aversion to someone-- in this case a pretend whistleblower-- who betrays an entire organization vs. a vanilla corporate sociopath who leaves a few bodies in their wake.
How many is "a few"? The bad managers I've seen will leave as many bodies in their wake as they have access to.
I also don't think a whistleblower is betraying an entire organization. They're very specifically betraying the management who allow the situation.
Interesting. I've managed hundreds of people and played the game, and my best interest and the right thing to do have always been aligned when it comes to people within my organization. Helping people grow into bigger roles is the best thing a manager can do for everyone involved. And some people won't make the cut, but I don't see fair performance management as sociopathic behavior. On the other hand, I've engaged in a less honorable manner with other organizations when there were internal turf wars.
In my experience, companies that are well managed enough to become big and successful tend to do the right thing eventually. It's just that eventually can take a long time. The kind of sociopath that will hurt their own team, on purpose, will eventually be found and removed from a long-term functional (short term often dysfunctional) organization.
Our entire economic system and most of our social arrangements are based on self-interest. An honest, conscientious, reasonable person still gets to seek advantages for themselves. Also, some sleazeball seeking personal advantage who happens to prompt a positive change might not be a nice person, but has still done a good thing.
I suspect there are some important visibility issues to consider. Suppose Tim manages a team of employees who manufacture hammers. Three of those employees (Alice, Bob, and Carl) notice that the hammer-sharpening tool has gotten rusty and needs to be taken offline for a day to avoid a safety issue. All three of them separately ask Tim to send the sharpener in for repairs, but Tim doesn't want to, because he's trying to meet a quota for a bonus target.
If Alice quietly goes over Tim's head to Vikesh (the regional manager) and discreetly asks Vikesh to handle the problem, then from your point of view as Alice's co-worker, you probably don't notice anything...as far as you can tell, the sharpener got fixed and there's no real problem.
Similarly, if Bob makes a loud stink and publicly complains about Tim's carelessness all over the company, then even if the company does choose to fix the sharpener, it's not going to be good for Bob's career; he's going to make enemies and the company will probably look for excuses to fire him or at least make his job miserable enough that he looks for a job elsewhere. So if Bob makes a habit of publicly complaining about problems at work, then (statistically speaking) he won't be your coworker for very long, so you won't hear about Bob's sort of complaints very often.
What's left? If Carl files a formal whistleblowing complaint, then maybe that protects him from retaliation for a while, so you hear about the complaint and also Carl sticks around. But you're not hearing about Carl's complaints because he's the most common type of complainer -- you're hearing about them because even though he's a relatively rare type of complainer, he's the only type whose complaints are both (1) publicly observable, and (2) durable.
It's a tough question: does Vikesh do anything about it, or will Alice's complaints just be ignored? If Bob sees that Alice is going the 'proper' route yet nobody cares or does anything, and the problem remains to be solved and needs to be solved, then going public and loudly complaining may indeed be the only effective way left. I've seen situations where only the threat of a lawsuit did finally get a decision made.
Is Carl complaining out of spite and revenge, or does he have a real ethical incentive causing him to do this? Does it matter if he's doing it for revenge, if there is a real abuse happening?
It is a tough question! I don't mean to suggest that one of the workers' responses is better than the others. I just wanted to point out why it might look like whistleblowing is common even when whistleblowing makes up only a small portion of employee complaints.
Like everything else, it's only the extreme/most public cases we hear of. "Jim Jimson is a whistleblower revealing the shady practices at DoggyDiamonds'n'Dos, tune in at 9:00 p.m. for our exposé!" gets way more coverage and hence public attention than fifty "Bob Roberts used the in-place grievance procedures to advance his complaint and have it rectified".
It's a valid point, but it doesn't seem to address the parent comment's claim. They didn't claim that whistleblowers aren't rare compared to internal complaints. They claimed that the whistleblowers they know, however rare they may be, seem to be selfish rather than altruistic. It's not clear from the comment how they made that assessment, but maybe they have some reason to think this.
But Alice and Carl are also kinds of whistleblowers, and potentially much more benevolent. Jason's point is that Carl the Formal Whistleblower only represents a small percentage *of whistleblowers* even if he's the one you're most likely to experience having as a coworker.
That doesn't answer OP's question though on whether they're right that Carl-type formal whistleblowers are usually selfish.
In many companies it's pretty clear that your boss will discriminate against you if you make their life hard. So I'm understanding of whistleblowers not taking internal action first.
Was Frances Haugen a bona fide whistleblower? She hired a PR firm to make herself into a famous whistleblower, and didn't reveal anything new or illegal (as far as I can tell).
And my sense is that this sort of employee activism is in vogue, so I want to avoid environments that allow that behavior to flourish. Which means selecting for the right people.
It’s not that different from people that sue their previous employers. Typically, most employers will settle on any employee claim, no matter the evidence. So technically we could all sue on our way out and get a little something. And if you know how these things work, you can get a lot because there’s always some management error.
But I would personally not sue my employer unless something outrageous happened. If I don’t like how I’m treated, I’ll just leave. And I’d like to have colleagues that more or less would follow suit. I’m confident litigious employees make for a less enjoyable work environment because they put everyone on guard.
It's been discovered that L-DOPA has some activity as a neurotransmitter. Does that make it a catecholamine now?
With the news that both Vice and BuzzFeed News are closing due to unprofitability, how are we all feeling about the future of the media? Are all advertising-funded services doomed? Should they be nationalised? Should big tech platforms be broken up? Is the future just going to be a handful of writers on Substack?
https://www.theguardian.com/commentisfree/2023/may/20/vice-and-buzzfeed-were-meant-to-be-the-future-of-news-what-happened
Strange answers to this. The demise of Vice and BuzzFeed News is to be welcomed. That model of clickbait journalism, driven by the worst kind of advertising (itself clickbait), added nothing to the good of society. Meanwhile, plenty of old-school media is doing great on subscription models. And Substack is genuinely great - again, subscriptions.
1) Big tech platforms should be broken up or heavily regulated like utilities.
2) The future of news media is kaput. And journalists are in large part to blame, though it was a pretty difficult situation overall.
3) Probably 90% of news media could disappear and it would be a net improvement to society.
One way to look at Vice and BuzzFeed is that they were primarily entertainment companies fueling a tiny bit of reporting, and their failure is weak-to-moderate evidence that model doesn't work circa 2023. That's not generally worrisome.
They were genuinely really good media, though. I mean, Vice had war correspondents reporting from active war zones. BuzzFeed won a Pulitzer Prize.
I agree that they *produced* some really good reporting, but it's important to keep that in perspective, consider what proportion of their output was valuable to you as news, and reflect on how that relates to their business models / appeal to investors.
I guess my answer to your original question is that their bankruptcies don't really move the needle much for me. The larger media landscape remains unchanged, and I'm not sure their quality output redeemed the rest.
In recent years, most big traditional newspapers (think NYT or Le Monde or El País) have moved from a free content + advertising system to a paywall + subscription system, and found it much more profitable.
I haven't gotten with the program and subscribed to any of the major newspapers, and the local paper we subscribe to impresses me primarily with its uselessness. (My housemate likes it.)
I'd be very interested in people's impressions of the reliability of any of these paywalled big name papers.
1) Do they have giant gaps in their coverage?
a) I was not impressed learning about local events one day from Al Jazeera, after having already scanned the local paper's emailed headlines. Is this sort of thing normal?
b) Do they report on anything from cities, states, countries, continents, other than those where they are based? In what level of depth?
c) If something major happens elsewhere, will they report on the event, or primarily on local (to the paper) reactions to the event?
d) Do they actually have reporters available outside their locality?
2) Do they regularly have headings that don't match the contents of the articles behind them, either because of click-bait or because of constant revisions?
3) Are their reporters numerate? My local paper impressed me with their ability to post statistics from multiple incompatible sources in the same article, such that basic arithmetic showed they'd accounted for 120% of residents, or similar gaffes. (They've since hired some people who passed high school math, or perhaps even college-level "statistics for poets", and the frequency of this sort of nonsense has gone down.)
4) To what extent do their political biases, or those of their owners, render their coverage essentially unreliable - such that one needs at the least to also read an opposing paper to have any idea of the truth?
5) Is there any single thing I can read regularly that will leave me well-informed about news, without having to read several other sources?
1a) it could be that the papers went to print before the events had become known? It doesn't seem like a bad idea to slow down the 24 hour news cycle, but it does mean the news might miss some things.
1) For the big papers (NYT/WaPO) Yes.
1a) Be prepared for a lot of Gell-Mann amnesia. The NYT seems like it has its finger on the pulse of X state. Then they do a story on your state and it is clear they talked to like 2 people and have zero fucking idea what is going on.
1b) Sure, they cover global topics, though they are very US-centric.
1c) Depends
1d) Yes
2) Yes
3) No
4) To a very high extent on politically salient topics, though this isn't always consistent. Sometimes the NYT or WaPo will run an article that is actually trying to get at the facts on some political issue. But 5 other times they will just parrot the approved twitterati talking points without using 2 brain cells.
5) Economist? Or maybe read Fox News, MSNBC, and WSJ then triangulate?
Triangulation is the way. Read a variety of mainstream media, and read a little bit of the crazy stuff too. The wider your base, the better you can triangulate.
Ideally, sure. Daily life of course is an exercise in balancing the ideal with the plausible.
On a lot of dimensions of news and current events, for me, the Economist has been the single go-to for... well, damn, it's nearly two decades now. Doesn't provide that service on all dimensions, e.g. their attempts at cultural-zeitgeist type writing and punditry are generally ignorable. (I stopped years ago even cracking open that "The Year Ahead" annual special issue.)
But if I had to pick just one it would be the Economist and there isn't really even a serious other contender anymore.
I still think we need to move to the BAT model, the Basic Attention Token. BAT is a crypto token you use to pay for online media. Instead of paying several hundred dollars a year for entire journals of which you'll read maybe one essay a day, while missing out on desired essays behind paywalls you haven't bought, your browser pays say a dime for every essay you do read.
Those who cannot buy tokens can watch sponsoring ads to earn them.
Substack newbies who get, say, 1,000 reads earn $100; great essays which find 1,000,000 readers earn real money, of course.
Journals need to keep their writers happy, lest they decamp to Substack.
Readers shell out $100 per year for 1,000 essays, but only for the essays they really wanted.
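To make the arithmetic above concrete, here's a tiny sketch. The dime-per-read rate is my assumption from the numbers in this comment, and it ignores any platform cut and token price fluctuations:

```python
PRICE_PER_READ = 0.10  # assumed: one dime per essay read

def reader_annual_cost(essays_read: int) -> float:
    """What a reader pays per year under the pay-per-read model."""
    return essays_read * PRICE_PER_READ

def writer_earnings(reads: int) -> float:
    """What a writer earns for a given read count (no platform cut assumed)."""
    return reads * PRICE_PER_READ

print(reader_annual_cost(1_000))    # 100.0 -> ~$100/year for 1,000 essays
print(writer_earnings(1_000))       # 100.0 -> the Substack newbie's $100
print(writer_earnings(1_000_000))   # 100000.0 -> "real money"
```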
I think I approve of moving in this general direction, though I'm not sure why it has to be crypto. I also see incentives similar to those that ruined existing media (keeping people angry to keep them engaged)
I'm not sure why this needs to be crypto based - but I agree with the general model.
For the general model of one subscription to access lots of magazines, there are already services such as Readly available, though I wonder how much more clickbait we'd see if such services were to become popular. Personally, I'd be more interested in one subscription to pay for all the gyms in my area – that would also give the gyms an incentive to try to make me go to the gym.
It's called Active & Fit Direct - you can get it through major employers, USAA, and some other places like that. It's better in some areas than others, but I've used it on West and East coast, in the south, and in the midwest, and it's been pretty good.
Thanks, but I don't think that's available in my country. Still very curious though about how the behaviour of gyms changes when they get paid for attendance – it seems like it should be a gigantic improvement in alignment.
Nationalise the media or nationalise advertising? Either would be a dangerous idea, IMO. I'm not convinced we can draw the lesson that the media is doomed because two companies have gone out of business. If anything it's a necessary part of the economic cycle; as times get tougher only the profitable businesses will survive. We are still far from seeing a complete collapse of the media.
What if none of them survive though? Youtube isn't profitable yet - what if it never is? What if Twitter never is? Some day they'll all be forced to pull the plug, unless alternative arrangements could be found. E.g. what about an internet tax that funds all online media based on screentime?
I believe Youtube is profitable these days (though it took a long time to reach there). Twitter may be trending that way under Musk as well (mostly via cost cutting).
Right, but why should the State intervene to save a failing business? If all you care about is that media of a certain standard exist, you could have something like a free publicly funded national broadcaster providing that standard and let everyone else sink or swim based on usual market dynamics. Like a few countries already do with fairly good results.
My first thought is that it doesn't feel like there's a big problem with having multiple public media organizations (as long as they are independently governed). My second thought is that when to subsidize things in a market economy doesn't only depend on the economic value of those things, it depends on whether there is a mechanism for those businesses to recover their costs. Think about roads. Or another example I've heard is lighthouses. After the Fresnel lens was invented, lighthouses got way better, but because ships could see them much further away, they wouldn't necessarily be docking there, and wouldn't have to cover the costs via a docking fee. (Apparently France's lighthouses were way better than England's after this, because they were publicly funded). So it turned out that lighthouses produce more value as a public good rather than a private one, just like roads. Is media the same?
In my experience, public independent media is indeed higher quality than private media. But that clearly reflects my personal taste, not most people's, as most people tend to favour private media.
But more importantly, I think the existence of private media is important to safeguard media independence, and the existence of public media is important to ensure media provides public service. From what I can see both types improve in this way when they coexist.
I don't get the doom and gloom. A few decades ago, I could go to the city library and access a few dozen newspapers for free (and maybe magazines? I don't even remember). Now, I can access thousands of professional media outlets for free from the comfort of my own home, plus millions of amateur media outlets. Like, things are pretty great.
How many of those thousands of professional and millions of amateur media outlets are doing independent reporting, e.g. sending reporters to places where news is suspected to be happening, and how many are just repackaging and commenting on other people's original reporting?
I haven't done a deep dive into this, but my sense is that the number of actual reporters per newsworthy event has declined significantly in the past decade or two, and for marginally newsworthy events is often less than one. Which means lots of newsworthy stuff will either not be covered at all by those thousands/millions of outlets, or will be uncritically repeating someone's press release that nobody bothered to send a reporter to ask questions about, or be based wholly on the work of one reporter who may be biased or otherwise in error.
It's important to note that Vice was valued at over $6 billion only a few years ago, and is now worth possibly $200 million while declaring bankruptcy. So investors have been very wrong about whether they can recover their costs, and have massively overinvested in new media operations - meaning that we may not have this (certainly very good) media landscape for very long. Although it's also worth mentioning that while we're sitting around enjoying the finest media in history, many (most?) people rely on highly politicised and unreliable media sources, because politicisation is a rare way for media outlets to increase their profits.
Are you sincere in saying the media is the finest in history? I don't find it so, at all. I believe it has gotten significantly worse in my lifetime, especially after cable news came on the scene, and again with the decline of print newspapers.
The main issue is a lack of journalism that is adversarial to power. The current "aligned to the DNC or aligned to the Murdoch or Trump families" version of left/right media, rounded out with the many "aligned to state power centers" outlets, does not constitute a healthy media ecosystem.
Yeah, I was trying to say that in my last sentence above. Media for normies is appalling, but if you're already well-informed you can find amazing information on the internet. Although I feel that's getting worse with digital outlets closing and SEO ruining search engines.
SEO ruining search engines is a big deal. They are so much worse than they were.
Old media sources were very frequently politicised and unreliable, there were just far fewer alternative information sources available to point that out.
To me, that looks more like a part of the wider trend where many different types of businesses have gotten in trouble after central banks stopped printing so much money, rather than anything specific to the news industry. E.g. Klarna's valuation dropped by 85% from 2021 to 2022, and Peloton's market cap dropped from around $50 billion at the peak to around $2 billion now.
I'm also not convinced that the media have gotten more politicized or less reliable over time, and I don't think it has much to do with the search for profit, as state-owned media outlets seem no more reliable and no less politicized than comparable for-profit media outlets.
If you look at US right-wing media and CNN, they've obviously become very popular while becoming very politicized, and certainly less accurate in the case of the right wing system at least. I don't see the same at any US government-run outlets which are relatively obscure, then again I haven't paid much attention to them due to their obscurity, so I may be wrong.
>they've obviously become very popular while becoming very politicized, and certainly less accurate in the case of the right wing system at least.
Oh for sure, the left side (even up to NPR/NYT) is a lot less accurate than 20 years ago. Though Fox News remains more consistently unhinged.
I'm not too familiar with US mainstream media, but a cursory Googling seems to indicate that NPR (I think that's the biggest government-funded news outlet over there?) is well within the normal range of US for-profit news outlets in terms of reliability and political bias.
To answer your original question, I'm feeling good about professional media going out of business. On a related note I have no idea what landscape you could be evaluating as 'very good'.
I mean the huge amount of free high-quality information available only to those who know how to find it. That's pretty good although I feel it's been getting harder of late.
Unless you're one of the people failing to make money out of those products you're enjoying for free, or one of the people who feel that incentivising these people to try and get you to pay for their products has a negative impact on democratic societies.
Wouldn't democracy be impossible without news media? Or do you think people can stay informed via other means like word of mouth?
Hmmmm. Why wouldn't democracy work if everyone cared about which policies would maximise overall welfare, for example?
Without news media, how do people receive information about issues that affect them?
One way I tend to frame this is that voters are not given realistic choices on their ballots. If the choice is between candidates, then voters are choosing between large platforms containing dozens of political positions, some of which they like, some they don't, some strongly, some weakly, and so the voter's ability to express real preferences is profoundly diluted.
If the ballot choices are yea or nay on various propositions, it's much better, but still terrible by comparison to real-world decisions. Ballot decisions are invariably some version of "do you want X or not?", and if X is a government service, the overwhelmingly tempting thing to do is to just mark "yes" all the way down the list. In the real world, however, "do you want X?" typically carries a price tag, and you likely can't afford everything on the menu.
A more realistic ballot would say something like "if you had to pick only two of these ten services, which ones?" Or list all the things the state could do, the estimated price of each, and ask the voter how they'd spend e.g. $100 million between them.
Doesn't that argument apply equally to self-interested voters? If you have enough information to be able to decide the best way to vote for your own benefit, why shouldn't you consider your overall community, society or planet into account as well?
This is a tangent, but suppose someone runs for president on a platform of "I will literally kill the 50 richest people and distribute their wealth equally". This would be great news for virtually everyone - do you think I should vote for them?
Would love to share my new post on how theme parks caused the Paris Syndrome. It's partly a culture-bound issue, but I think there are more environmental aspects at play.
https://hiddenjapan.substack.com/p/how-japan-created-the-paris-syndrome
Erratum: you have Matthew Perry down as a British, rather than an American, Commodore.
I think Hoeffding's inequality is the best you can do without some sort of nontrivial upper bound on the variance of X. But I very much doubt it will give a particularly sharp bound.
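For reference, the bound itself, assuming the per-play payouts $X_1, \dots, X_n$ are i.i.d. and bounded in some interval $[a, b]$ (which is all Hoeffding requires):

$$\Pr\left(\left|\bar{X}_n - \mathbb{E}[X]\right| \geq t\right) \leq 2\exp\left(-\frac{2nt^2}{(b-a)^2}\right)$$

Note the bound depends only on the range $b - a$, not on the actual variance, so a machine with rare large jackpots makes $b - a$ huge and the bound correspondingly weak - which is exactly why I doubt it will be sharp here.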
I think I could suggest something if you laid out the situation in a little bit more detail. I do not know how slot machines work. I mean I have literally never seen one, and do not know how one plays one -- what you put in, what the payouts are, what choices the player has, etc etc
That's "bootstrapping", isn't it?
https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
If your sample size is small, you can probably afford to model X as sampling uniformly at random from among the values you've seen.
Depending on how much precision you want and how much compute you have to throw around, you could e.g. brute-force your estimate from there, or compute an approximate GCD of the values you've seen (e.g. round them all to the nearest multiple of 0.1), at which point what you're dealing with is a Markov chain and you can compute the transition matrix and solve the problem using dynamic programming.
Can you give me any idea of the sorts of values you're dealing with?
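In the meantime, here's a minimal sketch of the resampling idea in Python. The observed payouts are made-up placeholder values, and "estimate the mean payout with a confidence interval" is just one guess at what you're after:

```python
import random

# Hypothetical payouts observed over 20 plays (made-up values for illustration).
observed = [0.0, 0.0, 0.5, 0.0, 2.0, 0.0, 0.0, 10.0, 0.0, 0.5,
            0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 2.0]

def bootstrap_mean_ci(data, n_resamples=10_000, alpha=0.05):
    """Bootstrap a (1 - alpha) confidence interval for the mean payout.

    Models X as the empirical distribution of `data`, i.e. resamples
    uniformly at random with replacement, as suggested above.
    """
    means = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))  # with replacement
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

low, high = bootstrap_mean_ci(observed)
print(f"observed mean payout: {sum(observed) / len(observed):.3f}")
print(f"95% bootstrap CI for the mean: ({low:.3f}, {high:.3f})")
```

The usual caveat: the bootstrap can't generate values it has never seen, so if the machine has rare jackpots bigger than anything in your sample, this will understate the uncertainty in the tail.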
more alpha diversity, less beta diversity.
more alpha = 3 TV channels -> 300 TV channels
less beta = Mexican cuisine is no longer unique to Mexico.
Statement 1 is wrong. I didn't watch the series finale of M*A*S*H, I never watched the Oscars, and I recall seeing only one World Cup game. I'll plead guilty to CNN's footage of the Gulf War, but that's because it's one of my earliest memories (the night-vision footage with tracers illuminating the sky is quite memorable).
In fact, if I had to pick *the* defining cultural product of my generation, it'd be The Simpsons or Friends, and I saw barely a handful of episodes of either. The common touchstones were common to your social group (which was kind of a self-replicating phenomenon: I became friends with kids who had similar interests, and we fed each other the music, movies, and shows we liked).
Uhmmmm, they don't need reconciliation, because they are not contradictory.
You can simultaneously homogenize and heterogenize. If something was only composed of one type, and you made it composed of 10 types, then you have heterogenized it. If something was composed of 100 types, and you made it composed of 10 types, then you have homogenized it. If a society had both - a mainstream that was only 1 type of thing, and a bunch of subcultures around it that were 100 different types of things - and you forced this society to uniformly have 10 types of things everywhere, then #1 and #2 both hold: you simultaneously (from the POV of the mainstream) "destroyed common cultural touchstones" AND (from the POV of the niches) "destroyed obscure subcultures and pushed everyone into a single global culture".
Anyway, I'm pretty skeptical of claims having the general form "The Internet has done X".
(I) They are inaccurate. The "Internet" is TCP/IP; what most people call the Internet is in fact the web. That's not an empty "Well Akshually": there is a good 2 decades' difference between the Internet and the web running on top of it. There were plenty of applications on top of the Internet older and other than the web, including email. Most of them are extinct, yes, but the point still stands: the Internet is a collection of protocols older than all modern operating systems; it enabled, but isn't directly responsible for, whatever the web did.
(II) They are wrong, even after accounting for the fact that the Internet is not the web. The web itself is astonishingly versatile and of many forms. It was never one thing, so you can never claim something simple about it and be right. The web is 1990s personal websites made with hand written html, the web is wikipedia, the web is 2000s blogs and forums, and - tragically - the web is also the shit that is 2010s social media.
This is actually what most people mean by "The Internet has done X" for most harmful values of X: they actually mean Social Media did it. The horrible idea of commodifying the attention of tens or hundreds of millions of people - that's what made all the bad things happen.
Well firstly, the universality of past culture is often overstated. The MASH finale was watched by an estimated 100 million people in the US, which was a lot, but there were another 100 million people in that country alone who didn't watch it. The most watched Oscars (1998) got 55 million viewers.
That said, I feel like culture has split in some ways and homogenised in others. We're all fed a stream of content that is personalised to our demographics rather than our geography, meaning that I wind up consuming exactly the same goddamn content as every other person of my age-class-sex demographic in the world, but totally different content from (say) my parents.
Haven't we all seen Squid Game? Why the heck have I seen Squid Game? I'm not into gory stuff, and I'm definitely not into Korean dramas, but it was fed to me and I ate it all up.
Once upon a time, if you were into [obscure thing], you had to actively seek out other people who were interested in it. Hence - zines, conventions, dress codes that let every other fellow [obscure thing] fan know you're one of them, etc. The internet did destroy this (sometimes intentionally; 4chan's "suppress your powerlevel" cultural norm certainly had something to do with that; speaking of which, the very fact that the internet gave voice to introverts necessarily changed the previously extrovert-driven subculture dynamics).
This, of course, does not mean that "culture is flattened". No norms can ever be imposed now (and no, the woke agitation isn't an increase in norm-imposing, it's the death rattle of the old gatekeeper class as it barricades itself inside the institutions). The culture has, in essence, splintered so much that even the subcultures lost their own common cultural touchstones.
Disagree with this. While Reddit, for example, might be losing its relevance as various subreddits get too politically extreme, it's still the place to go to discuss a lot of niche topics, and the norms are very concrete and harshly enforced. Some subreddits more than others, but the monoculture definitely propagates.
I mean, Reddit is a top 20 website worldwide by traffic, I count it among the institutions. (And, to be sure, you can impose norms on Reddit. Just not on its users, or on the wider world with Reddit as a springboard. People can easily defect, and will - cf. The Motte.)
The type of "subculture" I'm told is disappearing is the type where people dress in a certain way, listen to a certain type of music, and hang out with other people who dress that way and listen to that music.
There used to be lots and lots of these, but now I hear that music genres are no longer as strongly linked to a way of dressing and a tribe.
For example, here in Italy there are, or there used to be, "darchettoni", perhaps the local translation of "goths" (I think). They dressed all in black, listened to whatever goths listen to, and hung out with others like themselves. Already 20 years ago the ones I knew were lamenting that there are no new goths any more. Today, it seems that the goths who are still around are the old ones, who joined the culture back then. I'm told that the same applies to other subcultures of that sort, and that there are no new ones either.
I'm not perceptive enough to verify all this stuff for myself.
If you understand the disappearance of subcultures in that sense, then it's compatible with the statement that there are no more common cultural touchstones. There are many splinters of the culture, but no longer in the sense that dress = music = social circle. They are more like personal interests than tribes.
90% of those music-based cultures were about complaining that "the system" suppressed their bands and music. Now that "the system" barely exists and nothing is preventing the popularity of the music except that no-one likes it, they have yet to find a different flag around which to rally.
Actual, existing subcultures tend to fail in the exact opposite direction of pathological gatekeeping and purity testing - with whatever popular outgrowth they produce being accused of selling out.
(I don't think I can phrase my assumption about why you'd think otherwise without being mean, so I'll refrain from typing it out.)
I think both exist side by side – resentment at being marginal, and envy of those who managed to move beyond marginal.
I get the vibe I described from both Chuck Klosterman's various works and Kelefa Sanneh's book _Major Labels_.
There are, of course, even more ways to respond, but these tend to be more specialized and unusual, for example the Nick Hornby autistic-style collect, curate, and catalog response.
So, my initial reaction to this was: "Obviously, you're getting this from second-hand accounts, not from directly interacting with any subcultures in any meaningful capacity." Which is, as I said earlier, mean and hardly constructive. (True, though.)
So, I didn't want to leave it at that, and I went on to check who those people you're referring to are and what they've written about, and as I was going through the Wikipedia descriptions of Klosterman's books and their subjects (growing up as a glam metal fan, a Guns N' Roses tribute band...), something clicked. Fans of mainstream things past their five minutes in the spotlight, reenactors of past fashions - those are also subcultures. Not merely technically; they simply are. Maybe, by pure numbers (of distinct tribes, not of their headcount), they do make up 90% of them. I can even believe the people who make them up do feel resentment that the world has passed them by.
And yet - what you said feels incredibly misleading and myopic, because that's just not the kind of subculture most people are going to encounter, much less pay attention to. Also, incredibly arrogant, and I suspect that - if you care about any cultural output at all - in a decade or two you're going to end up as exactly what you describe, as contemporary subcultures' creativity snowballs them into the mainstream.
I've no idea quite what you think you are referring to, but I grew up in the world I describe: high school in the early 80s, college in the late 80s, first adult years in the 90s. I experienced exactly the phenomenon I described.
I have no idea how old you are, but I suspect you are dramatically younger. And OF COURSE the current versions of "I'm so unique, as evidenced by the way I behave exactly like everyone else in my little tribe" cannot, with a straight face, complain that their music is being suppressed by "the man".
What they can and do complain about is that it is being suppressed by "the algorithm" but that's a more ridiculous claim, and everyone knows it – there's a whole lot of difference between "no-one knows our music because they never get a chance to hear it" and "no-one knows our music because they couldn't be bothered to spend 5 seconds even trying it".
Honestly, read both authors I recommend. Both are fascinating in their different ways, and both will, I suspect, give you some insight into the very different world of what pop culture was like before the internet. For Klosterman, I'd recommend starting with his most recent book, _The 90s_.
Exactly. Those subcultures, and there were many (punk, heavy metal, The Dead, jazz, alt-country, etc.), tended to hold the view "Commercial music sucks," sometimes expressed in the form "Corporate rock sucks."
Jazz is different from those others in that it had a turn as culturally broadly popular -- even dominant in some ways, depending on who was talking. Which is germane here because jazz lost that position and became a subculture much as you describe it long before the Internet existed.
And then if anything jazz today is an example of a subculture which has benefitted from the rise of the Internet.
This stuff cuts both ways, is I guess my point.
The algorithms of YouTube turned K-pop into a global phenomenon. Obscure bands don't get in the limelight because consumers are not into bands anymore.
I mean, no, decades of persistent effort by the Korean music industry did. (Also, the boy/girl bands that it's famous for are literally bands, so your heuristic is clearly too simplified.)
K-pop has mostly groups, not bands.
Statement 2 is false.
I mean, the Internet absolutely has destroyed a lot of obscure subcultures. But:
A. It's created a lot, like a lot lot, of new obscure subcultures.
B. You dramatically underestimate how many pre-Internet subcultures are out there.
For example, I dig part of this vibe; it is super surreal to have lived through not one but two D&D movies. At the same time, as a proper connoisseur of nerdery, I take comfort in the classics that the mainstream will never, ever exploit or monetize. I, dear sir, not only know what a Glitter Boy is but why one would bring a Super Soaker to Mexico and why the skull dog nazis invaded Tolkien. And there is some comfort in the fractally, infinitely expanding universe of weird obscure shiz.
I think both statements are false. In statement 1, "everyone" needs to be restated as "many people, especially people of a similar race, class, and community".
Fluid dynamics
Things melt in one place and freeze somewhere else, or change states otherwise. What is really novel is the frequency of state changes. Because Internet…
I think something like "the Internet gives everyone a random sampling of like 20% of the culture" would reconcile them. So everything gets thrown into the blender and each thing gets picked up by 20% of people, so nothing is ever obscure; but also, nothing is really universal. (A toy simulation below makes the numbers concrete.)
I'm not saying that's true (maybe a more complicated version is true), but I think it would reconcile the two things.
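To sanity-check whether that toy model actually reconciles the two statements, here's a minimal simulation (all numbers invented purely for illustration): if each person independently sees a random 20% of the culture, any two people share only about 20% x 20% = 4% of it, yet each individual item still reaches about 20% of the population.

```python
# Toy Monte Carlo of the "random 20% sample" model above.
# All numbers are made up to illustrate the reconciliation,
# not to claim anything about actual media consumption.
import random

N_ITEMS, N_PEOPLE, SAMPLE = 1000, 500, 0.20
k = int(N_ITEMS * SAMPLE)

people = [set(random.sample(range(N_ITEMS), k)) for _ in range(N_PEOPLE)]

# Two random people share only about SAMPLE^2 = 4% of the culture...
a, b = random.sample(people, 2)
print(f"overlap between two people: {len(a & b) / N_ITEMS:.1%}")  # ~4.0%

# ...yet every item reaches ~20% of the population, so nothing is
# truly obscure -- but nothing is universal either.
audience = [sum(item in p for p in people) / N_PEOPLE for item in range(N_ITEMS)]
print(f"average audience per item: {sum(audience) / len(audience):.1%}")  # ~20%
print(f"items nobody saw: {sum(x == 0 for x in audience)}")  # ~0
```

So "no common culture" and "nothing stays obscure" can both hold at once, at least in the toy version.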
Something similar -- I imagine a huge mob running around, consisting of maybe 10% of the population, randomly invading small spaces, destroying them, then moving on.
So on one hand, there is no common culture for everyone, although there is the thing that the mob currently focuses on, which one week later may be irrelevant and forgotten... but also the small spaces are routinely invaded, and even if they are left alone afterwards, something precious was destroyed (at least, everyone is aware that the mob could return at any moment).
I mean, just look at SSC/ACX. Most people have never heard of it. And yet, at one moment it was the focus of the NYT. Simultaneously obscure and in the spotlight -- but only for a short while, before being nearly destroyed. Scott survived the attack, and the mob moved on. But we can imagine a parallel universe where Scott simply lost his job and had to give up blogging if he wanted a new one; and in that universe, both statements would be true (no common culture, an external power that destroys subcultures).
One man’s ponens. I agree that “language models” is misleading, but for the opposite reason. “Statistical models of text” would be more appropriate.
Note - you have exactly zero proof for your claims, and your entire reasoning depends on a belief that an improvement in quality must have necessarily been caused by an improvement of the underlying model of the world.
As far as I can see, it's the other way around - there's clearly been no improvement to the world model, and that becomes more and more obvious the more fluent the output becomes otherwise. We're simply out of other explanations for why the LLMs would make glaringly stupid logical and epistemic mistakes, and the mistakes that persist increasingly take the form of reproducing common formulas without regard to the underlying semantic context.
> The problem is that "language" refers to the interface between brains. A thought in one brain is serialized into a statement, which is then transmitted to another brain and deserialized back into a thought. Simple language models are statistical reductions of what is being transmitted.
I disagree strongly with this. Thought isn't a thing that can be serialized and deserialized. Thought in a human brain may or may not take the form of language, and language can be generated with or without thought behind it. We don't know what thought is, not enough to make these kinds of claims.
"Isn't a thing that can be serialized" as in "we don't know any way to do it and don't know a path to doing it", or "it is conceptually impossible to serialize thought"?
More the first. I don't think we have any working concept of what "thought" is. But maybe one day we will, and then we'll be able to see whether the second is accurate or not.
I agree with most of what you say, but I strongly disagree with the initial sentence, that the name "language model" is misleading.
As you say, "language model" is about the interface. The term expresses that input and output consist of words. There are other models whose input/output consists of things like images or videos, of symbolic data, of actions (e.g., for robots and agents), or of other things. The term "language model" says that it is none of those. This is a useful distinction, and there is nothing wrong with it.
You want to categorize along a different axis, roughly speaking by what happens inside the model. That's fine; go ahead with that categorization and define "thought models". (Though to make sense, there should also be non-thought models. I am not sure what you believe is a non-thought model.) In any case, it should not *replace* the term "language model". You should be able to say whether language models like GPT-4 are "thought models" or not, whether image models like DALL-E are "thought models" or not, or whether agents like AlphaZero are "thought models" or not.
On a technical level, there are categories for what happens within a model. For example, a language model can be a "transformer" or an "RNN" (recurrent neural network). And a non-language model can also be a transformer or an RNN. This is about how the data is processed within the model. I think you want categories like this, but not on a technical level - rather on something like a semantic level.
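A minimal sketch of that two-axis point, in Python (all names here are my own inventions, not any real library's API): "language model" names the interface - tokens in, a distribution over the next token out - while transformer vs. RNN names what hides behind it.

```python
# Sketch: "language model" names the interface (tokens in, distribution
# over the next token out); "transformer" vs "RNN" names what happens
# inside. Same interface, different internals. All names here are
# illustrative inventions, not a real library's API.
from typing import Protocol

class LanguageModel(Protocol):
    def next_token_distribution(self, tokens: list[str]) -> dict[str, float]:
        """Map a token prefix to P(next token | prefix)."""
        ...

class TransformerLM:
    """One internal architecture: attention over the whole prefix."""
    def next_token_distribution(self, tokens: list[str]) -> dict[str, float]:
        raise NotImplementedError  # the "technical level" lives here

class RnnLM:
    """Another: a hidden state updated one token at a time."""
    def next_token_distribution(self, tokens: list[str]) -> dict[str, float]:
        raise NotImplementedError  # same interface, different machinery
```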
> Simple language models are statistical reductions of what is being transmitted.
I don't have any skin in this game, but is that the accepted definition of a language model? "Statistical reduction" seems to gloss over all sorts of different modeling systems that have a lot of complexity and that may or may not display emergent behaviors. And none of them seem simple to this laypeep. Anyway, I'm not sure this isn't a straw man argument, but I don't follow the ins and outs of AI closely enough to know the positions of all the players.
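For what it's worth, the historical textbook sense of a "simple language model" really was just conditional frequency counts over a corpus - that's the "statistical reduction" being gestured at. A minimal sketch (toy corpus invented for illustration):

```python
# A bigram model: the simplest "statistical reduction" of text.
# It estimates P(next word | current word) by counting a corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word: str) -> dict[str, float]:
    """Relative-frequency estimate of P(next word | current word)."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_distribution("the"))  # {'cat': 0.66..., 'mat': 0.33...}
```

Modern LLMs are enormously more complex than this, but the training objective - predict the next token - is the same in kind, which is roughly what the disagreement above is about.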
If a set of strings (which could represent text, images, audio, or video) is the result of causal processes, then learning to predict those strings is learning to predict those causal processes.
Hence, learning to "predict the next word" with sufficient generality turns out to require a model of the conceptual structures that produce those words. There is no "just."
> learning to predict those strings is learning to predict those causal processes
This is not true if the underlying causal process is underdetermined by the sequence of strings that it produced. Given some sentence, any number of causal processes could have produced it.
Yes, of course, it's easy to construct counterexamples. But the salient point is that Large Language Models turn out *not* to be a counterexample, which explains why they seem to "understand" more than "predict the next word" might suggest.
LLMs are a perfect counterexample. The causal processes behind language usage are completely underdetermined by the resulting words.