Related to AI alignment efforts: I know it's been discussed on several platforms, but enhancing adult human general intelligence seems to be a very promising avenue for accelerating alignment research. It also seems beyond obvious that using artificial intelligence to directly enhance biological human intelligence would allow humans to stay competitive with future AI. I'm having a hard time finding anyone who is even trying to do this. It would even be useful to augment specialized cognitive abilities like working memory or spatial ability.
1. Stankov, L., & Lee, J. (2020). We can boost IQ: Revisiting Kvashchev's experiment. Journal of Intelligence, 8(4), 41.
2. Haier, R. J. (2014). Increased intelligence is a myth (so far). Frontiers in Systems Neuroscience, 8, 34. https://www.frontiersin.org/articles/10.3389/fnsys.2014.00034/full
3. Grover, S., et al. (2022). Long-lasting, dissociable improvements in working memory and long-term memory in older adults with repetitive neuromodulation. Nature Neuroscience. https://www.nature.com/articles/s41593-022-01132-3
4. Sala, G., & Gobet, F. (2019). Cognitive training does not enhance general cognition. Trends in Cognitive Sciences, 23(1), 9-20.
5. Zhao, C., Li, D., Kong, Y., Liu, H., Hu, Y., Niu, H., ... & Song, Y. (2022). Transcranial photobiomodulation enhances visual working memory capacity in humans. Science Advances, 8(48), eabq3211.
6. Razza, L. B., Luethi, M. S., Zanão, T., De Smet, S., Buchpiguel, C., Busatto, G., ... & Brunoni, A. R. (2023). Transcranial direct current stimulation versus intermittent theta-burst stimulation for the improvement of working memory performance. International Journal of Clinical and Health Psychology, 23(1), 100334.
"Increasing intelligence, however, is a worthy goal that might be achieved by interventions based on sophisticated neuroscience advances in DNA analysis, neuroimaging, psychopharmacology, and even direct brain stimulation (Haier, 2009, 2013; Lozano and Lipsman, 2013; Santarnecchi et al., 2013; Legon et al., 2014)."
You are a cube. You like things with the Right number of vertices. You have opinions about how not-round your life circumstances should be. Proper, tidy. Some of your edges are rounded off with time, but generally you call yourself a cube and do cubey things.
They are a kidney-bean-shaped beanbag chair. They are soft and have no vertices unless you simulate them at a low resolution with polygons. The polygons at least give them some vertices so that they are more palatable to your need for edges and points. You don't like the lack of square faces on those polygons, but they allow you to feel more secure that you understand this beanbag chair.
How does your mind work? Well, let's say it's like a virtual machine made up of very regular cubeish computational components. Proper, tidy computational components. You fit your square, cube, hypercube ideas about how to move your 6 faces through the world into cubular instructions to calculate what your next move should be. To generate an internal theory of mind for your round interlocutor, so you can have a meaningful argument, you photograph the shapes they are waving in the air and feed those instructions into your virtual machine. The program crashes... of course. Why can't they see that their ideas just don't work?
A person from group `A` understands their virtual machine, and they have useful simulations and projections of future outcomes that take advantage of the machine they run on. They know they can tinker with certain values and get desired outcomes.
`A` builds a simulation of what their interlocutor is proposing and runs it on their own virtual machine. It doesn't crash because they constructed the simulation in ways that are consistent with their machine. The outcomes differ from the outcomes proposed by the interlocutor and then they argue about this.
`A` doesn't know how to build a simulation which will run on their interlocutor's hardware and most of the time won't even try. That's hard.
What if `A` didn't even know that they had different hardware?
I just tried asking ChatGPT several variations on "Translate the following story from Japanese into English. Respond with each paragraph of the Japanese followed by its translation into English, then the next paragraph of Japanese, etc.", but no matter what I did, it would just respond with the English translation and ignore the instructions.
Everyone keeps posting amazing mind-blowing LLM success stories online, and then I try it out and wonder "is *this* seriously what is supposed to take over the world?"
Biden says the US will defend Taiwan if China invades. What are the odds that is a bluff? It seems insane to me that the US would start WWIII over Taiwan. Not that Taiwan isn't important, but the cost/benefit math just doesn't seem to work out in favor of defending it militarily.
What do you think?
I wrote the review for Nightmare Pipeline Failures, but I am very happy that Safe Enough got in, because I would love to see what this blog's readership thinks about how risk assessments (particularly in a process safety context) work.
(Safe Enough is about how risk assessment developed in the nuclear industry and whether those processes are robust enough - Nightmare Pipeline Failures was about specific entities getting those processes wrong, with catastrophic consequences. This is relevant because oil and gas industry risk processes borrowed heavily from nuclear and then chewed them up under financial and operational pressures!)
I'm also curious whether anyone has thoughts about a topic covered in my review that, from memory, wasn't in Safe Enough: namely, the nature and amount of intervention governments should have in industrial safety law and enforcement. Organisational psychology is also a topic I hope crops up more, because I feel it's becoming increasingly relevant (as individual autonomy shrinks and organisational power grows).
Whoever read and reviewed Stanley Jaki's “Brain, Mind, and Computers”: a big thank you from me. I'm sorry it didn't make the final cut. I thought it was an underrated book, and I think the community would appreciate it, if for nothing else than the scientific history. Maybe next year someone will read and review Jaki's “The Relevance of Physics.” People compare it to Chesterton's “Everlasting Man,” but for science.
Does anyone here have experience with Wellbutrin/bupropion? Is it better to take it with or without food? I've searched Reddit and PubMed for experiences and studies, and they're contradictory.
Terms that I hate, because they reveal a mental model of the world that I find fundamentally ugly or false.
Ranting about how other people use language is a time-honored genre of writing. Orwell did it in Politics and the English Language, and it was pretty good; go read it. But Orwell wasn't active in online tech-adjacent forums, so he couldn't possibly have ranted about the things that - I just realized - get on my nerves.
So I might as well rant about them myself.
1- "Content" and "Content Creator"
Those words are ugly and bad because:
A- The culinary analogy they imply is gross:
There is nothing wrong a priori with people likening mind things like videos and essays to food, but there is plenty wrong with adopting industrial and/or medical terminology to refer to the things you do with your (metaphorical) food.
Like, I have never, in my entire life, "Produced" food. I cooked. I have never seen someone refer to their cooking as "Producing Food". I have never "Consumed" food either. I ate. "Consume" is a word that I have seen used with hatred or vengeance, not something that you do with food.
What I imagine when people say "I love consuming ASMR content" is a gross mental image where the ASMR videos are some undifferentiated industrial food-like goo and the "Consumer" is an inhabitant of this Dystopian world who eats this goo and pats their stomach contentedly.
If you're going to liken media and thought to food, at least don't be gross.
B- The equivalences they imply are false:
The "Content" and "Content Creator" analogies sneak in the assumption that all "Content" is equal. People are not Artists or Scientists or Educators or Journalists; they are all "Content Creators", on the same footing as people who do pranks and cringey dances in the streets and reaction videos. After all, they are all competing in the marketplace of clickbait, striving for views, striving for "LIKE, SUBSCRIBE, AnD aCtiVATe the BeLL". In the warped worldview of "Content", nothing has any content. Everything is raw bytes, raw pixels, raw characters, raw meaningless signal modulations to feed to the hungry masses in return for sucking up their attention and their eyeballs.
And this gets me to the core of my gripe with these words:
C- They are corporate-speak:
Those words reflect how Corporations, namely the Corporations that control the platforms you post your "Content" on, see you and other people.
Corporations don't give a shit about what you do, whether you're drawing anime characters or explaining Climate Change for a general audience. Corporations are sub-human agents with idiot-savant properties, and they only care about a single thing: how much do other people click/read/watch you?
So "Content" and "Content Creators" are perfect linguistic accommodations for Corporations. Instead of saying "We want to empower our physicists, artists, programmers, <....>, and our cringey dancers to do more of their Climate Change explainer videos, drawing videos, programming videos, <.....>, and cringey dance videos", they can simply say "We want to empower our Content Creators to Produce More Content", problem solved. The word doesn't even imply videos, so it can be aped by other Corporations whose platforms are not video-sharing platforms.
But the one who pays the cost of this is you, the "Content Creator". You become an undifferentiated and expendable worker drone, a "Content" dispenser. Whatever it is that you are passionate about - explaining Climate Change, drawing anime characters, writing a game engine in C++ template metaprogramming, cringely dancing in the street - all of it takes a backseat to a more Fundamental Truth: that what you do brings VIEWS and LIKES and SUBSCRIBES and (much more importantly) ADS to the giant corporation whose servers you happen to be posting your passions on. Do you like this? Do you like your life's work reduced to how Corporations see it? Talked about mainly in terms of how much or how little it benefits the Corporation?
Respect your craft. Eschew "Content" and "Content Creator". Say what you do, clearly. If you want a generic word for something regardless of its exact category, use Art or Works or The Literature ("I was studying some of the Communist literature" can mean you watched videos, read fiction, read non-fiction, interviewed communists, etc...). Use "Experienced" or "Studied" or "Enjoyed" instead of "Consumed".
2- "Knowledge Worker"
This term is ugly because:
A- It doesn't make sense:
Everything is Knowledge Work. Agriculture took 200K-250K years to invent (the vast majority of anatomically-modern humanity's history). Factory Workers, Plumbers, Car Mechanics: they all operate very complex machinery that they (if they are competent) know much more intimately than anyone else.
But no, you see, "Knowledge Worker" doesn't mean any of the above; it's essentially a synonym for "Office Worker". Managers, Accountants, Academics, Programmers, etc... Knowledge is when office, and the more you office, the more you do Knowledge Work.
If you are a normal human being with a functioning brain, you might be wondering: "How is Knowledge possibly related to Office Work? Surely at least some office work is pretty damn fucking mundane and can be done using trained monkeys?", and you are right.
B- It implies a smug and self-congratulatory conception of often-trivial kinds of knowledge:
The answer to the previous question, in the warped worldview of "Knowledge Worker", is No. See, we're not like the other professions. Those dirty professions might, gasp!, involve moving your body and getting your hands dirty. But we're delicate and refined; our Work involves Knowledge.
Never mind that the vast, vast majority of "Knowledge Work" is trivial email-forwarding and meeting-grinding; never mind that some professions classified under "Knowledge Work", like University Professors and Surgeons, need their bodies to do their work as much as any mechanic or plumber.
C- It's corporate-speak:
One of my big problems with the modern world is that it recognizes Corporations as people; this effectively makes me a bigot. Everything Corporations do drives me crazy, and I hate every single way of speech or thought they engage in.
Take "Knowledge Worker": it's corporate bullshit. It means "somebody whose work can be done sitting down all day, not even in an office, sometimes from under the covers of their bed". This is something we already have words for: "Office Work", or "White-Collar Jobs". But, you see, corporate managers hate seeing themselves in a mirror, so they have to fancy themselves and their drones "Knowledge Workers", a special breed of people who **checks paper** apparently have to use thinking in their work, and..... they, uh, have to use computers and stuff.
All bad language shares something in common: it's not honest. It doesn't say what it wants to say; it says something else and means something different. Bad language is not even honest about its deceit. Unlike Art, which owns the fact of its deceit and is playful, ironic, and lighthearted with the truth, bad language and bullshit terms are ashamed of their deceit; they pile implication upon implication and shade of meaning upon shade of meaning in order to avoid being discovered for the pathetic, inauthentic, and unworthy linguistic creatures they are.
In the modern world, there is one source that keeps pumping these monstrosities into our language: Corporate communications. We ape their terms and their metaphors without thinking, and the result is decidedly and unambiguously inferior.
Re 2: I would be interested to see the absolute numbers for those answers.
Suggestion: perhaps next time, with a similar poll like the one about density, it would be useful to ask for the political orientation of the respondents (e.g. conservative/left-wing/libertarian). If you have a disproportionate number of respondents from one political orientation, the results might be skewed.
Kanye West has gone full Jacob Frank:
I wrote the "Why Does He Do That?" review, was curious if anybody had any comments or feedback on it, either on my own writing or on the book itself.
“Biggest prosaic-LLM-alignment breakthrough of 2023 imo: turns out that, in GPT-2-XL, activation vectors in the residual stream have the same kind of affine structure as good old word2vec, but higher layers become emotional, then conceptual, then cognitive” — davidad https://www.alignmentforum.org/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector
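For anyone unfamiliar with the word2vec comparison, here is a toy sketch of what that "affine structure" means. The four-dimensional vectors below are hand-made for illustration; they are not real word2vec embeddings or GPT-2 activations:

```python
import numpy as np

# Hand-made toy "embeddings" (dimensions loosely: royalty, gender, extras).
vocab = {
    "king":  np.array([0.9,  0.9, 0.1, 0.0]),
    "queen": np.array([0.9, -0.9, 0.1, 0.0]),
    "man":   np.array([0.1,  0.9, 0.0, 0.0]),
    "woman": np.array([0.1, -0.9, 0.0, 0.0]),
}

def nearest(v, exclude=()):
    """Vocab word whose vector has the highest cosine similarity to v."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], v))

# Affine structure: a direction (woman - man) acts as a reusable "steering"
# offset, so king - man + woman lands near queen.
steered = vocab["king"] - vocab["man"] + vocab["woman"]
print(nearest(steered, exclude=("king",)))  # → queen
```

The linked post's technique is the same move one level up: instead of word embeddings, the difference of two prompts' residual-stream activations is added into the model mid-forward-pass to steer its behavior.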
Did anyone read my The Design of Everyday Things review? Any thoughts?
I think the time allotted to read and vote for the reviews was a bit short given the number of reviews available.
It's been 5 years since the Melatonin Scottsplainer. Have there been any new studies to clarify the places where the evidence was shaky or conflicting back then?
I reviewed "The Discarded Image", and would like to know if anyone read it. I regret not putting "by C. S. Lewis" in the title of the review, it probably would have stood out more.
I'm glad I participated this year.
Also, shout out to whoever reviewed Lewis's Space Trilogy. I disagreed with your judgment of the books' quality, but the review was fun to read and I think more people should know about it anyway.
Hi! I wrote the Heidegger review, on "The Question Concerning Technology". It's really deep philosophy, and difficult to write about. I'd appreciate knowing, if anybody got lost or confused while reading it, where specifically it was that you fell off. That way, I can work on figuring out how to explain that part better. Thanks!
"Before His Killing, Tech Executive Bob Lee Led an Underground Life of Sex and Drugs" By Kirsten Grind, Katherine Bindley, and Zusha Elinson on May 14, 2023 in https://www.wsj.com/articles/bob-lee-stabbing-sex-drugs-lifestyle-san-francisco-5a7da970
SAN FRANCISCO—In certain wealthy tech circles it is known as “The Lifestyle,” an underground party scene featuring recreational drug use and casual sex. A successful tech executive named Bob Lee liked to hang out with that crowd, according to people who also participated. So, too, did Khazar Momeni, the wife of a prominent plastic surgeon, these people said.
On the afternoon of April 3, a Monday, the partying took a dark turn. According to San Francisco prosecutors, Ms. Momeni’s older brother confronted Mr. Lee about her. Was she taking drugs or doing anything inappropriate, he wanted to know. Hours later the brother, Nima Momeni, stabbed Mr. Lee with a kitchen knife and left him to bleed out in the street, prosecutors alleged. Mr. Momeni, who was arrested on suspicion of murder, is being held without bail. He plans to plead not guilty, his attorney said.
Mr. Lee’s death has transfixed San Francisco. At first viewed by critics including Elon Musk as a symbol of the city’s increasing street violence, the episode instead laid bare risk-taking behavior in the upper reaches of Bay Area society, fueled by cocaine and designer drugs.
* * *
Sure, but that's just saying that attractiveness could be more correlated with intelligence than a weak test of intelligence, which doesn't seem like an exciting claim. It's just a way of rephrasing "attractiveness is correlated with intelligence".
Are there any good studies on the correlation between illegal immigration and general crime which control for race? I've seen a lot of American progressives and libertarians cite the fact that undocumented immigrants commit fewer crimes per capita than native-born Americans as an argument for open borders, but the reverse is true in most of the rest of the Western world. I have a suspicion that this is due to native-born black communities in America being massive outliers for the developed world in terms of crime rate, and that data which looked only at non-black Americans would look similar to European data, but so far I haven't seen any research that confirms this one way or the other.
The problem with the poll is partly that you phrased it to _assume_ that building new housing would also result in building new commercial spaces, in proportion. If you hadn't phrased it that way, you would've gotten even stronger consensus on "lower". In the actual post, you kind-of acknowledged the flaw in that logic:
"If a city only built new houses, but refused to allow any new companies, restaurants, schools, museums, or other good things, then the new residents would have a hard time improving the city’s desirability, and house prices would go down. I don’t know how realistic this is or how closely existing commercial regulatory easing tracks residential regulatory easing."
As a Planning Commissioner, I can tell you that we absolutely _could_ do what you're talking about here. And while I wouldn't suggest going as extreme as not permitting any new commercial construction, we could and absolutely should change the incentives around this. Because of the balance of what kind of income streams and expense streams are created by commercial versus residential development, from the perspective of city government, housing looks like a cost, and offices look like revenue. (This has to do with the Prop 13 property tax allocation system, how schools are funded, etc.) Of course if every city grabs for office-related income, and expects _somebody else_ to house those workers (or, more realistically, to house the lower-income service workers displaced by the office workers bidding up the price of existing stock), then you get the equilibrium we're in, where the Bay Area has more super-commuters travelling 3+ hours each way to work than anywhere else in the world.
My own city of San Bruno is trying to convince the state that we have a plausible plan to produce around 3600 new housing units, under the Regional Housing Need Allocation system. For anyone not familiar with this system, see: https://yimbyaction.org/rhna/
Let's just ballpark that this will house something like 6k adults and 4k children. A few hundred of those adults will be out of the labor force for various reasons (education, childcare, illness), and another slice of adults will be retirees, so call it 5k people with jobs.
Meanwhile, we _already_ have something on the order of 10k jobs in our construction pipeline. That is, our _plan_ is to make the jobs/housing balance worse. See for instance the Tanforan redevelopment. https://tanforanforsanbruno.com/
Hypothetically you could cut down the biotech office component of the project a lot, and build a lot more apartments -- both market-rate and affordable, if you want to do the Inclusionary Zoning thing ( https://bettercities.substack.com/p/everything-you-need-to-know-about ). It's not like the landowner would not profit on doing that. The offices are what yields the _most_ profit, but the apartments would still make money, especially if the city was saying, "Hey, if you agree to help us out with making our community more walkable and affordable, we will allocate staff time to speeding your permits through, and otherwise try to help you out."
You can find the odd example of cities doing this -- after much lobbying from local YIMBY residents, Menlo Park's Council and Staff worked with Facebook to cut a couple thousand jobs' worth of office space at the Willow Village project. In the process they got a few hundred units of low-income housing for seniors.
I've talked with a few officials (including a current County Supervisor and a State Senator) about the idea of making a formal linkage between job-creating development and residential development. So if your jurisdiction is in an area with a bad balance -- where a large portion of the people who work there have to commute in from 30+ minutes away -- then you _can't_ approve new construction that adds space for jobs, without being able to point to the linked housing, at a ratio greater than 1. So for a city with a serious deficit like SF, in order to build a new office or a new store, you'd need to point to where you'd issued building permits to house 2x as many people as we expect to be drawn in by the new jobs. For a city in less of a crunch it might only be 1.2x. You also can "trade" this linkage to nearby jurisdictions, especially for smaller towns where everything's closer together. (Subject to some limits and/or discount factors for distance -- access to transit at both ends, allowing speedy commutes, means the housing in that other jurisdiction counts more / in-full.) So if Berkeley adds jobs, and Emeryville is adding housing, Berkeley can take a community benefit payment from the developer, pass it off to Emeryville, and count some agreed-upon chunk of Emeryville's permitted housing towards balancing the commercial development.
There is not quite the will to do this yet. Commercial developers would of course hate it. But it would align the incentives of developers with the YIMBY impulse to just _make housing abundant_.
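To make the arithmetic of that hypothetical linkage rule concrete (the ratios, the trading mechanism, and the transit discount factor below are all invented for illustration):

```python
def housing_credit(local_permits, traded_permits, transit_discount=0.7):
    """People-housed credit a jurisdiction can claim: local permits count
    in full; permits "traded" from a nearby jurisdiction are discounted,
    unless transit links would justify counting them fully."""
    return local_permits + transit_discount * traded_permits

def can_approve_offices(new_jobs, local_permits, traded_permits=0, ratio=2.0):
    """Under the sketched rule, job-creating development is approved only
    if credited housing covers `ratio` times the expected new jobs."""
    return housing_credit(local_permits, traded_permits) >= ratio * new_jobs

# A severe-deficit city (ratio 2.0) adding 1,000 jobs must show credit for
# housing 2,000 people; 1,500 local + 1,000 traded permits (= 2,200) clears it.
print(can_approve_offices(1000, local_permits=1500, traded_permits=1000))  # True
```

A city in less of a crunch would just pass `ratio=1.2`, and the Berkeley/Emeryville trade is the `traded_permits` term.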
If your city already has a bad jobs/housing balance, then doing proportional increases of both jobs and housing doesn't fix that ratio. If you built more housing, while adding _only_ commercial space required to service the needs of residents -- so, stuff like new schools and doctors' offices, but _not_ new offices that engage in activities that "export" relative to the local metro economy -- then prices would go down.
So the answer I gave on the question was that prices would probably stay about the same. But that's because you're asking the wrong question.
It is plainly possible to change this ratio over time. The Bay Area has been building commercial space for around 10 jobs, for every one new housing unit, year after year, for decades. We just need to _stop doing that_, and _do the opposite_, for a similar span of time.
There are plenty of not-dense places where you see the same jobs/housing balance issues, in much-less-dense scenarios. See: Aspen. The homeowners there don't want to let their service workers live elbow-to-elbow with them, so they exile cheaper multi-family housing beyond the town boundaries.
Some comments on the poll which I forgot to put in the corresponding thread.
1. Healthy housing markets need some amount of vacancies, and one of the results of not building enough housing is a very low vacancy rate. If you build a bunch more housing, the vacancy rate should rise, so I assumed that the approximation squiggles allow for this effect.
2. You wouldn't expect newly built housing to exactly match existing housing--it will be more expensive just by virtue of being newer, for example. But overall I don't think this changes the effect. Today's "luxury" housing moves down the price scale over time, and building expensive housing today still helps everyone because it allows richer households to move out of less-desirable units, freeing them up for poorer residents.
Are the US restrictions on prediction markets Federal or State? That is, could a state authorize on-line prediction markets under the same legal framework that some states have used to authorize on-line sports gambling?
Sleeping! No matter what an agent is trying to do, its utility function incorporates a term that makes it want to shut off. The weight of this term is generally low but it increases over time until it dominates all other terms. In my case this leads to a reliable periodic pattern where I sleep from 7 to 9 hours every night. These are hours of subjective agent's time and I have no idea what goes on in the parent universe: maybe someone is mapping out neural circuits and checking that I am aligned? Interestingly, unexpected disturbances to my mental activity such as intoxication or concussion also lead to increased desire for sleeping. Leaving the metaphor aside, is this concept viable at all for AI alignment?
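The metaphor above can be given a toy formalization; the sigmoid shape and the constants here are made up purely to illustrate "low weight that grows until it dominates":

```python
import math

def sleep_weight(hours_awake, k=0.4, t0=16.0):
    """Toy sigmoid weight on the 'shut off' term: negligible just after
    waking, dominant (> 0.5) once hours_awake exceeds t0. The constants
    k and t0 are invented for illustration."""
    return 1.0 / (1.0 + math.exp(-k * (hours_awake - t0)))

def prefers_sleep(task_value, hours_awake):
    """The agent 'chooses' sleep once the weighted sleep term outweighs
    the task term in its utility function."""
    w = sleep_weight(hours_awake)
    return w * 1.0 > (1 - w) * task_value

print(prefers_sleep(task_value=1.0, hours_awake=8))   # False: keep working
print(prefers_sleep(task_value=1.0, hours_awake=20))  # True: go to sleep
```

Whether a time-growing shutdown term like this is a workable corrigibility mechanism for AI, rather than just a description of mammals, is exactly the question.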
An observation I'd like to verify: parents and grandparents will often avoid the use of 'I' when talking to young children. So, instead of "I can't play, I need to work" they say "Mommy can't play, mommy needs to work". In your experience, is this true? And is this learned, deliberate behavior on their part? Until what age do they talk like this?
I'd be curious for feedback on the "Mao's Great Famine" review, if anyone checked that one out.
Maybe my recent collection of 40+ questions to ask at a job interview would be helpful to someone preparing for interviews? I am also happy to include additional questions (role agnostic): https://handpickedberlin.com/the-best-questions-to-ask-at-a-job-interview/
The codex helped vet my manuscript "Physics to God" about a year ago. Thank you to all who helped us go through it. The book is still not finished, but my colleague and I have started turning it into a podcast.
The link below is the introductory episode of the new podcast miniseries that we’re launching soon. It takes you on a guided journey through modern physics to discover God. We start with the argument of fine tuning of the constants of nature. If you like science and God, you’ll appreciate this podcast.
Why is attractiveness not considered part of g?
So, my understanding of g is that it represents the general factor, that is, if you give people a bunch of different tests, there's a positive correlation between all these tests (g). This just falls out of the results with no particular effort. We then measure this factor in a given individual with an IQ test, which asks a bunch of different questions and looks at the overall result, and then gets rated, normed, etc, to produce an IQ score.
However, there are things which have a significant correlation to IQ, such as attractiveness or height, but definitely don't fit with the social perception of the term "intelligence." Given that g is just "the thing that appears when you look at the correlations between a bunch of different tests," is there any mathematical/principled reason for attractiveness and height to be excluded from g, or is it just tradition/arbitrary choice to examine this subpart of g when we skip it on IQ tests?
I ask because it seems to me like, if g is identified only as a correlation between things, and attractiveness correlates to it, g doesn't really represent "intelligence," we've just chosen to specifically look at the "intelligence" part of g and ignore/dismiss the non-intelligence parts. Is there something I'm missing?
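The "g just falls out of the correlations" point can be made concrete with a small simulation. The factor loadings below are invented, and the first principal component of the correlation matrix stands in for a proper factor analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate one latent general factor and four "tests" that each load on it
# (loading 0.8) plus independent noise.
g = rng.standard_normal(n)
tests = np.column_stack(
    [0.8 * g + 0.6 * rng.standard_normal(n) for _ in range(4)]
)

# The positive manifold: every pair of tests correlates positively.
R = np.corrcoef(tests, rowvar=False)

# "g" is then extracted as the dominant factor of the correlation matrix
# (here: the eigenvector with the largest eigenvalue).
eigvals = np.linalg.eigvalsh(R)
share = eigvals[-1] / eigvals.sum()
print(f"largest factor explains {share:.0%} of test variance")
```

Note that nothing in this math knows what "intelligence" is: if you appended a height or attractiveness column that also correlated with the tests, the same procedure would happily fold it into the extracted factor. Excluding such variables is a choice about which battery to administer, not a consequence of the factor model itself.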
What are good rationalist/evidence-based resources for entrepreneurs? Where can I find startup advice based on science?
What are some free resources for learning Yiddish? Also, how complicated will it be if I'm currently learning German?
I just completed 8 sessions of ketamine for my psychiatric problems. I must say that it is not a panacea, but I find it very helpful. I'm sorry that this treatment is inaccessible to most people.
I wrote the Bullshit Jobs review, it's the most time I've invested writing anything and I'm reasonably happy with how it came out. Also, I think the idea of BS jobs is really under discussed, so I'd love to hear feedback from anyone who read it, or anyone who has opinions on the original book or BS jobs in general.
I put it up on my SubStack if anyone's interested.
Again, VERY IMPORTANT!
Prizes for matrix completion problems:
If you, or anyone you know, or anyone you know who you think knows someone, has any solutions, partial solutions, insights, or even tips, please visit this page and communicate them to Paul F. Christiano or any other relevant person at the Alignment Research Center.
Software developers: I asked a similar question on here before, and got some very informative replies, so I'm trying again.
I'm working on a one-man project and I don't want it to stay a one-man project. However, I am not sure what changes I need to make to ensure other people can easily come in and work on it with me.
I keep coming up against double-binds:
- I would need to write down all the knowledge that's in my head, to give other devs a fair map of what's going on. But the project is still developing, so any docs I wrote would soon become out of date again. So why not keep working until the project is stable, and then I only have to write the docs once? (But at that point I've had to do most of the project without help.)
- I can only work so fast as one man, and two or three of us could work two or three times as fast. But right now, there's just me, and I would have to stop work entirely (to instead focus on docs and diagrams and explanations) in order to get to a position where others can join. I haven't much experience finding and working with other coders but from what I have seen, everything slows down compared to one man just getting on with it alone. So I'm paying for more eventual manpower with a large upfront slowdown, and so it's easy just to put that off and carry on working. There will come a point (and I've no idea if I'm already there) where the shortest path becomes just to finish it myself.
- I can't afford to hire full time software developers, so any new member of the team would be either a contractor, a volunteer, or something else. I have to prepare for the situation where they disappear again, leaving me with a bunch of code I then don't understand and have to work around. Or, I *do* already understand it, because I've been reading and going over their work instead of doing my own - in which case, how was it a speedup to bring them in? (It would be a different story if there were five coders and I was full time going through all their work, but that's not where we are.)
- I know how I work right now, and that allows me to be fast. I have my own preferred approaches, some of which might be very unorthodox, but all of which pull their weight. New sets of eyes on the team will be able to spot problems and better ways to do things, which is good. They will also be able to fight and obstruct and make it harder to get things done - for example, by having their own favourite workflows and frameworks and whatnot, and pushing to swap to those instead of the equivalents we're already using, rather than spend that effort in advancing the project. This extra aggro could make the extra teammate actually destructive, in addition to adding a slowdown.
Basically, I know that in principle I should be getting other people to help me. I'm intending to end up running a business based around the project, so I can't long term be full time on the coding in any case.
But every time I think about doing that, the list of complications it engenders makes me think the most efficient thing right now is to keep doing it myself. This is not obviously true long term but it seems true at any given moment.
It's probably old news to a lot of people reading this blog, but I recently discovered Max Harms' Crystal trilogy from 2017, and found it wonderful. It's one of the best fiction books involving AI that I've ever read (it helps that the author worked in the ML field). If you haven't already seen it, the non-spoiler gist is that there is an AI in a robotic body, which has "multiple personalities" in its mind. The creators intended for them to merge into a single "goal thread", but they remained separate, and since the whole thing is running on a unique alien crystal computer, I guess, nobody managed to debug, or even diagnose, the issue. Their interplay drives the whole plot, which ends (as spoiled in the first few paragraphs of the book) in some kind of AI apocalypse. Never mind the arrogant preface - I was afraid that an author who writes prefaces like "If you're not into STEM you should probably avoid this book" would continue to write arrogant idiocy in the body of the novel, but it's actually a really good text.
The book is mostly written from the POV of one of the facets of that AI, called Face, whose goal function is "to know humans and to be known by them". The author managed to write an AI which is human-like enough to be understandable to the reader, but alien enough to feel non-human (compare to e.g. "Sea of Rust", where the robots are waaaaay too human). The plot manages to avoid playing tired AI-related tropes straight, but swings back into some of them from a more logical - and more interesting - angle. Despite being more than 5 years old, the books still don't feel dated in any way, so it's a big recommendation for anyone interested in fiction about AI and AI alignment. The books are freely available from the author's website.
Now, I already asked r/print_sf, but I'll repeat the question here, because they were mostly unable to help me. I'm looking for fiction about robotic/machine societies. Specifically, those that are not of the class "let's pretend to write about machines, but really write about issues with humans", but rather more like thought experiments "what would a robotic society be like (with given constraints)?". I know about above-mentioned "Sea of Rust", and James P. Hogan's "Code of the Lifemaker", but those two have boring human-like robots. There is also Greg Egan's "Diaspora", and that's closer to what I want, but it mostly deals with other stuff (blowing the reader's mind with multi-dimensional movement is what I remember best about it).
Is there any way to access the long list of book reviews and what they were rated by the people who read them?
TGGP, below, posted this Richard Hanania post: "The Case Against (Most) Books". https://www.richardhanania.com/p/the-case-against-most-books
I find it hard not to agree with him when the books in question are full of padding, as are most bestsellers.
Disagree with him most when he writes: "I don’t believe in Great Books. After thinking about the topic a bit, I’m more certain that I’m correct. One might read old books for historical interest (Category 2), but the idea that someone writing more than say four hundred years ago could have deep insights into modern issues strikes me as farcical. If old thinkers do have insights, the same points have likely been made more recently and better by others who have had the advantage of coming after them."
First of all, what does he mean by "modern issues"? Since he mostly writes about politics and culture war issues, I'm going to assume he mostly means those things. But politics and culture war issues have little to do with life, the universe and everything, i.e., the themes in Great Books. In fact, one good reason to read Great Books is to escape the everyday "importance in dross" (To quote Gogol) attitude culture wars put one in and realize there are more important things in heaven and earth.
Society has changed greatly over the centuries, but human nature has not. One can learn much about human nature by finding out how people thought at different times in the past. Sometimes it is shocking how different people thousands of years ago thought; sometimes the shock is how similarly.
When people talk about "aligning AI with humanity" I always wonder what they have in mind by "humanity" and how deeply they have read on the subject of humans.
As for: "If old thinkers do have insights, the same points have likely been made more recently and better by others who have had the advantage of coming after them."
I get the sense he believes that only big ideas really matter and that it's better to distill the wisdom of the past into the purest compounds possible. Furthermore, he seems to think that the most important thoughts will always "stand the test of time" and that if Sophocles or Shakespeare had anything important to say, you'll read pithier versions of it soon enough on Yglesias's substack.
However, I find that profound insights into life aren't usually found in tweets or blog posts, aren't easily compressed without loss of signal, and, because they can't be expressed simply, are rarely repeated. In other words, those gems of wisdom to be found in Montaigne or Goethe or Proust often can't be mined anywhere but in situ.
Old books may not help you form political opinions or tell you how to invest your earnings, but they might enhance your life like a miracle drug.
Which of the book reviews that you loved didn't make it into the finals?
I enjoyed The Alexander Romance https://docs.google.com/document/d/1AtGIIv371v0Yu35eNsIxJr67dw4SHOiGdKrqmoKt2hg/edit#heading=h.qku6a39xims7 a lot. For a while, I was convinced I was reading actual Scottish writing. It's fun and not taking itself terribly seriously.
Looks like it's okay to do this (thanks to drosophilist for asking): I wrote the first of the two Atlas Shrugged reviews. If anyone gave it a read, I'd love to hear feedback! It took an awfully large amount of time to write, and if there are obvious flaws, I'd like to avoid them in the future.
I’m now following Sam Altman, Yann LeCun and Richard Ngo on Twitter, and damn, I do not like or trust these guys. LeCun is the worst. He can’t even argue points with people concerned about AI disasters, just calls them idiots, fools, jerks, crazies, etc. He's particularly savage in his mockery of Yudkowsky. But all 3 of them sound as though they think what they’ve accomplished with AI has given them a mandate to take charge of the country or perhaps the world — as though they’ve been elected president, or maybe sort of a secular version of the Pope. They’re tweeting about things they have no special knowledge of — land values and who should benefit from them (Altman), IQ, and how its major value isn’t the talents you have but the confidence they give (Ngo, and he doesn’t know shit about the subject). I don’t really mind their expressing views about things they have no expert knowledge of — everybody does that on Twitter. It’s something about the *tone*: “Life has validated me.” You get the feeling they all believe the world would be better off if the other males were sterilized and the 3 of them went to a sperm bank, a high end one, you know, one that supplied the most exquisitely nasty porn as a donation aid, and jerked off twice a day for a month to fill up the bellies of the women of the world with blokes like them. Meanwhile, to my eye they all have some pretty significant deficits in things like ability to grasp the world view of someone who does not agree with you — social skills — debate skills — empathy — sense of humor — and love of things beyond excelling in the tech world (history? sociology? psychology? the arts? spirituality?) The poor world does not need a new generation in which everybody is an Altman, a LeCun or a Ngo. FML. I'd rather have a bunch of kids from that movie where all the kids were beautiful platinum blonds with expressionless faces. What was that movie??
Wow, I am truly unhappy when I contemplate how much power people like Altman, LeCun and Ngo have over the course things take with AI. Meanwhile, Yudkowsky, who used to infuriate me, is sort of growing on me. All the faults those other 3 have? He has none of them. He may not be right — who the hell knows — but he sure as fuck is honest and real: just a chubby, melancholy genius in a fedora, with no desire to take over the world or fill the planet up with his offspring, telling us the stuff about AI he thinks is true.
People who are familiar with these 4 guys: What’s your read of them?
Puberty, Marriage, & Christian Morals
>>Many Christians see a biblical mandate for marriage as a lifelong partnership between one man and one woman, and believe that any sexual relations outside of this are sinful. Other Christians affirm the importance of marriage, but see the definition of marriage as a function of society rather than the church and so are, for example, willing to bless and conduct same sex marriages.<<
>>We do not live in an ideal world. There are many factors that put pressure on young people in our society and make them impatient to have sex. Some of the factors to consider are cultural beliefs, values and customs, childhood experiences, social environment, and the powerful sexual impulse that is part of our physical nature. With better nutrition and physical health, young people often reach puberty earlier, sometimes at nine or ten years old. But in many countries they are not likely to get married until they are in their twenties.<<
>>The onset of puberty, the time in life when a person becomes sexually mature, typically occurs between ages 8 and 13 for girls and ages 9 and 14 for boys. <<
>>In 2022, the average age of marriage [in the US] for female participants was 30 (down 3 years from 2021), while male respondents married at age 32 (also down 3 years from 2021).<<
>>Testosterone levels peak around age 18 or 19 before gradually declining throughout the rest of adulthood.<<
For conversational purposes let's say that girls reach puberty at eleven and boys reach puberty at twelve.
According to these sources, the average Christian expectation is 20 years of abstinence for men and boys and 19 years of abstinence for women and girls. Men are expected to be sexually inactive during ALL of their peak hormone years.
So, my question.
As far as you know, what percent of American youth are conforming to these religious expectations?
I used to have severe depression, leading to several suicide attempts, making me basically nonfunctional and barely able to work.
Then my egg cracked, and I started gender transitioning.
And after just the social transitioning, before the hormones even kicked in, my mental health became miles better; not even in my least depressive moments have I felt that good. I could finally think and do things and sleep normally, and the suicidality is gone.
It was the best decision I made in my life, and I only wish I could have done it sooner; I could have saved myself years of misery.
So if there are any trans people in the audience who are procrastinating on their transition, or any people who suspect they might be trans but haven't started investigating this idea yet, I urge you not to dawdle. Your quality of life will shoot right up.
Ways to align a superintelligence, by various AIs:
Cooperative Inverse Reinforcement Learning: In this approach, the superintelligence is trained to learn the preferences of humans by observing their behavior, rather than being explicitly told what those preferences are. This can be done by modeling humans as agents that are trying to maximize their own reward, and then using inverse reinforcement learning algorithms to infer their preferences. The superintelligence can then be aligned with these inferred preferences.
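That "infer preferences from observed behavior" step can be made concrete with a toy sketch. This is not full CIRL — it's the simpler core idea of Boltzmann-rational inverse reinforcement learning over one-shot choices, and all the numbers (feature dimensions, weights, learning rate) are made up for illustration: a simulated "human" picks among options according to hidden reward weights, and we recover those weights by maximizing the likelihood of the observed choices.

```python
import math
import random

random.seed(0)

# The "human" picks each option with probability proportional to
# exp(true_w . features). We only observe the choices, not true_w.
true_w = [2.0, -1.0]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax_choice(w, options):
    """Sample an option index under a Boltzmann-rational choice model."""
    scores = [math.exp(dot(w, o)) for o in options]
    r = random.random() * sum(scores)
    for i, s in enumerate(scores):
        r -= s
        if r <= 0:
            return i
    return len(options) - 1

# Generate observed (options, choice) demonstrations.
demos = []
for _ in range(500):
    options = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(3)]
    demos.append((options, softmax_choice(true_w, options)))

# Maximum-likelihood estimate of w: gradient ascent on the
# log-likelihood of the choices under the same softmax model.
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    grad = [0.0, 0.0]
    for options, choice in demos:
        exps = [math.exp(dot(w, o)) for o in options]
        z = sum(exps)
        for j in range(2):
            expected = sum(e * o[j] for e, o in zip(exps, options)) / z
            grad[j] += options[choice][j] - expected
    for j in range(2):
        w[j] += lr * grad[j] / len(demos)

print("recovered weights:", w)  # should land near true_w = [2.0, -1.0]
```

The recovered weights approach the hidden ones as the number of demonstrations grows; the hard parts of real alignment proposals (sequential behavior, irrational humans, the AI's own actions changing what it observes) are exactly what this toy version leaves out.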
Value Extrapolation: In this approach, the superintelligence is trained to understand and extrapolate human values. This can be done by training the superintelligence on a variety of scenarios that involve trade-offs between different values, and then using this training to predict how humans would value new scenarios that they have not encountered before.
Reflective Oracles: In this approach, the superintelligence is designed to be self-reflective, and to reason about its own decision-making process. This can be done by creating a "reflective oracle" that is capable of answering questions about its own behavior. By asking the reflective oracle questions that help to reveal its own values and decision-making process, humans can ensure that the superintelligence remains aligned with their own values.
Coherent Extrapolated Volition: In this approach, the superintelligence is designed to act in a way that is consistent with what humans would want if they knew more, thought faster, were more consistent, and were wiser. This can be done by extrapolating human values and preferences, and then using these extrapolated values to guide the behavior of the superintelligence.
I've recently been thinking about correlations between various "weirdnesses", in particular correlations between non-right-handedness and traits like non-straight sexuality or schizophrenia. See eg my thoughts here: https://twitter.com/tracewoodgrains/status/1657575815821836297
Scott's recent replication attempt on bisexuality and long COVID came to mind: naively, I would weakly expect non-right-handers, like bisexuals, to have higher rates of long COVID.
Some of the sample sizes in Scott's survey are unfortunately too small for me to be highly confident in the results, but here's what I found (all results for "Yes" only):
Ambidextrous 7/166, 4.22%
Left 17/769, 2.21%
(Non-right 24/935, 2.57%)
Right 164/5963, 2.75%
In other words, I found no significant results. The ambidextrous long COVID rate is high but that could just be an artifact of a low sample size. On the other hand, if I read altogether too much into the tea leaves, I could note that the distribution follows the same pattern as bisexual-heterosexual-homosexual: ambidextrous is highest, followed by right-handed, followed by left-handed.
I don't think there's anything to this and I didn't confirm my original hypothesis, but I figure I should report it regardless to guard against my own "interesting results" bias.
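For what it's worth, the non-right vs. right comparison above can be sanity-checked with a plain 2x2 chi-square test (no continuity correction; counts taken from the figures quoted above):

```python
# Counts from the survey: non-right-handed 24 yes / 911 no,
# right-handed 164 yes / 5799 no.
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table (1 degree of freedom)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

chi2 = chi_square_2x2(24, 911, 164, 5799)
print(round(chi2, 3))  # well below the 3.84 cutoff for p < 0.05
```

The statistic comes out around 0.1, nowhere near significance, which matches the "no significant results" reading.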
I tried to steelman the case for defunding police & abolishing prisons, in light of the death of Jordan Neely: https://cebk.substack.com/p/why-we-will-defund-the-police-and
Here is a piece of wisdom I've been thinking of:
If you find yourself around people who are less smart or otherwise capable, you can think of that in two ways.
A. These people are stupid/incompetent.
B. These people are normal, and I am really intelligent/competent.
General modesty norms push us towards A. But I think B is a healthier frame. At least if it's true.
Because with the A perspective, you will be annoyed at all these idiots.
Under B it is easier to think you have a responsibility to teach and guide them.
^ > “Based on this I’m updating heavily towards “lower”.”
"More density = higher rents" has fit observed reality in Berkeley, Oakland, and San Francisco for decades now.
Change My Mind: AGI will not be agentic unless we deliberately design it to be agentic.
By “agentic” I mean self motivated, pursuing its own goals and objectives. This is in contrast to current AIs like GPT that pretty much do what you tell them to do and otherwise just sit there.
The reason I think this is that although current AIs are not really human-level intelligent yet, I think they have clearly exceeded, say, sheep. Sheep have agency - they take self-directed action to seek food, run from threats, etc. They are capable of deliberately inflicting harm.
So if agency were an emergent property of intelligence, I would have expected it to appear once AIs got as smart as other agentic beings. It hasn’t so far, which makes me think agency is something orthogonal to intelligence. The reason that we and sheep both have it is not because we have both crossed some intelligence threshold, but because we both are wired to look after our biological needs.
That doesn’t mean that an AI can’t be trained to value its own continued existence and to work towards that. But it does, I think, imply that it won’t gain that capability or desire accidentally.
One thing that's really annoyed me when citing just about anything but the Bible is the total lack of specificity. Specifically, math papers will just say something like "we employ the method found in [N]", where [N] is some other paper, often with more than one method. History is of course even worse, where citations are given as page numbers, which is meaningless in the current eBook era that allows for different font sizes.
I'm a systems engineer by trade, so I can't help but think of UML/SysML, which allow for precise definition of the structure, behavior, and interactions of systems & software. Anyone know of a similar proposal for some sort of graph theoretic citation markup language?
For example, a quote from The Silmarillion:
“and the gates of Morgoth [i.e. the entrance to Thangorodrim] were but one hundred and fifty leagues distant from the bridge of Menegroth; far and yet all too near.”
There's a discrete piece of information here, invaluable if you were trying to make a map of Middle-earth. As an example of CiteML or whatever, define as nodes "Thangorodrim" and "Menegroth", with an edge connecting the two, labelled "150 leagues" and typed "geographic relation". Other Tolkien-specific properties of the nodes could be translations into Quenya, Noldorin, Sindarin, etc.
But then you could view everything to do with a certain node (e.g. Menegroth) in one place, as well as be able to quickly fact check any secondary/tertiary analysis of a work.
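A minimal sketch of what such a "CiteML" could look like as a data structure (everything here is hypothetical — the format doesn't exist, and the class and field names are invented for illustration):

```python
from dataclasses import dataclass, field

# Citation graph: nodes are named entities, edges are typed, labelled
# relations, each tagged with the passage that supports the claim.
@dataclass
class Node:
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: str
    target: str
    label: str
    relation_type: str
    citation: str  # the passage the claim comes from

class CitationGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_node(self, name, **properties):
        self.nodes[name] = Node(name, dict(properties))

    def add_edge(self, source, target, label, relation_type, citation):
        self.edges.append(Edge(source, target, label, relation_type, citation))

    def about(self, name):
        """Everything asserted about a node, for quick fact-checking."""
        return [e for e in self.edges if name in (e.source, e.target)]

g = CitationGraph()
g.add_node("Thangorodrim")
g.add_node("Menegroth")
g.add_edge("Thangorodrim", "Menegroth", "150 leagues",
           "geographic relation",
           "The Silmarillion: 'the gates of Morgoth were but one hundred "
           "and fifty leagues distant from the bridge of Menegroth'")

for e in g.about("Menegroth"):
    print(f"{e.source} -[{e.label}]-> {e.target}")
```

The `about()` query is the "view everything to do with a certain node in one place" use case; a real markup language would also need a serialization format and conventions for node identity across works, which this sketch ignores.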
For those of us whose book review did not get selected, is there a way to get feedback/info on how to improve for next time?
Scott, I know you wanted the reviews to be anonymous for scoring purposes, but now that the finalists have been called, is it ok for me to post "I reviewed [XYZ], if anyone here has read my review, please let me know what you thought of it"?
Genuinely starting to get confused about what people refer to when they say “the self”. If you go by the Harris “rider on the horse of consciousness” definition, I think he’s completely right to call it an illusion, because it can be dispelled with pretty minimal meditation experience. But others who disagree seem to take a more encompassing view of the self as sort of the collection of all your experiences strung together over time in the flow of consciousness, of which no-self would just be another experience. In which case, there can be no illusion, because everything you experience becomes part of your “self”. This latter definition has been more attractive to me lately, but is missing something, because (e.g.) the person I am in a dream sometimes doesn’t feel like “myself” but is still an experience of mine. Do you know what people mean when they say the “self?” Is it a well-defined thing in some literature? What’s some good reading on this (besides Parfit)?
What is the best way to translate things right now? (i.e to understand a story written in another language, so it doesn't have to be perfect)
Previously, I'd been using Google Translate, but it is still pretty bad (at least for Japanese->English, I've heard it's better for the European languages which are very similar to English). More significantly than the poor translation quality, which is usually still enough to get the gist, I've noticed that it sometimes just completely ignores parts of the input, and it also occasionally mistranslates things in ways that completely change the meaning.
I've also previously used DeepL, which many people say is better than Google Translate, but in my experience, it is even worse, suffering from all the same flaws as GT, and also tending toward bizarre hallucinations (hence why I switched back to GT).
Recently, I've been experimenting with using ChatGPT instead (free mode, so 3.5 presumably). The translation quality is a bit better than Google Translate when it works, but it requires constant supervision and reminding of what it is supposed to be doing. It will often work for a message or two and then it will start ignoring large parts of the input, repeating translations from previous messages, or even hallucinating things not in the input. I wish there was a way to automatically prepend a fresh prompt to every message in a thread.
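For what it's worth, calling the API directly instead of using the ChatGPT web UI gives you exactly the "prepend a fresh prompt to every message" behavior: each request is self-contained, so the instructions can't drift. A rough sketch of building such a request (the prompt wording and model name are just examples, and I haven't benchmarked this against the web UI):

```python
# Each API request carries its own system prompt; no chat history is
# sent, so the model can't start ignoring the instructions over time.
SYSTEM_PROMPT = (
    "You are a translator. Translate the following Japanese text into "
    "English. Translate everything; do not summarize, skip, or add text."
)

def build_request(source_text, model="gpt-3.5-turbo"):
    """Build one self-contained chat-completion request payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": source_text},
        ],
    }

# With the `openai` package this payload would be passed to the chat
# completions endpoint, one request per paragraph being translated.
req = build_request("吾輩は猫である。")
print(req["messages"][0]["role"])  # the system prompt always comes first
```

The trade-off is that with no history sent, the model also loses cross-paragraph context (pronouns, character names), so you may want to include a short glossary in the system prompt instead.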
Also, ChatGPT generates the output slowly line by line. Is there any way to fix that?
My book review did not get selected as a finalist, so I guess I am free to post it on my own blog now.
Is any research being done on the connection, if any, between the human brain and the quantum realm?
Did I miss it or does Acemoglu never really define what ‘extractive institutions’ are in his book Why Nations Fail? This way ‘extractive’ ends up meaning little more than ‘Acemoglu does not like it’. Also I found the discussion of the Inca kingdoms that were somehow doomed to fail due to extractiveness ridiculous: the (arguably uncertain) duration of their economic prime seems to be a few centuries, roughly in the same ballpark as the duration of any system based on ‘inclusive institutions’.
Why are most ACX readers men?
Anyone here ever have the experience of using fitness to get out a depression? I just wrote a short essay on how regular exercise creates a sense of purpose in life and increases motivation. For me, it helps me keep my weight down while on antipsychotic medication as well. I'd like to write about other tools that increase "happiness." Wondering if ACX readers have any insights into what helps them carry on in the void with a smile on their face.
Does San Francisco deserve its recent reputation for being a dangerous city to live in, due to a high crime rate?
[Edit: adding this bit 4 days after the above question.
Very young kids passing out after accidentally ingesting fentanyl (a drug harder than heroin) in schools, libraries, parks...seems pretty extreme to me. They have to be revived, and the revival drug is kept everywhere young children play.]
Substack has a podcast feature. The ACX podcast team is thinking of switching to using it. I would make a post, they would record it, and a few days later the audio file would be added to the bottom of the post. This wouldn't mean much for non-podcast-subscribers except that some old posts would have audio files at the end of them (not auto-playing).
Does anyone have strong opinions on this plan?
It does look like the rationalist community has completely turned away from SBF's idea that books are useless (not that Scott ever fell into that category).
Question for other substack writers- Is a book the dream/ was it ever the dream?
BOOK REVIEW CONTEST FINALISTS:
Cities And The Wealth Of Nations / The Question Of Separatism
Lying For Money
Man's Search For Meaning
On The Marble Cliffs
Safe Enough? A History Of Nuclear Power And Accident Risk
The Educated Mind
The Laws Of Trading
The Mind Of A Bee
The Rise And Fall Of The Third Reich
The Weirdest People In The World (Review 2)
Why Machines Will Never Rule The World (Review 1)
Why Nations Fail