Google DeepMind is hiring research scientists and research engineers for its AI Safety team, which focuses on alignment, evaluations, reward design, interpretability, robustness, and generalisation. Apply here: https://boards.greenhouse.io/deepmind/jobs/3049442. Also, please spread the message if you can.
I have MRSA. After one very painful outbreak of multiple abscesses, a 10 day script of doxycycline, and an incision and drainage, it went away. It came back within a few weeks, with a small pimple tripling in size and growing painful enough to keep me awake at night. I got another script for 10 days of doxy, but am having an absolute nightmare of a time getting a referral to an infectious disease specialist as I don't have a primary care doctor, I was only treated at urgent care. I do have a derm appointment forthcoming, however. Am I just screwed? Is this going to keep coming back and ruining my life? I worked in a hospital as an aide and am thinking about quitting and dropping out of nursing school after this.
Julia Galef has been silent since the beginning of 2022. Any rumors if/when we can expect to hear back from her, e.g., new podcast episodes?
Seeing in our host's latest post [ https://astralcodexten.substack.com/p/links-for-may-2023 ] a link to an interesting article on desalination, I wondered if any reader can help with a question I have on extracting salt from sea water.
I once had a challenging online exchange with someone who disputed my contention that optimal techniques for extracting salt from saline solution such as sea water could be different to those for extracting pure water from the same. It seemed I was a "cretinous mong" for assuming there could possibly be any difference.
When I pointed out he might be correct for 100% separation, ending up with a pile of salt on one side and distilled water on the other, but the same does not follow for partial separation of either one or the other, the consensus from other participants in the discussion was that he was the mong! But I digress.
I had read that a brilliant technique had been discovered for partially extracting salt from seawater by adding the water to a mixture of a pair of organic compounds in which the solubility of the salt depended on small temperature differences of the mixture. Some of the dissolved salt but none of the water would mix with the compounds, and the water formed a separate layer on top, as if the organic mix was oil.
Changing the temperature (I forget whether up or down, but probably the latter) by only a couple of degrees reduced the solubility, so that some salt would precipitate out of solution and could be filtered out. Then by simply skimming off the salt-depleted sea water, adding a fresh supply, and cycling the temperature again meant the process could be repeated.
I forget the name of the compounds though. As organic molecules often do, they had long names, such as poly-di-methyl-tetra-thingummy-jig, and out of idle curiosity I would love to be reminded what they were, not that I plan any salt extraction myself!
Throwing this out there for philosophy fans, math mavens, and those interested in schizophrenia. I just finished a short novel by Cormac McCarthy, “Stella Maris”.
It’s a very engaging, short - 200 pages - read. Styled as a series of conversations between a young woman who is a mathematical genius and her psychiatric counselor.
I won’t try to sketch the plot beyond saying she checked in with a toothbrush and $200,000 in cash.
I think a lot of ACX people would enjoy it.
Edit: there is a companion novel, “The Passenger”, that preceded “Stella Maris” and that I’m reading now. I don’t think reading them in order matters.
Should auld acquaintance be forgot? Never!
Our friend Vinay Gupta is still going, still involved in crypto, and still enlightening the ignorant, and I am genuinely pleased to hear about him, courtesy of an unexpected link from the drama-lamas:
I was a little worried given we heard nothing more about Luna or from himself, but it seems he was simply going deep with his giant footprint. I wish him well!
A baker misreads a request for an Elmo cake as a request for an Emo cake.
It all works out well. First time she's gone viral. Also, she gave the emo Elmo cake to the parents for free.
This resonates with what went wrong with the Japanese moon lander-- the radar report seemed weird, so the lander started ignoring everything from the radar and crashed.
The cake is much less consequential, but the baker was surprised to hear that the cake was for a fourth birthday, and she smooths that over, thinking that maybe the four year old is a Wednesday Addams fan. Fortunately, she has enough flexibility to ask about the theme of the party-- Sesame Street. This is why humans will defeat AIs. (Just kidding.)
*At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
*Of the subgroups in this scene, effective altruism had by far the most mainstream cachet
and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
*Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Be Cautious: Abuse in LessWrong and rationalist communities in Bloomberg News
Another poster further down the thread linked this post by Sasha Chapin about how he partially fixed his aphantasia: https://sashachapin.substack.com/p/i-cured-my-aphantasia-with-a-low
Reading this post plus some other related Reddit threads got me wondering: do non-aphantasic people feel that they get any tangible benefits from mental visualization, or is it basically just a form of entertainment? Sasha seems quite eager to "cure" what he views as a mental disorder, but I am aphantasic and to my knowledge I've never encountered any difficulty as a result. Like many other aphantasics I didn't realize that anyone could have mental visualizations until recently - I thought allusions to this ability were just a weird figure of speech.
As far as I can tell, the only practical impact that aphantasia has on me is that I tend to skim the imagery-heavy parts of novels because I don't get anything out of them. But I don't have trouble e.g. doing spatial transformation problems or planning move sequences in board games.
Does anyone with a strong ability to form mental visualizations/imagery feel that it plays an important role in any types of tasks or reasoning, and if so which ones?
Hey, does anyone with a strong math background have any potential connections or ideas on this?
Suppose that I have n matrices A_1, …, A_n ∈ R^(m×m) with m ≫ n. Can I find n new matrices B_1, …, B_n ∈ R^(n×n) that have the same 3-way cyclic traces, i.e. tr(A_i A_j A_k) = tr(B_i B_j B_k) for all i, j, k?
By analogy, if I had n vectors v_1, …, v_n ∈ R^m, it would be easy to construct new vectors u_1, …, u_n ∈ R^n that have the same inner products (by choosing an orthonormal basis for the span of the v_i and then writing each v_i in that basis). Parameter counting suggests there should be matrices B that match a given set of cyclic traces (we have n·n² = n³ parameters to pick and only about n³/3 constraints, since a cyclic trace is invariant under cyclic permutation of (i, j, k)), but I have no idea how you could pick them "naturally" and don't have any reason beyond parameter counting to think they exist.
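The vector analogy can be made concrete in a few lines of numpy: any factorization of the Gram matrix G = V Vᵀ yields vectors with the same pairwise inner products, and since G has rank at most n, the factors can live in Rⁿ. A minimal sketch (using an eigendecomposition; Cholesky would also work when G is strictly positive definite; `compress_vectors` is just an illustrative name):

```python
import numpy as np

def compress_vectors(V):
    """Given an (n x m) array V whose rows are v_1..v_n in R^m,
    return an (n x n) array U whose rows u_1..u_n satisfy
    u_i . u_j = v_i . v_j for all i, j.

    Works because the Gram matrix G = V V^T is positive semidefinite:
    writing G = W diag(w) W^T, the matrix U = W diag(sqrt(w))
    satisfies U U^T = G.
    """
    G = V @ V.T                      # (n x n) Gram matrix
    w, W = np.linalg.eigh(G)         # eigenvalues w, eigenvectors in columns of W
    w = np.clip(w, 0.0, None)        # guard against tiny negative round-off
    return W * np.sqrt(w)            # broadcasts sqrt(w) across columns
```

The rows of the returned array are the u_i; checking `U @ U.T` against `V @ V.T` confirms the inner products match. Whether an analogous "natural" construction exists for the 3-way trace problem is exactly the open question above.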
The Hindus came up with this interesting scheme for categorizing things according to their overall tendency. I accidentally independently confirmed the existence of these tendencies.
The Pull and the Slack
Mechanistic anomaly detection and Eliciting Latent Knowledge:
"Suppose we train a model to predict what the future will look like according to cameras and other sensors. We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good to us.
But some action sequences could tamper with the cameras so they show happy humans regardless of what’s really happening. More generally, some futures look great on camera but are actually catastrophically bad.
In these cases, the prediction model "knows" facts (like "the camera was tampered with") that are not visible on camera but would change our evaluation of the predicted future if we learned them. How can we train this model to report its latent knowledge of off-screen events?"
"... you could view ELK as a subproblem of the easy goal inference problem. If there's some value learning approach that routes around this problem I'm interested in it, but I haven't seen any candidates and have spent a long time talking with people about it."
Is there anyone here who is experimenting with using AI in fiction or poetry? There are all kinds of ways to do that. Had an exchange on here with Antelope 10 who is doing something along those lines. Anybody else? Or anybody know of web sites, blogs or whatever for people interested in this sort of experiment?
Hacking the Brain: Dimensions of Cognitive Enhancement
"Whereas the cognition enhancing effects of invasive methods such as deep brain stimulation53,54 are restricted to subjects with pathological conditions, several forms of allegedly noninvasive stimulation strategies are increasingly used on healthy subjects, among them electrical stimulation methods such transcranial direct current stimulation (tDCS55), transcranial alternating current stimulation (tACS56), transcranial random noise stimulation (tRNS57), transcranial pulsed current stimulation (tPCS58,59), transcutaneous vagus nerve stimulation (tVNS60), or median nerve stimulation (MNS61)"
I've heard a lot about how we've entered a "Digital Dark Age" since so much early internet content has been deleted. But, knowing what we know about the NSA, hasn't the agency probably been using web crawlers to catalog the whole internet since the early 90s? Isn't there a better-than-even chance that all of it is still saved on servers in some secret underground warehouse?
Are there still any serious genetic disorders that we can't identify through embryo screening?
I've read that corporate price gouging is part of the reason inflation is so bad in the U.S. now. But how is this possible in a free market? I thought competition between companies ensures that everyone's profits go to zero. Price gouging is only supposed to work over long periods of time if all firms collude to keep prices high. If just one firm defects by lowering its prices to attract more customers, then the arrangement falls apart.
Here's an article that claims corporate greed is fueling inflation:
'The pandemic, war, supply chain bottlenecks and pricing decisions made in corporate suites have created a “smokescreen”, said Lindsay Owens, executive director of the Groundwork Collaborative, which tracks companies’ profits. That obscures questionable price increases, she added, and allows businesses to be portrayed as “victims”'
Japan's moon lander crashed because there was a surprising but correct reading from the radar going over a steep crater wall, so the lander assumed the radar was broken and then didn't know where the surface was.
Shouldn't the path have been pre-tested so the radar reading wouldn't have seemed weird? Yes, but the landing site was changed rather late, so the route wasn't tested.
Games tend to leave out the unreliability of sensor systems. I've seen a similar complaint about military games, which tend to assume reliable information and reliable ability to transmit orders.
Also, for life generally, I wonder how often people ignore true but surprising information.
It has been said that it's an iron-clad rule of Hollywood screenwriting in recent years that under no circumstances is a man allowed to rescue a woman.
Is this actually true? Are there mainstream Hollywood examples of a man rescuing a woman in (say) the past five years?
I've seen it argued a few times that AI-X-risk might act as part of the Great Filter which prevents civilizations from colonizing the stars. But it strikes me that the opposite should be the case. Isn't it more likely that a superintelligence that destroys humanity is *more* likely to colonize galaxies than a planet without such a superintelligence?
Perhaps the absence of obvious aliens should lower our estimation of AI-X-risk.
Has anyone here ever managed to change something fundamental about their thought process or mental abilities? Here are two examples of what I mean:
https://www.reddit.com/r/self/comments/3yrw2i/i_never_thought_with_language_until_now_this_is/ In this reddit post (which was linked on a slatestarcodex post), the poster talks about how one day he "realized" that it was possible to think in words and after spending some time practicing this, it completely changed his life.
https://sashachapin.substack.com/p/i-cured-my-aphantasia-with-a-low This article is written by someone who claims to have "cured" his aphantasia and can now see imagery in his mind's eye.
As I get older, I've become more and more aware of various irritating quirks in the way my mind seems to work (which I guess is just a more delicate way of saying "I'm dumber than I want to be"). I suspect that most low-level functions of the brain are either hard-coded or are developed at a very young age and are thus very hard/impossible to change but I'd be interested in hearing if anyone has any relevant experiences.
I sometimes wear contact lenses, and I read this recently: https://www.prevention.com/health/a43919982/contact-lenses-contain-dangerous-amounts-of-forever-chemicals-pfas/. Should I stop wearing contact lenses for now, or is a couple of times a month a reasonable risk considering the currently available information?
Here's a question for any of y'all that have the (mis?)fortune to work with obscene quantities of money on a regular basis: What is the qualitative difference between things that cost a MILLION dollars, and things that cost a BILLION dollars?
Because I’m in “time to reinvent myself and redirect my career” mode, I’m forever getting emails/ads re training to be a UX/UI designer and I’ll admit to being intrigued. I know all the reasons why or why not such a career path would suit me, but I’m very unclear if these training offerings are legit/worth paying for or if they are just the online version of an ITT Technical Institute quasi-scam. If there’s a community that would know the answer, this one is it.
So, are things like this https://designlab.com/ux-academy/ legit and worth the money? Is the idea sound but there are better options? Or is it all just a load of bollocks?
I can't stand Twitter any more, and it's the place where I get info about new developments in AI -- new tweaks that improve performance, new applications for AI, and occasionally a new idea about Alignment, FoomDoom and related matters. Where else can I go to stay about as updated as a non-tech person (I'm a psychologist) can be? I can't read all the new papers -- I need summaries in ordinary language.
And by the way, I'm leaving Twitter because AI Twitter is going the way of Medical Twitter, which has been a cesspool as long as I've been following it, with pro- and anti-vax, mask etc. people hating each other's guts. Now I'm seeing the same dynamic starting in the AI discussions, and it seems to me that what nudged the exchanges into hate-fest land was Yann LeCun, who hasn't the faintest idea how to debate and support his ideas, but instead moves instantly into implying or outright saying that those worried about ASI are fools, crackpots, etc. Here's one of his tweets:
- Engineer: I invented this new thing. I call it a ballpen 🖊️
- TwitterSphere: OMG, people could write horrible things with it, like misinformation, propaganda, hate speech. Ban it now!
- Writing Doomers: imagine if everyone can get a ballpen. This could destroy society. There should be a law against using ballpen to write hate speech. regulate ballpens now!
- Pencil industry mogul: yeah, ballpens are very dangerous. Unlike pencil writing which is erasable, ballpen writing stays forever. Government should require a license for pen manufacturers.
So then I got mad and posted this: https://i.imgur.com/Q5DB7VP.png
I once found a newsletter of interesting off-the-beaten-path activities/events in NYC, and I believe I found it linked from one of these threads but can no longer find it. Anything sound familiar?
Suppose you want to hire people who are good fits for their jobs (we will optimistically assume you understand the jobs well enough), and failing that, you at least want to avoid hiring awful people.
How would you do that? Let's start with the idea that asking applicants about their greatest fault is a stupid question.
With respect to complex words, do you actively know the specific definition? For very unusual words or complex words, I don’t typically know the exact definition. The definition I generate is “this word is basically when you do something bad or vengeful” or “This word basically means to be hungry.” And so on and so forth. I reduce many words down to much simpler versions of the actual definition.
I’ve heard that the English language has more synonyms than other languages, and therefore has more superfluous words. Take the word superfluous for example. I basically view that as meaning “unnecessary duplicate.”
Do other people feel this way?
I get "Page not found" when clicking on the replies listed under the bell icon (top right). The email links seem to work. Anyone else has this issue?
I'm gradually going through old SSC posts, trying to figure out when I started reading every post (pretty sure it's 2014, but it's after 25th February!). Today I came across this gem that I hadn't read before:
Scott probably has enough money/connections to make this happen now, right? When are we going to see it??
Happy Memorial Day to those who celebrate.
Does anyone happen to know of a good summary of the meta around Kegan's orders of the mind? The theory passes my gut check, but only partially. I'm curious about its standing in academia and critiques/further work, but I haven't found much in my quick searches.
Recently, I've been attempting to get ChatGPT to translate story chapters (~800 words at a time) from Japanese into English, but it always stops translating halfway through and hallucinates a continuation to the story instead due to the prompt falling out of the context window.
The interesting part though is that the first time this happened, it just happened to be in the middle of a scene where the love interest is mortally wounded and GPT decided to continue it with a tearful death scene. However, in the actual story, the protagonist manifests hitherto unknown magic powers and saves him instead.
I thought it was interesting because Scott previously wrote that LLMs are curiously resistant to completing negative outcomes in stories. Give them a prompt and they'll continue a story in a way where everyone improbably lives, no matter the situation. So it's odd to see the *opposite* case happen here.
I've never done much charity work before and am currently participating in a charity bike ride (disclaimer: I do not think this is by any means the most efficient way to raise money, it's just a freebie since biking is a nice outdoor activity anyway).
Something that took me very much by surprise is how it works:
1. The riders need to each individually run a mini-fundraising campaign.
2. If a rider doesn't raise enough money they aren't allowed to participate in the ride.
3. The minimum amount they need to raise is *a lot*, $2000-$4000 in the case of the ride I'm doing.
I know multiple people who aren't doing the ride (and therefore not fundraising) at all because they don't think they'll be able to raise enough to meet the minimum fundraising bar to participate. This seems like a net negative for the charity in question. Can anyone more well-versed in this area explain the logic here?
(If you're curious the ride in question is the Princess Margaret Ride to Conquer Cancer, and my donation page is here: https://supportthepmcf.ca/ui/Ride23/p/JonSimonConqueringCancer)
I have some questions I want to ask here:
What separates the person who I call “me” at this moment from the person I will call “me” five minutes from now?
-The matter which composes my body will not be 100% the same.
-The physical structure of my brain will not be 100% the same.
-My memories will not be 100% the same.
So will I be the same person in five minutes as I am now? It seems reasonable to answer that question “Yes and no. You will be very similar but not exactly the same.”
What about ten years from now, assuming my body is still alive? “Yes and no. You will still be a similar person in many ways, but you will be less of a similar person than you will be only five minutes from now.”
Twenty years, thirty years, forty years from now, if I make it that long, I will increasingly be a different person, composed of increasingly different matter, with an increasingly different brain structure and memories.
If there were no such thing as biological death, as my age approaches infinity, I would cease to be the same person I am now, no?
No, I don’t think I would cease to be the same person entirely. I would change over the centuries, but I would remain human, which should be a limiting factor to change in some way. Not so limiting that I couldn’t become you, at some point, since you are also human, though, no? Not 100% you, but perhaps my life, and my psychological development, at some point, would take me along a route that would be much like one you have been on. Perhaps it would be fair and accurate to say at some point that I am at least 2% you. (Maybe I am even at least 2% you already. Do we not like the same music and laugh at the same jokes?)
If not, I ask: what makes me me? If not the matter that is currently me, then either my existence is immaterial or my matter is fungible.
If my matter is fungible, then why can’t I be you? Isn’t it at least theoretically possible that the exact atoms in your body could compose my body and for me to still be me? If I ate you, wouldn’t that be a start in that direction?
If I suffer amnesia one day and remember nothing of my past would I still be me? Let’s provisionally say yes. I don’t need my memories in order to be me.
How do I know that I don’t experience being you? I don’t remember being you, but we just said that my existence is not contingent upon memories.
So my individual existence is not contingent upon the specific atoms in my body, the structure of my brain or the memories in my mind.
In the future I will be neither 100% current me nor 0% current you. Isn’t it reasonable to say that in the future (and present) me will be a non-zero amount of everyone, given that you aren’t so special?
I want to take this line of reasoning a bit further, but first want to see if others think there are obvious logical flaws in the above.
'Gender is assigned at birth.'
I find this statement odd, if not downright unscientific. Our gender, for us sapiens, is determined at conception, when a sperm cell bearing either an X or a Y chromosome wins the sperm-cell rally to find and meet the X-chromosome-carrying ovum.
We find this odd term 'assigned' in a scandalous paper from a clinic treating patients with congenital sexual-organ defects. The study's data have since been determined to have been fabricated, and the author sexually abused the sole patient ... and his twin brother, both of whom died, one by opiate overdose, the other by suicide.
So how do we properly state, our gender is determined by the winner of a sperm rally?
Problems with bullet-matching forensics, and when I say problems, I mean it's bogus "science" based on guessing that gets people falsely imprisoned. It's quite possible to match a bullet to a *type* of gun, but not to a particular gun. Goddamn it, I *believed* the books I read in the 60s about how cool the FBI was.
There's plenty about how to check on whether a theory is true, and how reluctant people are to check on whether their profession is based on anything reliable. And the problem that judges are apt to rely on precedent rather than checking on science.
There's actually been a tiny bit of progress, but it's going to be a long fight to get science taken seriously in forensics if it ever happens.
Just finishing up A Thousand Brains. A few questions for the AI contingent:
- What are your top 3 news sources for musings on AI research/dev? I'd like to keep myself more in the loop (particularly on the tech/dev side, not so much on the alignment/business axes)
- The core thesis of the book is that the brain is composed of (mostly) fungible cortical columns that act as reference frames for things/concepts/places/etc. These hundreds of thousands of references are synthesized by the brain to create a continuous prediction engine, and that is mostly what we experience (sorry if I butchered that!).
That is well argued and compelling throughout the book, and I have no reason to doubt it. But he insists that to create a truly intelligent machine we must understand the core mechanisms within the cortical columns. Here is where he starts to lose me. Why can't we simulate reference frames given our best methods? Could a GPT-adjacent LLM provide the same building block for AGI that cortical columns provide for the brain? What if the LLM was instructed to simulate an individual reference frame, and an ensemble of these LLM ref frames were arranged in a way similar to the brain's architecture?
Regardless of the inclusion of LLMs which has its own complications, I'm not sure if I believe the statement that "understanding the brain in totality is a precursor to truly intelligent machines" like Hawkins seems to think. But I'm curious to hear any thoughts.
I know most people don't think LLMs are the road to AGI. Just coming up to speed on a lot of this stuff and thinking out loud.
Long Covid has been a topic of discussion here for a while, but I hadn't known anyone badly affected by it until recently. However a month ago my good friend got sick and hasn't really recovered. She now has:
- dizziness when standing or walking, making her unable to do so for more than 10 seconds
- muscle aches
- sound sensitivity
- brain fog
- dizziness when trying to read or look at screens for more than a couple minutes
She suspects it's myalgic encephalomyelitis or chronic fatigue syndrome (ME/CFS) but it's too early to be sure. Still, this has been completely debilitating for her, and she can barely do any of the many activities she used to enjoy in life. Even eating a meal or walking out of the house often requires assistance.
Since there are no known cures, my recommendation to her was to try miscellaneous things that *might* help her (and otherwise have low risk), per https://www.lesswrong.com/posts/fFY2HeC9i2Tx8FEnK/luck-based-medicine. With this approach, the most valuable interventions to try first are ones that have anecdotally helped others. So I'm posting here to see if any readers know of similar medical cases that *were* successfully resolved, and can share what helped for those people.
Any ideas would be appreciated!
@ScottAlexander - Could you signal boost this in a coming open thread? I think that would strongly increase the likelihood of this working out, without setting an exploitable precedent.
So I received an interesting offer from a Bay area start-up and I'd like to find out how interesting it really is.
I work in IT (machine learning) and so far I've only ever worked with European companies, mostly Czech ones. I have something like 5 year of experience plus a PhD in maths - probability theory specifically (not very relevant I'd say but some companies value it anyway).
Financially the offer is about 115k USD per year plus equity (not clear yet how much equity; I'd love some input from someone about what is usual in a setup like this) plus a sign-up bonus (which is more or less there to compensate equity I have now and which I'd lose by switching jobs before the end of the year). I'd work remotely 100% from home (i.e. the Czech republic). I'd work as a contractor which probably means simply sending invoices to the US instead of a Czech company and the invoices being paid in USD.
I've been in contact with the start up for a while (mostly discussing technical issues with them), I really like their products and design philosophy and at least the main people there seem very skilled. They are also past series A funding with something like USD 20M received from investors last year, so the equity is pretty valuable too.
I suspect that USD 115k per year would not be stellar in the US, definitely not in the Bay Area, but then again I don't live there and I don't have to pay rents/mortgages there. It is definitely a good deal compared to the money I can earn here (though not multiple times more). If taxes work the way I think they do I should end up with something just under 100k net (after taxes, health tax/insurance and social welfare tax/insurance). For comparison, a new 1000 square feet apartment close to the centre costs about 500k where I live.
I also wonder about vacation and work "ethic" (read: how many hours one is expected to put in). In Europe it is common to have 5 weeks of vacation plus public holidays. I work as a subcontractor even now which in the Czech system means much lower taxes, but also no welfare benefits and a weaker social safety net...kind of "American mode" (in IT you typically can choose either this or being an employee which means less money and more social security). I actually end up working more than is common for European employees but usually this is in the form of overtime and I still take those 5 weeks of time off, I simply work a lot of those hours during the rest of the year (so it is more like taking 2-3 weeks of vacation plus public holidays). I will still talk about this with the people from the company but I'd like to know what is common in the US.
I write a simple newsletter where I share three interesting things once a week. Last week I shared a video explanation of the double marginalisation problem by Alex Tabarrok, a data-led twitter thread on the differences between US and UK politics by the chief data reporter at the FT, and a thought-provoking essay on what Napoli’s Serie A win means for the city of Naples.
Usually when I tell people about Georgism, they say it's too big of a change, will never work. But yesterday I told a friend and got a different reaction...
- linked to https://www.lesswrong.com/posts/XoYDmCzeKiB87rs7a/georgism-in-theory
- "there's a way to tax land (versus property) that's efficient for society and you can drop tax rates elsewhere significantly"
Q from friend: "why is that better than property taxes? "
A : "Because it incentivizes improving your property and maintaining it, putting it to good use. You can increase the taxes higher without discouraging development, and then lower the income taxes further"
Response: "Eh. Ok. Marginal benefit. If I'm going to overhaul something in the tax code, I'm not going to use my one bullet on that. I still don't see it generates any more tax revenue than property tax (it just has slightly different incentives)"
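The incentive difference in the exchange above can be made concrete with a toy calculation (my own invented numbers and tax rates, not anything from the conversation): under a property tax, building on a parcel raises the owner's bill, while under a land value tax it does not.

```python
# Toy comparison (invented numbers): annual tax on a parcel before and
# after the owner adds a $400k building to $100k of land.
def property_tax(land_value, building_value, rate=0.02):
    # Property tax falls on land plus improvements, so building raises the bill.
    return rate * (land_value + building_value)

def land_value_tax(land_value, building_value, rate=0.10):
    # LVT ignores improvements, so building leaves the bill unchanged.
    return rate * land_value

land, no_building, with_building = 100_000, 0, 400_000

print(property_tax(land, no_building), property_tax(land, with_building))
# 2000.0 10000.0  -> developing the lot quintuples the tax bill
print(land_value_tax(land, no_building), land_value_tax(land, with_building))
# 10000.0 10000.0 -> same bill either way, so development is not penalized
```

In this toy setup both regimes can be calibrated to raise the same revenue from a developed parcel (the friend's point), but only the property tax makes the marginal decision to build more expensive.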
There's some reason to think that Wegovy and Baclofen prevent addictions, including alcoholism and compulsive shopping, for some people. Suppose it's true, and would work on a large scale with tolerable side effects.
How much of the economy is compulsive shopping? It's hard to measure, since it's about a mental state, but I'd say it's buying things the person doesn't especially want (they want the experience of shopping), and it can range from knickknacks to, at the least, redecorating because what else is there to do with one's time?
I could find anything from 10% to 30% of the economy plausible. How much of alcohol sales would go away if people didn't feel a craving?
Does anyone know if something is going on for rationalist secular summer solstice in the NYC area? I ask because I attended last December's winter solstice event there, which was a pretty big advertised thing, and they referred to annual summer solstice events during it, but I can't find any announcement or information whatsoever about a planned summer solstice event.
I'll add a plug for my own Substack featuring most recently a take on the debt ceiling deal.
I wrote a history of how independent courts gained the power of judicial review in common law systems. It's a history focused on the institutional questions -- how do the courts internally discipline themselves, and how do they use this discipline to influence other branches, despite lacking the power of the sword or of the purse -- and so it's rather different from the standard case-focused histories which lawyers tend to write.
It's a sequel of sorts to this piece on why courts might serve as a nice model for governance in the future, given that the fertility crisis and the scaling laws behind AI progress both seem to push for certain kinds of decentralization: https://cebk.substack.com/p/producing-the-body-part-one-of-three
If anyone has some advice for this situation, I'd appreciate it. I'm 23 and currently work in public policy, and I'm trying to figure out the next steps in my education and career; I'm not sure this is where I'd feel most satisfied or have a significant impact.
I really want to study philosophy in academia. I feel pretty comfortable biting most of the bullets: the stupid committee meetings, the bad pay, the pressure to publish. I spend most of my free time reading and thinking about philosophy, and it gives me a ton of joy. I started a few applications for MA programs last year but didn't finish any of them; recently, though, the local state university reached out and indicated they still had funding. I finished up the application and am awaiting a decision. The only bullets here I don't feel fully comfortable biting are disappointing my parents and being isolated from my family (they kind of think philosophy, particularly moral philosophy, is useless emoting).
Option #2 is law school with the goal of animal advocacy. Factory farming is one of the most repugnant things I could possibly imagine, but it seems very tractable and solvable. It's pretty clear that if I dedicated my career to it, I could play a part in making some real progress in ending it. I'm a lot more ambivalent about actually being a lawyer, though. My LSAT is currently 157, but I took it at a low point in my life, so I am sure that with practice I can get it up significantly. That means I'd probably start in the Fall of 2024. I'm also somewhat torn, for monetary reasons, between beginning an academic career at age 23 and a legal career at age 27 (I would like to be able to comfortably live without familial assistance sooner rather than later while sustaining my giving and all).
Any thoughts are welcome.
Has anyone thought about hardware specialization for AI as a route to preventing self-improvement? For example, incorporating organic components into AI hardware, watermarking the hardware, or using unconventional materials and structures with unique physical properties could make the system difficult to manipulate or replicate without damaging it. Basically, make it extremely hard for the AI to run itself, or some equivalent computational agent, on other hardware. You could also compose the AI system of several hundred quintillion (or some larger number of) cognitive units, or smaller components of cognitive units (i.e. neurons, or the DNA of neurons, or something similar), each of which would require editing or copying for self-improvement. Any attempt to self-improve would then mean making a very large number of independent changes, each of which introduces some risk of error, and errors in some of the cognitive units could propagate across the whole system and lead to catastrophic failure.
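The last point can be put as a back-of-envelope calculation (my own framing, not the commenter's): if self-modification requires N independent edits and each edit fails with probability p, the chance that every edit lands cleanly is (1 - p)^N, which collapses to zero long before N reaches quintillions.

```python
import math

def p_all_succeed(n_edits, p_error):
    # Probability that all n_edits independent changes succeed, i.e.
    # (1 - p_error) ** n_edits, computed in log space to avoid underflow
    # in the intermediate product for astronomically large n_edits.
    return math.exp(n_edits * math.log1p(-p_error))

# Even a one-in-a-billion per-edit error rate is hopeless at quintillion scale:
print(p_all_succeed(10**18, 1e-9))  # effectively 0
# ...while a mere million edits at the same error rate is nearly risk-free:
print(p_all_succeed(10**6, 1e-9))   # roughly 0.999
```

Of course this assumes the edits are independent and that a single failure is catastrophic; a system that can verify and retry individual edits escapes this math entirely, which is probably the weak point of the scheme.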
Some concepts seem intuitively obvious once grasped, so much so in some cases that one could be convinced one would have thought of it oneself if someone hadn't already!
But others are the opposite, and for me one such is Gresham's Law. This says that "bad money drives out good". But why should that be so? If anything I would have thought the opposite was true. Taking the principle literally, presumably as intended, who in their right mind would accept a dodgy looking clipped coin for payment instead of a proper official coin, or a dollar bill that felt all wrong in the hand and George Washington's visage looked distinctly cross-eyed?
I can see a similar principle might hold to a large extent with goods, in that people, usually of necessity, will tend to make do with shoddy goods instead of well-made but more expensive equivalents, or cheap food instead of fancy restaurants. But I would be interested in cogent justifications of Gresham's Law relating to money specifically. Maybe I have been misinterpreting it.
Since there’s already a theist post on here, direct your ire there. Has anyone read The Purest Gospel by John Mortimer? I just finished it and want to talk about it!
Aligning Large Language Models through Synthetic Feedback
"We propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs. First, we perform reward modeling (RM) with synthetic feedback by contrasting responses from vanilla LLMs with various sizes and prompts. Then, we use the RM for simulating high-quality demonstrations to train a supervised policy and for further optimizing the model with reinforcement learning. Our resulting model, Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are trained on the outputs of InstructGPT or human-annotated instructions. Our 7B-sized model outperforms the 12-13B models in the A/B tests using GPT-4 as the judge with about 75% winning rate on average."
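As I read the abstract, the key trick is how the synthetic preference pairs are built: a response from a larger model with a better prompt is simply assumed to beat one from a smaller model with a worse prompt. A minimal sketch of that assumption (my paraphrase, not the authors' code; the class and field names are made up):

```python
from dataclasses import dataclass

@dataclass
class Response:
    text: str
    model_size_b: float   # parameter count in billions (hypothetical field)
    n_demos: int          # few-shot demonstrations in the prompt (hypothetical)

def synthetic_preference(a: Response, b: Response):
    """Return (chosen, rejected) with no human label, using the heuristic
    that bigger models and richer prompts yield better responses."""
    quality = lambda r: (r.model_size_b, r.n_demos)
    return (a, b) if quality(a) >= quality(b) else (b, a)

big = Response("answer from a 30B model", 30.0, 3)
small = Response("answer from a 1.3B model", 1.3, 0)
chosen, rejected = synthetic_preference(big, small)
# Each (chosen, rejected) pair becomes a training example for the reward
# model, which then drives the supervised and RL stages described above.
```

The obvious caveat, which the paper's strong A/B results suggest is manageable in practice, is that the heuristic is only true on average: any individual small-model response can beat a large-model one.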
Related to AI alignment efforts: I know it's been discussed on several platforms, but enhancing adult human general intelligence seems a very promising avenue for accelerating alignment research. It also seems beyond obvious that using artificial intelligence to directly enhance biological human intelligence would allow humans to stay competitive with future AI. I'm having a hard time finding anyone who is even trying to do this. It would be useful even to augment specialized cognitive abilities like working memory or spatial ability.
1. Stankov, L., & Lee, J. (2020). We can boost IQ: Revisiting Kvashchev's experiment. Journal of Intelligence, 8(4), 41.
2. Haier, R. J. (2014). Increased intelligence is a myth (so far). Frontiers in Systems Neuroscience. https://www.frontiersin.org/articles/10.3389/fnsys.2014.00034/full
3. Grover, S., et al. (2022). Long-lasting, dissociable improvements in working memory and long-term memory in older adults with repetitive neuromodulation. Nature Neuroscience. https://www.nature.com/articles/s41593-022-01132-3
4. Sala, G., & Gobet, F. (2019). Cognitive training does not enhance general cognition. Trends in Cognitive Sciences, 23(1), 9-20.
5. Zhao, C., Li, D., Kong, Y., Liu, H., Hu, Y., Niu, H., ... & Song, Y. (2022). Transcranial photobiomodulation enhances visual working memory capacity in humans. Science Advances, 8(48), eabq3211.
6. Razza, L. B., Luethi, M. S., Zanão, T., De Smet, S., Buchpiguel, C., Busatto, G., ... & Brunoni, A. R. (2023). Transcranial direct current stimulation versus intermittent theta-burst stimulation for the improvement of working memory performance. International Journal of Clinical and Health Psychology, 23(1), 100334.
"Increasing intelligence, however, is a worthy goal that might be achieved by interventions based on sophisticated neuroscience advances in DNA analysis, neuroimaging, psychopharmacology, and even direct brain stimulation (Haier, 2009, 2013; Lozano and Lipsman, 2013; Santarnecchi et al., 2013; Legon et al., 2014)."
Is there any update on 5-HTTLPR?
I’ve written about jobs and identity, and about identity per se, in my last piece. https://open.substack.com/pub/silviocastelletti/p/out-of-the-box?r=1n8yk&utm_medium=ios&utm_campaign=post
I've written a couple of blog posts on business strategy & management - would love feedback!
- Subscriptions strategy: https://link.medium.com/rHv26KGUbAb
- Thoughts on interviewing: https://link.medium.com/G1EAxLIUbAb
A new podcast about the fine tuning argument in physics in a language everyone can understand. Check out the first 3 podcast episodes of Physics to God: A guided journey through modern physics to discover God.
Episode 1 discusses the idea of fundamental physics, the constants of nature, and physicists’ pursuit of a theory of everything.
Episode 2 explains what Richard Feynman called one of the greatest mysteries in all of physics: the mystery of the constants.
Episode 3 presents fine tuning, the clue that points the way towards solving the mystery.
The podcast is available on Spotify, Apple, Google, and Stitcher. You can also get it at www.physicstogod.com/podcast. We’ll be releasing it over time on YouTube at youtube.com/@PhysicsToGod/podcasts
Join the discussion on our website (https://www.physicstogod.com/forum) or join our Facebook group "Physics to God": https://www.facebook.com/groups/570686728276817/
Anyone in Israel looking for friends? I just moved here and would love to meet people/attend events etc.
I'm a math postdoc at HUJI but have been a long time passive consumer of SSC/ACX/rationalism.
Please email me at email@example.com
A second similar question today.
Most of my friends over the last 10 years have come from professional settings. If I count college or high school as a job, then virtually all my friends came from a professional context.
As someone who now has a fair amount of control over who I work with, I want to just hire/choose people I like. If I diversify my colleagues in all dimensions, I'm confident there'll be certain subgroups I'll dislike. There's some value to diverse perspectives, but, less discussed these days, there's also value to monocultures where business-irrelevant topics don't occupy much of the internal zeitgeist.
On one hand I believe strongly that restaurants and other public venues should not be allowed to discriminate who they provide service to. On the other hand, I think businesses (at least until a certain size), should be allowed to hire whoever they want to work with.
Where is my logic or morality breaking down?
I believe whistleblowing is a very important activity to protect. Yet anecdotally, most people I am familiar with who claimed to be whistleblowers seem to have done it for personal gain, often without trying internal channels first. I feel the same way about employee activism, and about people who sue their employers.
All good characteristics of our society, and yet on average I would not want to hire or collaborate with most people that belong to those and related groups.
Am I wrongly biased or is there meat to this? I feel fairly confident in this assessment. How should I navigate the world then?
It's been discovered that L-DOPA has some activity as a neurotransmitter. Does that make it a catecholamine now?
With the news that both Vice and BuzzFeed News are closing due to unprofitability, how are we all feeling about the future of the media? Are all advertising-funded services doomed? Should they be nationalised? Should big tech platforms be broken up? Is the future just going to be a handful of writers on Substack?
Would love to share my new post on how theme parks caused Paris Syndrome. It's partly a culture-bound issue, but I think there are more environmental aspects at play.