Okay, I’ll be the one to touch the third rail. My advice to Dems on the 34 felony convictions: Do not spike the ball on this. Really. I would have CapLocked ‘do not’ if not for my own habit of dismissing CapLocked comments.
I’ve seen this dynamic play out on a smaller scale locally. I’d describe it but, as usual, I’m on my phone and I don’t want to tire my thumb. The result was not what the over eager wanted.
If the Dems think they're dealing with a cult of personality comprising 30% of American voters now, they can fully expect that number to grow if they start to gloat about a conviction based on a novel interpretation of law against a former US president.
I'd expect that no matter what the median Democrat says or does, some combination of news sites and algorithms optimizing engagement through conflict, plus right-leaning news and political operations seeking to optimize voter turnout with the same, will ensure that however large or small the contingent of "gloating" Dems is, right-leaning voters will be sure to see them overrepresented as "how the other side is."
As an adult in a household that is currently infected with skibidi fever, I will just say that I’m glad the original Little Big song isn’t getting played.
Many young people seem to think we live in hard times. Some of the book reviews reflect that. It's a popular meme: these are hard times.
Obviously, these people are crazy and have no sense of even recent history.
I think the reasons for young people thinking they live in hard times have a lot to say about the future of AI.
In spite of a massive increase in the standard of living for Americans objectively over the past 20 years, young Americans reject that narrative. I think the disconnect is that technology, despite changing life tremendously, hasn't improved it subjectively enough that people notice that they are better off.
This leads me to believe that the same will likely be true when AI changes things massively on an objective level. The standard of living will improve but will hardly be noticed, because subjectively we are asymptotically approaching optimal conditions for humanity.
Although young people generations hence will continue to complain about the economy sucking, their real complaints are that technological advances aren't helping them achieve what makes most people happy: good relationships, family, interesting work and optimism.
It is basic Stoic advice to consider that it *could* be worse; and if you know history, you know that most people actually *had* it worse.
Like, when people complain about all the inconveniences related to covid, I think about Black Death and conclude that we have it too easy. Most of your family survives the pandemic, and you complain about having to wear a mask for a few months? Seriously? Read "A Journal of the Plague Year" to get some perspective!
Old people remember the old times in person, but young people can get similar perspective from reading books about the old times. It is a very natural mistake to assume that the past was exactly like the present. My kids find it difficult to imagine a childhood without internet.
It is natural for humans to imagine a golden age in the past. Christianity believes that Adam and Eve lived in a paradise; Marxism believes that our ancestors lived in a perfect egalitarian society; Taoism believes that ancient people were all virtuous and lived in harmony with Tao; feminism believes that noble savages lived in enlightened matriarchal societies. The only difference is that young people these days seem to believe that the Golden Age happened in a very recent past, so they can accuse their own parents and grandparents of being right there and having ruined it by eating some forbidden fruit. But maybe even this fits into the general pattern of accelerating progress.
The last several generations have also perceived hard times as young adults. The core reason for this is that the transition to independent adult living is genuinely hard for most people. And it seems worse than it is because you're comparing your own lifestyle as a newly minted adult fresh out of school to that of your parents, who are 2-3 decades further along in their careers than you and have had a similar amount of time to build up a stock of household capital. And the other major baseline for comparison is "slice of life" sitcoms and other media depictions, which tend to show middle-class or struggling-young-adult characters with an unrealistically high material standard of living, particularly in terms of living space and dining-out-and-entertainment budget (cf. "'Friends' Rent Control" on TvTropes).
On top of this, the transition to independent adulthood has probably gotten significantly harder over the past couple decades. Credentialism makes it harder to get good entry-level jobs than it used to be, people are graduating with more student debt than was the norm in the 90s and before, and the cost of basic housing has been growing faster than the overall inflation rate.
I suppose I am talking about "popular vibes". I was a young adult in the '90s and don't remember the vibe being "these are hard times". I would say that shows like "Friends", perhaps because of their unrealistic portrayal of life, captured the positive vibe of the times. Indie movies like Office Space captured generational disenchantment with the workplace but also demonstrated that financial insecurity was not a big concern of twenty-somethings. By comparison, I can imagine a 22-year-old in 2010 watching Office Space and thinking "These punks are gainfully employed at desk jobs but are too spoiled by the '90s economy to appreciate it!"
I do agree that young adulthood is a hard time in life. But there's a difference between recognizing that versus thinking: "My grandfather's generation had it easier at this age. The 2020s are a bad time to be young."
I distinctly remember "these are hard times" vibes as a young adult in the early 2000s, and I'd assumed the vibes went back earlier than that. As a teenager in the 90s, I did notice a fair amount of pessimistic vibes in media, particularly stuff focused on young adults. Bringing up "Friends" again, remember how the theme song goes:
"So no one told you life was gonna be this way
Your job's a joke, you're broke
Your love life's DOA
It's like you're always stuck in second gear
When it hasn't been your day, your week, your month
Or even your year, but..."
It's a perky, upbeat song and the chorus is optimistic in tone, but it's optimistic about social support, not material conditions.
Later seasons of the show were a lot more materially optimistic than the earlier, IIRC. In the early seasons, Ross and Chandler had decent jobs, but the financial struggles of the other four main characters were major recurring themes even though the depicted standard of living implied that Monica and Rachel were the best-paid line cook and barista in New York.
I feel like the transition happened somewhere around 1999-2000. I can't speculate as to underlying causes, but the economy wasn't doing as well as it used to. And there were some signs of "things going wrong", like Columbine, and the WTO riots, and Bush v. Gore, and then 9/11.
"Many young people seem to think we live in hard times. Some of the book reviews reflect that. It's a popular meme: these are hard times.
Obviously, these people are crazy and have no sense of even recent history.
...
Although young people generations hence will continue to complain about the economy sucking, their real complaints are that technological advances aren't helping them achieve what makes most people happy: good relationships, family, interesting work and optimism.
"
Less of this please.
This is one of the least productive ways you could have phrased this. You certainly understand why young people might complain; modern technology doesn't help people's attempts to cultivate the things that make them happy: relationships, family, rewarding work, and optimism. You're also probably aware that there hasn't been a massive increase in the standard of living; it's been ~18% real growth over 20 years (https://fred.stlouisfed.org/series/MEPAINUSA672N), and it's certainly probable that, given variance, some people are actually significantly worse off than they were 20 years ago, on top of all the social consequences you list.
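For concreteness, here's a minimal sketch (my own arithmetic, not anything from the FRED series itself) of what ~18% cumulative real growth works out to per year:

```python
# Annualized rate implied by ~18% cumulative real growth over 20 years.
cumulative = 0.18
years = 20
annual = (1 + cumulative) ** (1 / years) - 1
print(f"{annual:.2%} per year")  # ~0.83% per year
```

Under 1% a year is exactly the kind of improvement that's easy not to notice subjectively.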
We're all adults, we can discuss things calmly, you can just say that lots of people are frustrated with declining social relationships in a situation of moderate economic growth if that's what you actually believe.
Without some kind of clear criteria, it just seems like an evergreen excuse to belittle and ignore other people’s complaints as long as we can define some time period in the past we can plausibly allege to be harder. Young people in the 1970s are whiners for complaining about inflation and the Vietnam war - those aren't "hard times," because previous generations lived through the depression and WW2, young people living through the great depression and WW2 are whiners - those aren't "hard times," because the civil war was far bloodier and those people didn’t even have antibiotics and “medicine” meant hacking a leg off with an unwashed saw, and so on.
To a first order, my criteria of "hard times" is a time of war vs. a time of peace, and the economy. Right now we have peace in the USA (as always, there are places where that isn't the case), and the unemployment rate has been very low for quite some time.
So I believe the major years of the Vietnam War, WW2, The Great Depression, and the Great Recession were "hard times". The '80s, '90s, '00s, and this decade not.
But maybe I am missing something. What am I missing?
> Many young people seem to think we live in hard times.
I think ragebait is to blame.
People really like reading about how shit everything is because it's engaging and confirms that they're not to blame for whatever hardship they're experiencing.
Perusing reddit.com/r/all you'll often find memes about what things cost and how cheap it was to buy a house in the '80s (with completely made-up insane numbers).
I suspect this is in part driven by troll farms; it would make sense for Russia/China/whoever to try and convince Western youths that everything is hopeless.
(But I think it would probably happen organically either way)
I think it's possible that the milieu we live in is making people better off in specific ways and worse off in other possibly more salient ways.
Better off is the obvious improvements in tech, medical care, etc. Lots of things are now massively more convenient.
Worse off tends to be things that are really important pillars of wellbeing: security in your place in the community (the number of people who are self-employed or employed in very small, close-knit businesses is way down compared to before - most of us now work as individual employees, surrounded by coworkers who rapidly move on or are made redundant, which is not at all conducive to building a sense of security in one's place in the social hierarchy when the faces change constantly); dependence on scarce positional goods (I don't think it's controversial to say both housing and jobs are now more scarce); and a lot of us sleep a lot worse (some of this is screen time, some of it is higher population density).
It's completely possible that the psychological impact of being secure in community, vocation, and shelter is higher than that of "nicer stuff". People aren't necessarily opposed to work that is hard or unpleasant if the work is also respectable and able to afford a living (if they were, no one would sign up to become a doctor, which is well known to be gruelling). But the availability of these jobs seems to be shrinking, and a lot of the respectable jobs have had the hard aspects get harder without the respectable aspect changing (this seems to have occurred in teaching especially).
Housing is probably scarcer. I don't see how jobs can be considered scarcer now with such a low unemployment rate for so long now. Jobs were truly scarcer during the Great Recession, 2009 - 2014. I would consider those years to be "hard times" for twenty-somethings.
Interesting point about working in "small, close-knit businesses". I suppose I don't understand why that would be preferable. Small businesses are more like families in ways both good and bad. Many small businesses are run by abusive owners, although many are run by wonderful owners. The modern HR-run corporation kind of irons out those extremes. You get neither too wonderful nor too abusive.
But perhaps you have a point about lack of community in a geographical sense. That is clearly something we have less of.
Jobs in general might not be scarce, but desirable jobs specifically are. It's well known that contract and temp positions make up a larger proportion of the workforce now.
Bouncing between companies every 2 - 3 years is not conducive to forming very strong bonds within an industry, and that's especially true if a big chunk of the workforce is doing that simultaneously.
Permanent roles aren't a guarantee that it's not going to happen, either, because those tend to be offered at very large organisations (> 300 employees), and frequent internal moves due to company restructuring would be similarly destabilising.
A similar thing happens in housing - renters bounce from place to place due to having short (normally 1 year) leases, and it's not just that all of the neighbours are also on short term leases, you don't really get to become a "regular" at local businesses if you have to move again soon after (and some businesses, eg a supermarket, have high enough turnover that the checkout staff have changed like 8 times while you were there).
So that's my thesis - in the last 30 years, the two places most of us spend most of our time (work and home) tend to change too often for us to form lasting relationships with the people and businesses nearby. Even if you succeed personally in locking down these two locations, everyone else around you struggles to do that, so those people and businesses change constantly anyway, mostly to our social detriment, because it causes most of us to focus most of our social energy on a tiny group of people (your spouse and immediate family). I do think a lot of us no longer have "medium intensity" social bonds - people we know quite well and socialise with often, but who don't literally live in our house. Most of us only have the low intensity (colleagues and acquaintances of 0-3 years) and high intensity (spouse and kids if we're lucky) bonds. Young people don't really have the high intensity bonds yet, either.
Maybe, a lot of us feel like our personal "tribes" are too small to feel safe, because it's just so difficult to build a proper tribe under the current economic conditions.
First, I want to agree with WoolyAl that my post could have been phrased better and more generously.
As to your post: I agree with your words, but how *are* we to measure reality as to how hard the times are if we don't consider the most common metrics such as the unemployment rate or that we live in a time of peace not war?
What measures do you have in mind that show the current times are relatively bad?
I'm baffled by the idea that low fertility rates are compelling evidence of relatively bad times. It seems a pretty universal law of the last century that fertility rates decline as infant mortality declines and material well-being increases. Very plausibly, lower infant mortality and increased standard of living *causes* lower fertility. No one aspires to move to the places with really high fertility!
There's a gap between actually-achieved and desired fertility though - Zvi quotes them all the time, but there's surveys showing most developed world women want ~2-3 kids, and have ~1 kid. The gap would be the indication that current times are relatively bad - either financial, social, or other concerns are leading people to have fewer children than they say they want.
The point is that if you have some metric by which sub-Saharan Africa appears to be a uniformly better place than Europe, then you should not be using this as your primary metric for the question "are things Good or Bad?" If things were Good in sub-Saharan Africa and Bad in Europe, then migration flows would be going in the other direction!
"they have a smaller proportion of total household wealth than older generations did at the same age:"
They don't control for demographic changes.
In 1995 the share of young people was much greater than it is now, so it's hard to tell whether there's a real effect on a person's access to the economy, or whether there are just comparatively more baby boomers.
Thanks for the recommendation. I've posted about Lucy Letby before, and predicted more murders and attempted murders to be revealed at the upcoming enquiry. The New Yorker story raised some interesting points - my prediction is mainly based on similarity to Shipman, where the Police narrowed it down to 7 prosecutions, then after conviction opened an enquiry into the hundreds of other cases where Shipman probably murdered the patient. The NY piece contends that the Letby case is being approached by the Police in the way it is precisely because of a fixation on Shipman, in which case I'm being taken in by the Shipman vibe the Police are engineering. Private Eye have a good record of exposing weak convictions, and they have a piece on Letby ready to publish when the restrictions are lifted. So we'll see. But many of the arguments in the NY piece are lame....
...for example the doctors who snitched on Letby are depicted as being overly confident in their own opinion - which may be a failing in a journalist, but it's not clear to me that it's a fault in a doctor. Also, "but she had friends!!!" is a terrible argument - Shipman (sorry!) was a well liked and well trusted family doctor with a family of his own. Anyway, we'll see what the public enquiry brings out - I'm willing to be persuaded. But the very existence of the enquiry is probably bad news for Letby
The logic of The War on Terror after 9/11 was "We fight them overseas so they can't fight us here (In the USA)"
I generally think that the USA's War on Terror was overkill and idiotic. We spent trillions of dollars on it that we could have spent at home on infrastructure. (There's an argument that outsourcing manufacturing to China wasn't so bad because China turned around and invested trillions in US bond markets, which kept interest rates low for Americans. The problem was that instead of investing that Chinese money in the US, we blew it up in Iraq and Afghanistan.)
The War on Terror weakened US hegemony because it showed the US to be capricious, strategically weak (particularly in the case of Afghanistan), and politically divided. OTOH, we haven't had a foreign terrorist attack of note since 2001. So maybe that war against an emotion was sort of effective?
Does anyone today think the War on Terror was worth it? Even partially? Are there parts of it that are defensible? (I think dropping some bombs on the Taliban and assassinating Osama bin Laden were good things but not much beyond that.)
> Does anyone today think the War on Terror was worth it? Even partially? Are there parts of it that are defensible?
Aside from everyone else's point about the money: as a frequent flyer, I'd point out that the cost in US time of the TSA's security theater has now killed ten times more people than 9/11 itself, and that anytime the TSA is audited with red teams trying to get weapons through, it has a *95% failure rate*.
I went back and forth with another ACX poster on various assumptions, and we arrived at a floor of ~35k US lives lost in US citizen-hours due to the TSA, which is 10x the actual toll of 9-11, and whose ongoing cost (with a 95% failure rate, remember) wastes something like 800M USA person-hours annually.
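To make the arithmetic explicit, here's a back-of-the-envelope sketch. The 800M person-hours/year figure is quoted above; the ~20-year window and the waking-hours-per-life conversion are my own assumptions about how a figure like ~35k could be reached, not necessarily the ones used in that thread:

```python
# Back-of-the-envelope: TSA time cost expressed as statistical lives.
hours_per_year = 800e6                 # person-hours lost annually (quoted above)
years = 20                             # rough span of TSA operations (assumed)
waking_hours_per_life = 16 * 365 * 78  # 16 waking h/day over a 78-year life (assumed)

lives = hours_per_year * years / waking_hours_per_life
print(f"~{lives:,.0f} statistical lives")  # ~35,000
```

Any plausible choice of conversion factor lands in the tens of thousands, which is the point.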
>We spend trillions of dollars on it we could have spent at home on infrastructure
Surely most of that money was indeed spent at home.
>Was the war in Iraq worth it?
For whom? For Iraqis, who will never be ruled by Qusay Hussein? Very possibly. Just as Afghanistan was very possibly worth it for the hundreds of thousands of girls who were educated during the 20 years after the Taliban were forced out of power. It is impossible to say, because weighing the costs and benefits requires a normative judgment.
Most of the money was spent at home to produce what? Ammunition and other supplies to support an army overseas? IOW, disposable not durable goods. Not advanced infrastructure that could exist as part of the wealth of the nation such as cross-country power lines delivering the abundant solar and wind power from the desert side of the nation to the more populated, less breezy and darker parts of the country.
Defense contractors benefited from that money, but couldn't it have been better spent, even on defense technology, without that war? I don't know what we spent on bombs and missiles that were launched and detonated, but they were all a deadweight loss, not an investment in future military technology for a war that might be worth fighting.
And as I say above, I think giving a lot of money directly to those who lost jobs directly due to the China trade would have been much more worthwhile than spending it on those wars.
Yes, of course it could have been better spent. I was merely objecting to the implication that it was simply thrown away. (And you seem to imply that again, when you refer to supporting "an army overseas." It matters little where a soldier is located, if the spending is domestic.)
Note also that the only way to get to an 8 trillion cost is to include future spending on things like veterans' benefits.
Finally, "we could have spent it on infrastructure" is a bit of a red herring, given how little of the federal budget is spent Theron. The vast majority is spent on providing services, which in your formulation is also wasted spending.
The War on Terror cost us about $8 trillion. I know that seems like an absurd amount of money, and it would buy you Microsoft, Apple, and Nvidia - the three largest American companies - but you'd still be missing Google, Amazon, Tesla, etc.
If you're going to spend that money on infrastructure, prepare to be disappointed. Take the Bay Bridge replacement in California: it cost $6.5 billion to replace a bridge originally estimated at $2 billion, and took about 11 years to get done. At that rate, you'd be getting somewhere between roughly 1,200 and 4,000 bridges (quick division below). Or look at California's High-Speed Rail. In 2008, it was estimated that the project would take $33 billion to get rail from Anaheim to San Francisco. As of 2023, we've sunk in $20 billion and gotten zero miles of usable track. It's currently estimated to cost $100 billion to get from Anaheim to San Francisco. So I can totally imagine spending that $8 trillion on projects that never go anywhere (a la NEOM in Saudi Arabia).
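A minimal sketch of that division, using only the figures quoted above:

```python
# How many Bay Bridges does $8 trillion buy?
war_on_terror = 8e12       # ~$8 trillion (quoted above)
bridge_estimate = 2e9      # original Bay Bridge estimate (quoted above)
bridge_actual = 6.5e9      # actual Bay Bridge cost (quoted above)

print(f"{war_on_terror / bridge_actual:,.0f} bridges at actual cost")       # ~1,231
print(f"{war_on_terror / bridge_estimate:,.0f} bridges at estimated cost")  # 4,000
```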
Or put another way, if you took that $8 trillion off the U.S. national debt and we lived in the counterfactual world where that money had never been spent, the current debt would be at its 2017 level. I was alive in 2017 and I don't remember feeling like the U.S. was just drowning in wealth - quite the contrary, it was received wisdom that the U.S. was drowning in debt.
We certainly didn't win the War in Iraq or Afghanistan. But the U.S. largely believed - foolishly perhaps - in the idea that ordinary people could, if given a chance, successfully govern themselves without dictators or autocrats. Unfortunately in the past few years, we're seeing the naivety of that concept -- authoritarianism is flourishing everywhere. But I don't fault the U.S. for believing and some small part of me still believes that we'll one day see the good ending to the Arab Spring.
>We certainly didn't win the War in Iraq or Afghanistan.
I never understand it when people say this, esp about Iraq. The regime that governed Iraq was completely destroyed. The current constitution of Iraq is the one that was written under US supervision. If that isn't winning, I am not sure what is.
Leaving aside that "dominated" is a gross overstatement, so what? Isn't that evidence that the war was won? Because if the old Ba'athist regime had regained power (and note that "the old regime regained power" is precisely the argument underlying the statement that Afghanistan was a loss) Iran would obviously have less influence. More broadly, the overall US policy re the Middle East has not been "anti-Iran." It has been pro-stability. And the former regime in Iraq was the source of enormous instability, what with their propensity to start wars with their neighbors and all.
I think the best use of the low interest loans from China would have been to compensate factory workers whose jobs were displaced by outsourcing to China. It would have benefited them, and it could have benefitted trade policy going forward as international trade wouldn't be viewed as such a negative thing by the masses if "trade-offs" != "the working class gets fucked".
Ack, sorry! The math should be correct. It's like 16 years' worth of US infrastructure spending, which seems like a lot, but really wouldn't change reality all that much.
I think it was mostly not worth it either, and that the wars were a mistake. I think there are still some contrarians here who support the Iraq war, but for the most part, yours is the standard take.
Perhaps theoretically if they had not invaded Iraq and focused only on Afghanistan, Afghanistan might have turned out better, but in our timeline it certainly didn't end well and it's hard to put much faith in hypotheticals going better.
I just watched some videos of the songs from Wish and wow. I'd already heard that Wish was criticized for lack of story, excessive references and awkward song lyrics, but I had no idea it had such a weird-looking art style as well. The characters are still the same detailed 3D models as in every past Disney movie, but the backgrounds are all 2D, making the whole thing look very stupid.
Also, is it me or is the exposition song just an inferior ripoff of the one from Encanto? It seems so similar.
It doesn't even have a pretty dress. It should be a no-brainer when making a Disney movie to put your heroine in a pretty dress so you can sell ten million copies of it to little girls, but apparently pretty dresses are now too heteronormative or male-gazey or something, so it's a shapeless purple long-sleeved tunic on our shapeless big-nosed heroine-of-undefinable-ethnicity.
The art style was supposed to be an homage to classic Disney movies like Snow White or Sleeping beauty, with the simple 3d models meant to resemble a kind of combination between classic watercolor backgrounds and modern CGI. It was actually an impressive technical feat to pull off, but the problem is that it looks weird. File it under the category of things that are hard to do, and also suck.
The moment that got me was the "You're a Star" number. At the end of the movie you can sort of see why they needed it, or something like it, since "everyone being a star" is basically how they defeat the big-bad. But situated as it was in the movie it was just this random mediocre "everybody is special" pop number that just flew in out of left field.
I think that the word evil is a really good example of Sapir-Whorf type effects where a word that doesn't do a good job of carving reality at the joints leads to sloppy thinking (by contrast, "good" in the moral sense is less harmful).
I think that "evil" conflates (at least) two different things - "does not try hard to do what they consider to be the right thing" and "is wrong about what the right thing to do is".
Consider, say:
- Serial killer Ted Bundy
- Indian independence leader Mahatma Gandhi
- A relatively principled politician from a party you disapprove of.
- 9/11 suicide bomber Mohammed Atta
- Me.
On the first scale, "tries hard to do the right thing", I would rank these people
Suicide bomber > Gandhi > Politician > Me > Serial killer.
On the second scale, "is correct about what the right thing to do is", I would rank them
Me > Gandhi > Serial killer > Politician > Suicide bomber
So what sorts of insights can we express with these two scales that we can't with just the word "evil"?
Getting low on either scale gets you into the territory we refer to as "evil", but in very different ways - I think that both Atta (an incredibly brave and principled man, demonstrably willing to give up his life for what he believed was right, but whose principles and beliefs about what that constituted were diametrically wrong) and Bundy (who for all I know may have had a perfectly good moral compass, but just chose to ignore it) did terrible things, but they did them for very different reasons, and the condemnations I would offer of them as people (as opposed to of their actions) don't really overlap.
By contrast, the only people who do really good things are people who score high on both scales. The people at the top of the "tries hard to do good" scale include some saints, but also a lot of really terrible monsters; no-one else's moral values align with mine as well as my own do, but I'm not a very good person because I don't make the effort and sacrifices required to be. Since "good" requires being high on both scales, it - unlike "evil" - does a good job of referring to a natural cluster in the 2D space they span.
One slight complication is that beliefs are often downstream from intentions. If you are a cruel person who enjoys the suffering of others, you'll find yourself drawn to ideologies that say that cruelty is justified. Evil ideologies attract people who are already predisposed to be evil.
> On the second scale, "is correct about what the right thing to do is", I would rank them
Me > Gandhi > Serial killer > Politician > Suicide bomber
You demonstrate a severe deficiency of hate for the political party "you disapprove of". I would recommend watching YouTube videos from the lunatic fringe of their side of aisle until you are cured.
Serial killers are ranked higher on the "correct about the right thing to do" scale though. So I guess murdering innocent people is less wrong than outgroup political ideas.
I think Bundy was chosen because he's generally understood to have known that his crimes were morally wrong but didn't really care.
There are other serial killers who justified or excused their crimes, e.g. Ed Gein or David Berkowitz, and thus would have been ranked lower on the "correct about the right thing to do" scale.
Notice that I'd class someone like Bundy at the very bottom of the "does what he thinks is right" scale, whereas plenty of politicians I despise try harder to do what they (in my view wrongly) think is right (as do suicide bombers, regardless of their motives).
Are you implying politicians you disagree with have worse moral compasses than suicide bombers? That's the only person he ranked them above. That's quite extreme
I *am* implying that there are serial killers with better moral compasses than politicians I disagree with (and possibly even politicians I broadly agree with), the difference being that the serial killer knows right from wrong and chooses to do wrong.
Well, not ALL suicide bombers, of course. Some of them probably have the same political views as said politicians. But I imagine suicide bombers to be more likely to be of the political persuasion NOT in power, and that might be skewing my representative instance morally-rightward.
At any rate, my point is that the hypothetical politician, by construction, is selected to have a "bad" moral compass. The hypothetical suicide bomber, in contrast, is only selected for passionate belief and a willingness to die for SOME cause.
I agree with you to the extent that most people use good and evil the way you defined them. But they aren't treating the two terms symmetrically. In this framework, being good requires both intent and action, whereas evil only requires one or the other. This is why it seems like good is better defined and more exclusive. Basically, evil is over-defined as everything not explicitly good.
Good intent, good action → Good
Good intent, evil action → Evil
Evil intent, good action → Evil
Evil intent, evil action → Evil
If the terms were treated symmetrically, both good and evil would be narrowly defined as good acts with good intent and evil acts with evil intent, respectively. And both would be useful descriptive terms.
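A toy formalization of that asymmetry (mine, just restating the table above):

```python
# How the terms are actually used: "Evil" is the catch-all complement of "Good".
def common_usage(intent_good: bool, action_good: bool) -> str:
    return "Good" if intent_good and action_good else "Evil"

# A symmetric treatment would leave the mixed cases unlabeled.
def symmetric_usage(intent_good: bool, action_good: bool) -> str | None:
    if intent_good and action_good:
        return "Good"
    if not intent_good and not action_good:
        return "Evil"
    return None  # mixed intent/action: neither term applies
```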
Interesting way of thinking about this, but there needs to be a "neutral" possibility for the "action" label.
And then on the "intent" label an unstated variable in how people think/talk about this is degree of selfishness. How people judge others' purely-self-interested actions varies enormously, so much so that it seems like a significant confounder to this framework's real-world usefulness.
I have a silly question (theory?) about the mechanism of the antidepressant mirtazapine.
For background, I've been pretty severely depressed for a while (5+ years) and have had allergies/asthma/eczema/etc. for all of my life; I had pretty severe allergies and asthma as a child and ended up in the hospital semi-frequently. I've read the theories that depression is linked to inflammation or whatever, and there seems to be a reasonably robust association between asthma, food allergies, and depression risk.
At any rate I went through four separate medications before trying mirtazapine, mostly SSRIs, and they did absolutely nothing except give me some mild side effects. Ditto with therapy. Then I tried mirtazapine and it was magical; felt better after two weeks than I had in many years, and it's lasted for a while (more than a year now).
Of course this is completely anecdotal data. But I was reading up on the mechanism of mirtazapine, mostly out of curiosity, and I noticed that in addition to the main antidepressant mechanisms of α2-heteroreceptor antagonism and 5-HT2/5-HT3 receptor blocking, it's an extremely potent antihistamine. I'm curious whether the antihistamine effect is itself helpful for depression (and is maybe related to mirtazapine's fast onset of effect). I can't find any studies on this. Has anyone tried injecting rats with histamine to see if they get depressed? Maybe in people with a long history of severe allergies/asthma it makes sense to prescribe a TeCA first? Would appreciate thoughts from someone who understands psychopharmacology better than me.
I don't understand it better than you, but I do have some thoughts for you: I doubt you will find much by straightforwardly researching your exact question. But I just went on Google Scholar and searched "antidepressant effects antihistamines" and all kinds of stuff turned up. I think I also saw some stuff about antidepressants as a treatment for allergies, too. I'm guessing you will get confirmation that your basic idea is plausible. So then you might just try to figure out ways to test the idea on yourself. If you expose yourself to allergens, do you get a bit depressed? Does a course of allergy shots have an effect on your mood? Does adding a bit of Benadryl to what you're already taking make a difference? (Check whether this is safe before trying it, though.) I think you have an unusual and hard-to-treat kind of depression, and it would be good to empower yourself by figuring out as much as you can about how your system works. You can't count on psychopharmacologists to do that, or to be up on the research. Many seem to have very little intellectual curiosity. Anyhow, congratulations on finally finding something that works.
The Israel-Palestine situation has gone through several months' worth of events in the last week; here's a summary, as much for my own benefit as for anyone else interested.
(1) On Friday, The ICJ ordered Israel to halt the operation in Rafah.
From [1], the 25:30 mark:
>>> By 13 votes to 2: [The court orders Israel to] immediately halt its military offensive, and any other actions in the Rafah governorate, which may inflict on the Palestinian group in Gaza conditions of life that will bring about its physical destruction, in whole or in part.
=====================
(2) Israel has interpreted (1) as saying that the military offensive should only be halted if it threatens to inflict genocide on the Palestinians. That is, instead of understanding the sentence structure as "(a) halt the military offensive, and (b) halt any other action which may ...", Israeli politicians and media appear to have deliberately read (1) as "(a) halt the military offensive and any other actions satisfying (b), where (b) is that which may ...", then declared that the military offensive in Rafah doesn't satisfy (b), and thus won't be halted.
It's notable how nearly every international newspaper understood (1) to mean the immediate and unconditional halting of hostilities in Rafah:
-- (2-a) NYT: U.N. Court Orders Israel to Halt Rafah Offensive [Subtitle:] The International Court of Justice ruling deepens Israel’s international isolation, but the court has no enforcement powers.
-- (2-b) WaPo: U.N. court order deepens Israel’s isolation as it fights on in Rafah [Subtitle:] Though a rebuke to Israel’s conduct of its war, the World Court ruling will be difficult to enforce without the backing of the United States.
Four of the ICJ judges - 2 of whom were dissenters - supported Israel's ass-backward interpretation in public statements, while 1 (the South African ad hoc judge) supported the mainstream interpretation, and 10 judges stayed silent.
=====================
(3) The ICC hasn't yet granted arrest warrants against Netanyahu and Gallant. This is within the usual range: Putin's warrant took 1 month to grant, while Omar Al-Bashir's (Sudan's dictator) was granted in 9 months. It's unclear yet how the ICC classifies Netanyahu, and whether further developments will accelerate or decelerate the granting of the warrants.
=====================
(4) On Sunday, Israel torched a safe zone in Rafah. Israel claimed it was targeting 2 Hamas officials; initial figures say that 45 Palestinians, 32 of them children, were burnt to death as their tents caught fire from the air strike [2]. The IDF alleges that it used lighter ammunition to strike a nearby location, and that the fire was instead the result of shrapnel from that strike hitting a fuel tank.
This attack received widespread condemnation from outside Israel. The EU is reportedly mulling sanctions, pushing the stricter interpretation of the ICJ ruling above, which means immediate and unconditional retreat from Rafah. Amnesty International [3] further called on the ICC to investigate the incident among the war crimes it's investigating.
Meanwhile, inside Israel, some journalists have celebrated the torching and likened it to the Lag Ba'Omer bonfire [4] (a celebratory ritual of the Jewish holiday of the same name, which fell on the day of the attack; it is usually celebrated at Mount Meron in the north, but this year was celebrated in East Jerusalem's Shaikh Jarrah neighborhood).
=====================
(5) Hamas claims to have ambushed and captured soldiers in Jabalya. The IDF denies the validity of the claims, but didn't offer any additional details or explanation of Hamas' footage. If true, it would be the first time that new hostages were added to Hamas' bunkers since October 7th.
=====================
(6) 2 Egyptian soldiers were killed in an exchange of fire with the IDF in Rafah, one immediately by a sniper shot, the other later of his injuries. Both militaries are conducting investigations, heavily restricting information and issuing few public statements.
=====================
(7) On Saturday, a video appeared of a masked member of the IDF threatening Yoav Gallant with disobedience should he order a retreat from Gaza and/or hand over the territory to any Arab-affiliated government. The video was shared by Netanyahu's son, to widespread outrage and condemnation in Israel.
I'm not making any comment on the underlying issues, but going solely off of the sentence you quote, I would agree with the Israeli interpretation. "the court orders Israel to immediately halt [its military offensive, and] any [other] actions in the Rafah governate..." Grammatically you should be able to remove the bracketed section and have the sentence still make sense. But, at least in my experience with English, "halt... any other actions" is almost always followed by a "that" or "which" qualifying what subset of all possible actions are prohibited.
Again putting aside the object level and only focusing on the grammar, your post still makes no sense to me. That's not how English works at all, to the point where it is difficult to see how someone could interpret it in the way that you did unless they really really want to and are just looking for a fig leaf.
If you say "I want you to stop eating cows or any other animals that chew their cud", it is simply not reasonable to respond "cows don't chew their cud so that means we can eat them". The word "and" includes *extra* stuff, it doesn't limit that which is explicitly named. "Any other" also implies that the description of the second part describes the first part, but simply disagreeing with that implication doesn't mean you get to throw out the explicit plain meaning of the words.
I will admit that there's an extra comma before the "which" in LHHIP's quote which is pretty weird and shouldn't be there, but even with the comma, the rules of English just don't allow for an alternate interpretation here.
> there's an extra comma before the "which" in LHHIP's quote which is pretty weird and shouldn't be there,
Yes, I originally wrote the quote without the comma, then - for honesty's sake - I went and checked the official ICJ transcription [1], which does include the comma, a fact that Israeli media like the Times of Israel have exploited to peddle their bizarre interpretation.
But yes, any application of either common sense or the principle of relevance from pragmatics would immediately reveal that the court didn't bring up Israel's offensive in Rafah for fun, or just because the judges happen to be fans of military strategy. There is no world in which any remotely eloquent adult uses language like "I hereby order you to stop doing the extremely specific thing X, and any other things, which have the trait of being Y" to mean "Well, you can stop or not stop doing X, depending on your own interpretation of whether it has the trait of being Y".
I completely misunderstood your argument about what "which" was doing. After re-reading the sentence like 90 times I'm convinced by you and Lapras's take. The second comma threw me off.
Isn't it that "extra" comma which makes all the difference, though? It sets off "and any other actions in the Rafah governorate" as its own, bracketed clause which can be removed from the sentence.
Like what if I were to require that you "immediately halt housebuilding, and any other actions on your property, which may cause harm to the local endangered beetle population", and you had a method of housebuilding which ensures that the beetles stay safe? My read would be that you can continue housebuilding using that method.
In your example, the natural interpretation would still be that you have to halt housebuilding. But even if you did change up enough to make your interpretation viable, the only reason that works is because "housebuilding" *could* be interpreted as an indeterminate collection of actions which could be further narrowed down.
In the original example, it specifically says "its military offensive", that it is referring to a specific thing, not an indeterminate collection which could be further narrowed down. In order to make it possible to interpret the way that Israel wants to, it would have to be changed to say "offensives" rather than "offensive" as well as the other changes.
So I guess what we're looking at is an attempt by the judges to craft a vague and ambiguous sentence that would allow as many of them as possible to sign on to it, but which ultimately wasn't all that successful.
I've noticed Google Maps is getting things wrong more often than it used to, showing traffic backups that don't exist or missing ones that do. That's not even counting when it gets the best route to a destination wrong.
In Michigan, I-75 currently has construction between Detroit and Flint, and going north it actually consistently advises one to exit the freeway and enter again afterward, even though the freeway is actually open and running fine. If you follow its advice you will take longer to get to your destination.
That doesn't even count the time it routed me to a strange area instead of where I wanted to go.
I've gotten the distinct impression Google is trying to squeeze more revenue out of everything these days, and I'm not sure how any of these inaccuracies are helping it to do so.
Is anyone else getting the impression Google Maps is getting worse?
It's not just Google - there are companies that buy and aggregate geolocated traffic data (with the ultimate data coming from hundreds of different apps, so broader penetration than Apple or Google), like Airsage, after which anyone can buy the data.
I don't know if Airsage has a real-time data stream, I don't think they did when I was using them 5 years ago, so maybe the real-time traffic stuff is confined to Google and Apple. But I thought I should point out that geolocated data isn't special or controlled or hard to access, anyone with money can get it, and with broad penetration into any given population.
Certainly a large proportion of phones have this (I assume), but I suspect also, based on some directions the app provides, that other entities, like the government, are providing routing data. So my route which was low-traffic but not recognized as a route by Google may have had a "road closed" entry added, even though the road is open.
I've had trouble with it for a while, but more with the road information than traffic. In Seattle (hardly a backwater), it told us to turn left at an intersection where this was disallowed. The lane information is frequently out of date, and it doesn't pick up on closed roads as much.
Yes, Google Maps has been deteriorating rapidly, and I switched to Apple Maps several months ago (yes, given their early history I was very reluctant to). Apple, to their credit, has made tremendous improvements to the maps and driving directions. Between using Duck for search and now Apple Maps for driving, my only remaining engagement with Google is email. This one is hard to break away from...
Unfortunately, DDG now sucks nearly as much as Google did a year ago. I've personally been using SearXNG, a federated open-source search engine that aggregates results from multiple engines and has no ads or trackers.
It's like Google was back when it was useful, I don't even need to prepend "forum" or "reddit" to every search to get real results.
There are a number of URLs and browser plugins you can use for SearXNG - I use paulgo.io as my go-to URL in Safari on my phone, and a Firefox plugin to make it the default on my laptop.
re Short and Pope, someone once asked me why they were called Child Ballads when they're obviously not for children. I said, "Are you making a joke or is that a serious question?" because I really couldn't tell. It was a serious question. And the answer is, they were collected by a man named Child.
I keep checking FiveThirtyEight, expecting them to have started modelling the 2024 election in earnest, but they haven't. I feel like they normally have by this point in an election year, but I don't know the exact dates they started in previous cycles. Are they just holding off because they don't like the answer?
I'm late to the party, but I'd like to say that you should stop checking FiveThirtyEight. Here's Nate Silver himself, explaining how low they have fallen:
To chime in on the same theme, I think this is a case of "follow the person, not the brand". I trust Nate Silver to be relatively accurate and impartial, but the brand "fivethirtyeight" is only as good as whatever demon is possessing it.
Thanks for all the replies, I had no idea that Nate had left 538. (Actually now it sounds familiar but I'd forgotten.)
I'm actually surprised they're not leaning even *more* into the modelling, though, in his absence. Maybe the remaining bozos can't come up with a model as sophisticated as Nate's, but surely they can make a dumb one?
That does seem like Disney's style these days. Maybe they think a blog is fine? But there's got to be a few statistics folk who are also into politics and who think that they could do as good a job as Nate Silver. Putting a few of them in charge seems like a no-brainer.
the "538 Model" is Nate Silver's IP. When ABC News made the inane decision to lay him off, they lost the model as well.
Nate has talked on his Substack about reducing the scope of the model, since (paraphrasing) why publish a constantly-updating model if the vast majority of its audience (*especially* the self-assured pundit class) are just as probabilistically innumerate as they were in 2016?
They do seem to have gone downhill since Nate Silver left and Disney took them over. His new Substack doesn't seem to have started coverage yet either, but that is probably because he is still (or was, as of this post 16th May) working on the model:
"Also, I’m finally taking some tangible steps to get the 2024 election model ready, interviewing finalists for the position I’m hiring — and later today, I may even (gasp) write a few lines of code."
April post announcing his plans for this year's election:
Fivethirtyeight seems to have changed over the past year. I know Nate Silver is no longer working for it and I think he owns the rights to the models. So probably this is the biggest reason if they haven't started modelling already. Nate Silver seems to have a Substack of his own now, so might be worth checking that out to see if he does a similar model there.
Sometimes when I scroll through Substack comments on mobile, *something* triggers popups prompting me to subscribe to a comment author's blog.
Is it possible to hide or disable those?
It seems impossible to hide them once triggered unless I reload the whole webpage.
Yes, I also regularly get this in the comments section when scrolling down. I'm not sure what triggers it - I think clicking on some part of a comment in a certain way when on mobile. It's fairly intrusive.
If you hover over someone's name or icon, a modal will appear with more info about them and buttons to follow/subscribe.
On mobile this happens when you touch their name I think, so probably when scrolling you can trigger this accidentally and maybe there is a UI bug where it doesn't go away without direct input.
On desktop the popup appears when you hover over the blog name to the right of the username (if the user has one). On mobile you have to long-press it just the right way to get it (at least on Android, don't know about iOS).
In my experience, it happens whenever you view a new person's substack for the first time and scroll halfway down if you aren't logged in. They're really annoying.
Monorails have a reputation as a white elephant of a transport system which seemed like a good idea in the mid 20th century but which failed spectacularly everywhere they were tried. But they still don't seem like an obviously bad idea... you can build them in an established city with a small land footprint, they're quiet, they run on electricity, they don't get stuck in traffic, and they're pleasant to ride on. Why have they failed to find a use case outside touristy niches?
(Serious discussions please form a line at my left, "Marge vs. the Monorail" references please form a line at my right.)
I just remembered the Transrapid https://en.wikipedia.org/wiki/Transrapid magnetic monorail. It operates in Shanghai, but never got off the ground anywhere else.
People were quite against it when it was considered in Munich. Reasons were: Too expensive, not compatible with any other means of transportation, NIMBYism, high energy consumption, too much associated with Edmund Stoiber who made a fool of himself when he tried to advertise it: https://www.youtube.com/watch?v=bMUxRA4B9GE
Back then, I was against it too, but now I feel it would have been cool.
I read through the wikipedia article on monorails, and... I'm not sure I see what notable advantage monorails have over two-rail designs. This is probably partly just a limitation of the wiki article, which is really focused on history.
But it seems to me that many of the advantages you note (run on electricity, don't get stuck in traffic, small land footprint, pleasant to ride on) are all shared with any other electric elevated light rail system. Is the advantage here actually in the monorail? Or is it just easier to make an elevated monorail than an elevated two-rail system for some reason?
> Or is it just easier to make an elevated monorail than an elevated two-rail system for some reason?
From my cursory reading, that's exactly it. Elevated monorail tracks are cheaper than elevated two-rail tracks because of the smaller footprint. Unfortunately, that's their *only* advantage. In particular, monorails are more expensive for ground-level or below-ground tracks, which are a lot more common than elevated ones.
It must depend on how it's implemented. In Detroit, we have the Q-line, which is almost, but not quite, a huge waste. In support: it's nice to use in inclement weather, and probably for disabled and/or elderly people who cannot walk far or well. Against this, it is actually faster downtown to walk where you're going unless the train happens to be in sight, "don't get stuck in traffic" is wrong because people park in front of it (surely just "for a minute") and they break down, and the time estimates at the stations can be wildly wrong.
I never paid for a ride on it, as I worked for Rocket Mortgage, who probably paid for all rides I took.
The Peoplemover can be a better option since it can't be stopped by traffic (it's up on supports, as a kind of 2nd floor), but the area it serves is awfully short-range, and one could probably walk anywhere in its service area faster, once you include going up to the platform and coming down at your station.
Indeed. Also Scott, I'm sorry. It was a spur of the moment thing and I knew right after I closed the tab that I'd crossed the line a bit. I know it's not good to bring politics into a non-politics thread nor to write lazy one line potshots against whichever outgroup.
The same reason we have ICE cars instead of steam powered cars? There was a time when both systems were relatively viable, but more people chose ICE. With monorail, everyone chose trains with double rails instead. Now if you want to build monorail, you need separate tracks and trains and maintenance facilities. Basically the whole system is more complex and expensive than a normal train, even though the technology is not "worse".
Steam and even some kinds of electric car were pretty popular circa 1900. The biggest problem was not the coal/wood fuel, but the steam engine had to constantly be supplied with fresh water. A big advantage was the lack of transmissions. Steam power could be used to spin the wheels directly, and had consistent torque generation. There was no need for complicated gear ratios to manage power. Steam cars were like the Tesla of 1900, in that they had smooth constant acceleration.
What really started the decline of steam cars was the adoption of electric starters in ICE cars, which replaced the hand crank. This made ICE the all around most convenient system. From a thermodynamic perspective, an external steam engine would never be as efficient as an internal combustion engine. But it's hard to say what steam engines would look like today if we had kept using them. After all, ICE technology has been continuously improved for the last 120 years.
Steam-based thermodynamic cycles doing work are ubiquitous in power plants and way more efficient there than any mass-produced internal combustion engine, but the difficulty is miniaturizing them without loss of efficiency. Condensers tend to be very bulky.
I think the problems with steam cars were pretty insurmountable once someone figured out how to make a decent ICE.
Consider the locomotive -- steam technology had a massive head start and an incumbency advantage here, but ICEs quickly displaced steam once decent ICEs started coming along. ICEs were much more efficient, and required much less maintenance and attention. Same deal with ships.
In automotive applications the advantages of ICEs are even larger. It's okay if you need to spend half an hour heating up your boiler before you move your locomotive, but pretty inconvenient each time you move your car.
Insurmountable is a stretch I think, and the ICE cars of the same period had plenty of problems too. Condenser systems could recycle the water to extend refills to about 1500 miles, although this added weight. Some of the later kerosene fueled steam cars got 15 miles per gallon, which was comparable to ICE cars at the time. There were flash boilers, powered by diesel or kerosene ignition, that could heat enough steam in 90 seconds to power the vehicle long enough for the main boiler to warm up. I imagine modern electrical systems would also work well in a hybrid. Electric powers the start-up until the steam heats enough to take over, and then the steam engine recharges the battery.
There were also ways the steam car was distinctly superior to the ICE car. The lack of a clutch or transmission made steam vehicles much easier to drive, and the simple design lasted much longer. There are steam cars with over 500,000 miles on them still in good condition, without anything other than normal maintenance, which is unthinkable for ICE vehicles unless they get the Theseus's-ship treatment. Steam engines are also much quieter, almost silent, and don't produce nearly as much exhaust.
The real nail in the coffin was that by the time all these kinks were worked out, Henry Ford was rolling ICE cars off the assembly line at a rate that dominated the market. Steam cars remained in the realm of a novelty for the rich.
Although I think steam engines could have remained a viable option for cars, there are some areas where they wouldn't work, mainly because ICEs have a better power-to-weight ratio and are easier to miniaturize. I struggle to see how a steam-powered airplane or leaf blower could work.
Chongqing is always a weird case when it comes to structural engineering, because it's basically all mountain - go look up a video of someone driving through the city. I'm not surprised monorails work there - the geography basically makes elevated rail mandatory and monorails are the cheapest way to build elevated rail.
It's cheaper to build and maintain ground structures wherever that's an option, though.
Thank you for this interesting article! It mentions Wuppertal; the thing is that around 1900, Wuppertal was very very rich and able to afford a spectacular form of suspended train.
That's an interesting concept I hadn't heard of before. Looking at the Wikipedia page presents some obvious immediate issues though.
1. This thing has never been built beyond a prototype before. Even if it were a good idea, that would mean that it's an option for 20 years from now, not today. But the fact that it's never actually been built suggests that there are major problems of some sort with it. Not every bright idea in one guy's imagination turns out to make practical sense.
2. As Wikipedia points out, every single car needs to have an active gyroscope system. I'm guessing that increases costs and fuel usage a lot.
3. There's also the issue of safety. This design is "fail deadly" - if it ever loses power at all, it immediately falls off the track. That is a really bad property to have and probably fatal just by itself.
No, it doesn't fall immediately. Even when power fails, it keeps stable as long as the gyros are turning, which is about four hours. At least that's what Ernst Ritter von Marx wrote about the test monorail in London. Not sure how much fuel the gyros would need today.
And besides, not every failed idea is necessarily bad. Remember those sailships which had rotating cylinders instead of sails? Yes, that really works, better than you'd expect; but they still need wind.
I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell. I believe it would involve algorithms, mostly, not deep learning. I am not in the field, and there is no way I can actualize this. I will happily pass the idea to anyone who believes they could actually build the thing. If you work on animation software, this would be right up your alley. If the idea makes a pot of money, I think it would be reasonable to toss a bit of it my way. Anyone interested?
Re: your drama: I once had a similar experience, and I wasn't even an outsider - I was talking as a fellow coder wanting to collaborate on a cool idea.
My takeaway was that while the Internet is great for people who want to have a squabble, using it for any kind of productive or ambitious goal is going against the grain.
My advice is you have to act a bit like a politician: take nothing personally, shrug off snideness and jeering, smile and nod to the ones who miss the point or just don't get it, engage and enthuse the ones who offer something.
And you have to put up with a lot of repetition - other people won't even have read the other responses, let alone know the now-familiar background context you have in your head. No one can follow you in there without a lot of patient explaining.
I suspect everyone is like this, but I also suspect coders are a particularly petulant bunch.
The problem is if you unload on them all for it, you tire yourself out and everyone sees you getting wound up. And if you give them power by worrying about their reaction, you let them control the conversation. Ignore them instead - you're here to discuss/advance an idea, not to justify yourself to internet strangers.
I say all that - take it with a grain of salt because I haven't really made a success of starting collaborative projects; I now tend to approach in more oblique ways and have a low baseline expectation that other people will help me.
I use graphics software too and would still be interested to discuss your idea.
Re coders; I don't think coders are particularly petulant, it's just that "I have this cool idea and I'm just looking for a coder to implement it, and I want a cut" is kinda the coder equivalent of "hey I want you to do this art for me 'for exposure'" for artists. It's virtually always a bad deal for the coder, and a lot of coders get these sort of requests fairly often, and often give similar reactions to artists.
And I do think "you're just here to discuss an idea" is the wrong framing - they could have had that conversation; it would just have required publicly describing the idea and saying "what are your thoughts?". But "I have an idea, and I'll give it to you if you agree to pay me if it goes well" is explicitly soliciting for other people to work on your behalf and that's a different dynamic.
Yeah, I get that, but even in my naive first post I did not propose anything that exploitive. What I actually said was: "If the idea makes a pot of money, I think it would be reasonable to toss a bit of it my way." And I later clarified many, many times what I had had in mind. I said SEVERAL TIMES, in pretty much these exact words, the following: The idea's not even that original, just a way of extending something already done. And it just popped into my head in an instant, whereas somebody building and advertising the thing would spend many many hours on it. So I don't think it would be at all reasonable to think I had a claim on that money; I was just picturing the developer tossing me like 1% as a thank you. Also said SEVERAL TIMES that I was certainly not thinking of any kind of contractual arrangement; I was just looking to give the idea away.
I'm sorry now I even mentioned money. It's really far from the main point. But I don't get why people were so reactive. It's as though some mass hypnosis kept everybody believing I was proposing "you do the work and I keep the money, OK?" even in the face of massive and ever-growing evidence that was not what I was proposing. And why are software developers so sensitive when they think some fool is proposing a ridiculously unfair and exploitive idea? Seems like software developers are a well-paid, smart, respected bunch -- why not just laugh off a stupid proposal of the kind people thought I was making? Instead, people reacted as though they were, I dunno, newly freed slaves, and somebody was trying to trick them into going back to massah's house and working the fields for free.
"It's unlikely that an off-hand idea by a non-expert will work out" *is* a form of advice and accurate and in lieu of any actual technical details in your comment about the only advice someone can give. Yes, receiving the same advice (which isn't what you want to hear) multiple times can be annoying, and it *might* not be right, but, to use your words, why not just laugh off those replies? None of them read as the programmers being angry, but your reaction to them does read as angry. The "sensitive" party and "reactive" in this thread does not appear to me to be the programmers.
Again, if you're just interested in someone implementing your idea, or having a discussion about it, just share it. Put it in a google doc and post the link. Or don't; but this extended debate about how everyone else is being unreasonable seems pointless, and I'm going to bow out of it.
I did share it, with 2 people who work in related fields and expressed curiosity. (And I shared with no strings attached, by the way; no request that they keep the info a secret, or not use it without signing some sort of contract, nothing remotely like that.) Am about to share it with a third. I have not shared it here as I said I would, because not only was the discussion extremely unpleasant, but very few people showed any curiosity at all. Nobody asked why I was interested in facial expressions, where the idea came from, what kind of graphics stuff I do. 95% of what was said was a prolonged, completely curiosity-free attempt to convince me I'm an asshole. "It's unlikely that an off-hand idea by a non-expert will work out" was the least of it. That's a bit tedious after you've heard it a couple times, sort of like the college advice your uncle always gives after he's had cocktails, but not offensive. But there was sneering and snark, and I was called a crackpot, told that I thought software developers are idiots, had made completely, laughably absurd statements, etc etc. It seemed like the message was "you're a fool with a swelled head and an exploitive asshole. Now tell us your fucking idea." Under those circumstances I lost my appetite for posting the stuff here.
Thanks, but actually I only unloaded on one, dionysus, and I didn't say anything awful in that exchange. And acting like a politician really goes against the grain for me. I dislike politicians, and I value being real, and would rather do the latter and take my lumps. I am reasonably good at making the case for my point of view in interactions like the present one, and while that doesn't soothe the troubled waters the way oil does, it often gets through at least partially to some of the people involved, and we end up having a somewhat better exchange at the end.
Also, people can sense when you're making nice just to soften them up so they'll be receptive to whatever it is you want from them. At the beginning, when I read your responses and Quiop's (whose name I may be getting wrong), I mentally lumped you together as people whose main agenda was to convince me I'm an asshole for thinking my idea could be a decent one. You started off the way Quiop did in your post, then suddenly switched to some friendly stuff about how you're curious and would love to have a nice little chat with me about my idea. That switch felt to me not like you'd realized you also had a second, friendlier message to convey, but like you'd realized you'd never get anything outta me if your entire message was a lecture about how the chance is nil that a layman could come up with a novel graphics idea worth trying. And then in a later post you actually commented yourself about how you'd consciously decided to put something friendly into your post. So at this point I haven't the faintest idea how sincere any of the sentences in any of your communications are, including the current one. So I'd recommend giving more thought to the downside of sitting toward the impression-management end of the impression-management/real-deal axis.
The problem is, we've all seen people who go "I am not involved in the field at all but I have this amazing new idea that is miles better than anything the professionals are doing".
Most times those ideas are not better. So people naturally tend to "Okay, tell us about the idea so we can see if it really is better".
If I claimed that I had a fantastic new system of doing therapy, even though I'm not a therapist, not trained in the field in any capacity, and have no experience of doing such work, I'm sure you as a professional would be slightly sceptical and want to know more about my fantastic new idea before you agreed to help me sell it to the public.
Maybe your new idea is marvellous, it could well be, but people are going to want to see the pig first before they buy the poke.
Yes, I would definitely be skeptical, but I would want to hear your idea. I would probably post something like, "I have to admit I'm skeptical, but, you being you, I think there could be something in your idea, and I'd be very interested to hear it." And then I would shut up and listen. I would do that partly out of courtesy and kindness, because I like you, but also I do not at all rule out the idea that you would have a genuinely good and interesting thought about psychotherapy. Then after I heard the idea I'd tell you what I really thought of it. If I thought it was absolutely no good I would look for tactful ways to get that across.
Your version is not a completely fair analog of what I posted, because I did not rave about having a whole fantastic new approach to software development that's miles better than what anyone else is doing. I named *one* out of thousands of kinds of software, and said I thought what I had would work better than current software for this one little task, and that I thought it might even sell. So it's more like if you posted that you'd had a novel idea about how to treat people with insect phobias, and said you didn't think the approach had been tried before, and that you thought it might actually help a bunch of people. So my claim was much more of that nature.
And I did not refuse to describe or show my idea. I said at the beginning that I'd go into detail if anyone who was able to build such a thing expressed an interest in hearing the idea. I probably would have gone into detail anyway if most of the posts had said things like, I don't develop graphics software, but I'm quite curious about your idea. Can you post some more? But actually there was almost no curiosity expressed. 95% of what I got were long, irritated-sounding lectures about how ridiculous it was that I could for a moment entertain the idea that someone with no training in software could have an idea that would work. And people were pretty harsh and rude. The word "crank" was used. I was told that I thought software developers were idiots, and that what I had said in my post was unbelievably absurd. The gist of it was that I was a fool and an asshole.
And actually I did tell the idea in detail, via DM, to 2 people who work in the field and expressed some interest. So the situation is not that all the posters I'm mad at are asking to see the idea and now I'm being contrary and refusing. Most did not express any curiosity at all in their initial posts or later ones. Yet one person who had not asked one single question about the idea did accuse me of "jealously guarding it" after I had "promised" to post it.
It really does seem to me that the people piling on me have a distorted perception of my initial posts and what their responses actually were. And it sux. I can be quite mean sometimes on here, but I only do it to people who seem like trolls and/or are being rude and cruel. I think I felt like being pretty good natured and reasonable in my posts overall had kind of given me, like, some credit -- like that if I posted something off-base, I'd kind of earned enough points so that people would be unlikely to believe I'd just posted something dumb, mean, entitled and ridiculous. Like if it came across that way they'd give me the benefit of the doubt and ask me to clarify what I meant. Nope.
>Your version is not a completely fair analog of what I posted, because I did not rave about having a whole fantastic new approach to software development that's miles better than what anyone else is doing. I named *one* out of thousands of kinds of software, and said I thought what I had would work better than current software for this one little task, and that I thought it might even sell.
Perhaps you forget that the one tiny kind of software you described in very broad terms has applications in multi-billion-dollar industries such as games, movies, and TV. If you'll allow another analogy, here is what that sounds like to me: you said the equivalent of "Oh it's no biggie, maybe you'll find a buyer here or there in the niche subfield of transportation, but I am certain I have improved upon the wheel, DM me if you like money."
>You said the equivalent of "Oh it's no biggie, maybe you'll find a buyer here or there in the niche subfield of transportation, but I am certain I have improved upon the wheel, DM me if you like money."
Yes, my initial naive post could be taken to mean that (though it could also be taken to mean other things). But when everybody got so angry I put up many many responses clarifying what I had meant, which was definitely NOT that if somebody knew the novel, awesome idea I had they could make millions by applying it in all the industries that in one way or another use tools for adjusting facial expression. I said the idea was not particularly original -- that it was just a way of slightly extending something that is already done. I said it had just popped into my head -- was not, in other words, the product of a lot of thought and labor. I said that I knew it was unlikely that an idea from someone outside the field would work, and would be something that had not been done, and would make much or indeed even any money. Seems to me it did not matter what I said -- I was the poster people loved to hate, and they were impervious to any information that would make me look less foolish, entitled, self-important and exploitive. Cuz where's the fun in that?
Let those who have never put up a post that could be taken to mean something really dumb and obnoxious cast the first stone.
I know you didn't come on strong with "I am so much smarter than the professionals", I think it's just that we've been burned before, in whatever job or career we have, by people coming in with "amazing new idea" or "we are completely scrapping how we used to do things and now doing it this new way", and refusing to listen to the people who have to use the system or implement the new way about how it's not going to work the way the "great new idea" person thinks it will work.
And some of us on here are less socially adept in interpersonal interaction, to put it charitably, so we do rush at it like a bull at a gate with "what makes you think you know so much?" 😀
I am if I experience the help and the olive branches as real. Currently feeling pretty warmly towards Vitor, for instance, whose post seemed simply sincere to me.
Of course not! What I meant was that if anyone had any interest I would describe the idea -- then, if you're interested, you may have it. I sent it to you as a DM. because the present discussion has gotten so unpleasant and I don't want to add fuel to it. Also DM'd it to Viki Szilard when they posted.
Well I’m a software dev who’s in the field of computer graphics/machine learning, and my curiosity’s getting the better of me so…
What do you mean by “changing the expression on a face”? Take an RGB image of a human face, and change it from say, a smile to a frown? Or take a rigged 3d model of a human face and animate it to have a desired expression?
If you prefer you can message me with the details. I can prototype things very quickly :))
I don't begrudge your optimism, but the reality is that ideas are a dime a dozen among AI researchers. The only way to know if something works is to try it, and the vast, vast majority of ideas that even experts have don't pan out. Because you're not in the field, you don't know what the state of the art is, what researchers have already tried, how feasible it is to implement an idea, or how plausible it is that an idea might work if implemented. The chances of your idea working out and being monetizable are very close to 0%, especially because it seems vague and poorly defined to begin with ("...I believe it would involve algorithms...")
I'm getting a bit sick of responses telling me it's unlikely the idea's any good.
Hey, I get it. I am not expecting to make any money. If the thing did, I think it would be reasonable to get a bit from whoever makes it for supplying the idea, but I certainly wasn't picturing signing a contract or anything like that. In fact, I was imagining just describing the idea right here. Obviously if I thought it at all likely that this idea would make money, I would not be describing it on a forum where hundreds or thousands of people could read it -- I'd be jealously guarding it and telling one possible developer at a time, after swearing them to secrecy. On the other hand, I have messed around with enough graphics software to have a sense of what is possible, and the thing I'm thinking of seems to be in that realm. And I have searched hard enough for the thing I have in mind to be pretty sure it is not available now. So I doubt that it's impossible to do, and I doubt that it has already been done.
I don't see where the evidence is that I am overconfident about how workable and monetizable this idea is. In fact I have given multiple assurances that I do *not* believe various kinds of optimistic stuff. All I am doing is asking whether anyone who builds animation software or the like is interested in hearing the idea. If someone is, I will lay it out.
In other words, I don't think the likelihood that this idea is worthwhile is zero. Several people who have responded so far seem to be triggered into some kind of irritable discourage-this-amateur feeding frenzy by the fact that I don't think the chance is fucking ZERO. Get over it.
Computer graphics is a very large field that's pretty mature (compared to AI at least). There are thousands of people doing research on (semi-)automated mesh animation, all sorts of things like projecting motion capture onto arbitrary models using some sort of skeleton, deforming meshes while renormalizing them, kinematics, etc etc etc.
This is a huge field of research backed by practical applications in some of the world's biggest industries: movies and video games.
This kind of field is much harder to make a contribution to as an outsider, especially when you don't know what the state of the art is, common tools and file formats in use, typical rendering processes, etc.
I don't want to discourage you, but the priors are strongly against you. That said, I'd be happy to discuss your idea, I'm a dabbler in computer graphics myself.
Hey, for about the 5th time, I get it that it is unlikely that an outsider would come up with an idea that is novel, and doable without more effort than the idea merits.
It sounds like you're like me -- you use computer graphics, but do not develop the software. If so, I think I'm going to hold off on laying out the idea unless someone who actually works on this software asks to see it. When I first posted this idea I probably could have been persuaded to just describe it to somebody like you, who uses graphics software and is curious. But at this point I am irritated and uneasy, because every single respondent has told me that it's very unlikely the idea's worthwhile, and several have written about that at some length. It really seems to me like my post irritated the hell out of various people who actually write software, and that if, as is likely, my idea is not workable, or has already been done, I will be subjected to lengthy, snide "I-told-you-so's, dum dum" posts.
I don't understand why people keep piling on with the "it's very unlikely to be any good" posts. Do they think I didn't read all the earlier ones? That I read them but was unable to grasp their meaning? That I vehemently disagreed with them? Jeez, I have responded to all of these posts by saying I know the chance is low that the idea is workable.
Do you know why you felt the need to make the same point yet again? The first 90% of your post is still another explanation for me of why people outside the field almost never have an idea that is worth implementing. I'm not complaining about your post, I'm asking you, because you sound friendlier than the other posters: can you figure out why it felt important to you to write that first 90%, which duplicates what the other posters have said, instead of just posting your last sentence, expressing some interest?
"It really seems to me like my post irritated the hell out of various people who actually write software"
Yes, it did irritate me. It irritated me because it matches the pattern of crackpots who take the people in a highly technical, actively researched field for idiots, and are convinced that they know better despite demonstrably not knowing even the basics. You don't even realize the absurdity of saying "I will happily pass the idea to anyone who believes they could actually build the thing" without giving any description of what "the thing" is. Do you realize that some things in computer vision require a few lines of code, while other things require years of dedicated effort by a large research team with tens of millions of dollars (which may well go down the drain because the idea turned out to be impossible), and that it's not always easy to tell which is which?
"I don't understand why people keep piling on with the "it's very unlikely to be any good" posts. "
I made the same point as the other posters because there is value in letting you know that there is overwhelming consensus on this point.
"I don't see where the evidence is that I am overconfident about how workable and monetizable this idea is."
The evidence is here:
"I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable change it would actually sell."
I'd bet you that the world's top computer vision experts wouldn't dare to make a statement like "it'll work much better than what is out there currently" without implementing their idea and seeing that it actually works. When people pointed out the unlikelihood of success, you became hostile, which is again typical of a crackpot. Jealous guarding of your idea (despite unfulfilled promises to share it on this forum) is a third typical crackpot characteristic. Granted, you did acknowledge that the idea was unlikely to be monetizable and could be unworkable, which is not typical of crackpots.
>"I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell."
> Yes, it did irritate me. It irritated me because it matches the pattern of crackpots who take the people in a highly technical, actively researched field for idiots, and are convinced that they know better despite demonstrably not knowing even the basics.
There is nothing that I wrote that suggests in any way that I take the people in a highly technical, etc., field for idiots, or that I am convinced I know better despite not knowing the basics. That all seems like shit you are angry about from other contexts that you are dragging to this exchange and dumping on me. In fact not only did I not say anything that implied any of those insulting, stupid ideas, I said things that expressed ideas incompatible with it. I said in my initial post that there's no way I could possibly develop the idea into actual software. That's some pretty good evidence I'm aware that I lack basic skills, isn't it? Also I expressed willingness to just post the idea here, if anyone who has the skills to make this sort of thing expressed interest. Seems to me that makes clear that I do not think my idea is highly valuable and unique, since I'm willing to describe it to a huge forum. If I thought it was unique and highly valuable I would guard it jealously, wouldn't I?
>You don't even realize the absurdity of saying "I will happily pass the idea to anyone who believes they could actually build the thing" without giving any description of what "the thing" is.
Actually, in retrospect I do see how that sounds absurd if taken a certain way. But I did not mean that I expected somebody to decide, without hearing more, whether they could build it. Of course they could not! What I meant was, if what I’ve said interests you, let me know, and I will put up a post describing the idea. If you think the idea is workable, it’s yours for the taking. Jeez, dionysus, it doesn’t seem like it’s that hard to figure out that there is an alternative interpretation of my post beyond the stupid, entitled one you put on it.
> "I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell."
> I'd bet you that the world's top computer vision experts wouldn't dare to make a statement like "it'll work much better than what is out there currently" without implementing their idea and seeing that it actually works.
Well, if you knew what my idea is you would see why what I’m saying is nowhere near as sweeping and grandiose as it sounds to you. It’s really just an extension of something that already exists. My hopefulness about the idea has nothing to do with thinking I am able to judge how easy it is to implement and having concluded that it’s easy. I totally get that I am not able to do that, and in fact that even experts would hesitate to do it with a novel idea. My optimism came from thinking: clearly we can do this for a and b. If there were software that could also do it for c, d and e, which are in the exact same class as a & b, that would make some cool things possible. Here’s a made-up analogy about tattoos, which probably is not historically accurate: let’s say that it used to be that most tattoos were small simple blue images, and one day some tattooist said, why not make them multicolor? We know how to inject other colors besides blue, and going multicolor would make more complicated and beautiful designs possible. OK, that’s the nature of my idea. It does not rest on any belief that I understand how to implement these things; it’s an idea about the possibility of extending something that’s already possible.
>Jealous guarding of your idea (despite unfulfilled promises to share it on this forum) is a third typical crackpot characteristic.
I didn’t promise to share it. I said if anyone who works in the field and can actually make this sort of thing was interested, I’d post it here. Actually, someone in the field finally wrote and expressed interest, and I laid out the entire idea for them in a DM. I only put it in a DM because this discussion has become so unpleasant, and I did not want to add fuel to it. I did not ask them to keep it secret, or to make any sort of contract. So I think that puts the jealous guarding accusation to rest.
Later edit: Somebody else expressed friendly interest, and I DM'd a detailed description to them too, and also did not say a word about secrecy, etc. Still think I'm jealously guarding my idea?
Know why I'm not just posting it here? Because this discussion is so unpleasant, and most participants have shown zero interest in the idea. You're reacting to the entitlement and whatnot you think is inherent in posting about the idea the way I did.
>When people pointed out the unlikelihood of success, you became hostile
I don’t think I did. I said many times that I did not believe this and that grandiose thing, and I said that politely. I eventually started to complain about the repetitive posts all saying the same thing, but I complained in a civil way. I guess it was snarky to describe it as a feeding frenzy, so we can count that as hostile, but it’s pretty small scale. And I don’t think I’ve been hostile in the present post. The worst things I have accused you of are dumping anger about other situations onto me, and failing to consider various non-idiotic things I could have meant by certain sentences. Whereas you, in the post I’m responding to, have used the word crackpot, have accused me of thinking of software experts as idiots, and of having absurdly grandiose and unrealistic ideas about what I am capable of, and of jealously guarding my idea. Your hostility score’s a lot higher than mine.
And didn’t you ever have an idea you thought might be worthwhile about how things might be done in a field outside your expertise? Something about the way a hardware store could be set up, or a way to get more people to get needed medical tests done, or whether it might someday be possible to control a cursor by running your tongue around the roof of your mouth?
I was trying to get across where exactly the expectation mismatch is. There are some domains where you can come up with a contribution relatively easily as an outsider. But let's say someone posted here that they're a hobbyist who's come up with a new surgical material... you'd be very skeptical. Not because the person is dumb, but because they basically have to be an insider to even have access to the situations and tools where they could conceivably experiment with their thing.
Computer graphics is very accessible OOH, with tons of people building their own raytracers, games and such, but the more *topological* problems OTOH depend on stacks of assumptions and lower level techniques, and you won't build something commercializable if you don't know exactly where in the toolchain your code is going to sit. My guess is that people would have been less skeptical if you'd just mentioned this as an interesting research problem.
Thank you for answering. And hey, I get all that. Computer graphics is sort of like a gold field that's had a crowd of people prospecting in it for years. There's not much left to find.
Still, didn't you ever have an idea you thought was worthwhile about a field outside your expertise, maybe even a field that's already had lots of people prospecting in it?
"Algorithms" is too vague to be meaningful. It's very unlikely that your idea is both possible to implement with only "algorithms" (which I'm taking to mean relatively simple image transformations, eg warp, skew, rotation, alpha compositing, etc) and better than existing techniques that use text/image embeddings and diffusion models. For an example of how powerful and usable these techniques are, I'll direct you to this blog post, wherein the authors discuss using a webcam stream of a face to animate a 3D model of a face in real time: https://blog.roblox.com/2022/03/real-time-facial-animation-avatars/
My point when I said algorithms was that I did not think deep learning would be used for the core of what this software does. I believe the task is simpler than the roblox animation one. Yes, it would be doing relatively simple image transformations.
Listen, I'm a psychologist, and people here often have ideas based on simple misinformation, or ask naive questions, or propose naive theories. On the other hand, I find some of the ideas from people here who are completely outside the field quite fascinating and plausible. I thought that, for instance, about a number of comments about how the sense of self is constructed, in the discussion of Scott's post about IFS. Yes, of course my field is much softer and mushier than software development, but there is still such a thing as being misinformed or naive about human psychology, and when I run across some of that I do not write a sneery response. Why be rude?
I don't perceive myself as having been rude, but merely frank - I stand by my probability estimate (very unlikely - which does include possible!). Sorry if that came across as rude. I think your tone and framing (refusing to reveal the idea itself, confidence that it could make money, asking someone to commit to implementing it, stating that it's "much better" than existing techniques despite a lack of demonstrated knowledge of existing techniques beyond having done a lot of searching) are all triggering the Crackpot Response Protocol for people, here.
I think you could have gotten a better response with an approach more like, "Hey, I had this idea for a way to change the expressions on faces, here it is: <description of idea>. Can anyone with experience in the field tell me if that's been tried, or why it would/wouldn't work?"
I'm rather curious to hear the idea. I'm not sure it's as easy as you think to turn ideas into products, or turn software into money, come to that. But I'm always interested to hear ideas.
> Mostly I get the feeling you're looking forward to teaching me a lesson in how people outside the field invariably come up with lousy ideas
That's a you problem. Jeering wasn't my intention and I deliberately tried to word my reply away from sounding like it was. In this case being hypersensitive has only cost you goodwill.
Since you admit you don't know enough about the field to be able to implement your idea, I'm curious as to why you seem somewhat confident (i) your idea would work better than what is currently available, and (ii) other people haven't already thought of it?
I do not think it is easy to turn ideas into products and software into money. I am pretty sure I'm right that the thing I've thought of does not exist, because I have done a very extensive search for it. But I am well aware that as someone not in the field I may not be right about how this could be done, or about whether doing it is a nightmare not worth the trouble, etc. On the other hand it's a free idea, and if I was in the animation field I'd at least ask to hear it. Ya never know.
But neither you, Quiop nor rebelcredential have expressed any interest in implementing the idea, should it turn out to be decent. Mostly I get the feeling you're looking forward to teaching me a lesson in how people outside the field invariably come up with lousy ideas. So I see no point in sharing the idea with you. If someone in the field shows some interest I'll describe it right here on the forum, though, and if they tell me it's not practical I'll certainly accept that.
Since rebelcredential also read my comment as "actually being snide," I accept responsibility for my poor choice of phrasing and offer my apologies. I was genuinely curious about the idea and wanted to know why you think people in the field would have missed it. (e.g. "I am a psychologist, so I have insights from my own field into the perception of facial expressions and I think computer modelling of facial expressions could be more effective if they incorporate these insights.")
I don't exactly think people in the field have missed it -- it's more that there's a lot of churn and change. There have been a huge number of sites opening up that offer a user-friendly interface for altering appearance. My idea is just an extension of something that's already being done. That is the reason I'm pretty sure it's doable -- not some delusion that I can intuit, without knowing how to code, that coding the thing I have in mind is pretty simple. I have looked *quite* extensively for sites or software that do what I have in mind, and can't find any, and that's why I'm pretty sure they do not exist. As for whether the thing I had an idea for would be widely interesting to people, it's hard to judge. Doesn't seem implausible to me, but it's hard to predict what the public will fall in love with and what they will ignore.
If your DM conversations don't end up leading anywhere, I'm sure you could start an interesting discussion in the next OT by describing your idea in more detail (assuming you're not too concerned about IP and money issues).
Concerned about IP and money issues? How can it possibly not be clear at this point that I am not concerned about either? I have probably said at least 10 times in the course of this long, unpleasant discussion that I do not view this idea as intellectual property, and that I'd happily describe it in detail here, publicly, if someone who works in the field showed some interest. As for money, I have also said multiple times that I get that the idea is unlikely to work, and if it works it's unlikely to be a big moneymaker. Also added that in any case I didn't think of myself as having a share in the profits. I just tossed out an idea that's a variant of something already done, so not a particularly original idea. The person who makes and advertises the thing would have put in many hours, and would deserve the money. All I said was that I thought it would be reasonable for the person to send me a small chunk as a thank you. (I had in mind something on the order of 1%, but certainly was not imagining formalizing that in a contract.)

And I have now sent detailed descriptions of the idea via DM to 2 people in the field who expressed some curiosity, and I did not ask them to keep the idea secret or to send me some thank-you cash if by chance the software made them a good amount of money. The only reason I didn't just post the idea here, as I'd said I would if anyone was interested, was that the discussion had become so unpleasant.

Also, there has been almost no expression of curiosity from the people posting. 95% of what I have gotten has been long, irritated explanations of how unlikely it is that my idea would work, and various bad judgments of my character for even *thinking* the idea might work. I have been called a crank, told that I think software professionals are idiots, accused of being ridiculously oblivious to various obvious things, of jealously guarding my idea, and of reneging on a promise to post it here. Until 2 people who work in the field put up brief posts expressing some curiosity and nothing else, nobody expressed the slightest curiosity about why I'm interested in facial expressions, where the idea came from, why I think currently available ways of putting expressions on faces are unsatisfactory, or what general approach I have in mind.
Anyhow, I appreciate you apologizing for being snide, and showing some interest now. I could not resist venting a bit, in the course of telling you why I have zero interest in posting more about this subject.
I'm aware this is well-trodden ground, but have we conclusively put to bed why it is that software/software development is quite so shit?
These are the reasons I know about:
Mental models and gaps thereof: details of the real system are complicated. They get hidden in libraries/frameworks that make things easier by hiding said details. New devs unaware of the underlying details end up doing inefficient things or reinventing systems they aren't aware already exist. This process repeats so you have layers on layers on layers and everything just seems to run slower and slower.
Leaky abstractions: these frameworks/libraries don't fully encapsulate the underlying model, so when things go wrong you need to examine (and understand) every layer down the stack, many of which you were never explicitly taught about because you weren't supposed to need them. More layers = harder time fixing bugs. (There's a toy sketch of this after the list.)
Docs: any lib/framework/component brings its own mental model, units of thought, and procedural knowledge (ie what actions/processes/patterns to follow when using it). Devs often don't even acknowledge these, and even when they do it well, communicating them takes a long time.
Dependency and fragility: stuff relies on an increasing number of other stuff, with the result that there's more and more to go wrong.
Bloat: new things are constantly being required and added that aren't fundamental to the job/important to the users. Both for end users - your new laptop is slow because it's trying to run a million new services that Microsoft has decided you will like - and for devs - that image carousel for your website brings with it React+tailspin+vite+webpack+didnt ask+dont care+touch grass.
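The promised toy sketch of the leaky-abstraction item, using stdlib sqlite3 (all table and function names invented): the friendly wrapper works, but quietly leaks its per-call query cost.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
""")

# The "convenient" layer: one object at a time, SQL hidden away.
def get_user_ids():
    return [row[0] for row in conn.execute("SELECT id FROM users")]

def get_orders_for(user_id):
    return conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()

# Looks innocent, but issues one query per user (the classic N+1):
totals = {uid: sum(t for (t,) in get_orders_for(uid)) for uid in get_user_ids()}

# The detail the abstraction hid: the database can do it in one pass.
totals_fast = dict(conn.execute(
    "SELECT user_id, SUM(total) FROM orders GROUP BY user_id"
))
print(totals, totals_fast)
```

Fixing the slow version requires understanding exactly the layer the wrapper was supposed to spare you, which is the whole complaint.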
Here's one: text files are a rather unnatural way to interact with code. Text is a serialization format, and a huge amount of the work of writing code amounts to moving from the text to a mental model of the runtime and back. Loading and unloading mental models this way is incredibly exhausting in the long run.
Compare to "working on a car"—you don't work on the "the blueprints for a car", you fiddle with one piece while interacting with the already-built other 99% of the car. In a new code base it's pretty hard to see all the pieces and how they interact—whereas when you pop the hood of a car, there it all is. Debuggers get a little closer, but not very—you can't feasibly rearrange the parts while the rest is running. Live-reloading tries to emulate the right idea, but it's still hopelessly stuck in the text-based paradigm.
A coding paradigm that was 50% closer to "popping the hood" would be a dream to work with, I think, if it could get over the huge barrier of "all our existing tools and mental models are designed to work with text files".
Oh, neat. I have a lot of old notes but it's a dormant project to me. Where my mind goes (/used to go) was towards a system with first-class constructs for:
* dataflow graphs. To the extent possible all "programs" would just be an introspectable dataflow graph, tho one could compile a graph into a native function for speed.
* "components"—like actors, or like "things you can point to under the hood of a car". These in turn would be wired together in a dataflow graph. Components live in a hierarchy of layers, so e.g. your "webserver component" has subcomponents like "API" and "datastore" who are wired together, and you can "work on" the API component with the webserver and store already running.
* DSLs or sub-languages aka "slangs"—a given component has an internal namespace which basically defines a specialized language, with certain constructs in-scope. E.g. an API server automatically has a sublanguage for "routing" in scope, and an API handler has a bunch of HTTP equipment in scope.
* programs are "submitted to a component". E.g. an API handler implementation is a "program running in an HTTP-handler component", and a routing table is a "program running in the HTTP server component" in a very limited language that can basically only bind regexes to handlers. This notion of "submitting" always takes for granted that the underlying component exists and is running, and you can repeatedly submit "programs" to the same running component. One would typically "work under the hood" of a single component at a time.
The underlying runtime model is "whatever it takes to support this", but I would prototype it with a relational DB backend and leave the impl abstract, allowing a system of microservices to be defined in the same schematic language...
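For what it's worth, here's a minimal sketch of roughly what the first two bullets could mean; every name is invented and it's a thought experiment, not a real runtime:

```python
# A toy introspectable dataflow graph with swappable "components".
class Node:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def run(self, cache=None):
        cache = {} if cache is None else cache
        if self.name not in cache:
            args = [n.run(cache) for n in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

    def describe(self, depth=0):
        # The graph is plain data, so tooling can walk it.
        print("  " * depth + self.name)
        for n in self.inputs:
            n.describe(depth + 1)

# Wire up a tiny "webserver" hierarchy: datastore -> api.
datastore = Node("datastore", lambda: {"users": ["ada", "bob"]})
api = Node("api", lambda db: sorted(db["users"]), inputs=[datastore])

api.describe()      # introspect the wiring without running anything
print(api.run())    # ['ada', 'bob']

# "Submitting a program to a component": swap a node's body while the
# surrounding graph stays wired up.
api.fn = lambda db: [u.upper() for u in db["users"]]
print(api.run())    # ['ADA', 'BOB']
```

The "slangs" and submission ideas would sit on top of something like this, with each component exposing its own restricted vocabulary.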
I'm seeing a lot of my own ideas here, which is a nice but odd feeling. I actually spent yesterday working on a (very crappy) dataflow planner thing as an experiment. (Conclusion was it needs more work.)
I ended up going in circles on what the DSLs should entail. A DSL implies a whole bunch of unseen background knowledge, which needs to be there when you pick it up. (Because what good are clever, domain specific nouns and verbs if the user doesn't know what they mean or, worse, assumes a subtly different meaning than the one the creator meant?) To fix this each DSL needs to come with all that explanation bundled in. But a standard wiki would end up being ignored and could struggle to communicate the important stuff, which is how I got onto my current preoccupation: better ways of notating and communicating "mental models" in general.
I get your submitting-programs concept - basically a system of actors where each actor speaks its own language. (You could call something built around that Babel, if only the name wasn't taken by an AST parser.)
What made you give up and move on?
One fundamental problem I see is, do you create this as an entirely new self-contained universe - and have to reinvent said universe from scratch - or do you try to allow including external things, and then find yourself having to make compromises with them that break the entire concept?
Ha, that's heartening. I kind of think there's a naturalness to the perspective we've both glimpsed, and attempts to iron away complexity in a lot of different programming areas tend to converge to a similar framework.
> What made you give up and move on?
Haven't been writing code in a few yrs. These kinds of ideas would arise whenever I was frustrated with my tools. I never really got past the taking-notes and brainstorming stage—I don't have a programming language background at all.
> do you create this as an entirely new self-contained universe
I imagine you:
* design the self-contained universe as an ideal
* but anything you build that's actually designed to be *used* has to be maximally interoperable with mainstream languages and tools. This might mean you implement the runtime interface in Python, or you run a Python interpreter inside your runtime, or you interface with the runtime over the wire.
> DSLs... explanation... standard wiki... mental models
I don't have a ton of answers here, but might be able to illustrate my thinking as follows:
One of the narrowest problems I wanted to solve was to improve on SQL for big data-analysis-style queries. Consider: a SQL query represents a dataflow graph—data flows from a bunch of upstream tables into a final view or query result. Every column in that final result query "knows where it came from"—e.g. a column `user_id` knows exactly what upstream tables it came from, and an `avg(sales)` column knows it's an average of whatever `sales` was. It has to, because these details become the runtime representation which the compiler actually operates on!
Now, it seems to me first that:
* our tooling should have access to that runtime representation, such that I can cmd-click on a column in the final query and my editor can show me that graph structure by which that field is generated
* the final query represents a dataflow graph, which I want to be able to use in other ways than just *running* it. For example, I could "host it as an endpoint"—and autogenerate API docs, where SQL column descriptions in the upstream tables "flow through" the datagraph to document the columns of my API query along with its lineages, types, constraints, FK relationships, etc. Or I could materialize a number of tables in a DAG (here I'm thinking of a DBT-style analysis workflow, if you know of it) and have each one automatically inherit lineage data. There are tools which try to layer in such lineage data later, but IMO it should be part of the native representation of all SQL.
All of this metadata is just data, but we're blocked from using it because we treat queries as text-files first and runtime representations second, instead of the other way around. Sort of?
I like SQL as an example because it's literally just a dataflow graph, and there are a lot of ergonomic issues that can be solved just by being able to "slice" the internal model along various axes.
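A toy sketch of what "every column knows where it came from" could look like if the query were data first and text second (invented classes, not any real SQL tooling):

```python
# Each column carries its provenance, so lineage falls out for free.
class Col:
    def __init__(self, name, source, parents=()):
        self.name, self.source, self.parents = name, source, list(parents)

    def lineage(self, depth=0):
        print("  " * depth + f"{self.name} (from {self.source})")
        for p in self.parents:
            p.lineage(depth + 1)

def avg(col, as_name):
    # An aggregate column remembers what it aggregates.
    return Col(as_name, "avg()", parents=[col])

# Upstream table columns:
user_id = Col("user_id", "users")
sales   = Col("sales", "orders")

# The "final query" is just a list of Cols, each with provenance:
query = [user_id, avg(sales, "avg_sales")]

# The cmd-click experience: ask any output column for its history.
query[1].lineage()
# avg_sales (from avg())
#   sales (from orders)
```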
Actual imperative code is more complicated. But still I think of an ideal where, for example, every variable that flows through my code is accompanied by its type information, constraints, and docstring. A function `def f(x: int)` with a docstring specifying the meaning and constraint values of `x` is an "input node" into a graph, and all the metadata—type, constraint, documentation—can flow with `x`, and if `x` is later exposed in an API say, the metadata is all there to be auto-filled. The only case where we actually toss out the metadata is when we compile to native code for speed—but the dataflow representation is primal.
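You can already approximate a sliver of this in today's Python with `typing.Annotated`; a sketch, where `Doc` and `Range` are invented wrappers rather than anything standard:

```python
# Metadata travels with the parameter and is recoverable by tooling.
from typing import Annotated, get_type_hints

class Doc:
    def __init__(self, text): self.text = text

class Range:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi

def f(x: Annotated[int, Doc("retry count"), Range(0, 10)]) -> int:
    return x + 1

# An API layer or doc generator can pull the metadata back out:
hints = get_type_hints(f, include_extras=True)
for meta in hints["x"].__metadata__:
    print(type(meta).__name__, vars(meta))
```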
I guess I'm starting from a "low level ergonomics" place. I don't really know how to handle high-level complexity, but I sort of imagine the same kind of "dataflow" concept at the level of interoperating components, and an editor which is specialized in viewing these graphs similarly no matter what level of abstraction they occur at.
Like, you spend literally days tracking down a bug in a complex software system, and when you finally find it: the guy who originally wrote the code left a comment that he doesn't handle that particular case. (This fact, of course, not being reflected in the documentation of the numerous layers of stuff built on top of the routine that doesn't handle that case). Also, leaky abstractions of course, but more like: life is too short to make the software correctly implement the abstraction.
I wish there were some way to graphically draw the "fitness for purpose" of a component, including all the moving parts, the context it needs to live in and the dependencies it relies upon, and the I don't know what you'd call it but the "envelope" of cases it does and doesn't handle. So we can tell these things at a glance rather than have to stumble across them at random.
Ostensibly, this is what type-definitions are for.
Unfortunately, side-effects exist. They're called side effects because instead of some interaction being an input or an output, the interaction goes *sideways* into a primordial soup of global state. And as a consequence of this, they don't get listed in the type-definition as God intended.
Some languages try to fix this by decreeing that all side-effects must go into the type-definition after all. Global State is now Local State. But now we have a new problem: each side-effect now needs to be included in the list of inputs for all downstream functions (as well as the outputs). Otherwise, the downstream functions won't actually pass a side-effect along the chain. So to fix *that*, we add an operator called "bind". Which, using the power of first-class functions, automagically reconfigures the type-definitions so the downstream functions actually pass along a side-effect (in parallel to the chain of "normal" inputs and outputs).
And huh, look at that. We just reinvented monads. So if you want to tackle this, it might be helpful to take inspiration from Haskell. Or maybe Eiffel, for its contracts.
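For anyone who hasn't seen that progression spelled out, a rough Python rendition (a toy illustration, not Haskell's actual machinery):

```python
# A "stateful function" takes a value plus the state and returns
# (result, new_state): the side channel made explicit.
def increment(x, counter):
    return x + 1, counter + 1          # also counts how often it ran

def double(x, counter):
    return x * 2, counter

# Without bind, every caller must thread `counter` by hand:
v, c = increment(5, 0)
v, c = double(v, c)

# bind does the threading, so downstream code chains like normal calls:
def bind(stateful_fn, next_fn):
    def composed(x, state):
        y, state2 = stateful_fn(x, state)
        return next_fn(y, state2)
    return composed

pipeline = bind(increment, double)
print(pipeline(5, 0))   # (12, 1): same result, plumbing hidden
```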
Another cause: Ideas pass through too many people and end up either disfigured or over-adapted.
The customer needs X, the product owner conceives W, the architect fixes it as Y, the BA describes Z, and the developer builds A. Now, either A doesn't do X (it's disfigured) or it does, but with all the idiosyncrasies of W, Y and Z in the way (it's over-adapted).
I'm not even saying that the layers are useless. They aren't, or we'd end up building shit that's awfully unaware of the rest of the industry or of the rest of the company. But it can sometimes turn a 2-month job into a 2-year job.
This is more general than software, and intuitively (to me) it happens in proportion to a single individual's powerlessness. Anything you can do by yourself, doesn't have to go through this process; the more people you need to help you, the more of this bullshit there will be.
To me a utopic outcome looks like our tools getting more and more sophisticated, allowing a single individual to feasibly take on larger and larger projects without having to involve other people. But I suspect I'm not very good at team collaboration, and maybe someone who was would regard this vision as a nightmare.
> To me a utopic outcome looks like our tools getting more and more sophisticated, allowing a single individual to feasibly take on larger and larger projects without having to involve other people.
Yeah, through this entire exchange, I was thinking "so GPT-5 or 6 will be a great thing for development, because it will be a single mind that can see all the code and dependencies down the entire stack, and can optimize an entire codebase as long as it's small enough to fit in the context and understanding window."
Instead of calling so many MB of different libraries to do simple things in a bunch of unconnected places, it can just write the simple function and get rid of a bunch of dependencies and vulnerabilities. There are probably architectural things it can do in terms of commenting and dependencies that will make it easier to surface bugs, or to describe and articulate the envelope concept you had upthread visually and textually. I think that's a pretty exciting area to be thinking about and working on right now.
I actually think the "envelope/mental model explaining a component" thing is going to be incredibly important - because when AI is the one creating the library, how are we going to follow along? We either need better explanatory tools or we accept that the AI is the new owner of tech and all we can do is try to manage it.
But right now I struggle a bit to think about the problem, because it feels like I don't have the right tools yet. To take a topical ACX example, I feel like a monk trying to diagnose demonic possession cases without the concept of "psychiatry".
I can tell you GPT 3.5 isn't there yet - every time I ask its opinion, we have a back and forth where I try to explain what I mean. No matter how I try rephrasing it, as soon as we get close, it crashes.
Tempted to add that not many fields have got away with «your system developed by us is vulnerable to known malicious attacks from the actors under sanctions from the government we share with you; we won't fix it, won't be responsible, and will legally prevent _you_ from fixing your own copy» for so long.
Oh, and of mental models — the mental model of a person who knows from the featureset how this things must have been done inside, and the mental model of a user throwing a cursory glance at the system… they are not very similar. And if you abandon the first one completely, your system gets too confusing to maintain, and if you abandon the second one completely, you can only have professionals willing to invest into training as users. So you cut uneasy compromises and the system breaks expectations of everyone who touches it. By design. Few people try to present fully disjoint views into the system while handling the differences of the models correctly via translation, fewer succeed…
I've wondered for a while about some kind of formal "analogy testing" process for this. A simplified model is an analogy to the real thing (think of your directory tree like "files" in a filing cabinet).
An "acceptable" analogy provides a simplified model without changing anything about the real behaviour. A bad analogy implies behaviour or logic that isn't there, or fails to prepare you for logic/behaviour that is.
I don't know by what process you could lay down or test these analogies. My impression right now is that all of this is unconscious and illegible.
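For what it's worth, one legible corner of this already exists under the name model-based testing: you write the analogy down as an executable model and property-test it against the real behaviour. A minimal Haskell sketch, assuming the QuickCheck library; the insert-vs-sort example and all names are mine, purely illustrative:

```haskell
import Data.List (insert, sort)
import Test.QuickCheck

-- The real component: ordered insertion into a sorted list.
realInsert :: Int -> [Int] -> [Int]
realInsert = insert

-- The analogy / simplified model: "inserting is just re-sorting".
modelInsert :: Int -> [Int] -> [Int]
modelInsert x xs = sort (x : xs)

-- An "acceptable" analogy agrees with the real behaviour everywhere
-- inside its envelope (here: inputs that are already sorted).
prop_analogyHolds :: Int -> [Int] -> Property
prop_analogyHolds x xs = realInsert x (sort xs) === modelInsert x (sort xs)

main :: IO ()
main = quickCheck prop_analogyHolds   -- any counterexample = a bad analogy
```

That only mechanizes the "implies behaviour that isn't there" half; I don't know of a way to test the "fails to prepare you" half, which is the part that feels unconscious and illegible.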
You're only looking at technical reasons, while some of them may be social.
Take institutional capture. This is obvious for proprietary stuff, but best observed in collaborative open-source projects the moment they get significant funding. The people in charge treat users as intruders, pursue their hobby horses at the expense of actually important functionality, and generally refuse to do any more work than absolutely necessary, but refuse to leave because they have funding, and it's hard to unseat them by forking when they're the ones who have funding.
Okay, I’ll be the one to touch the third rail. My advice to Dems on the 34 felony convictions: Do not spike the ball on this. Really. I would have CapLocked ‘do not’ if not for my own habit of dismissing CapLocked comments.
I’ve seen this dynamic play out on a smaller scale locally. I’d describe it but, as usual, I’m on my phone and I don’t want to tire my thumb. The result was not what the over eager wanted.
If the Dems think they are dealing with a 30% of American voters cult of personality now they can fully expect to have that number grow if they start to gloat about a conviction based on a novel interpretation of law against a former US president.
I'd expect that no matter what the median Democrat says or does, some combination of news sites and algorithms optimizing engagement with conflict, and right leaning news and political efforts seeking to optimize voter turnout with the same, will ensure that regardless of how large or small the contingent of "gloating" Dems are, right leaning voters will be sure to see them overrepresented as "how the other side is."
I wish more people felt as repulsed by their own side gloating as they do by the other side gloating.
This feels like as good a place to ask as any: what is the relationship between Skibidi Toilet and the song Skibidi by Little Big?
According to reddit:
https://www.reddit.com/r/youtube/comments/161u3d9/the_viral_skibidi_toilet_series_is_based_on/#:~:text=It%20is%20based%20on%20a,is%20what%20inspired%20the%20toilets.
As an adult in a household that is currently infected with skibidi fever, I will just say that I’m glad the original Little Big song isn’t getting played.
Many young people seem to think we live in hard times. Some of the book reviews reflect that. It's a popular meme: these are hard times.
Obviously, these people are crazy and have no sense of even recent history.
I think the reasons for young people thinking they live in hard times have a lot to say about the future of AI.
In spite of a massive increase in the standard of living for Americans objectively over the past 20 years, young Americans reject that narrative. I think the disconnect is that technology, despite changing life tremendously, hasn't improved it subjectively enough that people notice that they are better off.
This leads me to believe that the same will likely to be true when AI changes things massively on an objective level. The standard of living will improve but will hardly be noticed, because subjectively we are asymptotically approaching optimal conditions for humanity.
Although young people generations hence will continue to complain about the economy sucking, their real complaints are that technological advances aren't helping them achieve what makes most people happy: good relationships, family, interesting work and optimism.
It is a basic Stoic advice to consider that it *could* be worse; and if you know history, you know that most people actually *had* it worse.
Like, when people complain about all the inconveniences related to covid, I think about Black Death and conclude that we have it too easy. Most of your family survives the pandemic, and you complain about having to wear a mask for a few months? Seriously? Read "A Journal of the Plague Year" to get some perspective!
Old people remember the old times in person, but young people can get similar perspective from reading books about the old times. It is a very natural mistake to assume that the past was exactly like the present. My kids find it difficult to imagine a childhood without internet.
It is natural for humans to imagine a golden age in the past. Christianity believes that Adam and Eve lived in a paradise; Marxism believes that out ancestors lived in a perfect egalitarian society; Taoism believes that ancient people were all virtuous and lived in harmony with Tao; feminism believes that noble savages lived in enlightened matriarchal societies. The only difference is that young people these days seem to believe that the Golden Age happened in a very recent past, so they can accuse their own parents and grandparents of being right there and having ruined it by eating some forbidden fruit. But maybe even this fits into the general pattern of accelerating progress.
The last several generations have also perceived hard times as young adults. The core reason for this is that the transition to independent adult living is genuinely hard for most people. And it seems worse than it is because you're comparing your own lifestyle as a newly minted adult fresh out of school to that of your parents, who are 2-3 decades further along in their careers than you and have had a similar amount of time to build up a stock of household capital. And the other major baseline for comparison is "slice of life" sitcoms and other media depictions, which tend to show middle-class or struggling-young-adult characters with an unrealistically high material standard of living, particularly in terms of living space and dining-out-and-entertainment budget (cf. "'Friends' Rent Control" on TvTropes).
On top of this, the transition to independent adulthood has probably gotten significantly harder over the past couple decades. Credentialism makes it harder to get good entry-level jobs than it used to be, people are graduating with more student debt than was the norm in the 90s and before, and the cost of basic housing has been growing faster than the overall inflation rate.
I suppose I am talking about "popular vibes". I was a young adult in the '90s and don't remember the vibe being "these are hard times". I would say that a show like "Friends", perhaps because of its unrealistic portrayal of life, captured the positive vibe of the times. Indie movies like Office Space captured generational disenchantment with the workplace but also demonstrated that financial insecurity was not a big concern of twenty-somethings. By comparison, I can imagine a 22-year-old in 2010 watching Office Space and thinking "These punks are gainfully employed at desk jobs but are too spoiled by the '90s economy to appreciate it!"
I do agree that young adulthood is a hard time in life. But there's a difference between recognizing that versus thinking: "My grandfather's generation had it easier at this age. The 2020s are a bad time to be young."
I distinctly remember "these are hard times" vibes as a young adult in the early 2000s, and I'd assumed the vibes went back earlier than that. As a teenager in the 90s, I did notice a fair amount of pessimistic vibes in media, particularly stuff focused on young adults. Bringing up "Friends" again, remember how the theme song goes:
"So no one told you life was gonna be this way
Your job's a joke, you're broke
Your love life's DOA
It's like you're always stuck in second gear
When it hasn't been your day, your week, your month
Or even your year, but..."
It's a perky, upbeat song and the chorus is optimistic in tone, but it's optimistic about social support, not material conditions.
Later seasons of the show were a lot more materially optimistic than the earlier ones, IIRC. In the early seasons, Ross and Chandler had decent jobs, but the financial struggles of the other four main characters were major recurring themes, even though the depicted standard of living implied that Monica and Rachel were the best-paid line cook and barista in New York.
I feel like the transition happened somewhere around 1999-2000. I can't speculate as to underlying causes, but the economy wasn't doing as well as it used to. And there were some signs of "things going wrong", like Columbine, and the WTO riots, and Bush v. Gore, and then 9/11.
I think the zeitgeist of the early '90s was best captured by the Jesus Jones song "Right Here, Right Now".
A woman on the radio talks about revolution
When it's already passed her by
Bob Dylan didn't have this to sing about
You know it feels good to be alive
I was alive and I waited, waited
I was alive and I waited for this
Right here, right now
There is no other place I want to be
Right here, right now
Watching the world wake up from history
Oh, I saw the decade in
When it seemed the world could change at the blink of an eye
And if anything, then there's your sign
Of the times
https://www.bing.com/videos/riverview/relatedvideo?q=right+here+right+now+song&mid=FB06636940A1BD4F695AFB06636940A1BD4F695A&FORM=VIRE
"Many young people seem to think we live in hard times. Some of the book reviews reflect that. It's a popular meme: these are hard times.
Obviously, these people are crazy and have no sense of even recent history.
...
Although young people generations hence will continue to complain about the economy sucking, their real complaints are that technological advances aren't helping them achieve what makes most people happy: good relationships, family, interesting work and optimism.
"
Less of this please.
This is one of the least productive ways you could have phrased this. You certainly understand why young people might complain; modern technology doesn't help people's attempts to cultivate the things that make them happy: relationships, family, rewarding work, and optimism. You're also probably aware that there hasn't been a massive increase in the standard of living; it's been an ~18% real increase over 20 years (https://fred.stlouisfed.org/series/MEPAINUSA672N), and it's certainly probable that, given variance, some people are actually significantly worse off than they were 20 years ago, on top of all the social consequences you list.
We're all adults, we can discuss things calmly, you can just say that lots of people are frustrated with declining social relationships in a situation of moderate economic growth if that's what you actually believe.
What is your definition of “hard times?”
Without some kind of clear criteria, it just seems like an evergreen excuse to belittle and ignore other people's complaints, as long as we can define some time period in the past we can plausibly allege to be harder. Young people in the 1970s are whiners for complaining about inflation and the Vietnam War - those aren't "hard times," because previous generations lived through the Depression and WW2; young people living through the Great Depression and WW2 are whiners - those aren't "hard times," because the Civil War was far bloodier and those people didn't even have antibiotics and "medicine" meant hacking a leg off with an unwashed saw; and so on.
To a first approximation, my criterion for "hard times" is a time of war vs. a time of peace, plus the state of the economy. Right now we have peace in the USA (as always, there are places where that isn't the case), and the unemployment rate has been very low for quite some time.
So I believe the major years of the Vietnam War, WW2, The Great Depression, and the Great Recession were "hard times". The '80s, '90s, '00s, and this decade not.
But maybe I am missing something. What am I missing?
> Many young people seem to think we live in hard times.
I think ragebait is to blame.
People really like reading about how shit everything is because it's engaging and confirms that they're not to blame for whatever hardship they're experiencing.
Perusing reddit.com/r/all you'll often find memes about what things cost and how cheap it was to buy a house in the '80s (with completely made-up insane numbers).
I suspect this is in part driven by troll farms; it would make sense for Russia/China/whatever to try and convince Western youths that everything is hopeless.
(But I think it would probably happen organically either way)
I think it's possible that the milieu we live in is making people better off in specific ways and worse off in other possibly more salient ways.
Better off is the obvious improvements in tech, medical care, etc. Lots of things are now massively more convenient.
Worse off tends to be things that are really important pillars of wellbeing - security in your place in the community (the number of people who are self-employed or employed in very small, close-knit businesses is way down compared to before - most of us now work as individual employees, surrounded by coworkers who rapidly move on or are made redundant - this is not at all conducive to building a sense of security in one's place in the social hierarchy when the faces change constantly), we're very dependent on scarce positional goods (I don't think it's controversial to say both housing and jobs are now more scarce), and a lot of us sleep a lot worse (some of this is screentime, some of this is higher population density).
It's completely possible that the psychological impact of being secure in community, vocation, and shelter is higher than that of "nicer stuff". People aren't necessarily opposed to work that is hard or unpleasant if the work is also respectable and able to afford a living (if they were, no one would sign up to become a doctor, which is well known to be gruelling). But the availability of these jobs seems to be shrinking, and a lot of the respectable jobs have had the hard aspects get harder without the respectable aspect changing (this seems to have occurred in teaching especially).
Housing is probably scarcer. I don't see how jobs can be considered scarcer now with such a low unemployment rate for so long now. Jobs were truly scarcer during the Great Recession, 2009 - 2014. I would consider those years to be "hard times" for twenty-somethings.
Interesting point about working in "small, close-knit businesses". I suppose I don't understand why that would be preferable. Small businesses are more like families in ways both good and bad. Many small businesses are run by abusive owners, although many are run by wonderful owners. The modern HR-run corporation kind of irons out those extremes. You get neither too wonderful nor too abusive.
But perhaps you have a point about lack of community in a geographical sense. That is clearly something we have less of.
Jobs in general might not be scarce, but desirable jobs specifically are. It's well known that contract and temp positions make up a larger proportion of the workforce now.
Bouncing between companies every 2 - 3 years is not conducive to forming very strong bonds within an industry, and that's especially true if a big chunk of the workforce is doing that simultaneously.
Permanent roles aren't a guarantee that it's not going to happen, either, because those tend to be offered at very large organisations (> 300 employees), and frequent internal moves due to the company restructuring would be similarly destabilising.
A similar thing happens in housing - renters bounce from place to place due to having short (normally 1 year) leases, and it's not just that all of the neighbours are also on short term leases, you don't really get to become a "regular" at local businesses if you have to move again soon after (and some businesses, eg a supermarket, have high enough turnover that the checkout staff have changed like 8 times while you were there).
So that's my thesis - in the last 30 years, the two places most of us spend most of our time (work and home) tend to change too often for us to form lasting relationships with the people and businesses nearby. Even if you succeed personally in locking down these two locations, everyone else around you struggles to do that, so those people and businesses change constantly anyway, mostly to our social detriment, because it causes most of us to focus most of our social energy on a tiny group of people (your spouse and immediate family). I do think a lot of us no longer have "medium intensity" social bonds - people we know quite well and socialise with often, but who don't literally live in our house. Most of us only have the low intensity (colleagues and acquaintances of 0-3 years) and high intensity (spouse and kids if we're lucky) bonds. Young people don't really have the high intensity bonds yet, either.
Maybe a lot of us feel like our personal "tribes" are too small to feel safe, because it's just so difficult to build a proper tribe under the current economic conditions.
First, I want to agree with WoolyAl that my post could have been phrased better and more generously.
As to your post: I agree with your words, but how *are* we to measure reality as to how hard the times are if we don't consider the most common metrics such as the unemployment rate or that we live in a time of peace not war?
What measures do you have in mind that show the current times are relatively bad?
> What measures do you have in mind that show the current times are relatively bad?
Per Haidt's After Babel, suicides and suicide attempts are up among young people.
Another societal wide measure is fertility rates - pretty much across the developed world, fertility is below replacement and trending down.
I'm baffled by the idea that low fertility rates are compelling evidence of relatively bad times. It seems a pretty universal law of the last century that fertility rates decline as infant mortality declines and material well-being increases. Very plausibly, lower infant mortality and increased standard of living *causes* lower fertility. No one aspires to move to the places with really high fertility!
The big drops in infant mortality were a long time ago.
So were the big drops in fertility!
There's a gap between actually-achieved and desired fertility though - Zvi quotes these all the time, but there are surveys showing most developed-world women want ~2-3 kids and have ~1 kid. The gap would be the indication that current times are relatively bad - either financial, social, or other concerns are leading people to have fewer children than they say they want.
The point is that if you have some metric by which sub-Saharan Africa appears to be a uniformly better place than Europe, then you should not be using this as your primary metric for the question "are things Good or Bad?" If things were Good in sub-Saharan Africa and Bad in Europe, then migration flows would be going in the other direction!
I feel it's important to know how many children men want here. If most men want 0 and have 1, that's a perfectly normal compromise number.
"they have a smaller proportion of total household wealth than older generations did at the same age:"
They don't control for demographic changes.
In 1995 the share of young people was much greater than it is now. So it's hard to tell whether there's a real effect there when it comes to a person's access to the economy, or whether there are just comparatively more baby boomers now.
Why Bayes should be better known to lawyers and judges: https://unherd.com/2024/05/the-danger-of-trial-by-statistics/
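The central trap in that territory is the prosecutor's fallacy: P(evidence | innocent) is not P(innocent | evidence). A toy illustration, with made-up numbers of my own rather than the article's:

```haskell
-- Suppose a death cluster this unusual happens by chance to 1 nurse in
-- 10,000, and there are 30,000 nurses on comparable wards. (Both numbers
-- are invented purely for illustration.)
pClusterIfInnocent, nNurses :: Double
pClusterIfInnocent = 1 / 10000
nNurses            = 30000

-- Expected number of entirely innocent nurses who still see such a cluster:
expectedInnocentClusters :: Double
expectedInnocentClusters = nNurses * pClusterIfInnocent   -- = 3.0

main :: IO ()
main = print expectedInnocentClusters
-- So "1-in-10,000 odds of this happening by chance" does NOT mean
-- "1-in-10,000 odds of innocence" - the inversion Bayes guards against.
```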
On a related note, this is a decent read: https://www.barnesandnoble.com/w/constitutional-calculus-jeff-suzuki/1120724073
Thanks for the recommendation. I've posted about Lucy Letby before, and predicted more murders and attempted murders to be revealed at the upcoming enquiry. The New Yorker story raised some interesting points - my prediction is mainly based on similarity to Shipman, where the Police narrowed it down to 7 prosecutions, then after conviction opened an enquiry into the hundreds of other cases where Shipman probably murdered the patient. The NY piece contends that the Letby case is being approached by the Police in the way it is precisely because of a fixation on Shipman, in which case I'm being taken in by the Shipman vibe the Police are engineering. Private Eye have a good record of exposing weak convictions, and they have a piece on Letby ready to publish when the restrictions are lifted. So we'll see. But many of the arguments in the NY piece are lame....
...for example the doctors who snitched on Letby are depicted as being overly confident in their own opinion - which may be a failing in a journalist, but it's not clear to me that it's a fault in a doctor. Also, "but she had friends!!!" is a terrible argument - Shipman (sorry!) was a well liked and well trusted family doctor with a family of his own. Anyway, we'll see what the public enquiry brings out - I'm willing to be persuaded. But the very existence of the enquiry is probably bad news for Letby
👍
The logic of The War on Terror after 9/11 was "We fight them overseas so they can't fight us here (In the USA)"
I generally think that the USA's War on Terror was overkill and idiotic. We spent trillions of dollars on it that we could have spent at home on infrastructure. (There's an argument that outsourcing manufacturing to China wasn't so bad because China turned around and invested trillions in the US bond markets, which kept interest rates low for Americans. The problem was that instead of investing that Chinese money in the US, we blew it up in Iraq and Afghanistan.)
The War on Terror weakened US hegemony because it showed the US to be capricious, strategically weak (particularly in the case of Afghanistan), and politically divided. OTOH, we haven't had a foreign terrorist attack of note since 2001. So maybe that war against an emotion was sort of effective?
Does anyone today think the War on Terror was worth it? Even partially? Are there parts of it that are defensible? (I think dropping some bombs on the Taliban and assassinating Osama bin Laden were good things but not much beyond that.)
Was the war in Iraq worth it? Could it have been?
> Does anyone today think the War on Terror was worth it? Even partially? Are there parts of it that are defensible?
Aside from everyone else's point about the money: as a frequent flyer, I'll note that the time cost of the TSA's security theater has now consumed ten times more US lives' worth of hours than 9/11 itself took, and anytime the TSA is audited with red teams trying to get weapons through, it has a *95% failure rate*.
I went back and forth with another ACX poster on various assumptions, and we arrived at a floor of ~35k US lives lost in US citizen-hours due to the TSA, which is 10x the actual toll of 9/11, and whose ongoing cost (with a 95% failure rate, remember) wastes something like 800M US person-hours annually.
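For anyone who wants to sanity-check that floor, here's one way the arithmetic could go; every input is an assumption of mine for illustration, not a figure from that exchange:

```haskell
-- Convert aggregate wasted person-hours into "life-equivalents".
tsaHoursPerYear, wakingHoursPerLife, yearsSince2001 :: Double
tsaHoursPerYear    = 800e6           -- claimed annual person-hours lost
wakingHoursPerLife = 75 * 365 * 16   -- ~75-year life, 16 waking hours a day
yearsSince2001     = 20

lifeEquivalents :: Double
lifeEquivalents = tsaHoursPerYear / wakingHoursPerLife * yearsSince2001

main :: IO ()
main = print lifeEquivalents   -- ~36,500, in the neighbourhood of the ~35k floor
```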
>We spend trillions of dollars on it we could have spent at home on infrastructure
Surely most of that money was indeed spent at home.
>Was the war in Iraq worth it?
For whom? For Iraqis, who will never be ruled by Qusay Hussein? Very possibly. Just as Afghanistan was very possibly worth it for the hundreds of thousands of girls who were educated during the 20 years after the Taliban were forced out of power. It is impossible to say, because weighing the costs and benefits requires a normative judgment.
Most of the money was spent at home to produce what? Ammunition and other supplies to support an army overseas? IOW, disposable not durable goods. Not advanced infrastructure that could exist as part of the wealth of the nation such as cross-country power lines delivering the abundant solar and wind power from the desert side of the nation to the more populated, less breezy and darker parts of the country.
Defense contractors benefited from that money, but couldn't it have been better spent, even on defense technology, without that war? I don't know what we spent on bombs and missiles that were launched and detonated, but they were all a deadweight loss, not an investment in future military technology for a war that might be worth fighting.
And as I say above, I think giving a lot of money directly to those who lost jobs directly due to the China trade would have been much more worthwhile than spending it on those wars.
Yes, of course it could have been better spent. I was merely objecting to the implication that it was simply thrown away. (And you seem to imply that again, when you refer to supporting "an army overseas." It matters little where a soldier is located, if the spending is domestic.)
Note also that the only way to get to an 8 trillion cost is to include future spending on things like veterans' benefits.
And this study, which includes that, puts the total at 3 trillion by 2050. https://watson.brown.edu/costsofwar/files/cow/imce/papers/2023/Costs%20of%2020%20Years%20of%20Iraq%20War%20Crawford%2015%20March%202023.pdf
Finally, "we could have spent it on infrastructure" is a bit of a red herring, given how little of the federal budget is spent Theron. The vast majority is spent on providing services, which in your formulation is also wasted spending.
The War on Terror cost us about $8 trillion. I know that seems like an absurd amount of money, but it would buy you Microsoft, Apple, and Nvidia - the three largest American companies - and you'd still be missing Google, Amazon, Tesla, etc.
If you're going to spend that money on infrastructure, prepare to be disappointed. Take the Bay Bridge replacement in California: it cost $6.5 billion to replace one bridge, against an original estimate of $2 billion. At those rates, $8 trillion gets you somewhere between roughly 1,200 and 4,000 bridges, each taking about 11 years to get done. Or look at California's High-Speed Rail. In 2008, it was estimated that the project would take $33 billion to get rail from Anaheim to San Francisco. As of 2023, we've sunk in $20 billion and gotten zero miles of usable track. It's currently estimated to cost $100 billion to get from Anaheim to San Francisco. So I can totally imagine spending that $8 trillion on projects that never go anywhere (a la NEOM in Saudi Arabia).
Or put another way, if you took that 8 billion off the U.S. national debt and we lived in the counterfactual world where that money had never been spent, the current debt would be at its 2017 level. I was alive in 2017 and I don't remember feeling like the U.S. was just drowning in wealth - quite the contrary, it was received wisdom that the U.S. was drowning in debt.
We certainly didn't win the War in Iraq or Afghanistan. But the U.S. largely believed - foolishly perhaps - in the idea that ordinary people could, if given a chance, successfully govern themselves without dictators or autocrats. Unfortunately in the past few years, we're seeing the naivety of that concept -- authoritarianism is flourishing everywhere. But I don't fault the U.S. for believing and some small part of me still believes that we'll one day see the good ending to the Arab Spring.
>We certainly didn't win the War in Iraq or Afghanistan.
I never understand it when people say this, esp about Iraq. The regime that governed Iraq was completely destroyed. The current constitution of Iraq is the one that was written under US supervision. If that isn't winning, I am not sure what is.
It’s now dominated by Iran, which was probably not what Uncle Sam had in mind.
Leaving aside that "dominated" is a gross overstatement, so what? Isn't that evidence that the war was won? Because if the old Ba'athist regime had regained power (and note that "the old regime regained power" is precisely the argument underlying the statement that Afghanistan was a loss) Iran would obviously have less influence. More broadly, the overall US policy re the Middle East has not been "anti-Iran." It has been pro-stability. And the former regime in Iraq was the source of enormous instability, what with their propensity to start wars with their neighbors and all.
8 trillion buys a lot. The entire interstate highway system cost $618 billion after adjusting for inflation.
I think the best use of the low-interest loans from China would have been to compensate factory workers whose jobs were displaced by outsourcing to China. It would have benefited them, and it could have benefited trade policy going forward, as international trade wouldn't be viewed as such a negative thing by the masses if "trade-offs" != "the working class gets fucked".
High-speed rail in the USA is a boondoggle.
> if you took that 8 billion off the U.S. national debt
8 trillion, not 8 billion
Ack, sorry! The math should be correct. It's like 16 years' worth of US infrastructure spending, which seems like a lot, but really wouldn't change reality all that much.
I think it was mostly not worth it either, and that the wars were a mistake. I think there are still some contrarians here who support the Iraq war, but for the most part, yours is the standard take.
Perhaps theoretically if they had not invaded Iraq and focused only on Afghanistan, Afghanistan might have turned out better, but in our timeline it certainly didn't end well and it's hard to put much faith in hypotheticals going better.
I just watched some videos of the songs from Wish and wow. I'd already heard that Wish was criticized for lack of story, excessive references and awkward song lyrics, but I had no idea it had such a weird-looking art style as well. The characters are still the same detailed 3D models as in every past Disney movie, but the backgrounds are all 2D, making the whole thing look very stupid.
Also, is it me or is the exposition song just an inferior ripoff of the one from Encanto? It seems so similar.
It doesn't even have a pretty dress. It should be a no-brainer when making a Disney movie to put your heroine in a pretty dress so you can sell ten million copies of it to little girls, but apparently pretty dresses are now too heteronormative or male-gazey or something, so it's a shapeless purple long-sleeved tunic on our shapeless big-nosed heroine-of-undefinable-ethnicity.
The art style was supposed to be an homage to classic Disney movies like Snow White or Sleeping Beauty, with the simple 3D models meant to resemble a kind of combination of classic watercolor backgrounds and modern CGI. It was actually an impressive technical feat to pull off, but the problem is that it looks weird. File it under the category of things that are hard to do, and also suck.
The songs are all bad, mostly due to lazy lyrics.
100% agree.
The moment that got me was the "You're a Star" number. At the end of the movie you can sort of see why they needed it, or something like it, since "everyone being a star" is basically how they defeat the big-bad. But situated as it was in the movie it was just this random mediocre "everybody is special" pop number that just flew in out of left field.
I think that the word evil is a really good example of Sapir-Whorf type effects where a word that doesn't do a good job of carving reality at the joints leads to sloppy thinking (by contrast, "good" in the moral sense is less harmful).
I think that "evil" conflates (at least) two different things - "does not try hard to do what they consider to be the right thing" and "is wrong about what the right thing to do is".
Consider, say:
:- Serial killer Ted Bundy
:- Indian independence leader Mahatma Gandhi
:- A relatively principled politician from a party you disapprove of.
:- 9/11 suicide bomber Mohammed Atta
:- Me.
On the first scale, "tries hard to do the right thing", I would rank these people
Suicide bomber > Gandhi > Politician > Me > Serial killer.
On the second scale, "is correct about what the right thing to do is", I would rank them
Me > Gandhi > Serial killer > Politician > Suicide bomber
So what sorts of insights can we express with these two scales that we can't with just the word "evil"?
Getting low on either scale gets you into the territory we refer to as "evil", but in very different ways - I think that both Atta (an incredibly brave and principled man, demonstrably willing to give up his life for what he believed was right, but whose principles and beliefs about what that constituted were diametrically wrong) and Bundy (who for all I know may have had a perfectly good moral compass, but just chose to ignore it) did terrible things, but they did them for very different reasons, and the condemnations I would offer of them as people (as opposed to of their actions) don't really overlap.
By contrast, the only people who do really good things are people who score high on both scales. The people at the top of the "tries hard to do good" scale include some saints, but also a lot of really terrible monsters; no-one else's moral values align with mine as well as my own do, but I'm not a very good person because I don't make the effort and sacrifices required to be. Since "good" requires being high on both scales, it - unlike "evil" - does a good job of referring to a natural cluster in the 2D space they span.
This sounds pretty sensible.
One slight complication is that beliefs are often downstream from intentions. If you are a cruel person who enjoys the suffering of others, you'll find yourself drawn to ideologies that say that cruelty is justified. Evil ideologies attract people who are already predisposed to be evil.
> On the second scale, "is correct about what the right thing to do is", I would rank them
Me > Gandhi > Serial killer > Politician > Suicide bomber
You demonstrate a severe deficiency of hate for the political party "you disapprove of". I would recommend watching YouTube videos from the lunatic fringe of their side of aisle until you are cured.
Serial killers are ranked higher on the "correct about the right thing to do" scale though. So I guess murdering innocent people is less wrong than outgroup political ideas.
I think Bundy was chosen because he's generally understood to have known that his crimes were morally wrong but didn't really care.
There are other serial killers who justified or excused their crimes, e.g. Ed Gein or David Berkowitz, and thus would have been ranked lower on the "correct about the right thing to do" scale.
That makes more sense.
Notice that I'd class someone like Bundy at the very bottom of the "does what he thinks is right" scale, whereas plenty of politicians I despise try harder to do what they (in my view wrongly) think is right (as do suicide bombers, regardless of their motives).
Are you implying politicians you disagree with have worse moral compasses than suicide bombers? That's the only person he ranked them above. That's quite extreme
I think you're misreading.
I *am* implying that there are serial killers with better moral compasses than politicians I disagree with (and possibly even politicians I broadly agree with), the difference being that the serial killer knows right from wrong and chooses to do wrong.
Well, not ALL suicide bombers, of course. Some of them probably have the same political views as said politicians. But I imagine suicide bombers to be more likely to be of the political persuasion NOT in power, and that might be skewing my representative instance morally-rightward.
At any rate, my point is that the hypothetical politician, by construction, is selected to have a "bad" moral compass. The hypothetical suicide bomber, in contrast, is only selected for passionate belief and a willingness to die for SOME cause.
The post didn't talk about a "hypothetical suicide bomber", it said 9/11 hijacker Mohamed Atta.
The top-level one, yes, but not the comment I was directly replying to, which spoke of "suicide bombers" more generally.
I stand by my reply to the original post.
I agree with you to the extent that most people use good and evil the way you defined. But they aren't treating the two terms symmetrically. In this framework, being good requires both intent and action, whereas evil only requires one or the other. This is why it seems like good is better defined and more exclusive. Basically, evil is over-defined as everything not explicitly good.
Good intent, good action > Good
Good intent, evil action > Evil
Evil intent, good action > Evil
Evil intent, evil action > Evil
If the terms were treated symmetrically, both good and evil would be narrowly defined as good acts with good intent and evil acts with evil intent, respectively. And both would be useful descriptive terms.
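The asymmetry is crisp enough to write down as a classifier; a throwaway Haskell sketch (names mine) of the folk usage being described:

```haskell
data Valence = Good | Evil deriving (Eq, Show)

-- The folk classifier: "good" requires both components,
-- "evil" is everything left over.
judge :: Valence -> Valence -> Valence   -- intent -> action -> verdict
judge Good Good = Good
judge _    _    = Evil

main :: IO ()
main = mapM_ print
  [ (i, a, judge i a) | i <- [Good, Evil], a <- [Good, Evil] ]
```

A symmetric treatment would instead return something like a Maybe Valence, leaving the two mixed rows unclassified, which is the version where both terms stay narrowly defined and useful.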
Interesting way of thinking about this, but there needs to be a "neutral" possibility for the "action" label.
And then on the "intent" label an unstated variable in how people think/talk about this is degree of selfishness. How people judge others' purely-self-interested actions varies enormously, so much so that it seems like a significant confounder to this framework's real-world usefulness.
I have a silly question (theory?) about the mechanism of the antidepressant mirtazapine.
For background I've been pretty severely depressed for a while (5+ years) and have had allergies/asthma/eczema/etc for all of my life; I had pretty severe allergies and asthma as a child, ended up in the hospital semi-frequently. I've read the theories that depression is linked to inflammation or whatever and there seems to be a reasonably robust association between asthma, food allergies, and depression risk.
At any rate I went through four separate medications before trying mirtazapine, mostly SSRIs, and they did absolutely nothing except give me some mild side effects. Ditto with therapy. Then I tried mirtazapine and it was magical; felt better after two weeks than I had in many years, and it's lasted for a while (more than a year now).
Of course this is completely anecdotal data. But I was reading up on the mechanism of mirtazapine, mostly out of curiosity, and I noticed that in addition to the main antidepressant mechanisms of α2-heteroreceptor antagonism and 5-HT2 & 5-HT3 receptor blocking, it's an extremely potent antihistamine. I'm curious if the antihistamine effect in itself is helpful for depression (and is maybe related to the fast onset of effect for mirtazapine). I can't find any studies for this. Has anyone tried injecting rats with histamine to see if they get depressed? Maybe in people with a long history of severe allergies/asthma it makes sense to prescribe a TeCA first? Would appreciate thoughts from someone who understands psychopharmacology better than me.
Out of curiosity, did you notice any changes in your allergies/eczema/asthma?
I don't understand it better than you, but I do have some thoughts for you: I doubt you will find much by straightforwardly researching your exact question. But I just went on Google Scholar and searched "antidepressant effects antihistamines" and all kinds of stuff turned up. I think I also saw some stuff about antidepressants as a treatment for allergies, too. I'm guessing you will get confirmation that your basic idea is plausible. So then you might just try to figure out ways to test the idea on yourself. If you expose yourself to allergens, do you get a bit depressed? Does a course of allergy shots have an effect on your mood? Does adding a bit of Benadryl to what you're already taking make a difference? (Check to see whether this is safe before trying it, though.) I think you have an unusual and hard-to-treat kind of depression, and it would be good to empower yourself by figuring out as much as you can about how your system works. You can't count on psychopharmacologists to do that, or to be up on the research. Many seem to have very little intellectual curiosity. Anyhow, congratulations on finally finding something that works.
The Israel-Palestine situation has gone through several months' worth of events in the last week, here's a summary, as much for my own benefit as anyone else interested.
(1) On Friday, the ICJ ordered Israel to halt the operation in Rafah.
From [1], the 25:30 mark:
>>> By 13 votes to 2: [The court orders Israel to] immediately halt its military offensive, and any other actions in the Rafah governorate, which may inflict on the Palestinian group in Gaza conditions of life that will bring about its physical destruction, in whole or in part.
=====================
(2) Israel has interpreted (1) as saying that the military offensive should only be halted if it threatens to inflict genocide on the Palestinians; i.e. instead of understanding the sentence structure as "(a) halt the military offensive, and (b) any other action which may ...", Israeli politicians and media appear to have deliberately understood (1) as "(a) halt the military offensive and any other actions that satisfy (b), (b) that which may ...". They then declared that the military offensive in Rafah doesn't satisfy (b), and thus won't be halted.
It's notable how nearly every international newspaper understood (1) to mean the immediate and unconditional halting of hostilities in Rafah:
-- (2-a) NYT: U.N. Court Orders Israel to Halt Rafah Offensive [Subtitle:] The International Court of Justice ruling deepens Israel’s international isolation, but the court has no enforcement powers.
---- (https://www.nytimes.com/2024/05/24/world/middleeast/icj-ruling-israel-rafah.html, https://archive.ph/11UIA)
-- (2-b) WaPo: U.N. court order deepens Israel’s isolation as it fights on in Rafah [Subtitle:] Though a rebuke to Israel’s conduct of its war, the World Court ruling will be difficult to enforce without the backing of the United States.
---- (https://www.washingtonpost.com/world/2024/05/24/israel-rafah-invasion-icj-ruling/, https://archive.ph/K2Iqq)
-- (2-c) CNN: UN’s top court orders Israel to ‘immediately’ halt its operation in Rafah
---- (https://edition.cnn.com/2024/05/24/middleeast/israel-icj-gaza-rafah-south-africa-ruling-intl/index.html)
-- (2-d) Reuters: ICJ Gaza ruling: Israel was ordered to halt its Rafah offensive and open the Gaza-Egypt crossing for aid
---- (https://www.reuters.com/world/middle-east/icj-live-court-rule-israels-offensive-gaza-2024-05-24/)
Four of the ICJ judges - 2 of whom were dissenters - supported Israel's ass-backward interpretation in public statements, while 1 (South African ad-hoc) supported the mainstream interpretation, and 10 judges stayed silent.
=====================
(3) The ICC hasn't yet granted arrest warrants against Netanyahu and Gallant. This is within the typical range: Putin's warrants took 1 month to grant, while Omar Al-Bashir's (Sudan's dictator) were granted in 9 months. It's unclear yet how the ICC classifies Netanyahu, and whether further developments will accelerate or decelerate the granting of the warrants.
=====================
(4) On Sunday, Israel torched a safe zone in Rafah. It claimed it was targeting 2 Hamas officials; initial figures say that 45 Palestinians, 32 of whom were children, were burnt to death as their tents caught fire from the air strike [2]. The IDF alleges that it used lighter ammunition to strike a nearby location, and that the fire was instead the result of shrapnel from that attack hitting a fuel tank.
This attack received widespread condemnation from outside Israel. The EU is reportedly mulling sanctions, pushing the stricter interpretation of the ICJ ruling above, which means immediate and unconditional retreat from Rafah. Amnesty International [3] further called on the ICC to investigate the incident among the war crimes it's investigating.
Meanwhile, inside Israel, some journalists have celebrated the torching and likened it to the Lag Ba'Omer bonfire [4] (a ritual celebration of a Jewish holiday of the same name that fell on the day of the attack, usually observed at Mount Meron in the North, but this year celebrated in East Jerusalem's Shaikh Jarrah neighborhood).
=====================
(5) Hamas claims to have ambushed and captured soldiers in Jabalya. The IDF denies the validity of the claims, but didn't offer any additional details or explanation of Hamas' footage. If true, it would be the first time that new hostages were added to Hamas' bunkers since October 7th.
=====================
(6) 2 Egyptian soldiers were killed in an exchange of fire with the IDF in Rafah, one immediately by a sniper shot, and the other due to injuries. Both militaries are conducting investigations, heavily restricting information, and issuing few public statements.
=====================
(7) On Saturday, a video appeared of masked IDF personnel threatening Yoav Gallant with disobedience should he order a retreat from Gaza and/or hand over the territory to any Arab-affiliated government. The video was shared by Netanyahu's son, to widespread outrage and condemnation in Israel.
[1] https://www.youtube.com/watch?v=V-G8aj3CnCk
[2] https://www.youtube.com/watch?v=IQl9MrQ2oUI
[3] https://www.amnesty.org/en/latest/news/2024/05/israel-opt-israeli-air-strikes-that-killed-44-civilians-further-evidence-of-war-crimes-new-investigation/
[4] https://www.haaretz.com/israel-news/2024-05-27/ty-article/.premium/right-wing-israeli-journalists-celebrate-rafah-attack-likening-it-to-lag-baomer-bonfire/0000018f-b983-dca9-a5cf-bd832e6e0000, https://archive.ph/jQWWu
I'm not making any comment on the underlying issues, but going solely off of the sentence you quote, I would agree with the Israeli interpretation. "The court orders Israel to immediately halt [its military offensive, and] any [other] actions in the Rafah governorate..." Grammatically you should be able to remove the bracketed section and have the sentence still make sense. But, at least in my experience with English, "halt... any other actions" is almost always followed by a "that" or "which" qualifying what subset of all possible actions is prohibited.
Again putting aside the object level and only focusing on the grammar, your post still makes no sense to me. That's not how English works at all, to the point where it is difficult to see how someone could interpret it in the way that you did unless they really really want to and are just looking for a fig leaf.
If you say "I want you to stop eating cows or any other animals that chew their cud", it is simply not reasonable to respond "cows don't chew their cud so that means we can eat them". The word "and" includes *extra* stuff, it doesn't limit that which is explicitly named. "Any other" also implies that the description of the second part describes the first part, but simply disagreeing with that implication doesn't mean you get to throw out the explicit plain meaning of the words.
I will admit that there's an extra comma before the "which" in LHHIP's quote which is pretty weird and shouldn't be there, but even with the comma, the rules of English just don't allow for an alternate interpretation here.
> there's an extra comma before the "which" in LHHIP's quote which is pretty weird and shouldn't be there,
Yes, I originally wrote the quote without the comma, then - for honesty's sake - I went and checked the official ICJ transcription [1], which does include the comma, a fact that Israeli media like Times of Israel has exploited to peddle their bizarre interpretation.
But yes, any application of either common sense or the principle of Relevance from Pragmatics would immediately reveal that the court didn't bring up Israel's offensive in Rafah for fun, or just because the judges happen to be fans of military strategy. There is no world in which any remotely eloquent adult uses language like "I hereby order you to stop doing the extremely specific thing X, and any other things, which have the trait of being Y" to mean "Well, you can stop or not stop doing X, depending on your own interpretation of whether it has the trait of being Y".
[1] https://www.icj-cij.org/sites/default/files/case-related/192/192-20240524-ord-01-00-en.pdf
I completely misunderstood your argument about what "which" was doing. After re-reading the sentence like 90 times I'm convinced by you and Lapras's take. The second comma threw me off.
Isn't it that "extra" comma which makes all the difference, though? It sets off "and any other actions in the Rafah governorate" as its own, bracketed clause which can be removed from the sentence.
Like what if I were to require that you "immediately halt housebuilding, and any other actions on your property, which may cause harm to the local endangered beetle population", and you had a method of housebuilding which ensures that the beetles stay safe? My read would be that you can continue housebuilding using that method.
In your example, the natural interpretation would still be that you have to halt housebuilding. But even if you did change up enough to make your interpretation viable, the only reason that works is because "housebuilding" *could* be interpreted as an indeterminate collection of actions which could be further narrowed down.
In the original example, it specifically says "its military offensive", that it is referring to a specific thing, not an indeterminate collection which could be further narrowed down. In order to make it possible to interpret the way that Israel wants to, it would have to be changed to say "offensives" rather than "offensive" as well as the other changes.
I understand what you're saying.
So I guess what we're looking at is an attempt by the judges to craft a vague and ambiguous sentence that would allow as many of them as possible to sign on to it, but which ultimately wasn't all that successful.
I've noticed Google Maps gets things wrong more often than it used to, with traffic backups that don't exist or missing ones that do. That's not even counting when it gets the best route to a destination wrong.
In Michigan, I-75 currently has construction between Detroit and Flint, and going north it actually consistently advises one to exit the freeway and enter again afterward, even though the freeway is actually open and running fine. If you follow its advice you will take longer to get to your destination.
That doesn't even count the time it routed me to a strange area instead of where I wanted to go.
I've gotten the distinct impression Google is trying to squeeze more revenue out of everything these days, and I'm not sure how any of these inaccuracies are helping it to do so.
Is anyone else getting the impression Google Maps is getting worse?
Where does their data come from? Just people with the app installed and permissions granted?
It's not just Google - there are companies that buy and aggregate geolocated traffic data (with the ultimate data coming from hundreds of different apps, so broader penetration than Apple or Google), like Airsage, after which anyone can buy the data.
I don't know if Airsage has a real-time data stream, I don't think they did when I was using them 5 years ago, so maybe the real-time traffic stuff is confined to Google and Apple. But I thought I should point out that geolocated data isn't special or controlled or hard to access, anyone with money can get it, and with broad penetration into any given population.
Certainly a large proportion of phones have this (I assume), but I suspect also, based on some directions the app provides, that other entities, like the government, are providing routing data. So my route which was low-traffic but not recognized as a route by Google may have had a "road closed" entry added, even though the road is open.
I've had trouble with it for a while, but more with the road information than traffic. In Seattle (hardly a backwater), it told us to turn left at an intersection where this was disallowed. The lane information is frequently out of date, and it doesn't pick up on closed roads as much.
It wouldn't surprise me if they've got their AI making things up for it now. Their AI was their first response to a Google Search yesterday.
Yes, Google Maps has been deteriorating rapidly, and I switched to Apple Maps (yes, given their early history I was very reluctant to) several months ago. Apple, to their credit, has made tremendous improvements to the maps and driving directions. Between using Duck for search and now Apple Maps for driving, my only remaining engagement with Google is email. That one is hard to break away from....
Unfortunately, DDG sucks nearly as much as Google-a-year-ago now. I've personally been using SearXNG, a federated open-source search engine that aggregates results from multiple engines and has no ads or trackers.
It's like Google was back when it was useful, I don't even need to prepend "forum" or "reddit" to every search to get real results.
There are a number of URLs and browser plugins you can use for SearXNG - I use paulgo.io as my go-to URL in Safari on my phone and a Firefox plugin to make it the default on my laptop.
Thank you, I’ll give it a try.
re Short and Pope, someone once asked me why they were called Child Ballads when they're obviously not for children. I said, "Are you making a joke or is that a serious question?" because I really couldn't tell. It was a serious question. And the answer is, they were collected by a man named Child.
I keep checking fivethirtyeight, expecting them to have started modelling the 2024 election in earnest, but they haven't. I feel like they normally have by this point in an election year, but I don't know the exact dates they started in previous cycles. Are they just holding off because they don't like the answer?
I'm late to the party, but I'd like to say that you should stop checking FiveThirtyEight. Here's Nate Silver himself, explaining how low they have fallen:
https://www.natesilver.net/p/polling-averages-shouldnt-be-political
FiveThirtyEight is Nate Silver and his models. Without Silver and his models, there is no FiveThirtyEight.
To chime in on the same theme, I think this is a case of "follow the person, not the brand". I trust Nate Silver to be relatively accurate and impartial, but the brand "fivethirtyeight" is only as good as whatever demon is possessing it.
Thanks for all the replies, I had no idea that Nate had left 538. (Actually now it sounds familiar but I'd forgotten.)
I'm actually surprised they're not leaning even *more* into the modelling, though, in his absence. Maybe the remaining bozos can't come up with a model as sophisticated as Nate's, but surely they can make a dumb one?
That does seem like Disney's style these days. Maybe they think a blog is fine? But there's got to be a few statistics folk who are also into politics and who think that they could do as good a job as Nate Silver. Putting a few of them in charge seems like a no-brainer.
the "538 Model" is Nate Silver's IP. When ABC News made the inane decision to lay him off, they lost the model as well.
Nate has talked on his Substack about reducing the scope of the model, since (paraphrasing) why publish a constantly-updating model if the vast majority of its audience (*especially* the self-assured pundit class) are just as probabilistically innumerate as they were in 2016?
I've seen a fascinating discussion of the long-term prospects for the 2024 British election.
https://andrewducker.dreamwidth.org/4434171.html
They do seem to have gone downhill since Nate Silver left and Disney took them over. His new Substack doesn't seem to have started coverage yet either, but that is probably because he is still (or was, as of a post on 16th May) working on the model:
"Also, I’m finally taking some tangible steps to get the 2024 election model ready, interviewing finalists for the position I’m hiring — and later today, I may even (gasp) write a few lines of code."
April post announcing his plans for this year's election:
https://www.natesilver.net/p/announcing-2024-election-model-plans
So looks like we'll just have to wait and see?
FiveThirtyEight seems to have changed over the past year. I know Nate Silver is no longer working for it, and I think he owns the rights to the models; that's probably the biggest reason they haven't started modelling already. Nate Silver has a Substack of his own now, so it might be worth checking that out to see if he does a similar model there.
Sometimes when I scroll through Substack comments on mobile, *something* triggers popups prompting me to subscribe to a comment author's blog.
Is it possible to hide or disable those?
It seems impossible to hide them once triggered unless I reload the whole webpage.
Are you getting that on ACX?
Yes, I also regularly get this in the comments section when scrolling down. I'm not sure what triggers it - I think clicking on some part of a comment in a certain way when on mobile. It's fairly intrusive.
If you hover over someone's name or icon, a modal will appear with more info about them and buttons to follow/subscribe.
On mobile this happens when you touch their name I think, so probably when scrolling you can trigger this accidentally and maybe there is a UI bug where it doesn't go away without direct input.
Yeah, the same happens for me as well. To get rid of it again I do the following:
1. Carefully tap some spot within the popup that does not trigger anything (not a link or button) and without scrolling
2. Carefully tap some spot outside of the popup that does not trigger anything
On desktop the popup appears when you hover over the blog name to the right of the username (if the user has one). On mobile you have to long-press it just the right way to get it (at least on Android, don't know about iOS).
In my experience, it happens whenever you view a new person's substack for the first time and scroll halfway down if you aren't logged in. They're really annoying.
Why not monorails?
Monorails have a reputation as a white elephant of a transport system which seemed like a good idea in the mid 20th century but which failed spectacularly everywhere they were tried. But they still don't seem like an obviously bad idea... you can build them in an established city with a small land footprint, they're quiet, they run on electricity, they don't get stuck in traffic, and they're pleasant to ride on. Why have they failed to find a use case outside touristy niches?
(Serious discussions please form a line at my left, "Marge vs. the Monorail" references please form a line at my right.)
Seattle has a nice one, but went for light rail instead of expanding it.
I just remembered the Transrapid https://en.wikipedia.org/wiki/Transrapid magnetic monorail. It operates in Shanghai, but never got off the ground anywhere else.
People were quite against it when it was considered in Munich. Reasons were: Too expensive, not compatible with any other means of transportation, NIMBYism, high energy consumption, too much associated with Edmund Stoiber who made a fool of himself when he tried to advertise it: https://www.youtube.com/watch?v=bMUxRA4B9GE
Back then, I was against it too, but now I feel it would have been cool.
not cost effective
I read through the wikipedia article on monorails, and... I'm not sure I see what notable advantage monorails have over two-rail designs. This is probably partly just a limitation of the wiki article, which is really focused on history.
But it seems to me that many of the advantages you note (run on electricity, don't get stuck in traffic, small land footprint, pleasant to ride on) are all shared with any other electric elevated light rail system. Is the advantage here actually in the monorail? Or is it just easier to make an elevated monorail than an elevated two-rail system for some reason?
> Or is it just easier to make an elevated monorail than an elevated two-rail system for some reason?
From my cursory reading, that's exactly it. Elevated monorail tracks are cheaper than elevated two-rail tracks because of the smaller footprint. Unfortunately, that's their *only* advantage. In particular, monorails are more expensive for ground-level or below-ground tracks, which are a lot more common than elevated tracks.
It must depend on how it's implemented. In Detroit, we have the Q-line, which is almost, but not quite, a huge waste. In support: it's nice to use in inclement weather, and probably for disabled and/or elderly people who cannot walk far or well. Against this: it is actually faster downtown to walk where you're going unless the train happens to be in sight; "don't get stuck in traffic" is wrong because people park in front of it (surely just "for a minute"); they break down; and the time estimates at the stations can be wildly wrong.
I never paid for a ride on it, as I worked for Rocket Mortgage, who probably paid for all rides I took.
The Peoplemover can be a better option since it can't be stopped by traffic (it's up on supports, as a kind of 2nd floor), but the area it serves is awfully short-range, and one could probably walk anywhere in its service area faster, once you include going up to it and coming down at your station.
“ seemed like a good idea in the mid 20th century but which failed spectacularly everywhere they were tried.”
Just like communism!
25% warning, less like this please.
Also, to be fair, fascism.
Indeed. Also Scott, I'm sorry. It was a spur of the moment thing and I knew right after I closed the tab that I'd crossed the line a bit. I know it's not good to bring politics into a non-politics thread nor to write lazy one line potshots against whichever outgroup.
Fascism seems to collapse into democracy sooner or later. See Spain, Portugal, Chile, Argentina, etc.
The same reason we have ICE cars instead of steam powered cars? There was a time when both systems were relatively viable, but more people chose ICE. With monorail, everyone chose trains with double rails instead. Now if you want to build monorail, you need separate tracks and trains and maintenance facilities. Basically the whole system is more complex and expensive than a normal train, even though the technology is not "worse".
Wait, are you saying that ICE superiority over steam engines is down to contingent factors?
I haven't looked into this but having to put water and coal into the vehicle instead of pumping fuel seems way less convenient
Also steam engines seem less miniaturizable?
But please convince me of the opposite and let me dream of PM-2.5-saturated steampunk uchronias
Steam and even some kinds of electric car were pretty popular circa 1900. The biggest problem was not the coal/wood fuel, but the steam engine had to constantly be supplied with fresh water. A big advantage was the lack of transmissions. Steam power could be used to spin the wheels directly, and had consistent torque generation. There was no need for complicated gear ratios to manage power. Steam cars were like the Tesla of 1900, in that they had smooth constant acceleration.
What really started the decline of steam cars was the adoption of electric starters in ICE cars, which replaced the hand crank. This made ICE the all around most convenient system. From a thermodynamic perspective, an external steam engine would never be as efficient as an internal combustion engine. But it's hard to say what steam engines would look like today if we had kept using them. After all, ICE technology has been continuously improved for the last 120 years.
Steam based thermodynamic cycles doing work are ubiquitous in power plants and way more efficient there than any mass produced internal combustion engine, but the difficulty is miniaturizing it without loss of efficiency. Condensers tend to be very bulky.
I think the problems with steam cars were pretty insurmountable once someone figured out how to make a decent ICE.
Consider the locomotive -- steam technology had a massive head start and an incumbency advantage here, but ICEs quickly displaced steam once decent ICEs started coming along. ICEs were much more efficient, and required much less maintenance and attention. Same deal with ships.
In automotive applications the advantages of ICEs are even larger. It's okay if you need to spend half an hour heating up your boiler before you move your locomotive, but pretty inconvenient each time you move your car.
Insurmountable is a stretch I think, and the ICE cars of the same period had plenty of problems too. Condenser systems could recycle the water to extend refills to about 1500 miles, although this added weight. Some of the later kerosene fueled steam cars got 15 miles per gallon, which was comparable to ICE cars at the time. There were flash boilers, powered by diesel or kerosene ignition, that could heat enough steam in 90 seconds to power the vehicle long enough for the main boiler to warm up. I imagine modern electrical systems would also work well in a hybrid. Electric powers the start-up until the steam heats enough to take over, and then the steam engine recharges the battery.
There were also ways the steam car was distinctly superior to the ICE car. The lack of a clutch or transmission made steam vehicles much easier to drive, and the simple design lasted much longer. There are steam cars with over 500,000 miles on them still in good condition, without anything other than normal maintenance. Which is unthinkable for ICE vehicles, unless they get the Theseus's ship treatment. Steam engines are also much quieter, almost silent, and don't produce nearly as much exhaust.
The real nail in the coffin was that, by the time all these kinks were worked out, Henry Ford was rolling ICE cars off the assembly line at a rate that dominated the market. Steam cars remained in the realm of a novelty for the rich.
Although I think steam engines could have remained viable for cars, there are some areas where they wouldn't work, mainly because ICEs have a better power/weight ratio and are easier to miniaturize. I struggle to see how a steam powered airplane or leaf blower could work.
Makes sense, thanks!
I don't have specific knowledge here, but to give you some hope, I don't think the coal has to be a part of it.
My first thought was that the monorail in Chiba seems reasonably successful.
A quick search suggests that the Haneda to Hamamatsucho monorail is very successful, and Chongqing operates a pair of well used monorails.
Maybe Disney and world expos have unfairly tarnished the monorail?
Autocorrect seems to hate that word, too.
Chongqing is always a weird case when it comes to structural engineering, because it's basically all mountain - go look up a video of someone driving through the city. I'm not surprised monorails work there - the geography basically makes elevated rail mandatory and monorails are the cheapest way to build elevated rail.
It's cheaper to build and maintain ground structures wherever that's an option, though.
Chongqing looks like it was designed by Escher. I have no idea how anyone gets around it.
Here's the first article I found when googling this: https://ggwash.org/view/67201/why-cities-rarely-build-monorails-explained. It seems pretty convincing to me.
Thank you for this interesting article! It mentions Wuppertal; the thing is that around 1900, Wuppertal was very very rich and able to afford a spectacular form of suspended train.
But the first two reasons aren't really valid for Gyro Monorails: https://en.wikipedia.org/wiki/Gyro_monorail
What about these?
That's an interesting concept I hadn't heard of before. Looking at the Wikipedia page presents some obvious immediate issues though.
1. This thing has never been built beyond a prototype before. Even if it were a good idea, that would mean that it's an option for 20 years from now, not today. But the fact that it's never actually been built suggests that there are major problems of some sort with it. Not every bright idea in one guy's imagination turns out to make practical sense.
2. As Wikipedia points out, every single car needs to have an active gyroscope system. I'm guessing that increases costs and fuel usage a lot.
3. There's also the issue of safety. This design is "fail deadly" - if it ever loses power at all, it immediately falls off the track. That is a really bad property to have and probably fatal just by itself.
No, it doesn't fall immediately. Even when power fails, it keeps stable as long as the gyros are turning, which is about four hours. At least that's what Ernst Ritter von Marx wrote about the test monorail in London. Not sure how much fuel the gyros would need today.
And besides, not every failed idea is necessarily bad. Remember those sailships which had rotating cylinders instead of sails? Yes, that really works, better than you'd expect; but they still need wind.
What happens when a gyro is sabotaged?
I don’t have a strong opinion. But if it makes your hands smell bad, it can’t be doing much better for your “clean” dishes.
(>_<)
I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell. I believe it would involve algorithms, mostly, not deep learning. I am not in the field, and there is no way I can actualize this. I will happily pass the idea to anyone who believes they could actually build the thing. If you work on animation software, this would be right up your alley. If the idea makes a pot of money, I think it would be reasonable to toss a bit of it my way. Anyone interested?
Re: your drama: I once had a similar experience, and I wasn't even an outsider - I was talking as a fellow coder wanting to collaborate on a cool idea.
My takeaway was that while the Internet is great for people who want to have a squabble, using it for any kind of productive or ambitious goal is going against the grain.
My advice is you have to act a bit like a politician: take nothing personally, shrug off snideness and jeering, smile and nod to the ones who miss the point or just don't get it, engage and enthuse the ones who offer something.
And you have to put up with a lot of repetition - other people won't even have read the other responses, let alone know the now-familiar background context you have in your head. No one can follow you in there without a lot of patient explaining.
I suspect everyone is like this, but I also suspect coders are a particularly petulant bunch.
The problem is if you unload on them all for it, you tire yourself out and everyone sees you getting wound up. And if you give them power by worrying about their reaction, you let them control the conversation. Ignore them instead - you're here to discuss/advance an idea, not to justify yourself to internet strangers.
I say all that - take it with a grain of salt because I haven't really made a success of starting collaborative projects; I now tend to approach in more oblique ways and have a low baseline expectation that other people will help me.
I use graphics software too and would still be interested to discuss your idea.
Re coders; I don't think coders are particularly petulant, it's just that "I have this cool idea and I'm just looking for a coder to implement it, and I want a cut" is kinda the coder equivalent of "hey I want you to do this art for me 'for exposure'" for artists. It's virtually always a bad deal for the coder, and a lot of coders get these sort of requests fairly often, and often give similar reactions to artists.
And I do think "you're just here to discuss an idea" is the wrong framing - they could have had that conversation, it would have required publicly describing the idea and saying "what are your thoughts?". But "I have an idea, and I'll give it to you if you agree to pay me if it goes well" is explicitly soliciting for other people to work on your behalf and that's a different dynamic.
Yeah, I get that, but even in my naive first post I did not propose anything that exploitive. What I actually said was "If the idea makes a pot of money, I think it would be reasonable to toss a bit of it my way." And I later clarified many, many times what I had had in mind: I said SEVERAL TIMES, in pretty much these exact words, the following: The idea's not even that original, just a way of extending something already done. And it just popped into my head in an instant, whereas somebody building and advertising the thing would spend many many hours on it. So I don't think it would be at all reasonable to think I had a claim on that money -- just was picturing the developer tossing me like 1% as a thank you. Also said SEVERAL TIMES that I was certainly not thinking of any kind of contractual arrangement -- was just looking to give the idea away.
I'm sorry now I even mentioned money. It's really far from the main point. But I don't get why people were so reactive. It's as though some mass hypnosis kept everybody believing I was proposing "you do the work and I keep the money, OK?" even in the face of massive and ever-growing evidence that was not what I was proposing. And why are software developers so sensitive when they think some fool is proposing a ridiculously unfair and exploitive idea? Seems like software developers are a well-paid, smart, respected bunch -- why not just laugh off a stupid proposal of the kind people thought I was making? Instead, people reacted as though they were, I dunno, newly freed slaves, and somebody was trying to trick them into going back to massah's house and working the fields for free.
"It's unlikely that an off-hand idea by a non-expert will work out" *is* a form of advice and accurate and in lieu of any actual technical details in your comment about the only advice someone can give. Yes, receiving the same advice (which isn't what you want to hear) multiple times can be annoying, and it *might* not be right, but, to use your words, why not just laugh off those replies? None of them read as the programmers being angry, but your reaction to them does read as angry. The "sensitive" party and "reactive" in this thread does not appear to me to be the programmers.
Again, if you're just interested in someone implementing your idea, or having a discussion about it, just share it. Put it in a google doc and post the link. Or don't; but this extended debate about how everyone else is being unreasonable seems pointless, and I'm going to bow out of it.
I did share it, with 2 people who work in related fields and expressed curiosity. (And I shared with no strings attached, by the way - no request that they keep the info a secret, or not use it without signing some sort of contract, nothing remotely like that.) Am about to share it with a third. I have not shared it here as I said I would, because not only was the discussion extremely unpleasant, but very few people showed any curiosity at all. Nobody asked why I was interested in facial expression, where the idea came from, or what kind of graphics stuff I do. 95% of what was said was a prolonged, completely curiosity-free attempt to convince me I'm an asshole. "It's unlikely that an off-hand idea by a non-expert will work out" was the least of it. That's a bit tedious after you've heard it a couple times, sort of like the college advice your uncle always gives after he's had cocktails, but not offensive. But there was sneering and snark, and I was called a crackpot, told that I thought software developers are idiots, that I had made completely, laughably absurd statements, etc etc. It seemed like the message was "you're a fool with a swelled head and an exploitive asshole. Now tell us your fucking idea." Under those circumstances I lost my appetite for posting the stuff here.
Thanks, but actually I only unloaded on one, dionysus, and I didn't say anything awful in that exchange. And acting like a politician really goes against the grain for me. I dislike politicians, and I value being real, and would rather do the latter and take my lumps. I am reasonably good at making the case for my point of view in interactions like the present one, and while that doesn't soothe the troubled waters the way oil does, it often gets through at least partially to some of the people involved, and we end up having a somewhat better exchange at the end.
Also, people can sense when you're making nice just to soften them up so they'll be receptive to whatever it is you want from them. At the beginning, when I read your responses and Quiop's (whose name I may be getting wrong), I mentally lumped you together as people whose main agenda was to convince me I'm an asshole for thinking my idea could be a decent one. You started off the way Quiop did in your post, then suddenly switched to some friendly stuff about how you're curious and would love to have a nice little chat with me about my idea. That switch felt to me not like you'd realized you also had a second, friendlier message to convey, but like you'd realized you'd never get anything outta me if your entire message was a lecture about how the chance is nil that a layman could come up with a novel graphics idea worth trying. And then in a later post you actually commented yourself about how you'd consciously decided to put something friendly into your post. So at this point I haven't the faintest idea how sincere any of the sentences in any of your communications are, including the current one. So I'd recommend giving more thought to the downside of sitting so far towards the impression-management end of the impression-management vs. real-deal axis.
The problem is, we've all seen people who go "I am not involved in the field at all but I have this amazing new idea that is miles better than anything the professionals are doing".
Most times those ideas are not better. So people naturally tend to "Okay, tell us about the idea so we can see if it really is better".
If I claimed that I had a fantastic new system of doing therapy, even though I'm not a therapist, not trained in the field in any capacity, and have no experience of doing such work, I'm sure you as a professional would be slightly sceptical and want to know more about my fantastic new idea before you agreed to help me sell it to the public.
Maybe your new idea is marvellous, it could well be, but people are going to want to see the pig first before they buy the poke.
Yes, I would definitely be skeptical, but I would want to hear your idea. I would probably post something like, "I have to admit I'm skeptical, but, you being you, I think there could be something in your idea, and I'd be very interested to hear it." And then I would shut up and listen. I would do that partly out of courtesy and kindness, because I like you, but also because I do not at all rule out the idea that you would have a genuinely good and interesting thought about psychotherapy. Then after I heard the idea I'd tell you what I really thought of it. If I thought it was absolutely no good I would look for tactful ways to get that across.
Your version is not a completely fair analog of what I posted, because I did not rave about having a whole fantastic new approach to software development that's miles better than what anyone else is doing. I named *one* out of thousands of kinds of software, and said I thought what I had would work better than current software for this one little task, and that I thought it might even sell. So it's more like if you posted that you'd had a novel idea about how to treat people with insect phobias, and said you didn't think the approach had been tried before, and that you thought it might actually help a bunch of people. So my claim was much more of that nature.
And I did not refuse to describe or show my idea. I said at the beginning that I'd go into detail if anyone who was able to build such a thing expressed an interest in hearing the idea. I probably would have gone into detail anyway if most of the posts had said things like, "I don't develop graphics software, but I'm quite curious about your idea. Can you post some more?" But actually there was almost no curiosity expressed. 95% of what I got were long, irritated-sounding lectures about how ridiculous it was that I could for a moment entertain the idea that someone with no training in software could have an idea that would work. And people were pretty harsh and rude. The word "crank" was used. I was told that I thought software developers were idiots, and that what I had said in my post was unbelievably absurd. The gist of it was that I was a fool and an asshole.
And actually I did tell the idea in detail, via DM, to 2 people who work in the field and expressed some interest. So the situation is not that all the posters I'm mad at are asking to see the idea and now I'm being contrary and refusing. Most did not express any curiosity at all in their initial posts or later ones. Yet one person who had not asked one single question about the idea did accuse me of "jealously guarding it" after I had "promised" to post it.
It really does seem to me that the people piling on me have a distorted perception of my initial posts and what their responses actually were. And it sux. I can be quite mean sometimes on here, but I only do it to people who seem like trolls and/or are being rude and cruel. I think I felt like being pretty good natured and reasonable in my posts overall had kind of given me, like, some credit -- like that if I posted something off-base, I'd kind of earned enough points so that people would be unlikely to believe I'd just posted something dumb, mean, entitled and ridiculous. Like if it came across that way they'd give me the benefit of the doubt and ask me to clarify what I meant. Nope.
>Your version is not a completely fair analog of what I posted, because I did not rave about having a whole fantastic new approach to software development that's miles better than what anyone else is doing. I named *one* out of thousands of kinds of software, and said I thought what I had would work better than current software for this one little task, and that I thought it might even sell.
Perhaps you forget that the one tiny kind of software you described in very broad terms has applications in multi-billion-dollar industries such as games, movies, and TV. If you'll allow another analogy, here's what that sounds like to me: You said the equivalent of "Oh it's no biggie, maybe you'll find a buyer here or there in the niche subfield of transportation, but I am certain I have improved upon the wheel, DM me if you like money."
>You said the equivalent of "Oh it's no biggie, maybe you'll find a buyer here or there in the niche subfield of transportation, but I am certain I have improved upon the wheel, DM me if you like money."
Yes, my initial naive post could be taken to mean that (though it could also be taken to mean other things). But when everybody got so angry I put up many many responses clarifying what I had meant, which was definitely NOT that if somebody knew the novel, awesome idea I had they could make millions by applying it in all the industries that in one way or another use tools for adjusting facial expression. I said the idea was not particularly original -- that it was just a way of slightly extending something that is already done. I said it had just popped into my head -- was not, in other words, the product of a lot of thought and labor. I said that I knew it was unlikely that an idea from someone outside the field would work, and would be something that had not been done, and would make much or indeed even any money. Seems to me it did not matter what I said -- I was the poster people loved to hate, and they were impervious to any information that would make me look less foolish, entitled, self-important and exploitive. Cuz where's the fun in that?
Let those who have never put up a post that could be taken to mean something really dumb and obnoxious cast the first stone.
I know you didn't come on strong with "I am so much smarter than the professionals", I think it's just that we've been burned before, in whatever job or career we have, by people coming in with "amazing new idea" or "we are completely scrapping how we used to do things and now doing it this new way", and refusing to listen to the people who have to use the system or implement the new way about how it's not going to work the way the "great new idea" person thinks it will work.
And some of us on here are less socially adept in interpersonal interaction, to put it charitably, so we do rush at it like a bull at a gate with "what makes you think you know so much?" 😀
You're not a very rewarding person to offer help or olive branches to.
I am if I experience the help and the olive branches as real. Currently feeling pretty warmly towards Vitor, for instance, whose post seemed simply sincere to me.
I do graphics programming for games. I'm curious what the idea is, but of course I can't commit to working on it without hearing the idea first.
Of course not! What I meant was that if anyone had any interest I would describe the idea -- then, if you're interested, you may have it. I sent it to you as a DM. because the present discussion has gotten so unpleasant and I don't want to add fuel to it. Also DM'd it to Viki Szilard when they posted.
Well I’m a software dev who’s in the field of computer graphics/machine learning, and my curiosity’s getting the better of me so…
What do you mean by “changing the expression on a face”? Take an RGB image of a human face, and change it from say, a smile to a frown? Or take a rigged 3d model of a human face and animate it to have a desired expression?
If you prefer you can message me with the details. I can prototype things very quickly :))
Side note, I'm really disappointed there are no faces on the Euro banknotes. It's so much fun to make them smile or frown!
https://www.youtube.com/watch?v=GX7Aj8SySYQ
I don't begrudge your optimism, but the reality is that ideas are a dime a dozen among AI researchers. The only way to know if something works is to try it, and the vast, vast majority of ideas that even experts have don't pan out. Because you're not in the field, you don't know what the state of the art is, what researchers have already tried, how feasible it is to implement an idea, or how plausible it is that an idea might work if implemented. The chances of your idea working out and being monetizable are very close to 0%, especially because it seems vague and poorly defined to begin with ("...I believe it would involve algorithms...")
I'm getting a bit sick of responses telling me it's unlikely the idea's any good.
Hey, I get it. I am not expecting to make any money. If the thing did, I think it would be reasonable to get a bit from whoever makes it for supplying the idea, but I certainly wasn't picturing signing a contract or anything like that. In fact, I was imagining just describing the idea right here. Obviously if I thought it at all likely that this idea would make money, I would not be describing it on a forum where hundreds or thousands of people could read it -- I'd be jealously guarding it and telling one possible developer at a time, after swearing them to secrecy. On the other hand, I have messed around with enough graphics software to have a sense of what is possible, and the thing I'm thinking of seems to be in that realm. And I have searched hard enough for the thing I have in mind to be pretty sure it is not available now. So I doubt that it's impossible to do, and I doubt that it has already been done.
I don't see where the evidence is that I am overconfident about how workable and monetizable this idea is. In fact I have given multiple assurances that I *don't* believe various optimistic things. All I am doing is asking whether anyone who builds animation software or the like is interested in hearing the idea. If someone is, I will lay it out.
In other words, I don't think the likelihood that this idea is worthwhile is zero. Several people who have responded so far seem to be triggered into some kind of irritable discourage-this-amateur feeding frenzy by the fact that I don't think the chance is fucking ZERO. Get over it.
Computer graphics is a very large field that's pretty mature (compared to AI at least). There are thousands of people doing research on (semi-)automated mesh animation, all sorts of things like projecting motion capture onto arbitrary models using some sort of skeleton, deforming meshes while renormalizing them, kinematics, etc etc etc.
This is a huge field of research backed by practical applications in some of the world's biggest industries: movies and video games.
This kind of field is much harder to make a contribution to as an outsider, especially when you don't know what the state of the art is, common tools and file formats in use, typical rendering processes, etc.
I don't want to discourage you, but the priors are strongly against you. That said, I'd be happy to discuss your idea, I'm a dabbler in computer graphics myself.
Hey, for about the 5th time, I get it that it is unlikely that an outsider would come up with an idea that is novel and doable without more effort than the idea merits.
It sounds like you're like me -- you use computer graphics, but do not develop the software. If so, I think I'm going to hold off on laying out the idea unless someone who actually works on this software asks to see it. When I first posted this idea I probably could have been persuaded to just describe it to somebody like you, who uses graphics software and is curious. But at this point I am irritated and uneasy, because every single respondent has told me that it's very unlikely the idea's worthwhile, and several have written about that at some length. It really seems to me like my post irritated the hell out of various people who actually write software, and that if, as is likely, my idea is not workable, or has already been done, I will be subjected to lengthy, snide "I told you so, dum dum" posts.
I don't understand why people keep piling on with the "it's very unlikely to be any good" posts. Do they think I didn't read all the earlier ones? That I read them but was unable to grasp their meaning? That I vehemently disagreed with them? Jeez, I have responded to all of these posts by saying I know the chance is low that the idea is workable.
Do you know why you felt the need to make the same point again? The first 90% of your post is still another explanation for me of why people outside the field almost never have an idea that is worth implementing. I'm not complaining about your post, I'm asking you, because you sound friendlier than the other posters. Can you figure out why it felt important to you to write that first 90%, which duplicates what the other posters have said, instead of just posting your last sentence, expressing some interest?
"It really seems to me like my post irritated the hell out of various people who actually write software"
Yes, it did irritate me. It irritated me because it matches the pattern of crackpots who take the people in a highly technical, actively researched field for idiots, and are convinced that they know better despite demonstrably not knowing even the basics. You don't even realize the absurdity of saying "I will happily pass the idea to anyone who believes they could actually build the thing" without giving any description of what "the thing" is. Do you realize that some things in computer vision require a few lines of code, while other things require years of dedicated effort by a large research team with tens of millions of dollars (which may well go down the drain because the idea turned out to be impossible), and that it's not always easy to tell which is which?
"I don't understand why people keep piling on with the "it's very unlikely to be any good" posts. "
I made the same point as the other posters because there is value in letting you know that there is overwhelming consensus on this point.
"I don't see where the evidence is that I am overconfident about how workable and monetizable this idea is."
The evidence is here:
"I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable change it would actually sell."
I'd bet you that the world's top computer vision experts wouldn't dare to make a statement like "it'll work much better than what is out there currently" without implementing their idea and seeing that it actually works. When people pointed out the unlikelihood of success, you became hostile, which is again typical of a crackpot. Jealous guarding of your idea (despite unfulfilled promises to share it on this forum) is a third typical crackpot characteristic. Granted, you did acknowledge that the idea was unlikely to be monetizable and could be unworkable, which is not typical of crackpots.
>“I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell."
>Yes, it did irritate me. It irritated me because it matches the pattern of crackpots who take the people in a highly technical, actively researched field for idiots, and are convinced that they know better despite demonstrably not knowing even the basics.
There is nothing that I wrote that suggests in any way that I take the people in a highly technical, etc., field for idiots, or that I am convinced I know better despite not knowing the basics. That all seems like shit you are angry about from other contexts that you are dragging to this exchange and dumping on me. In fact not only did I not say anything that implied any of those insulting, stupid ideas, I said things that expressed ideas incompatible with it. I said in my initial post that there's no way I could possibly develop the idea into actual software. That's some pretty good evidence I'm aware that I lack basic skills, isn't it? Also I expressed willingness to just post the idea here, if anyone who has the skills to make this sort of thing expressed interest. Seems to me that makes clear that I do not think my idea is highly valuable and unique, since I'm willing to describe it to a huge forum. If I thought it was unique and highly valuable I would guard it jealously, wouldn't I?
>You don't even realize the absurdity of saying "I will happily pass the idea to anyone who believes they could actually build the thing" without giving any description of what "the thing" is.
Actually, in retrospect I do see how that sounds absurd if taken a certain way. But I did not mean that I expected somebody to decide, without hearing more, whether they could build it. Of course they could not! What I meant was: if what I’ve said interests you, let me know, and I will put up a post describing the idea. If you think the idea is workable, it’s yours for the taking. Jeez, dionysus, it doesn’t seem like it’s that hard to figure out that there is an alternative interpretation of my post beyond the stupid, entitled one you put on it.
> “I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell."
>I'd bet you that the world's top computer vision experts wouldn't dare to make a statement like "it'll work much better than what is out there currently" without implementing their idea and seeing that it actually works.
Well, if you knew what my idea is you would see why what I’m saying is nowhere near as sweeping and grandiose as it sounds to you. It’s really just an extension of something that already exists. My hopefulness about the idea has nothing to do with thinking I am able to judge how easy it is to implement and having concluded that it’s easy. I totally get that I am not able to do that, and in fact that even experts would hesitate to do it with a novel idea. My optimism came from thinking: clearly we can do this for a and b. If there were software that could also do it for c, d and e, which are in the exact same class as a & b, that would make some cool things possible. Here’s a made-up analogy about tattoos, which probably is not historically accurate: Let’s say that it used to be that most tattoos were small simple blue images, and one day some tattooist said, why not make them multicolor? We know how to inject other colors besides blue, and going multicolor would make more complicated and beautiful designs possible. OK, that’s the nature of my idea. It does not rest on any belief that I understand how to implement these things -- it’s an idea about the possibility of extending something that’s already possible.
>Jealous guarding of your idea (despite unfulfilled promises to share it on this forum) is a third typical crackpot characteristic.
I didn’t promise to share it. I said if anyone who works in the field and can actually make this sort of thing was interested, I’d post it here. Actually, someone in the field finally wrote and expressed interest, and I laid out the entire idea for them in a DM. I only put it in a DM because this discussion has become so unpleasant, and I did not want to add fuel to it. I did not ask them to keep it secret, or to make any sort of contract. So I think that puts the jealous guarding accusation to rest.
Later edit: Somebody else expressed friendly interest, and I DM'd a detailed description to them too, and also did not say a word about secrecy, etc. Still think I'm jealously guarding my idea?
Know why I'm not just posting it here? Because this discussion is so unpleasant, and most participants have shown zero interest in the idea. You're reacting to the entitlement and whatnot you think is inherent in posting about the idea the way I did.
>When people pointed out the unlikelihood of success, you became hostile
I don’t think I did. I said many times that I did not believe this and that grandiose thing, and I said that politely. I eventually started to complain about the repetitive posts all saying the same thing, but I complained in a civil way. I guess it was snarky to describe it as a feeding frenzy, so we can count that as hostile, but it’s pretty small scale. And I don’t think I’ve been hostile in the present post. The worst things I have accused you of are dumping anger about other situations onto me, and failing to consider various non-idiotic things I could have meant by certain sentences. Whereas you, in the post I’m responding to, have used the word crackpot, have accused me of thinking of software experts as idiots, and of having absurdly grandiose and unrealistic ideas about what I am capable of, and of jealously guarding my idea. Your hostility score’s a lot higher than mine.
And didn’t you ever have an idea you thought might be worthwhile about how things might be done in a field outside your expertise? Something about the way a hardware store could be set up, or a way to get more people to get needed medical tests done, or whether it might someday be possible to control a cursor by running your tongue around the roof of your mouth?
I was trying to get across where exactly the expectation mismatch is. There are some domains where you can come up with a contribution relatively easily as an outsider. But let's say someone posted here that they're a hobbyist who's come up with a new surgical material... you'd be very skeptical. Not because the person is dumb, but because they basically have to be an insider to even have access to the situations and tools where they could conceivably experiment with their thing.
Computer graphics is very accessible on one hand, with tons of people building their own raytracers, games and such, but the more *topological* problems, on the other hand, depend on stacks of assumptions and lower level techniques, and you won't build something commercializable if you don't know exactly where in the toolchain your code is going to sit. My guess is that people would have been less skeptical if you'd just mentioned this as an interesting research problem.
Thank you for answering. And hey, I get all that. Computer graphics is sort of like a gold field that's had a crowd of people prospecting in it for years. There's not much left to find.
Still, didn't you ever have an idea you thought was worthwhile about a field outside your expertise, maybe even a field that's already had lots of people prospecting in it?
"Algorithms" is too vague to be meaningful. It's very unlikely that your idea is both possible to implement with only "algorithms" (which I'm taking to mean relatively simple image transformations, eg warp, skew, rotation, alpha compositing, etc) and better than existing techniques that use text/image embeddings and diffusion models. For an example of how powerful and usable these techniques are, I'll direct you to this blog post, wherein the authors discuss using a webcam stream of a face to animate a 3D model of a face in real time: https://blog.roblox.com/2022/03/real-time-facial-animation-avatars/
My point when I said algorithms was that I did not think deep learning would be used for the core of what this software does. I believe the task is simpler than the roblox animation one. Yes, it would be doing relatively simple image transformations.
Listen, I'm a psychologist, and people here often have ideas based on simple misinformation, or ask naive questions, or propose naive theories. On the other hand, I find some of the ideas from people here completely outside the field quite fascinating and plausible. I thought that, for instance, about a number of comments on how the sense of self is constructed, in the discussion of Scott's post about IFS. Yes, of course my field is much softer and mushier than software development, but there is still such a thing as being misinformed or naive about human psychology, and when I run across some of that I do not write a sneery response. Why be rude?
I don't perceive myself as having been rude, but merely frank - I stand by my probability estimate (very unlikely - which does include possible!). Sorry if that came across as rude. I think your tone and framing (refusing to reveal the idea itself, confidence that it could make money, asking someone to commit to implementing it, stating that it's "much better" than existing techniques despite a lack of demonstrated knowledge of existing techniques beyond having done a lot of searching) are all triggering the Crackpot Response Protocol for people, here.
I think you could have gotten a better response with an approach more like, "Hey, I had this idea for a way to change the expressions on faces, here it is: <description of idea>. Can anyone with experience in the field tell me if that's been tried, or why it would/wouldn't work?"
I'm rather curious to hear the idea. I'm not sure it's as easy as you think to turn ideas into products, or turn software into money, come to that. But I'm always interested to hear ideas.
On second look: If you meant this reply for Quiop it would make more sense, since he was actually being snide.
> Mostly I get the feeling you're looking forward to teaching me a lesson in how people outside the field invariably come up with lousy ideas
That's a you problem. Jeering wasn't my intention and I deliberately tried to word my reply away from sounding like it was. In this case being hypersensitive has only cost you goodwill.
Since you admit you don't know enough about the field to be able to implement your idea, I'm curious as to why you seem somewhat confident (i) your idea would work better than what is currently available, and (ii) other people haven't already thought of it?
I do not think it is easy to turn ideas into products and software into money. I am pretty sure I'm right that the thing I've thought of does not exist, because I have done a very extensive search for it. But I am well aware that as someone not in the field I may not be right about how this could be done, or about whether doing it is a nightmare not worth the trouble, etc. On the other hand it's a free idea, and if I was in the animation field I'd at least ask to hear it. Ya never know.
But neither you, Quiop nor rebelcredential have expressed any interest in implementing the idea, should it turn out to be decent. Mostly I get the feeling you're looking forward to teaching me a lesson in how people outside the field invariably come up with lousy ideas. So I see no point in sharing the idea with you. If someone in the field shows some interest I'll describe it right here on the forum, though, and if they tell me it's not practical I'll certainly accept that.
Since rebelcredential also read my comment as "actually being snide," I accept responsibility for my poor choice of phrasing and offer my apologies. I was genuinely curious about the idea and wanted to know why you think people in the field would have missed it. (e.g. "I am a psychologist, so I have insights from my own field into the perception of facial expressions and I think computer modelling of facial expressions could be more effective if they incorporate these insights.")
I don't exactly think people in the field have missed it -- it's more that there's a lot of churn and change. There have been a huge number of sites opening up that offer a user-friendly interface for altering appearance. My idea is just an extension of something that's already being done. That is the reason I'm pretty sure it's doable -- not some delusion that I can intuit, without knowing how to code, that coding the thing I have in mind is pretty simple. I have looked *quite* extensively for sites or software that do what I have in mind, and can't find any, and that's why I'm pretty sure they do not exist. As for whether the thing I had an idea for would be widely interesting to people, it's hard to judge. Doesn't seem implausible to me, but it's hard to predict what the public will fall in love with and what they will ignore.
If your DM conversations don't end up leading anywhere, I'm sure you could start an interesting discussion in the next OT by describing your idea in more detail (assuming you're not too concerned about IP and money issues).
Concerned about IP and money issues? How can it possibly not be clear at this point that I am not concerned about either? I have probably said at least 10 times in the course of this long, unpleasant discussion that I do not view this idea as intellectual property, and that I'd happily describe it in detail here, publicly, if someone who works in the field showed some interest. As for money, I have also said multiple times that I get that the idea is unlikely to work, and if it works it's unlikely to be a big moneymaker. Also added that in any case I didn't think of myself as having a share in the profits. I just tossed out an idea that's a variant of something already done, so not a particularly original idea. The person who makes and advertises the thing would have put in many hours, and would deserve the money. All I said was that I thought it would be reasonable for the person to send me a small chunk as a thank you. (I had in mind something on the order of 1%, but certainly was not imagining formalizing that in a contract.) And I have now sent detailed descriptions of the idea via DM to 2 people in the field who expressed some curiosity, and I did not ask them to keep the idea secret or to send me some thank-you cash if by chance the software made them a good amount of money. The only reason I didn't just post the idea here, as I'd said I would if anyone was interested, was that the discussion had become so unpleasant. Also, there has been almost no expression of curiosity from the people posting. 95% of what I have gotten has been long, irritated explanations of how unlikely it is that my idea would work, and various bad judgments of my character for even *thinking* the idea might work. I have been called a crank, told that I think software professionals are idiots, accused of being ridiculously oblivious to various obvious things, of jealously guarding my idea, and of reneging on a promise to post it here. Until 2 people who work in the field put up brief posts expressing some curiosity and nothing else, nobody had expressed the slightest curiosity about why I'm interested in facial expressions, where the idea came from, why I think currently available ways of putting expressions on faces are unsatisfactory, or what general approach I have in mind.
Anyhow, I appreciate you apologizing for being snide, and showing some interest now. I could not resist venting a bit, in the course of telling you why I have zero interest in posting more about this subject.
I'm aware this is well-trodden ground, but have we conclusively put to bed why it is that software/software development is quite so shit?
These are the reasons I know about:
Mental models and gaps thereof: details of the real system are complicated. They get hidden in libraries/frameworks that make things easier by hiding said details. New devs unaware of the underlying details end up doing inefficient things or reinventing systems they aren't aware already exist. This process repeats so you have layers on layers on layers and everything just seems to run slower and slower.
Leaky abstractions: these frameworks/libraries don't fully encapsulate the underlying model, so when things go wrong you need to examine (and understand) every layer down the stack - many of which you were never explicitly taught about because you weren't supposed to need them. More layers = harder time fixing bugs. (Concrete example below.)
Docs: any lib/framework/component brings its own mental model, units of thought, and procedural knowledge (ie what actions/processes/patterns to follow when using it). Devs often don't even acknowledge these, and even when they do it well, communicating them takes a long time.
Dependency and fragility: stuff relies on an increasing number of other stuff, with the result that there's more and more to go wrong.
Bloat: new things are constantly being required and added that aren't fundamental to the job/important to the users. Both for end users - your new laptop is slow because it's trying to run a million new services that Microsoft has decided you will like - and for devs - that image carousel for your website brings with it React+tailspin+vite+webpack+didnt ask+dont care+touch grass.
Have I missed anything?
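To make the leaky-abstractions point concrete, here's a minimal, runnable example. Python's list presents "insert an element" as one uniform operation, but the dynamic-array representation underneath leaks through as an asymptotic cost difference you can only explain by knowing the layer you supposedly didn't need:

```python
import timeit

xs = list(range(100_000))

# Same abstract operation, wildly different cost:
append = timeit.timeit(lambda: xs.append(0), number=10_000)
prepend = timeit.timeit(lambda: xs.insert(0, 0), number=10_000)

print(f"append:  {append:.4f}s")   # amortized O(1)
print(f"prepend: {prepend:.4f}s")  # O(n): every existing element shifts right
```

The same pattern repeats at every layer: ORMs that hide SQL until an N+1 query forces you to think about SQL anyway, HTTP clients that hide sockets until a timeout doesn't.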
Here's one: text files are a rather unnatural way to interact with code. Text is a serialization format, and a huge amount of the work of writing code amounts to moving from the text model to a mental model of the runtime model and back. Loading and unloading mental models this way is incredibly exhausting in the long run.
Compare to "working on a car"—you don't work on the "the blueprints for a car", you fiddle with one piece while interacting with the already-built other 99% of the car. In a new code base it's pretty hard to see all the pieces and how they interact—whereas when you pop the hood of a car, there it all is. Debuggers get a little closer, but not very—you can't feasibly rearrange the parts while the rest is running. Live-reloading tries to emulate the right idea, but it's still hopelessly stuck in the text-based paradigm.
A coding paradigm that was 50% closer to "popping the hood" would be a dream to work with, I think, if it could get over the huge barrier of "all our existing tools and mental models are designed to work with text files".
This is pretty much word for word my own opinion.
Have you had any thoughts about what your runtime model should look like? I've had various ideas, but I'm interested to hear other people's.
Oh, neat. I have a lot of old notes but it's a dormant project to me. Where my mind goes (/used to go) was towards a system with first-class constructs for:
* dataflow graphs. To the extent possible all "programs" would just be an introspectable dataflow graph, tho one could compile a graph into a native function for speed.
* "components"—like actors, or like "things you can point to under the hood of a car". These in turn would be wired together in a dataflow graph. Components live in a hierarchy of layers, so e.g. your "webserver component" has subcomponents like "API" and "datastore" who are wired together, and you can "work on" the API component with the webserver and store already running.
* DSLs or sub-languages aka "slangs"—a given component has an internal namespace which basically defines a specialized language, with certain constructs in-scope. E.g. an API server automatically has a sublanguage for "routing" in scope, and an API handler has a bunch of HTTP equipment in scope.
* programs are "submitted to a component". E.g. an API handler implementation is a "program running in an HTTP-handler component", and a routing table is a "program running in the HTTP server component" in a very limited language that can basically only bind regexes to handlers. This notion of "submitting" always takes for granted that the underlying component exists and is running, and you can repeatedly submit "programs" to the same running component. One would typically "work under the hood" of a single component at a time.
The underlying runtime model is "whatever it takes to support this", but I would prototype it with a relational DB backend and leave the impl abstract, allowing a system of microservices to be defined in the same schematic language...
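For what it's worth, here's a minimal Python sketch of roughly what I mean by components wired into an introspectable dataflow graph, with "programs" submitted to already-running components. Everything here (Component, wire, submit) is a made-up toy, not any existing framework:

```python
# A minimal sketch of the "components wired into a dataflow graph" idea.
# All names (Component, wire, submit) are hypothetical, not an existing library.

class Component:
    def __init__(self, name):
        self.name = name
        self.subcomponents = {}
        self.wires = []          # (source, target) dataflow edges
        self.program = None      # last "program" submitted to this component

    def add(self, child):
        self.subcomponents[child.name] = child
        return child

    def wire(self, src, dst):
        # introspectable dataflow edge between two live subcomponents
        self.wires.append((src.name, dst.name))

    def submit(self, program):
        # "submitting a program" assumes the component already exists and is
        # running; resubmitting replaces behaviour without rebuilding the graph
        self.program = program

webserver = Component("webserver")
api = webserver.add(Component("API"))
store = webserver.add(Component("datastore"))
webserver.wire(api, store)

# work "under the hood" of the API component while the rest keeps running
api.submit(lambda request: {"route": "/users", "handler": "list_users"})
print(webserver.wires)   # [('API', 'datastore')]: the graph is just data
```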
See, it gets out of hand!
I'm seeing a lot of my own ideas here, which is a nice but odd feeling. I actually spent yesterday working on a (very crappy) dataflow planner thing as an experiment. (Conclusion was it needs more work.)
I ended up going in circles on what the DSLs should entail. A DSL implies a whole bunch of unseen background knowledge, which needs to be there when you pick it up. (Because what good are clever, domain specific nouns and verbs if the user doesn't know what they mean or, worse, assumes a subtly different meaning than the one the creator meant?) To fix this each DSL needs to come with all that explanation bundled in. But a standard wiki would end up being ignored and could struggle to communicate the important stuff, which is how I got onto my current preoccupation: better ways of notating and communicating "mental models" in general.
I get your submitting-programs concept - basically a system of actors where each actor speaks its own language. (You could call something built around that Babel, if only the name wasn't taken by an AST parser.)
What made you give up and move on?
One fundamental problem I see is: do you create this as an entirely new self-contained universe - and have to reinvent said universe from scratch - or do you try to allow including external things, and then find yourself having to make compromises with them that break the entire concept?
> I'm seeing a lot of my own ideas here
Ha, that's heartening. I kind of think there's a naturalness to the perspective we've both glimpsed, and attempts to iron away complexity in a lot of different programming areas tend to converge to a similar framework.
> What made you give up and move on?
Haven't been writing code in a few years. These kinds of ideas would arise whenever I was frustrated with my tools. I never really got past the taking-notes and brainstorming stage—I don't have a programming language background at all.
> do you create this as an entirely new self-contained universe
I imagine you:
* design the self-contained universe as an ideal
* but anything you build that's actually designed to be *used* has to be maximally interoperable with mainstream languages and tools. This might mean you implement the runtime interface in Python, or you run a Python interpreter inside your runtime, or you interface with the runtime over the wire.
> DSLs... explanation... standard wiki... mental models
I don't have a ton of answers here, but might be able to illustrate my thinking as follows:
One of the narrowest problems I wanted to solve was to improve on SQL for big data-analysis-style queries. Consider: a SQL query represents a dataflow graph—data flows from a bunch of upstream tables into a final view or query result. Every column in that final result "knows where it came from"—e.g. a column `user_id` knows exactly what upstream tables it came from, and an `avg(sales)` column knows it's an average of whatever `sales` was. It has to, because these details become the runtime representation which the compiler actually operates on!
Now, it seems to me first that:
* our tooling should have access to that runtime representation, such that I can cmd-click on a column in the final query and my editor can show me the graph structure by which that field is generated
* the final query represents a dataflow graph, which I want to be able to use in other ways than just *running* it. For example, I could "host it as an endpoint"—and autogenerate API docs, where SQL column descriptions in the upstream tables "flow through" the datagraph to document the columns of my API query along with their lineage, types, constraints, FK relationships, etc. Or I could materialize a number of tables in a DAG (here I'm thinking of a DBT-style analysis workflow, if you know of it) and have each one automatically inherit lineage data. There are tools which try to layer in such lineage data later, but IMO it should be part of the native representation of all SQL.
All of this metadata is just data, but we're blocked from using it because we treat queries as text-files first and runtime representations second, instead of the other way around. Sort of?
I like SQL as an example because it's literally just a dataflow graph, and there are a lot of ergonomic issues that can be solved just by being able to "slice" the internal model along various axes.
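A tiny Python sketch of what I mean, with made-up names (`Column`, `lineage`) standing in for what a query compiler already knows internally: each column is a node that carries its own upstream lineage and docs around as ordinary data:

```python
# A hypothetical sketch of "every column knows where it came from": represent
# a query as a graph whose nodes carry lineage metadata instead of plain text.

from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    source: str                      # table or expression it derives from
    doc: str = ""
    parents: list = field(default_factory=list)

    def lineage(self):
        """Walk upstream: this is what cmd-click in the editor would show."""
        out = [f"{self.source}.{self.name}"]
        for p in self.parents:
            out.extend(p.lineage())
        return out

user_id = Column("user_id", "users", doc="primary key of users")
sales   = Column("sales", "orders", doc="gross sale amount, USD")
avg_sales = Column("avg_sales", "avg(sales)", doc=sales.doc, parents=[sales])

# the final query is a graph, so docs/lineage can "flow through" to an API
print(avg_sales.lineage())   # ['avg(sales).avg_sales', 'orders.sales']
```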
Actual imperative code is more complicated. But still I think of an ideal where, for example, every variable that flows through my code is accompanied by its type information, constraints, and docstring. A function `def f(x: int)` with a docstring specifying the meaning and constraint values of `x` is an "input node" into a graph, and all the metadata—type, constraint, documentation—can flow with `x`, and if `x` is later exposed in an API say, the metadata is all there to be auto-filled. The only case where we actually toss out the metadata is when we compile to native code for speed—but the dataflow representation is primal.
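In today's Python you can get a crude approximation of this with `typing.Annotated`, which at least lets metadata ride along with the type so downstream tooling could auto-fill docs instead of discarding them. The `describe` helper and the metadata strings are invented for illustration:

```python
# A sketch of "metadata flows with the variable": a function parameter acts as
# an input node, and its type/constraint/doc travel downstream with it.
# PositiveInt and describe() are made-up names, not an existing API.

from typing import Annotated, get_type_hints

PositiveInt = Annotated[int, "must be > 0", "number of retries before failing"]

def f(x: PositiveInt) -> int:
    return x * 2

def describe(fn):
    # an API layer could auto-fill docs from the hints instead of tossing them
    for name, hint in get_type_hints(fn, include_extras=True).items():
        meta = getattr(hint, "__metadata__", ())
        print(name, "->", hint.__origin__ if meta else hint, list(meta))

describe(f)   # x -> <class 'int'> ['must be > 0', 'number of retries ...']
```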
I guess I'm starting from a "low level ergonomics" place. I don't really know how to handle high-level complexity, but I sort of imagine the same kind of "dataflow" concept at the level of interoperating components, and an editor which is specialized in viewing these graphs similarly no matter what level of abstraction they occur at.
I don't have much to contribute besides this kind of vague brain-dumping right now (:
maybe we should DM instead of this impossible-to-navigate chat thread, I already can't see your OP
Also, shortage of time to fix bugs.
Like, you spend literally days tracking down a bug in a complex software system, and when you finally find it: the guy who originally wrote the code left a comment that he doesn't handle that particular case. (This fact, of course, not being reflected in the documentation of the numerous layers of stuff built on top of the routine that doesn't handle that case.) Also, leaky abstractions of course, but more like: life is too short to make the software correctly implement the abstraction.
I wish there were some way to graphically draw the "fitness for purpose" of a component, including all the moving parts, the context it needs to live in, the dependencies it relies upon, and the, I don't know what you'd call it, "envelope" of cases it does and doesn't handle. So we can tell these things at a glance rather than have to stumble across them at random.
Ostensibly, this is what type-definitions are for.
Unfortunately, side-effects exist. They're called side effects because instead of some interaction being an input or an output, the interaction goes *sideways* into a primordial soup of global state. And as a consequence of this, they don't get listed in the type-definition as God intended.
Some languages try to fix this by decreeing that all side-effects must go into the type-definition after all. Global State is now Local State. But now we have a new problem: each side-effect now needs to be included in the list of inputs for all downstream functions (as well as the outputs). Otherwise, the downstream functions won't actually pass a side-effect along the chain. So to fix *that*, we add an operator called "bind". Which, using the power of first-class functions, automagically reconfigures the type-definitions so the downstream functions actually pass along a side-effect (in parallel to the chain of "normal" inputs and outputs).
And huh, look at that. We just reinvented monads. So if you want to tackle this, it might be helpful to take inspiration from Haskell. Or maybe Eiffel, for its contracts.
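To spell out that last step in something other than Haskell, here's a minimal Python sketch of a Writer-style monad: the "side effect" is an accumulated log, and `bind` is what re-threads it past functions that only declare their own local effect. All names are mine, purely illustrative:

```python
# A minimal sketch of "bind passes the side-effect along": each value travels
# as (value, log), and bind re-threads the log so downstream functions never
# have to mention it explicitly.

def unit(x):
    return (x, [])                      # value plus empty side-effect

def bind(mx, f):
    x, log = mx                         # unpack value and accumulated effect
    y, log2 = f(x)                      # run the next step
    return (y, log + log2)              # bind re-threads the side-effect

def double(x):
    return (x * 2, [f"doubled {x}"])    # each step declares only its own effect

def incr(x):
    return (x + 1, [f"incremented {x}"])

result = bind(bind(unit(5), double), incr)
print(result)   # (11, ['doubled 5', 'incremented 10'])
```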
Another cause: Ideas pass through too many people and end up either disfigured or over-adapted.
The customer needs X, the product owner conceives W, the architect fixes it as Y, the BA describes Z, and the developer builds A. Now, either A doesn't do X (it's disfigured), or it does, but with all the idiosyncrasies of W, Y, and Z in the way (it's over-adapted).
I'm not even saying that the layers are useless. They aren't, or we'd end up building shit that's awfully unaware of the rest of the industry or of the rest of the company. But it can sometimes turn a 2-month job into a 2-year job.
This is more general than software, and intuitively (to me) it happens in proportion to a single individual's powerlessness. Anything you can do by yourself, doesn't have to go through this process; the more people you need to help you, the more of this bullshit there will be.
To me a utopic outcome looks like our tools getting more and more sophisticated, allowing a single individual to feasibly take on larger and larger projects without having to involve other people. But I suspect I'm not very good at team collaboration, and maybe someone who was would regard this vision as a nightmare.
> To me a utopic outcome looks like our tools getting more and more sophisticated, allowing a single individual to feasibly take on larger and larger projects without having to involve other people.
Yeah, through this entire exchange, I was thinking "so GPT-5 or 6 will be a great thing for development, because it will be a single mind that can see all the code and dependencies down the entire stack, and can optimize an entire codebase as long as it's small enough to fit in the context and understanding window."
Instead of pulling in so many MB of different libraries to do simple things in a bunch of unconnected places, it can just write the simple function and get rid of a bunch of dependencies and vulnerabilities. There are probably architectural things it can do in terms of commenting and dependencies that will make it easier to surface bugs, or describe and articulate the envelope concept you had upthread visually and textually. I think that's a pretty exciting area to be thinking about and working on right now.
I actually think the "envelope/mental model explaining a component" thing is going to be incredibly important - because when AI is the one creating the library, how are we going to follow along? We either need better explanatory tools or we accept that the AI is the new owner of tech and all we can do is try to manage it.
But right now I struggle a bit to think about the problem, because it feels like I don't have the right tools yet. To take a topical ACX example, I feel like a monk trying to diagnose demonic possession cases without the concept of "psychiatry".
I can tell you GPT 3.5 isn't there yet - every time I ask its opinion, we have a back and forth where I try to explain what I mean. No matter how I try rephrasing it, as soon as we get close, it crashes.
Tempted to add that not many fields have gotten away with "your system, developed by us, is vulnerable to known malicious attacks from actors under sanctions from the government we share with you; we won't fix it, won't be responsible, and will legally prevent _you_ from fixing your own copy" for so long.
Oh, and on mental models: the mental model of a person who knows from the featureset how this thing must have been done inside, and the mental model of a user throwing a cursory glance at the system... they are not very similar. And if you abandon the first one completely, your system gets too confusing to maintain, and if you abandon the second one completely, you can only have professionals willing to invest in training as users. So you make uneasy compromises, and the system breaks the expectations of everyone who touches it. By design. Few people try to present fully disjoint views into the system while handling the differences between the models correctly via translation, and fewer succeed…
I've wondered for a while about some kind of formal "analogy testing" process for this. A simplified model is an analogy to the real thing (think of your directory tree like "files" in a filing cabinet).
An "acceptable" analogy provides a simplified model without changing anything about the real behaviour. A bad analogy implies behaviour or logic that isn't there, or fails to prepare you for logic/behaviour that is.
I don't know by what process you could lay down or test these analogies. My impression right now is that all of this is unconscious and illegible.
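One very literal way to start making it legible, just as a sketch: treat the simplified model as an executable oracle and check it against the real system on sampled inputs, property-test style. The filesystem example and all names here are invented:

```python
# A rough sketch of "analogy testing": treat the simplified mental model as an
# executable oracle and compare it to the real system on sampled inputs.
# Everything here (the model, the inputs) is made up for illustration.

def real_system(path):
    # the real behaviour: say, paths are case-insensitive on this filesystem
    return path.lower()

def filing_cabinet_model(path):
    # the analogy: "files in a cabinet", which implies case matters
    return path

def analogy_holds(inputs):
    bad = [p for p in inputs if real_system(p) != filing_cabinet_model(p)]
    return bad   # nonempty list = places where the analogy misleads

samples = ["notes.txt", "Notes.TXT", "README"]
print(analogy_holds(samples))   # ['Notes.TXT', 'README'], i.e. a "bad analogy"
```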
You're only looking at technical reasons, while some of them may be social.
Take institutional capture. This is obvious for proprietary stuff, but best observed in collaborative open-source projects the moment they get significant funding. The people in charge treat users as intruders, pursue their hobby horses at the expense of actually important functionality, and generally refuse to do any more work than absolutely necessary, but refuse to leave because they have funding, and it's hard to unseat them by forking when they're the ones who have funding.