I think he's talking about (1) extracting the plutonium produced as a by-product of U-235 fission, and (2) extracting and re-concentrating the remaining U-235. I don't think he's talking about using any of the actual high-level wastes, e.g. the Cs-137 or Sr-90 et cetera that are usually considered the problem children in spent nuclear fuel, isotopes produced in abundance that have half-lives ~10-100 years (so they're very radioactive but don't decay very fast). So, yes, those latter isotopes are just always going to be a pain, so you just have to put them in a (dry) hole somewhere and let them decay over a few centuries.
Those two isotopes in particular are also annoying because they have the chemical properties of K and Na (in the case of Cs) and Ca (in the case of Sr), and so they can slot into the places where those elements are used in the body and just hang out, irradiating you internally for years. Cs also forms very soluble salts, so it disperses easily in any kind of watery environment.
Many of the fission products you isolate during reprocessing are useful for other things, such as medical isotopes, radioisotope thermal generators (including Sr-90), etc. I'm not sure what they do with the cesium. But France after reprocessing is left with a pretty negligible amount of spent fuel waste and they just keep it all in a building somewhere.
Yes, I think that's true to an extent, and I suspect Cs-137 is used in radiotherapy applications for just this reason (it's conveniently obtained). But my vague impression is that the requirements of medical-grade isotopes are such that you need to use special reactors constructed for (or amenable to modification to) that purpose; I don't get the impression that reprocessed fuel from power reactors is usually considered a great option. Indeed, my impression these days is that the bigger nuclear medicine departments are moving towards installing their own cyclotron to make their isotopes to order, and on the spot.
But maybe they do it in France, and maybe I'm generally wrong, this is not my area. I am a little suspicious that any of these ancillary applications can absorb the few hundred kg of high-level waste you'd collect from a refueling. I'm still thinking you'd end up with at least a tonne or two of stuff you just needed to stick in a hole in the ground somewhere. (Not that I think this is a big problem, it's a big planet, there's plenty of places to put such a hole.)
People tend to worry about waste that has a half-life of the order of 10,000 years, but IIUC almost all of that long-lived stuff is plutonium, which is nuclear fuel that we can and should separate out and burn in reactors (which will split the atoms into smaller radioactive atoms).
The smaller radioactive "fission products" (which can't be burned) mostly have half-lives below 100 years, and after 500 years their radioactivity level has fallen below the level of uranium ore (not even uranium metal).
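To put rough numbers on that 500-year figure: Cs-137 and Sr-90 both have half-lives of about 30 years, so 500 years is ~16-17 halvings. A quick sketch (the half-life values are the standard published ones; the arithmetic is just exponential decay):

```python
def remaining_fraction(half_life_years: float, elapsed_years: float) -> float:
    """Fraction of a radioactive isotope left after `elapsed_years`."""
    return 0.5 ** (elapsed_years / half_life_years)

# The two classic problem isotopes in spent fuel:
for name, t_half in [("Cs-137", 30.2), ("Sr-90", 28.8)]:
    frac = remaining_fraction(t_half, 500)
    print(f"{name}: {frac:.1e} of the original amount left after 500 years")
```

That works out to roughly a hundred-thousandth of the original inventory, which is why the waste problem is mostly a few-centuries problem rather than a 10,000-year one.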
The goal of being able to store waste without any human maintenance for 20,000 years always seemed to be of dubious value — I mean, how about we focus on preventing an apocalypse, rather than focusing on guaranteeing that post-apocalyptic settlers who somehow decide to settle in the Nevada desert 8,000 years from now won't suffer from a 0.1% increased risk of cancer because they don't know what the ☢ symbol means?
But since society seems devoted to storing the waste For As Long As It Takes, it's worth noting that it's much easier to design a structure to last 500 years than 20,000. But the whole issue is kinda moot because plutonium emits alpha radiation, which is easily shielded (unlike the gamma radiation from many shorter-lived isotopes). So, you can safely sleep atop a drumful of plutonium, just don't ingest it, or grind it into dust and breathe it, or grind it into dust and dump it in your drinking well.
As for isotopes with very long half-lives (over 100,000 years or so), the crucial thing to understand is that radioactivity *is* the rate of radioactive decay; for a given number of atoms, activity is therefore inversely proportional to half-life. As the half-life gets longer, the danger level decreases: all else being equal, if something has a half-life of one year, it is 10,000 times more radioactive than something with a half-life of 10,000 years. Therefore, things with extremely long half-lives aren't scary; it's the intermediate half-lives (10 to 10,000 years) that are problematic, as they are both long-lasting and radioactive enough to potentially cause cancer.
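The inverse relationship falls straight out of the decay law, A = ln(2)/T × N (decays per second for N atoms with half-life T). A minimal sketch of the 1-year vs 10,000-year comparison (one mole of atoms is just an arbitrary reference quantity):

```python
import math

SECONDS_PER_YEAR = 3.156e7
N_ATOMS = 6.022e23  # one mole of atoms, an arbitrary reference quantity

def activity(half_life_years: float, n_atoms: float = N_ATOMS) -> float:
    """Decays per second: A = ln(2) / T_half * N."""
    return math.log(2) / (half_life_years * SECONDS_PER_YEAR) * n_atoms

ratio = activity(1.0) / activity(10_000.0)
print(f"A 1-year half-life is {ratio:,.0f}x more active than a 10,000-year one")
```

Same atoms, same number of eventual decays, just spread over wildly different timescales.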
P.S. A one-ton drum of plutonium contains as much usable energy as 3,000,000 tons of coal, so I'd just like to add that nuclear waste is awesome compared to other kinds of waste. Because it's tiny.
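The 3,000,000:1 claim checks out as an order of magnitude: Pu-239 fission releases about 200 MeV per atom, and decent coal yields about 24 MJ/kg. A back-of-the-envelope sketch (the coal heating value is a typical figure, not a precise one):

```python
MEV_PER_FISSION = 200.0           # energy released per Pu-239 fission, MeV
J_PER_MEV = 1.602e-13             # joules per MeV
ATOM_MASS_KG = 239 * 1.6605e-27   # mass of one Pu-239 atom, kg

pu_j_per_kg = MEV_PER_FISSION * J_PER_MEV / ATOM_MASS_KG  # ~8e13 J/kg
coal_j_per_kg = 24e6              # ~24 MJ/kg, typical bituminous coal

ratio = pu_j_per_kg / coal_j_per_kg
print(f"1 ton of Pu-239 ~ {ratio:,.0f} tons of coal")  # a few million
```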
Coal ash might be stored in open ponds or giant buildings that occasionally split open and dump their entire contents into rivers: https://appvoices.org/coalash/disasters/ — but because nuclear waste is so small, it's practical to encase it in concrete so that it never harms humans, wildlife or even plants. Why didn't the activists who worry that containment isn't 100% perfect spend the last 60 years fighting coal instead of nuclear?
A geologist friend of mine worked on Yucca Mountain and he called it a sham. They were being asked to guarantee there would be no major earthquakes or volcanoes for 10,000 years. According to him that was impossible to state with high confidence.
But the requirement itself — that your geologist friend guarantee stability for 10,000 years — was also a sham. It's totally unnecessary. Even high-level waste becomes less radioactive than a granite countertop within a few hundred years.
And there are plenty of rocks that haven't been touched in millions of years! Pick any one rock that hasn't moved for 10 million years, drill a hole into it, and drop the waste in. Honestly the whole storage "problem" is mostly regulatory theater.
I've never understood that requirement. Like, why are we *planning* for a dystopian future in which humanity has degenerated so much it can't exercise competent stewardship of a waste repository?
We also have 5,000 nuclear weapons lying around. Do we need to weld instructions for their care and responsible storage to them, written in pictograms at a 4th grade level, just in case the future turns out to be stupid?
Yeah honestly even the fact that we're planning to ensure that this waste repository is safe for the post-apocalyptic humans who might stumble upon it is...simply not a standard we hold literally anything else to.
What the heck is an "accidental near detonation?" Is that like "not a detonation at all" after someone dropped one by accident? If an airman bumps into a W80 in a warehouse and clonks it with his wrench, and nothing happens, is that an "accidental near detonation?" I'm deeply unimpressed.
Anyway, the fact that a maximum of 20,000 weapons has been supervised by a passel of 19- to 25-year-olds with at most a HS diploma for 75 years without a single notable slipup is proof enough to me that it can be done. At this point I think the burden of proof is on the people who think future American government will screw up something as obvious as digging an irrigation channel right next to the big yellow warning sign that says YOU WILL DIE IF YOU DIG HERE.
Even with the school board's actions, it appears that most likely there were no negative health effects to the people who lived near Love Canal. Source: Wikipedia.
I remember Love Canal. At the time I thought the reaction was hysterical, and I still think that. Perhaps I need to learn something, but my experience was that in the 70s public attitudes towards random assorted chemicals in the environment shifted radically -- in the 1960s after we got done cleaning a carburetor with a bowl of gasoline, we just threw it onto the ground. Heck, people changed the oil on cars and threw it on the ground. Turpentine after painting? Down the sink. Same with business. If a gas station leaked 1000 gallons of gasoline into the soil, big deal, nobody much cared.
All of that changed in the 70s. All of a sudden we got really upset about those things, started suspecting them of causing cancer and allergies and whatnot. (And some of that may have been true, after all, cancer has to come from somewhere.) Love Canal was just a symptom of the changing times.
So that would be a better argument if we were really careless about nuclear waste *now* and the future might get a lot more uptight and be pissed at us. But we're talking the other way around -- we're uptight *now* and, strangely, feeling like we need to assume the future will be slobs. But that has never happened before, and it seems strange to assume.
It’s massive recency bias to think the next 75 years will be just as stable as the last 75.
Look, I support nuclear power in general, but I also know humans are very bad at assessing risk, especially over very long timelines.
I’m old enough to remember the Berlin Wall falling and a tremendous amount of entrepreneurial dynamism and overall optimism in the 1990s. America was the dominant hyper power and most of the world *liked* it that way.
But then a few dozen followers of Osama Bin Laden changed everything.
It’s foolish to assume something like that won’t happen again.
What do you mean by "changed everything"? I don't see that anything changed at all except for the greater clampdown on the tiny amount of terrorism. Or are you talking about the wars in Iraq and Afghanistan, which I'm pretty sure were small compared to the wars of previous generations?
In any case, the ~3,000 people who died on 9/11 is to me a very small number when you're talking about the entire U.S. over a long time span. I mean, there have been about 200 times as many deaths due to COVID.
Well, no, it's *logical* to assume the next 75 years will be rather like the previous 75. Unless you have evidence to the contrary, or at least a good working hypothesis. I don't mind a *certain* amount of caution, e.g. it's definitely prudent to put your nuclear waste out in the desert, far from anywhere even remotely habitable, and mark it plainly, big warning signs and such. But assuming that within the next century we're going to be in some Thunderdome situation where nobody reads English any more and the wild kids dig up the shiny metal bits to make into necklaces -- this goes well beyond prudence.
Look, the whole point of the future is that they're almost certainly going to know more than us (technologically speaking), the same way we have known more than the people in the past, for the past 1000 years or so. So why try to do their job? Do ours. Leave them a growing economy, healthy happy new generation of people, some new inventions -- and yes, a few inherited problems with which they're expected to cope. After all, our parents handed us nuclear weapons and the Cold War, and we've done OK with that.
People seem to manage extensive hours of training, constant practice, and a focus on situational awareness and response while driving. Are you saying flying is intrinsically much harder? Or that you wouldn't expect flying to become a primary mode of transportation, so it would be harder to maintain the practice?
Flying is intrinsically much harder than driving. It's tricky to get an apples-to-apples comparison because the units of measure are different (fatalities per passenger mile vs fatalities per hour flight time), but general aviation is at least 10x more deadly than driving. And that's in our current regime, where people who are doing the flying are self-selected to be more focused/situationally aware/skilled than your average random car driver. There's obvious reasons for this - mechanical issues in a car generally mean you pull over and wait for a tow truck. Mechanical issues in a Cessna mean you do everything exactly right or you die.
I have a PL, IFR, and High Performance rating. Flying an airplane is intrinsically much harder because of the procedures involved, but I may seem like a heretic to other aviators for saying this: I don't believe these are obstacles that couldn't be overcome by the public at large, given more and better automation and inherent safety protocols built into the systems.
I absolutely could see a future where the general public had access to and used flying as their primary *individual* form of transportation, but in that form a huge amount of the system routine and implementation would have to be prohibited to the owner/user.
99% of aircraft incidents are still (again) pilot error, and that is with operators that are trained and certified above the level of the general public. For the general public to have it as their primary form of transport, even more freedom would need to be limited.
Think "Minority Report" self-driving rail-pods, but airborne.
+1 Yes, flying today's airplanes is too hard, but a market for flying cars would put pressure on the control system design to make it simpler. Using "cockpits are too confusing for normal people" as an excuse is 1st order thinking.
Carl Pham below points out that "we shouldn't do it because it's too dangerous" means that we never begin the process of learning how to make it better.
Air taxis would make a ton of sense, though, right? People who fly all day can afford extra training; that we never got air taxis supports the thesis of excessive regulations.
...though it occurs to me that many people would want to hire a taxi to fly downtown, where airstrips might be scarce, even assuming you can build them directly on top of the normal roadways. So another explanation is chicken-and-egg: you need a lot of taxi traffic to justify the airstrips, but lots of airstrips including downtown ones to justify the taxis. Another popular destination should be airports, but air traffic control wouldn't want small aircraft encroaching on its airspace. I doubt this is insurmountable tho.
A third explanation is risk to third parties. Air taxis may be safer than cars (certainly per mile traveled), but a fatal crash will occasionally kill a random person on the ground. This is mostly true of car crashes too! However, while a car crash may kill a random person on a road or sidewalk, an air-taxi could crash into a house, which probably makes them feel psychologically more dangerous even if they're not.
Maybe to get past choked highways and reduce infrastructure costs. But the thought of them weaving between skyscrapers gives me goosebumps. I'd keep them out of downtown or any residential neighborhood. That would make them more like an airBus anyway, I think.
I am a pilot, and flying is much harder than driving. Maneuvering in three dimensions is part of it, but bigger parts are cross-coupling of controls (where every input does at least two different things), a higher level of critical multitasking, and the part where you can't pull over to the side of the road or even slow down if things get temporarily overwhelming or the hardware glitches. Also, all-weather flying (necessary for routine travel) becomes a complex exercise in real-time systems management and requires training *out* some deeply instinctive behavior. In my particular case, that makes me safer in the sky than on the ground because flying effectively commands my attention but driving leaves me too easily distracted, but I'm not exactly typical.
The average person can learn to fly a light plane safely if you insist on it. And we sort of do, for people who want to fly. But the difficulty, expense, and tedium of the necessary training and the non-trivial fraction of the population that will wash out and be aeronautically disenfranchised, will likely make this politically infeasible at flying-car levels of utilization. The remaining hope for "flying cars" is increased automation, but safe autopilots are a classic 80/20 problem where, for this application, we'd really need that last 20%.
I'm reading the book because of this review, and one point Hall makes is that the autogyro from the 1920's has a reverse risk profile from an airplane. It's unstable on takeoff, but will glide to a stop on landing, even if engines fail. So the "can't slow down and can't pull over" might be less of a problem with another technical model.
Nice!! I want one now. One thing Hall doesn't answer well is why US regulations blocked progress across the world. The other review raises that issue as well - financial and health regulation has slowed things down, but not stopped us in our tracks. Are flying cars just on the wrong side of the cost disease? Or is there some other issue (like, say, physics)?
"People seem to manage extensive hours of training, constant practice, and a focus on situational awareness and response while driving."
And yet *look at how dangerous driving is.* Even if flying drivers were no more prone to accidents than terrestrial drivers, there would still be too many accidents, and they'd probably be more deadly.
Self-flying cars are an easier problem now, but that would quickly devolve if even 5% of the US population had them and used them as their primary form of transport. Granted, working an integrated Traffic Collision Avoidance System (TCAS) for objects moving in 3 dimensions instead of 2 is slightly easier, but if you see my comment above it would be a catastrophe if the car/system gave the choice to the pilot/operator on how to avoid a collision as opposed to the linked computers making that choice.
If you want to see a horror story of just how confusing human/computer/system interfaces can be vis-a-vis collision avoidance, watch a simulation of the Flight 2937 collision over Switzerland.
As a matter of handling traffic, I imagine flying cars would be restricted to long-distance travel, between or just outside cities, and restricted to wheeled travel within them. Also freight trucks would probably remain wheeled, given the weight, and given that shippers already have the option of freight airplanes, and aren't using them.
That's pretty much what Heinlein was figuring 70 years ago for domestic transport: the wealthy and important government agents would have city cars that can quickly convert into aircraft in the countryside, then land in the hinterlands of the destination and roll into town.
Why does the *average* person need to be able to do it well for *anyone* to be allowed to do it at all? This seems indicative of the weird current hyperfocus on equity over opportunity which I mentioned elsewhere. None of us can have flying cars until all of us can? Why?
Is flying a flying car so hard that we can't, say, take the 10% best drivers in the nation (about 20 million people) and let *them* fly around? What would that do to boost their productivity? Maybe there are a ton of really smart people from whom we are not getting X new inventions a year because they are freaking stuck in traffic.
I think because on one hand, the gains are kind of speculative: "spend less time in traffic -> more inventions" seems like a stretch, especially considering how many other ways there are to solve that problem: remote presence being the obvious 2020 solution, public transit or a private driver to make 'commuting' time more productive, etc.
And the potential downsides are obvious and specific - the potential for loss of life and destruction is pretty clear (and not just for the people behind the wheel!) in case of accidents, to say nothing about the potential deliberate acts of destruction.
Right. And I get this. As I said elsewhere, we have just become across-the-board more timid about things. "But is it safe? What could go wrong?" tends to trump "But what marvels might we unlock if it goes right?" pretty much all the time.
There are arguments both ways, of course, and both the "safe at any speed" and "damn the torpedoes" extremes are unwise. But whatever choice we make comes with costs. If we are much more concerned about what will go wrong, then we will be generally less entrepreneurial, take fewer risks, and our social progress will increasingly resemble NASA's progress on human space flight -- very, very slow, but also very, very safe.
"we have just become across-the-board more timid about things."
Can't this be seen as a reflection of the increased opportunity cost associated with various forms of risk taking? As measures of expected life satisfaction/life expectancy substantially increased over the 20th century, the costs of risk taking in the form of foregoing expected life increased.
I think it's a good argument in general, but I'm not sure the data support it in this case. Life expectancy in the US has only gone from 70 to 79 since 1960. That's nothing to sneeze at, but it's hard to see it having that big an effect on risk-taking. I don't think life satisfaction has increased at all, and indeed the rising rates of middle-aged suicide, and middle-aged drug use and overdose, would kind of suggest the opposite. Heck, even the teenagers are having less sex, apparently. Four hours of World o' Warcraft doesn't compare to getting it on with the fox from Algebra, so I can't see how *they're* happier.
'Life satisfaction' was a poor choice of words on my part. Maybe it would be more accurate to say video games/the internet has largely solved the "problem" of boredom. What equivalent way of wasting time by yourself did teenagers in the 50s and 60s have? Comics? Playing records? Seems like there's no real equivalent to the video game slob stereotype we have now. I wonder if a lot of early-life entrepreneurship (broadly speaking) is just a way to avoid boredom.
We already do let a few highly trained people fly around. Furthermore, we allow them to take passengers, so the rest of us can have the benefit of their skill. They're called airline pilots.
I think a lot of the problem is that there isn't a clear Schelling point for regulation in between "The average person should be able to get a license without it being unduly burdensome" and the current very cautious regime where it takes hundreds of hours of expensive training to get a pilot's license.
Regulations for cars can't get that strict because the average person expects to be able to drive and the population as a whole wouldn't tolerate that changing, but once you accept that only a fraction of unusually skilled people can do it safely the standards might tend to err on the side of caution because of the incentives on regulators.
Well it's a good thing the automobile wasn't invented last year then, isn't it? "You're going to let 16-year-olds sit at the controls of a two-ton metal machine capable of accelerating to 100 MPH? Insane!"
If ten percent of the population is flying freely above the crowded freeways at 100+ mph on their daily commute, the remaining ninety percent are going to really notice this. And be jealous. And vote.
Right, well, you could say the same thing about absolutely any manifestation of wealth. It's not like people don't already notice private planes, yachts, gated communities, private schools, expensive condos with beautiful views in ski resorts.
The way we traditionally deal with this is social mobility. People have the *opportunity* to get rich themselves, through hard work and talent, and then they join the ranks of the 10%. People are smart enough to understand that if everyone has to have the same stuff, it's going to be a low level of stuff, and so they're generally willing to instead enter the lottery of who ends up in the 10% -- as long as they believe it isn't pure chance that produces the winning ticket.
They also usually tend to believe that what the 10% have usually moves down after a while anyway, as long as economic growth continues -- at one point you had to be decently well off to afford a car at all, then everyone had one car but being a two-car family was pretty tony, and now these days it's buying a Tesla for the 3rd car that marks you out as in the 1%.
I agree such attitudes are less common today, though, and there are many more like what you're saying: "Only 10% of us will ever fly above the Blade Runner dystopian squalid streets below, and it's going to be some connected/powerful/aristocratic 10% that will never include me or my kids, so screw that." But the existence of that curious change, and wild speculation on its origin, is kind of the point of the reviewed book.
I think a lot of the military research successes have come because they have goals of making a practical technological application as their first objective. Academic scientists are frequently driven by raw curiosity and/or obsession with the function of some very specific part of the natural world. There's nothing morally wrong with that, but it's not optimal for finding practical applications. They stretch the words in their grant proposals to try to convince funding sources that their work will have some practical benefit--and every so often one of the many projects does--but fundamentally their interests are usually anchored to the thing they're studying rather than a solution to a practical problem.
In contrast the military's objectives are to increase real world combat effectiveness. That's a practical goal, and therefore they're much more willing to sideline or abandon research avenues that are less likely to result in new solutions to practical problems and to do so sooner.
> Academic scientists are frequently driven by raw curiosity and/or obsession with the function of some very specific part of the natural world
Worse than this, I think a lot of academic scientists aren't even all that interested in the very specific thing they study, they're just kinda stuck with it.
Here's the way it works: your PhD supervisor was the world expert on the subject of tantalum oxide. When you came along he racked his brain for five minutes and assigned you a project looking at the ever-so-slightly-different-and-more-obscure subject of tantalum sulphide. You work hard for five years and sure enough, you're one of the world's top experts on the subject of tantalum sulphide. You don't really care about tantalum sulphide. It turns out that tantalum sulphide is completely boring and unimportant. But you _can_ think of a dozen possible tantalum sulphide related projects that you could potentially get some grant money to work on, and you don't know enough about anything else. So you start churning out grant proposals on the subject of tantalum sulphide, starting with "Tantalum sulphide has potential applications in X, Y and Z".
I feel like this is kinda the case for at least half the academic scientists I've met; they're stuck in some kind of dead end doing research that they're not especially interested in but they suspect they can get funding for.
Yeah, the 'just stuck with it cause it's what your advisor had to bequeath upon you' is why I left the academic research field. Naively I originally went in with some very specific research goals and did so specifically because i felt that basically no one was working on them. I eventually found that the fact no one is working on them already, means it's nearly impossible to start doing so within the framework of academia and grant funding. You need to either be rich enough to self-fund, or you need to be fortunate enough to discover an exciting breakthrough in your target direction while still a student--and even then it probably needs to be monetizable to work out. Even if you do one of those it's incredibly hard to start down the new direction in academia because there may only be a handful of people in the world who'd be a good fit as your advisor and there's no guarantee you'll be accepted to (or want to attend) their particular institution.
> As for government science not being worth a damn, that may be true now, as I'm told the funding process has become completely corrupted. However, there are notable successes like GPS and the internet
I'm old enough to remember the exact same criticisms of government science being made decades ago, while those things were being researched. I don't think there's been any major change, just that as always the kind of research that governments fund is unsexy and far from the point at which it would be implemented, so it will only seem valuable in retrospect
My pat answer to the flying car question has always been "We do have flying cars, they're called helicopters".
Now of course there's a good reason why most of us don't own helicopters; they burn a lot of fuel, they need a lot of expensive maintenance, they require a lot of specialised training in order to fly them, they're relatively dangerous, and they're so loud that you're not allowed to land them in most places. But all of those problems are intrinsic to hovering transport, so they preclude your "flying car".
An electric automatically-piloted quad-copter might be able to solve most of these problems to some extent, though.
hard disagree. couple hours in a sport plane and in a robinson helicopter plus regular motorcycle rider. of the three, plane is the easiest and motorcycle the most difficult/dangerous. yes, crashing the plane would be bad, but after takeoff and setting it on course and trimming it out, i could practically take my hands off the controls (extremely low-inertia 600 pound aircraft with no autopilot or stabilization aids). it’s much less stressful as you’re making fewer inputs/decisions per minute. as far as take off/landing, the amount of computing power in the average tesla is probably more than enough to automate those anyway esp if vtol as the book proposed.
On the other hand the first time a flying car breaks down and crashes on a kindergarten there will be no end of additional restrictions made about who can fly what over built up areas.
"I've never heard anyone before say government-funded science was bad for science!" Ayn Rand has a character in Atlas Shrugged who was a brilliant physicist before he got government funding. Then he became useless. Nassim Taleb (I forget which of his books) also argues that many of our significant inventions came from outside academia.
Funny that the author uses a machine learning analogy, as ML is definitely an invention of academics.
I don't think it would be hard to imagine that there is an *appropriate* level of government funding for science and technology, and that at levels that are too high it sucks all the oxygen out of the room -- and then all your science becomes done in the way you do government science, which is not the only way to do it, and in some cases not the best way to do it.
It is certainly the case that 50 years ago there was a far more vigorous realm of basic R&D outside of government and academia. You think of Bell Labs, or IBM Yorktown, Xerox PARC, Exxon Annandale -- these places attracted absolutely first-rank talent, and in their day invented amazing technologies. But corporate R&D has been eviscerated, and it isn't *completely* out of the question that part of that is the entry of the 800lb gorilla -- government funded academic research. From the point of view of the new researcher, there is much to like (initially) about the government/academic model: you don't have nearly as strong a deadline/results pressure, and you can often work on more abstract problems. And for some areas of research that is an excellent shift. But...there are areas of research where a certain amount of bottom-line and practical focus *is* good for results, and can even be quite beneficial to the individual, in the era when it was possible to become very handsomely rewarded by commerce.
I think there's a very active world of corporate research in all things programming, which we can thank for many of the brilliant innovations in computational stuff. And to some extent we still see that in biology, although perhaps not as much as we'd hoped 30 years ago -- biotech is still very tightly tied to academia, and the bio giants (e.g. Big Pharma) don't seem to be *expanding* their basic R&D programs. If Utopia could be created just by brilliant programming, we'd be in good shape, but alas it needs progress in things made of actual stuff also.
I'm pretty sure a lot of private research money is gone because companies realized that groundbreaking research usually doesn't actually pay off for the company funding it.
Leo Szilard wrote a short story in *Voice of the Dolphins* where a character predicted pathologies of government-funded science. IIRC it was written in the late 40s or early 50s, satirical in the form of proposing NSF-style funding as a way of sabotaging an enemy society.
(Szilard was a physicist who proposed the fission chain reaction in the 30s and clashed with Manhattan Project management.)
I've heard it before. A number of years ago, at a SENS research conference on aging, I met a very smart person whose name escapes me (an associate of current MIRI researcher Eliezer Yudkowsky) who brought it up during a Q&A. Most of the attendees get their paychecks via grant money, so his question was laughed off, but at the time I recognized his name and went to talk to him about it afterwards. We had an interesting discussion about historical funding of research, but most of it was about pre-industrial-revolution research, where discoveries in the sciences were mostly made by rich or patronized-by-rich people, and no one had even considered having government funding go through a bureaucracy to be distributed to researchers.
It's not that nothing is ever discovered when research is funded via government grants, but it shouldn't be a surprise to anyone that most of the funding goes to things which enhance the prestige of the researchers of yesteryear who are now in charge, rather than to people who intend to make progress in completely new directions. I'm not sure I'd attribute ML primarily to academics either -- the math of it is a very old idea. Rather, computers have recently reached a point in all their capacity types where the technique is useful. Academics working in ML are primarily "just" fiddling with the variables of layer counts and compounding systems to get new results. It's important work, but not fundamentally a new paradigm.
I don't have a particularly strong opinion for/against public research funding, but I think dismissing the idea without a lot more discussion is exactly the kind of failure that "Where's My Flying Car?" argues we already suffered from. Similarly labeling the idea "libertarian", and especially associating it with Ayn Rand, makes it more political, and therefore more controversial, than it needs to be and wrongly biases a lot of people against the idea. We'd need much better evidence that the bureaucratic funding model benefits ML research more than a private funding model would before I'd find its use in the review ironic.
"I'm not sure I'd attribute ML primarily to academics either--the math of it is a very old idea. Rather computers have recently reached a point in all their capacity types where the technique is useful."
Those facts are correct, but the old math was originally invented by academics as well. So it is certainly attributable primarily to academics.
As a side note on this idea of government funding in historic periods...The idea of trying to separate 'the wealthy' such as the random Lord, Duke, Earl, and sundry types of lesser nobleman from the idea of government is invalid in my view when talking about that era.
The rich and powerful people funding or lending their power to various researchers, artists, and merchants in the enlightenment era were either directly members of the government through their aristocratic positions or were themselves strongly beholden to support from such nobles.
The idea of trying to say this funding wasn't run through large government bureaucracies in the 1700s doesn't make sense, as such structures by and large did not exist at the time, with the major government actions being tax collection, the military, and running a fairly simple court system to arbitrate disputes and process criminal charges. I don't think there was an NIH- or NAS-type body at the time; the main equivalents would have been the nascent university sector and the not-so-private actions of nobles with governmental authority to individually fund whomever caught their interest.
It sounds like a coloured view of history seen through a modern lens which ignores how people lived at that time. I'm obviously reading into that position from only a few words, but it sounds fairly anachronistic to me.
You're understating the case for ML. The fact is that there have been major advancements in ML. GANs, Transformers, reinforcement learning, just really large networks, RNNs, etc... are all serious innovations in ML. And most of them have come out of private labs (in fact most of them have come from FB and Google...).
I was about to say the same thing. Arguments against government funded/controlled science go back to the 1950s.
And the thing is - they kinda have a point. Instead of coming up with new and innovative methods, we end up taking years to publish papers that are mostly forgotten and ignored.
It also freezes in the horrible bachelor-master-PhD requirement. Let's get real: I can train anyone to be a good experimental biologist in about 100 hours or less. 12 years' effective apprenticeship? Get. Real.
I've yet to get to the point in my PhD at which they sit me down and tell me how to think independently; I personally think you just have to learn from experience.
> And the thing is - they kinda have a point. Instead of coming up with new and innovative methods, we end up taking years to publish papers that are mostly forgotten and ignored.
Would we suddenly get a lot better at coming up with new and innovative methods if government funding were removed, though?
> Let's get real: I can train anyone to be a good experimental biologist in about a 100 hours or less
You can train someone to do biology experiments, yes; the part that requires an expert is understanding which experiments are worth doing.
(Or, less pithily, the job of the expert is planning out a full research program that will advance a particular sub-sub-field of biology, assembling a team of 100-hour doofuses aka PhD students to actually do it, and then properly communicating these results to the other experts in the field.)
This depends a lot on how government funding works. In the US and UK, yes. In most remaining Europe, no. It is part of the job, but by far not the main part.
In most universities in Germany or France, you can get by without raising any external money at all. You will have less PhD students, and you won't be a superstar, but even superstars do not spend the main chunk of their time allocating money.
This does have downsides. Whether it is overall a good or a bad thing, this is really complicated.
"In most universities in Germany or France, you can get by without raising any external money at all. You will have less PhD students, and you won't be a superstar, but even superstars do not spend the main chunk of their time allocating money."
In France, the current annual funding of a researcher is about 2000 euros. It is almost impossible to do anything without grants, at least for experimentalists.
Well, in principle you can do it in the US once you have tenure, but...you'll never be promoted, and you'll get assigned to all kinds of painful committees, get shit teaching assignments, that kind of thing. The university really loves its overhead :(
In fact, that's the corruption I'd most like to see rooted out of the system. When research overhead can make up twice as much of the university budget as tuition, the incentives are pretty screwy.
If it weren't for the 12 years of apprenticeship AND that said process effectively locks you into researching some specific side project of your advisor's specialty, biology research would be my current occupation. The current paradigm is actively turning away people who want to go in directions that the fewest people were interested in historically, and that's inherently going to bias results against real breakthroughs.
"A survey and analysis performed by the OECD in 2005 found, to their surprise, that while private R&D had a positive 0.26 correlation with economic growth, government funded R&D had a negative 0.37 correlation!”
It seems wildly unlikely to me that this strong negative correlation is causal. Has the -0.37 correlation by any chance been calculated by mixing developed countries (high public R&D, low economic growth) and developing countries (low public R&D, high economic growth)?
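If so, that would be a textbook Simpson's paradox. A quick sketch with invented numbers (not the OECD data) shows how the pooled correlation can come out strongly negative even when the within-group relationship is positive:

```python
# Toy Simpson's paradox illustration with invented numbers:
# within each country group, public R&D share and growth move together,
# but pooling the two groups flips the sign of the correlation.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical developed countries: high public R&D, low growth
dev_rd, dev_growth = [2.0, 2.2, 2.4, 2.6], [1.5, 1.6, 1.7, 1.8]
# Hypothetical developing countries: low public R&D, high growth
emg_rd, emg_growth = [0.5, 0.6, 0.7, 0.8], [5.0, 5.2, 5.4, 5.6]

within_dev = pearson(dev_rd, dev_growth)  # positive within the group
within_emg = pearson(emg_rd, emg_growth)  # positive within the group
pooled = pearson(dev_rd + emg_rd, dev_growth + emg_growth)  # strongly negative
```

The between-group difference dominates the pooled numbers, so the sign of the pooled correlation says nothing about the within-group (let alone causal) relationship.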
I find causality believable. When I left my job as a bioinformatician, I had in mind to do a bioinformatics start-up. I eventually gave that idea up, because nearly all of the bioinformatics software used in the US is developed using government grants, and made available for free. It's difficult to find anything in the bioinformatics space where you don't have to compete with people who are getting paid by government grants and giving away their software. But that free software isn't easy to use, and usually isn't supported or maintained.
It seems to me that your observation suggests that government money makes producing a well-maintained product more difficult, but it really doesn't seem obvious to me that well-maintained software is a key to innovation.
Firstly, if you use economic growth correlations and/or practical applications as measures for how well science is being done, something is wrong - especially in what concerns fundamental research.
What should be analysed is how public funding criteria have often created an environment where risky projects are disincentivised, especially as more and more people got PhDs and exhausted what little slack existed for trying wacky stuff without fear of being outcompeted by conventional incremental stuff.
Anyway, since private companies will do R&D in search of profit regardless, it seems logical that public funding should go primarily to that research with no obvious profit routes. The question is how that funding can be more efficiently allocated in today's highly competitive academic world.
If you haven't ever wondered whether government funding of research was bad for science, you haven't done research with government funds.
I have so many horrible stories that I don't know where to begin. But let's start with Congress. Congress understands that we need basic research, but we also need that research to lead to profitable businesses, and to solve urgent technological problems. So, in a lot of basic research funding, they've decided to kill 3 birds with one stone by requiring that basic research must also be applied research, and must be the basis for a profitable start-up business. In the SBIR program, which I'm most-familiar with, your grant proposal must explain how it is basic research, how it is applied research, and what clients you have interested in the product that will come out of this "basic research". If you get to Phase 2, which you must get to at least 1/4 of the time in order for SBIR grants to be profitable, your Phase 2 grant proposal must have a client lined up and committed to provide part of your funding. To get to Phase 2, you must focus on your business plan, and spend about 90% of your budget on producing a cool-looking Phase 1 demo.
The result is that few SBIR grants do much basic research. In some years, I've looked at most of the unclassified "research" grants offered by the US government, and concluded that the only actual basic research being done by any government agency was by DARPA. DARPA is the very best agency we have for doing basic research.
When I submitted, won, and ran a DARPA contract, the COTR (Contracting Officer's Technical Representative) in charge of my $100,000 project was also managing a project which, if I recall, had a budget of $100 million, so that it was literally not worth his time to read any of the reports I wrote, or to answer any of my emails or phone calls. He had some underlings inform me of this. My team was one of 5 or 6 teams awarded a Phase 1 contract. Near the end of the contract, after I'd worked 2 months of 12-hour days and weekends preparing the final report and demo, I got an email telling me not to bother completing it, because the COTR had already decided who he wanted to award the Phase 2 grant to. It was given to the only team which appeared, from its slide presentations, not to have produced anything other than slide presentations.
That was one of the most-successful government research projects I ever worked on--the software I developed did eventually make the company a lot of money--for the simple reason that, although no one was interested in the results, at least no one on the project wanted it to fail.
Contrast this with the NASA/FAA grants I worked on. Back around 1970, Congress ordered the FAA to use NASA engineers for airspace research projects, in order to avoid suddenly firing all those engineers after the moon landing. So air transportation research projects were managed by NASA, and carried out by government contractors (so those NASA engineers got fired anyway). NASA was monitored by the FAA, which saw NASA as a bunch of pointy-headed nerds with no practical experience who should stop telling them what to do. Plus the FAA didn't really want to automate air traffic control, because while it would save lives and a lot of fuel, it would put FAA employees out of work.
Or contrast it with the government-funded bioinformatics work I did for a genome research institute, where I was supposed to automate the work of the genome annotators, who were supposed to help me with the program and approve it when it was ready, after which they would be fired. That worked about as well as you'd expect it to.
The most-successful project I ever ran was a NASA project. As often happens, the original COTR who wrote up the project solicitation had been rotated out before the project entered Phase 2, and no one else in NASA or the FAA had the slightest interest in the project. Even the new COTR, whose job is to ensure that I carry out the work approved in the contract rather than repurposing it to some other objective, encouraged me to repurpose it to some other objective that someone actually cared about. So I did; and that project made the company a lot of money, and saved NASA $40 million, though it will probably never be used for its intended purpose (to automate air traffic control).
But we haven't even begun to talk about the main reasons government research grants waste money. One is that government funding centralizes funding, so for every agency there's somebody in a room in Washington DC who's responsible for $10 billion of grant funding every year. They'd much rather manage ten $1 billion projects than ten-thousand $1 million projects, even though the 10,000 $1 million projects would be much, much, MUCH more efficient.
(I once read an NIH blog post describing a study of the relative efficiency of NIH research grants. They compared the output of grants of between $1 million and $20 million by counting the number of research papers produced per project. They concluded, IIRC, that projects costing more than $10 million were almost twice as productive as projects costing less than $5 million. But they forgot to divide project output by project cost. The blog post summarizing the report was written by the director of an Institute, managing hundreds of millions of dollars worth of grants yearly, who DID NOT KNOW YOU NEED TO DIVIDE OUTPUT BY COST to compute productivity, because Institutes aren't incentivized to check on overall monetary efficiency.)
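To make the missing step concrete, here's a toy comparison with invented numbers (not the actual NIH figures): a big grant can win on papers per project while losing badly on papers per dollar:

```python
# Hypothetical grant numbers (not from the actual NIH study) showing why
# output must be divided by cost before comparing productivity.

big_cost, big_papers = 15_000_000, 40      # one large project
small_cost, small_papers = 2_000_000, 25   # one small project

# Per-project output: the metric the blog post used. The big grant looks better.
# Per-dollar output: the metric that actually measures productivity.
big_per_million = big_papers / (big_cost / 1_000_000)      # ~2.7 papers per $M
small_per_million = small_papers / (small_cost / 1_000_000)  # 12.5 papers per $M
```

With these numbers the big project "wins" 40 papers to 25, but the small project produces nearly five times as many papers per dollar spent.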
Most of the money in government contracting goes to pay for reputation. Big agencies prefer to award contracts of $50 million and up, because otherwise they have too many projects to manage. So some COTR has to award a $100 million contract. She's gonna award it to some big company, like Raytheon or IBM, because most big contracts fail, and she won't get fired if IBM fails, but she will get fired if Name-You-Don't-Recognize fails. BigCorp will try to line up a bunch of subcontractors to do the work. The competing subcontractors each line up famous experts who claim they'll support the work.
So with a big contract, you end up with a hierarchy, with BigCorp at the top, subcontracting corporations below them, and PIs at the bottom who manage the productive work. BigCorp's job is to vouch for the reliability of the subcontractors, for which they take about a 50% cut. Each subcontractor's job is mostly to vouch for the reliability of the PI, for which they take a more than 50% cut. The famous experts might get a 10-20% cut of the remaining money, usually do approximately nothing, and are being paid for the use of their names, like the famous people on a company's Board of Directors.
None of this is irrational, given the premise that projects must be big. Big projects fail so often that taking a 50% cut at each reputation level in order to put some credible reputation on the line is worth it.
Do the math, and you'll see that little of the money on big projects is left to do work. Whereas with small projects, there's more money left to do work, but most of it is put into making a cool demo (think the MIT Media Lab), and (rough guess) 90% of small government projects are killed or thrown away without anybody ever using them, either because they threatened someone's job, or because nobody really wanted them in the first place.
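Doing that math explicitly, using the ballpark cuts described above (these percentages are the comment's rough figures, not audited data):

```python
# Rough sketch of how reputation cuts eat a big contract.
# All percentages are ballpark figures from the comment, not audited data.

budget = 100_000_000                   # $100M prime contract
after_prime = budget * 0.50            # BigCorp takes ~50%
after_sub = after_prime * 0.50         # subcontractor takes ~50% of the rest
after_names = after_sub * (1 - 0.15)   # famous experts take ~10-20% (use 15%)

fraction_for_work = after_names / budget  # roughly a fifth reaches the PIs
```

Under these assumptions only about $21M of the $100M is left to fund the people doing the productive work.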
This was excellent. And a bit of a shame it's buried in the substack comments. I know at some point in future I'll fruitlessly look for "That ACX comment on science funding stuff".
This needs one or another of Scott's epistemic status notes, but anyway, here's a story for you: One of the "A.I. Winter" events happened after a period in which the Department of Defense had decided that Artificial Intelligence was a priority, but then professors at America's most venerable universities had perfected the art of getting DoD grant money by laying out mumbo-jumbo that promised the moon and delivered nothing. Somebody at one of the big private research labs (probably Bell Labs or IBM) eventually collected some convincing enough evidence that this shitshow was what was going on. DoD responded by just turning off the money spigot. Then for a decade or so everything in AI research got very quiet - though some smart people may have been laying the groundwork for the more productive research directions that eventually emerged later.
As one of those 1980s AI researchers, I think all of the AI researchers I knew were sincere, though a few had crackpot theories. Mostly, symbolic AI never worked out as well as people had hoped. Expert systems were hard to deploy because they were by definition designed to replace experts, and most fields with "experts" have regulations about who is and is not an expert. That turns out not to include computer programs.
The DoD had its own problem with AI. I won't claim this was endemic, but it did happen more than once: The military wanted automated "red teams" for training exercises, and would periodically award grants to people who had state-of-the-art (symbolic) AI systems to control the red team. Then the product would be deployed, and the red team had to be programmed by some Spec 3 technician who /might/ know how to program computers, and that went poorly. Eventually the military would give up and look for something simpler. When they got the simpler thing, it was too simple to be a good red team, and someone would say "We should use more AI!", and the cycle would begin again.
The utter lunacy of this book quite aside, flying cars are around the corner and we have development of greentech to thank: electric cars beget better batteries that can do the job.
Agreed, as Tom explained (much better) below me "Energy density. We could have built flying cars off gasoline a long time ago". We seem to have read Matej's question differently. I took their question to mean "why do we need better batteries before they can be used for flying cars", not "why are batteries are better than gasoline".
Energy density. We could have built flying cars off gasoline a long time ago, but they would have been extremely polluting (helicopters get around two miles per gallon). Now, the distinction between a flying car, a helicopter, and an airplane is mostly just, like, branding, but basically nobody is going to release something called a "flying car" and not have it be emission-less today, so that means batteries.
Energy density matters because a flying car starts to run into a rocket equation kind of problem, where the more range you want, the more batteries you need to carry, which means a greater mass, which decreases your range. This means that more batteries doesn't really solve your problem. So to get a longer range flying car, you need higher energy density batteries.
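That feedback loop is easy to see in a toy range model; all parameters here (empty mass, pack specific energy, energy per km scaling with total mass) are invented for illustration:

```python
# Toy flying-car range model: energy use per km grows with total mass,
# so each extra kg of battery buys less range than the last. All numbers invented.

def range_km(batt_kg, spec_wh_per_kg=250, empty_kg=600, wh_per_km_per_kg=0.4):
    usable_wh = batt_kg * spec_wh_per_kg              # energy on board
    wh_per_km = wh_per_km_per_kg * (empty_kg + batt_kg)  # cost of moving total mass
    return usable_wh / wh_per_km

r300, r600, r900 = range_km(300), range_km(600), range_km(900)
# Marginal range shrinks: range can never exceed
# spec_wh_per_kg / wh_per_km_per_kg (625 km with these parameters).

doubled_density = range_km(600, spec_wh_per_kg=500)  # doubling Wh/kg doubles range
```

In this sketch going from 300 to 600 kg of battery adds about 104 km, but 600 to 900 kg adds only about 63 km, while doubling the specific energy doubles the range outright; that's why energy density, not pack size, is the binding constraint.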
Oh absolutely, the theoretical limits could be insanely better than we have now. But this is one of those "it's actually just really hard" problems.
For lithium ion, we're near the limit. Tesla's 4680 batteries are probably going to be within 30% of the theoretical limit for lithium ion chemistries. I don't know that number for sure but it's what I recall talking with battery experts I work with.
Solid state chemistries can theoretically do a lot better but there's a reason they're taking a very long time. The history of the battery industry is kind of like fusion power, always 10 years away or something. You should ignore all news that you hear about a "battery tech breakthrough" until it's in a consumer product that you could theoretically buy.
Side point - if we can work from home and in virtual reality and do not need to commute anywhere (for even 20-30% of the population)...do we need flying cars anymore? Might the technological innovation which allows us to work virtually reduce traffic and the need to live near our workplaces or have any kind of commute...might that innovation outpace the battery technology needed for flight? Flying with an electric car requires clearing quite a high bar of innovation and production which could still be 15+ years away from a cheap consumer model. While the battery technology for ground transport is essentially already or very soon to be a consumer grade product and getting better and cheaper over time.
It is sort of like the strange move toward 'smart thermostats', which are a short-term stopgap until we can build better houses, such as Passivhaus designs, which passively self-regulate: a small up-front investment reduces a building's running costs dramatically over its lifespan.
A flying car is cool and fun, but I don't see it being more than a toy, like an ATV or Jet Ski, for a very long time (in the 30+ year range) until a person at home might 'call for a flying car automated taxi' to take them somewhere -- unless that house is a large mansion, in which case they can just afford to hire a human helicopter pilot right now.
I would say that flying cars are finally becoming available today not just because of dramatically improved battery technology, but also because of dramatically improved control software (and the hardware to run it, of course).
I think they are both quite difficult. Sure, it's possible to design the optimal control system on paper; but we have only recently gained the power to actually implement one that is small enough to fit into a flying car, and fast/smart enough to be usable by the average person (as opposed to a trained helicopter pilot). Doing so required massive advances in computer hardware as well as machine learning.
Not really. The reason that the computer science problem appears to be being solved isn't that it was easy to do, but that computers are getting so fast and so parallel that we don't have to shrink away from brute forcing it so much. Really this goes for a lot of things. Most changes in software stacks today aren't to enable new capability but are instead to make the work of software developers slightly less tedious, and to allow less-smart people to successfully accomplish software development tasks, at the cost of program performance (since we have increasing performance to spare).
It's so aggravating. Like a level 1 engineering analysis would say "oh, renewables are intermittent? Well a large capacitor (battery) can solve that problem."
A level 2 analysis is like "well, do we have enough batteries?" The automotive industry is making incredible strides here. The problem is that all of those batteries are going to be used by the automotive industry. Vehicle-2-grid is not actually a good idea, people tend to want their car to be charged when they go to use it. Battery production MIGHT just scale extremely well now that demand is of no consequence, but it's just weird to see so many people (who are acting out of a very respectable abundance of caution over climate change) betting the future on the assumption that battery production will scale like semiconductors did. Do I hope that's going to happen? Absolutely, in fact my career is trying to make that happen.
What you or I "back" is irrelevant. Fission nuclear is too expensive and too slow and a huge expansion simply isn't going to happen. Internet debates are one thing, but the brute economics and politics are another. Nobody wants to deal with the many tricky problems associated with radioactive waste and you're not going to force them to. It's time to move on.
I'm strangely optimistic about fusion, which I think is a reasonable possibility within the decade. But fission is EOL.
D-T fusion still makes radioactive waste, although you have better control over what it is (there's a rather long list of elements you can't use in a wasteless fusion reactor because of (n,γ) or (n,p) reactions creating something with an annoying half-life - "annoying", here, being "too long to just put it in a pond for a couple of years, short enough to make significant radiation").
"Fission nuclear is too expensive and too slow and a huge expansion simply isn't going to happen."
Did you read the review at all? It had giant sections about just why fission is expensive and it had nothing to do with the underlying technology.
Also I was interested in fusion (especially after hearing about MIT's SPARC) but then I read a bit more about energy densities and... they're dismal. Back to fission.
It has to do with regulation forcing companies to pay the costs that they would otherwise externalize, which makes the projects uneconomic. The risk of the underlying technology is what makes it so expensive. The only thing the book has to add is radiation denialism, which is not super useful.
Batteries are well and good, but how are these things going to get around?
My limited experience with VTOL drones of appropriate size to carry humans makes me highly skeptical that we have technology to automate key system functions (primarily take-off and landing as well as collision avoidance) sufficiently to where people would be able to drive them with an equivalent level of training that we give drivers.
Landing on a ship (which is always moving) is much more difficult than landing on the ground (which doesn't move). This is like saying that we don't allow cars to dock with trucks in motion so we shouldn't allow drivers to be licensed at all.
hmm. the book’s framing would say that contemporary battery/energy storage tech is finally on the level of 40s-era petrol tech, thus limited flying cars with likely similar range/speed/cost as a pitcairn autogyro.
We shouldn't have flying cars until people are willing to require that only computers may fly them. There isn't enough airspace above a city for humans to avoid crashing into each other.
"The public is wrongly terrified of nuclear energy, but they shouldn’t be. Radiation killed 0 people at Fukushima"
You really lost me with the Fukushima minimization. As if deaths at the time of the incident are the only relevant concern. How much land exactly is contaminated forever?
Well, the worst places in the Fukushima area are giving off 90 mSv a year right now. That's about twice what we allow a radiation worker to be exposed to every year, and given that these limits are conservative, it's not clear that there's anything wrong with living there now. That's slightly smaller than the smallest yearly dose linked to cancer, but given that we let people smoke, have wood stoves in cities, etc., we should probably let people live in Fukushima right now. I wouldn't move there until levels are down to the 10 mSv/year found at Ramsar, where we know the people aren't getting more cancer than usual. But that covers most of the Fukushima exclusion zone.
That is no small thing. And who is to say the next nuclear disaster doesn't create a far worse contamination problem. For a looong long time. To not even mention that aspect of the issue in a blithe dismissal of concerns about nuclear energy is just crazy to me. This is a real pattern in the arguments of nuclear apologists I've noticed, and doesn't exactly inspire confidence.
I don't mean to come across as blithe. I wouldn't endorse someone smoking even 1 cigarette each day, eating bacon for breakfast every day, or other risks in roughly the same range. I wouldn't live in Fukushima right now myself. But it's still not a huge risk and if other people have more tolerance for risk than I do I think that they should be allowed to smoke or eat bacon or live in Fukushima. And it isn't forever. Different isotopes decay at different rates. One with a half-life of 1 year is roughly 10 times more radioactive than one with a half-life of 10 years so the most radioactive isotopes tend to decay fastest, though the least radioactive ones will be with us for a long time. Still, I think that in a few decades when the worst parts of Fukushima are merely as radioactive as Denver it will be unreasonable to worry about the radiation levels and at that point I will actually move my attitude to blithe dismissal.
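That rule of thumb follows from the activity formula A = N * ln(2) / t_half: for a fixed number of atoms, activity is inversely proportional to half-life. A quick sketch:

```python
import math

# Activity (decays per unit time) of N atoms with a given half-life:
# A = N * ln(2) / t_half. Shorter half-life means proportionally "hotter".

def activity(n_atoms, half_life_years):
    return n_atoms * math.log(2) / half_life_years

n = 1e20  # same number of atoms of each hypothetical isotope
ratio = activity(n, 1) / activity(n, 10)  # 1-year vs 10-year half-life
# ratio == 10: the short-lived isotope is ten times more radioactive,
# but it also burns itself out ten times faster.
```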
Yes, I meant the author was being blithe (I assume that is not you). Your clarification was quite helpful, thanks. My use of "forever" was of course not literal, but along the lines of "ruined for human use for decades/centuries"
Yes, that's certainly a serious cost to the accident. But it's also the case that many other kinds of power generation have even higher environmental costs, on average. For my part I'd say that once we finish getting rid of every fossil fuel power plant we should start getting rid of fission but I'd hold off on it until then and I wish we'd built more in the past before solar became cheap.
I'm certainly willing to entertain a cost/benefit comparison of nuclear with other options. That is why I find it frustrating when I am presented with what seems to be a deliberately incomplete or even misleading one such as a we find in this review.
The fact that you are calling people apologists is not a sign of good-faith intent to weigh the risks here. All electricity generation comes with risks. Solar panel manufacturing involves toxic chemicals which have a half-life of infinity years (non-radioactive chemicals don't decay). The silicon sand refinement process releases tiny silica particulates into the air, which leads to silicosis, responsible for tens of thousands of deaths a year. More people will die this year falling off a rooftop installing solar panels than will die building nuclear power plants. Are you an outspoken advocate against solar panel manufacturing?
You shouldn't be. Energy, on net, saves lives and improves global welfare. Why would anybody just be a nuclear apologist? Have you considered that so-called nuclear apologists genuinely believe that it's the safest form of energy generation?
Pretending that day-of-meltdown casualties are the only relevant metric of nuclear power risk is ridiculous, to the point that anyone who employs such a metric can fairly, and in good faith, be called an apologist.
I think a lot hinges on you saying the phrase "only relevant metric." Those are words that you have attributed to the author, words the author did not write, yet you are using them to cast aspersions on his intentions and on everyone else who may genuinely think this is a good policy decision. I urge you to have greater patience and a more open mind.
That is the only metric he or she uses, to the exclusion of all the other relevant metrics I mentioned. It was their decision to make that strange choice, not mine just because I pointed it out.
To be uncharitably pedantic, exactly zero land is contaminated "forever" - the radioactive contaminants will, eventually, decay to undetectable levels.
But if you're asking the more reasonable question of "how much land is contaminated to the point of uselessness on a timescale longer than a couple decades"...well, it honestly doesn't seem like there's very much at all outside of the plant itself. The exclusion zone, such as it is, has pretty much shrunk to a couple towns in the immediate vicinity of the plant, and that's *with* the Japanese government's conservative-bordering-on-paranoid safety regulations.
It's also important to note the discharge of contaminated wastewater into the ocean adjacent to Fukushima, which can concentrate in sea animals, including food animals.
The question is tritium, which is constantly created by cosmic rays in the upper atmosphere, has a half-life of 12 years, and is NOT concentrated by sea animals.
Umm, no: A study published in the journal Science in August 2020 found traces of several other radioactive isotopes in the Fukushima wastewater, many of which take much longer to decay than tritium.
Some of that radioactive material may have already made its way into local wildlife; In February, Japanese media reported that shipments of rockfish were halted after a sample caught near Fukushima was found to contain unsafe levels of radioactive cesium.
Assuming you're talking about this study (https://science.sciencemag.org/content/sci/369/6504/621.full.pdf), the amount of other radioisotopes is utterly negligible. There's 500,000 Bq/liter of tritium radioactivity and around 10 Bq/liter for the other isotopes. For comparison, your body contains 8000 Bq of radioactivity, or about 100 Bq/liter. That is, if you took the tritium out of the wastewater and drank the rest, you'd probably lower the average radioactivity of your body.
As an aside, the 500,000 Bq/liter of tritium becomes 100 Bq/liter if you dilute it by a factor of 5000. Dumping it into the ocean is a good way to do that.
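A quick back-of-envelope sketch of those numbers, taking the comment's figures at face value (I haven't re-derived them from the paper, and the 80-liter body volume is my own rough assumption):

```python
# Figures quoted in the comments above, not independently verified.
tritium_bq_per_l = 500_000   # tritium activity in the untreated wastewater
other_bq_per_l = 10          # all other radioisotopes combined, roughly
body_bq = 8_000              # total activity of a typical human body
body_liters = 80             # assumed rough body volume, giving ~100 Bq/liter

body_bq_per_l = body_bq / body_liters
dilution_needed = tritium_bq_per_l / body_bq_per_l

print(body_bq_per_l)     # 100.0 Bq/liter, matching the comment's figure
print(dilution_needed)   # 5000.0, i.e. dilute 5000x to reach body-level activity
```

The non-tritium isotopes, at ~10 Bq/liter, are indeed an order of magnitude below the body's own activity density.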
I don't want to dismiss the concern of waste leakage into the ocean...but it's not established that the waste leakage into the ocean is going to cause significant ecological damage. I know that may sound like a crazy claim to make, but radiation is extremely not intuitive.
Most scientists expected Pripyat (outside Chernobyl) to be some kind of toxic, uninhabitable wasteland for a hundred years. Instead, wildlife there is thriving beyond anybody's expectation. Vegetation growth even exceeds what we would expect if there was no meltdown. It's quite possible that Chernobyl-like radiation levels are harmful to large mammals like humans but good for smaller life forms like bacteria and plants. Nature is antifragile.
Now that is absolutely not to say that we should just go dumping nuclear waste into the ocean because it might be good for it, but it's not obviously a catastrophe.
I don't think so. Even with a standard "linear no threshold" model of radiation damage, the chance that wildlife could notice the difference is ~zero; I expect a flourishing of plants and wildlife purely as a result of the missing humans.
Specifically, it looks like a typical radiation exposure in Pripyat is 0.7 µSv/hr or 6 mSv/year as of 2009 (without any attempt at environmental cleanup AFAIK). For comparison, "In Europe, average natural background exposure by country ranges from under 2mSv annually in the United Kingdom to more than 7mSv annually in Finland." http://www.chernobylgallery.com/chernobyl-disaster/radiation-levels/
I'm sure human scientists have methods sensitive enough to notice a tiny increase in cancer rates from exposures substantially below 100 mSv, but wildlife is more concerned with how to find its next meal. (Btw, this figure of 7 mSv is the highest I've ever heard and, if true, ought to make Finland a good place to do a study on radiation risks.)
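For anyone checking the unit conversion, 0.7 µSv/hr does come out to roughly 6 mSv/year (a quick sketch, ignoring leap years):

```python
hours_per_year = 24 * 365
pripyat_uSv_per_hr = 0.7  # typical exposure figure quoted for Pripyat, 2009

# Convert microsieverts per hour to millisieverts per year.
annual_mSv = pripyat_uSv_per_hr * hours_per_year / 1000
print(round(annual_mSv, 1))  # ~6.1 mSv/year, within Europe's ~2-7 mSv natural range
```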
Regarding your link: beware the man of one study, but especially beware the study by one man! I don't know how best to read the radiation-risk debate, but it's clear there *is* a debate, i.e. evidence for the effects of low-dose radiation seems inconclusive (apart from universal agreement that the effect, whatever it is, is small), and making this especially hard is the near-impossibility of finding healthy human subjects exposed to intermediate levels of radiation (10 to 100 mSv per year). An illustrative paragraph from a study shows this:
"Most of the previous ecological studies investigating associations between childhood leukaemia and naturally occurring sources of ionising radiation have found positive associations for radon (13-15) while for gamma radiation and cosmic rays results have been inconsistent (16-22). Early case-control studies of the association between natural sources of radiation and childhood leukaemia were underpowered and have reported mixed results (23-26). The largest of these, the UK Childhood Cancer Study, included over 2000 cases of childhood cancer and reported weak evidence of a negative association between childhood leukaemia and measured radon concentrations (25) but no evidence of an association with measured gamma dose rates (26). However, the proportion of eligible subjects participating in the measurements was low and varied by socio-economic status. Because exposure to these sources is ubiquitous and variation in cumulative doses received by children of similar age are small, large sample sizes are needed to detect the small predicted risk. Given the rarity of childhood cancer, the only way to achieve such sample sizes is by combining data over long periods of systematic cancer registration." https://boris.unibe.ch/135621/15/Mazzei_JRadiolProt_2019_AAM.pdf
In the same paragraph we see childhood leukaemia associated positively and (in the largest study) negatively with radiation from radon. Also mentioned is the difficulty of measuring the effect (it's especially difficult because cancer is normally caused by something other than radiation, so the effect we're looking for could easily be swamped by other things that affect cancer rates.)
In general this is the problem with the "precautionary principle": if something is banned for being potentially harmful, it becomes extremely difficult to clearly show that it IS harmful.
My understanding of the effect of the relatively low dose of radiation outside Chernobyl on plants and animals is that they are thriving despite clear negative effects of radiation (increased frequency of abnormalities, decreased growth rings in trees, etc.), because the absence of human pressure more than compensates for the effect of radiation. But given our current knowledge, it seems to me extremely unlikely that radiation is good for any known living organism.
"But given our current knowledge, it seems to me extremely unlikely that radiation is good for any known living organism." -- A study here suggests that some amount of radiation isn't just good, it's required: https://www.pbs.org/wgbh/nova/article/life-without-radiation/
At some point, you have to say what form of radiation you're talking about. You know that sunlight is radiation, right? I assume you agree that sunlight is good for life.
Radio waves are radiation. If you're using wireless internet, you're using radiation. Sound is radiation.
As with anything else in science, numbers matter, and what amount of radiation you're talking about is crucial. But it's clear that the idea that all radiation in any amount is bad for life is just ignorant.
"At some point, you have to say what form of radiation you're talking about. "
I thought it was obvious that when I said "the relatively low dose of radiation outside Chernobyl" I meant α and β radiation due to radioactive contamination, not sunlight!
This certainly does not demonstrate that (hard!) radiation is necessary. The only well-established result is that if you hit bacteria with a high dose of radiation, they survive better if they had previous exposure to a low dose. That is a far cry from showing that some (hard!) radiation is necessary for life. α particles really do a lot of damage to complex molecules; it seems extremely unlikely that this damage can become necessary.
" But it's clear that the idea that all radiation in any amount is bad for life is just ignorant." Obviously true if you include lower energy radiation. But I maintain that "given our current knowledge, it seems to me extremely unlikely that (hard!) radiation is good for any known living organism".
I didn't know if you meant just alpha and beta, or also gamma radiation.
I think the Nova article says (or implies) that the scientists restricted cosmic rays and maybe natural radiation from heavy isotopes. Cosmic rays are protons and other nuclei (like an alpha particle, I guess); I don't know what the other consists of, but probably neutrons, and alpha and beta particles.
I was wrong to say this radiation is necessary for life, but the experiment suggests that hard radiation is beneficial to life. That contradicts your statement that radiation is not good for any known living organism. Am I missing something here?
Aside: I believe that radiation in some form, probably electromagnetic, is required for all life, because we need it to get mutations in DNA, and we need mutations in DNA to get evolution. It could be that DNA polymerase could've evolved to be more error-prone to make up for less radiation.
Whoa, so the ecological impact of the worst civilian nuclear disaster in history is exceeded by the prior ecological impact of humans simply living there? That definitely lowers my estimate of the ecological consequences of civilian nuclear power.
I am not sure. The impact of humans "simply living there" on plants and animals is huge! Imagine an alien civilization landing on Earth and colonizing it for their own purposes, without noticing or caring about us; the impact on us would probably be extremely high.
The findings are gradually decreasing (it's still only 45 years ago), and it was probably just bad luck that it is only 1,400 km away (800 miles might sound close to US citizens, but for Europeans there are four countries in between).
Well, none of it is contaminated "forever," but the current restrictions on residence apparently apply to about 371 km^2. You may want to bear in mind that industrial pollution routinely affects much larger swathes of living area, and routinely causes far more deaths. For example, China's Huai River policy encouraging the use of coal north of the river has been argued to cut five years of life expectancy off the 500 million people who live there.
Industry pollutes, and if we choose to live technological lives we always run health risks (particularly when industrial plant is combined with things like earthquakes and tsunamis). It's not possible to run zero risks without living in caves and hunting antelope with spears. So perhaps the focus should be on *relative* risks, and trade-offs. Being blindingly Pollyanna-enthusiastic about nuclear power is dumb. But so is being "terrified."
I thought it was clear that I wasn't using "forever" literally. As I said elsewhere, I'm certainly willing to entertain a cost/benefit comparison of nuclear with other options. That is why I find it frustrating when I am presented with what seems to be a deliberately incomplete or even misleading one, such as the one we find in this review.
My critique was not oversimplified. I just used a term loosely in such a way that the meaning was still obvious. Neither did the author oversimplify. They lied by omission. So maybe don't misrepresent the whole exchange?
I don't agree that your oversimplification was obviously not literal, while the author choosing deaths as the comparison criterion is a lie. If you're not willing to be charitable to the author, you shouldn't expect others to be charitable to you.
Really? You thought that by "forever" I meant that when the sun turns into a black hole Fukushima will still be irradiated? Come on. That could not be more different than choosing a deeply misleading metric.
I had the same reaction to the Chernobyl minimization. If only 43 people had died, that'd be one thing, but glossing over the health impacts of the disaster on hundreds, if not thousands, of other people seems dishonest to me. I'm pro-nuclear power, and while I think when it fails it fails due to human error and not scientific error, the costs of failure really are very, very high.
You really lost me with the "forever" bit. Yes, yes, you didn't mean it literally. But either this is irrational blind fear, or it's a fundamentally quantitative problem. If it's irrational blind fear, then there's nothing for it but to try and point the stupid people somewhere else. If it's a fundamentally quantitative problem, then you don't address it by saying "how much?" and then introducing a spurious infinity term for rhetorical effect.
About 300 square kilometers near Fukushima have been turned into a de facto nature preserve for the next few decades. Turns out nuclear-power-plant levels of radioactive contamination are a pretty good deal for most animals, because it poses relatively low risk over their natural lifespan but is enough to drive off their chief natural predator and habitat-paver-over.
An analysis I did on SSC or DSL a year or so ago, suggests that if the human race generated 100% of its electric power using nuclear power plants built to 20th-century standards (but no new atom bomb factories), the rotating nuclear nature preserve would at any point be equivalent to IIRC Macedonia or Haiti in size, with individual zones rotating in and out every fifty years or so. Or, if you prefer human habitats to the wildlife sort, you can build your power plants to more modern designs and cut that down.
Compare and contrast to the amount of land we'd have to turn into e.g. solar farms, where the sun never shines and nothing grows. No, it's not enough to just put solar panels on the roofs of existing buildings, and we're not even going to talk about solar roads.
I'm open to the possibility that concerns about radiation from meltdowns can be addressed. But the author didn't even acknowledge them, let alone make a case as you have here.
If somebody proposes to do something useful, like providing abundant cheap clean energy, then the burden of making the case ought to fall on the person saying "no you're not allowed to do that because it's too dangerous". You stepped up to the plate and completely failed to make that case, offering only a rhetorical question including an objectively false assumption.
And that's pretty much par for the course in this business. Lots of people believe that they know that everybody knows that nuclear power is "unacceptably dangerous" and that simply alluding to that "fact" is a slam-dunk win. Almost nobody actually makes the case.
I think the real reason for the turn against nuclear was that the public was used to thinking of fallout in terms of being downwind of thermonuclear groundbursts, where it meant dying puking your guts out in hours or days, rather than a theoretical increase in your risk of cancer. Order-of-magnitude comparisons are made hard by the fact that radiation is invisible. I wrote about the matter [on my blog](http://hopefullyintersting.blogspot.com/2019/06/sometimes-you-need-new-word.html) at more length. I already see a change of attitude as the generation who grew up in the shadow of the mushroom cloud dies off and those of us who grew up with reactor meltdowns as our image of fallout take their place, so I'm optimistic about that aspect of the future.
At the core of The Green Religion is something I call the environmentalist's habitat paradox: if you really like the environment, a natural first-order desire would be to live in a cottage deeply secluded in nature, far from civilization. But this is either unscalable (and therefore antisocial) as you cannot allow too many others to indulge in the same lifestyle, or you DO proselytize this lifestyle and it becomes environmentally catastrophic. The paradox is that if you love the environment (as in truly want to protect it), you must live in a city.
Eco-pragmatism needs better branding. We need extremely lush, literally-covered-in-plants cities powered by cheap nuclear.
Part of the branding problem is a lack of good definitions, which leads many to group serious thought on the costs and benefits of various levels of environmental protection in with the dumbest environmental protesters: chained to trees while using their iPhones to tweet about how a plant has the same moral value as a human. Worse, any attempt at serious problem-solving or compromise in public debate very quickly devolves into extremes shouting at each other, because government control via regulation makes things winner-take-all at the entrepreneurial-investor level and a status signal of tribal politics and virtue at the level of regular voters.
Well I think it's more the same problem which exists with everything else (and why I'm a rationalist) which is that people choose the right answer with their feelings. If you're someone that really likes the aesthetics of environmentalism, well the last place you want to live is a concrete jungle.
Enter Derek Jensen, an anti-civilization advocate who lives in the remote wilderness with bears. He thinks civilization, cities, and industrialization were horrible ideas, and that everyone should give them up and go live "more natural lifestyles" where they hang out with bears in the woods. Never mind that there are probably fewer than 300,000 bears alive in the world, so, at best, each bear would have to befriend tens of thousands of people. Not only would this certainly not be good for the bears (not sure what the Dunbar number is for bears, but I think it's safe to assume it's less than ten thousand), but Jensen himself would almost certainly not want to share his bear friends with ten thousand other people. I know countless environmentalists/activists who would say that Jensen's lifestyle is idyllic.
I'm a person that just naturally has to think through what the natural consequences of things are. Apparently most people don't do that, they just think "I'd like to live in the woods." and that's the end of any kind of consequential analysis. And so to me, these ideologies, when people say "we should all abandon civilization and live in the woods" somewhere in there, either nature is just absolutely destroyed beyond the likes of which we have ever seen (which doesn't sound like their goal), or colossal numbers of humans vanish somehow. And so I am extremely skeptical of these ideologies.
Looking at some very rough numbers, there is enough forest in the world for everyone to live on a little more than one acres of forest. Maybe we could live in small communities of 50-100 people on 50-100 acres, and maybe some people would be happier living on the savannah or other open grasslands, freeing up space for the rest of us to have 1.5-2 acres or whatever.
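The rough numbers behind that, for anyone who wants to check them (forest area and population are my own round figures, roughly FAO-scale estimates, not from this thread):

```python
forest_hectares = 4.1e9    # assumed global forest area, ~4.1 billion hectares
world_population = 7.9e9   # assumed world population
acres_per_hectare = 2.471  # standard hectare-to-acre conversion

acres_each = forest_hectares * acres_per_hectare / world_population
print(round(acres_each, 2))  # ~1.28 acres of forest per person
```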
Some Googling did not get me an answer to how much forested land a person needs to live in a sustainable way, but I know it's a lot more than an acre. That's especially true for the many many people who would have to live in Siberia and other inhospitable forests, where heating fuel for the winters would be a big issue.
On the bright side, the billions of deaths in that first year would certainly help with the "colossal numbers of humans vanish somehow" problem!
I'm quite sympathetic to Jensen's goal, but I think there may be much more sustainable options available. For instance, I choose to live in an area where there are so many forests around that I can literally see at least one from any vantage point within 50 miles. There's several hundred thousand people in that range, mostly clustered in a series of small towns and surrounded by farmland.
I don't think it's intellectually responsible to be sympathetic to Jensen's goals. Climate change and sustainability have always been human problems. We have no direct business case for reducing emissions, but we know we have to, can we make ourselves do it? We can hardly get people to stop feeding wild deer, raccoons, or ducks. Somehow we're supposed to expand the average person's sphere of altruistic concern to contain the ecosystems of all of the "forest[s] of the world" and give them a scientifically rigorous understanding of how to achieve that?
This is a fantasy, perhaps worth thinking about, debating, and taking seriously back in the 1960s when it emerged in force, but given humanity's track record since then we have no reason to believe such a psychological/sociological/educational stunt can be performed, and it's frankly dangerous to continue entertaining it. Green hippies are in their bubble, waiting for everyone to come join the drum circle, while mainland America is still rolling coal and ICEing EV charging stations.
To be sympathetic is not to actually support. I'm "sympathetic" in the sense that most humans (and incidentally also environmentalists) have been misled by their ignoring the numbers in favor of policy ideas based on feelings, and who can blame them for just doing the usual human thing?
Now, when I point out the numbers to someone and they're like "you red tribe bastard!" and I'm like "I'm not red tribe" and they're like "whatever I'm outta here", that's when my sympathy dries up.
Yes. We don't have to live in cities of a million persons though. We can live in 100 dense pedestrian pockets connected by silent inconspicuous hyperloop.
I don't think it's obvious that the ideal size is 10,000 people and that afterwards you see diminishing returns to scale for agglomeration. Even if it were, a planet of 10 billion humans would require one million such small cities. A quick Google search tells me that there are currently 10,000 "cities" worldwide. We should absolutely not want to find 100x as many locations around the world for more cities. That would necessarily mean fewer nature reserves.
It seems to me that we've moved on to trickier problems to solve that are mostly based on coordination rather than simply maximizing consumption. For instance, imagine a 40-story office building with about 2,000 workers where each one commuted by flying car. How many landing strips do you need? Remember that unlike parking spots, you can't stack them -- each one needs to be open to the sky -- and you probably need *minimum* five minutes' clearance between cars. If everyone arrived between 8am and 9am, that means that each landing strip can serve a dozen employees, so you'd need 166 total just for this one building. From a perspective of land use and of time spent getting from your parking spot to your destination, this just sounds terrible. So we should be happy we don't have flying cars, because the societal equilibrium they'd put us in would be terrible.
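Spelling out that landing-strip arithmetic (same assumptions as above; a strict ceiling actually gives 167 strips rather than 166, but the order of magnitude is the point):

```python
import math

workers = 2000            # office workers in the hypothetical 40-story building
arrival_window_min = 60   # everyone lands between 8am and 9am
clearance_min = 5         # minimum spacing between landings on one strip

landings_per_strip = arrival_window_min // clearance_min  # 12 per strip per hour
strips_needed = math.ceil(workers / landings_per_strip)
print(landings_per_strip, strips_needed)  # 12 167
```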
One could argue that this particular problem is specific to transportation technologies, but social media has amply demonstrated that it's possible for many technologies to lead to bad equilibrium outcomes.
All this is not to say that techological stagnation isn't a problem, but when people talk *specifically* about flying cars, I discount their arguments specifically for the above reasons.
I'm not really sympathetic to this kind of argument because you could probably make similar sounding arguments about a lot of technologies we take for granted today.
Imagine if you had to propose making, say, the national power grid in today's climate if it didn't already exist. You are going to make a giant, country-spanning grid of copper wires that makes a circuit into every single household? The wires themselves are ugly and you want to have them on every street? And these wires are dangerous if anyone fiddles with them or damages them. And if they are buried, and someone were to hit some buried ones with a spade? Wouldn't that hurt them? And then you have the wiring in the walls of your wooden houses? Wouldn't that be a fire risk? And the sockets themselves have a potentially deadly voltage just sitting there on the wall where any kid could stick in a fork and kill themselves.
How could you ever possibly scale this?
If something is useful on an individual level, then it will be slowly adopted and then we will find a way to scale it later.
I don't think the argument I'm making is about "how do you scale this" -- spatially inefficient transportation technologies have negative returns to scale. If you have the first car? Sure there aren't any gas stations, but it's still awesome (especially compared to the first telephone) because you can get anywhere 5x faster than anyone else. But if everyone has a car, it clearly makes your car go slower.
So in fact, I would make exactly this argument about technology (the car) that we do have today. How come scientists haven't solved traffic? It's because road space is a special kind of resource where the richer you get, the scarcer it gets.
If you are saying that your argument about flying cars applies equally to regular cars, then how do you explain the fact that cars are widely in use today?
Doesn't that imply that flying cars would still be in wide use even with the challenges that you foresee?
You mentioned a bad equilibrium before and compared it to social media, but that implies that regular cars are in a bad equilibrium too and that we would be better off without them? I don't think I can agree with that.
I do think that regular cars in the way currently used in the US are a bad equilibrium! If you look at Europe or Asia, the number of car trips per capita is about half of what's in the US, and it's not because they're behind in car technology. Similarly, I don't think that making cars better technologically solves the problems we have with them. Teslas can accelerate much faster than internal-combustion cars, but do they actually get you where you're going any faster?
Counterargument: the most appealing argument for flying cars was reduced reliance on public infrastructure like highways. Also, flying around buildings is dangerous. Restrict flighted travel to outside cities. Continue using urban parking garages designed for wheeled vehicles.
Honestly, this is what we should have said with ground-based cars. Driving around people is dangerous. Restrict >20mph travel to outside cities. Continue urban transport using small and/or dense vehicles like bikes, feet, and streetcars.
Stand back a few paces from this discussion, and ask yourself honestly: Do you really believe there will ever be ten million flying cars buzzing around the US?
For me it is just ludicrous to imagine that.
For comparison, can you imagine millions of persons moving around the US at 300 mph in silent hyperloops?
> so you'd need 166 total just for this one building. From a perspective of land use and of time spent getting from your parking spot to your destination, this just sounds terrible
I mean, if you had a 2000-person building where everyone commutes by non-flying car then you'd need a pretty big parking structure too. In order to just maintain the same 166-space footprint that you already think is too big, you'd need a twelve-storey parking structure, which is very large.
From a land use point of view, flying cars (idealised flying cars anyway) would allow us to reclaim all the space currently devoted to streets and roads.
Also if we had (idealised) flying cars we probably wouldn't bother having 40-storey office buildings anyway, our cities would be less dense because you could travel greater distances with ease.
One nit to pick: general (private) aviation was not done to death by regulation as much as it was by product liability torts. Lawyers somehow got extremely good at convincing juries that the aviation-accident equivalent of "16-year-old who just got their driver's license buys a Ferrari, drives it at 120mph on a twisting mountain road at night in a rainstorm, and predictably winds up dead after careening off a cliff" was somehow Ferrari's fault, and awarding the idiot's family millions of dollars in damages. Given that Ferraris are already a low-volume market, it doesn't take too many such lawsuits to drive the cost of buying a new one through the stratosphere.
(The actual scenario would be that a rich retired athlete or businessman would buy an expensive, complex high performance airplane, do the minimum amount of training required, then fly off into bad weather in unfamiliar areas - which they should have known not to do if they had been paying attention in flight school - and run into a mountain, or building, or just plain crash. And their widow would then sue the airplane manufacturer, and usually win.)
In 1994, Congress passed https://en.wikipedia.org/wiki/General_Aviation_Revitalization_Act, which was supposed to fix this. Lawyers just switched targets from the manufacturers to the mechanics who work on planes, with the predictable result that airframe & powerplant mechanics refuse to sign off on an airplane's annual inspection unless everything is perfect, increasing cost of ownership for private airplanes.
All that being said, as a private pilot, the idea of having to share the skies with several orders of magnitude more aircraft, being flown by the equivalent of your average automobile driver who can't be bothered to use their turn signal or put down their phone while driving, is terrifying.
The core problem isn't a lack of a storage facility, it's pretending that perfectly viable nuclear fuel is "waste".
I think he's talking about (1) extracting the plutonium produced as a by-product of U-235 fission, and (2) extracting and re-concentrating the remaining U-235. I don't think he's talking about using any of the actual high-level wastes, e.g. the Cs-137 or Sr-90 et cetera that are usually considered the problem children in spent nuclear fuel, isotopes produced in abundance that have half-lives ~10-100 years (so they're very radioactive but don't decay very fast). So, yes, those latter isotopes are just always going to be a pain, so you just have to put them in a (dry) hole somewhere and let them decay over a few centuries.
Those two isotopes in particular are also annoying because they have the chemical properties of K and Na (in the case of Cs) and Ca (in the case of Sr), and so they can slot into the places where those elements are used in the body and just hang out, irradiating you internally for years. Cs also forms very soluble salts, so it disperses easily in any kind of watery environment.
Many of the fission products you isolate during reprocessing are useful for other things, such as medical isotopes, radioisotope thermal generators (including Sr-90), etc. I'm not sure what they do with the cesium. But France after reprocessing is left with a pretty negligible amount of spent fuel waste and they just keep it all in a building somewhere.
Yes, I think that's true to an extent, and I suspect Cs-137 is used in radiotherapy applications for just this reason (it's conveniently obtained). But my vague impression is that the requirements of medical-grade isotopes are such that you need to use special reactors constructed for (or amenable to modification to) that purpose; I don't get the impression that reprocessed fuel from power reactors is usually considered a great option. Indeed, my impression these days is that the bigger nuclear medicine departments are moving towards installing their own synchrotron to make their isotopes to order, and on the spot.
But maybe they do it in France, and maybe I'm generally wrong; this is not my area. I am a little suspicious that any of these ancillary applications can absorb the few hundred kg of high-level waste you'd collect from a refueling. I'm still thinking you'd end up with at least a tonne or two of stuff you just need to stick in a hole in the ground somewhere. (Not that I think this is a big problem; it's a big planet, and there's plenty of places to put such a hole.)
People tend to worry about waste that has a half-life of the order of 10,000 years, but IIUC almost all of that long-lived stuff is plutonium, which is nuclear fuel that we can and should separate out and burn in reactors (which will split the atoms into smaller radioactive atoms).
The smaller radioactive "fission products" (which can't be burned) mostly have half-lives below 100 years, and after 500 years their radioactivity level has fallen below the level of uranium ore (not even uranium metal).
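To put a number on that decay claim, here's a quick back-of-the-envelope sketch in Python (the ~30.1-year half-life of Cs-137 is a standard value; the comparison to uranium ore is the claim above, not something this computes):

```python
# Fraction of a radioactive isotope remaining after t years: N/N0 = 2^(-t / T_half)
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    return 2 ** (-t_years / half_life_years)

# Cs-137, one of the dominant medium-lived fission products (half-life ~30.1 y),
# after the 500 years mentioned above:
frac = remaining_fraction(500, 30.1)
print(f"{frac:.1e}")  # ~1.0e-05, i.e. roughly a factor-of-100,000 reduction
```

So after a few centuries the hot medium-lived fission products really are essentially gone.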
The goal of being able to store waste without any human maintenance for 20,000 years always seemed to be of dubious value — I mean, how about we focus on preventing an apocalypse, rather than focusing on guaranteeing that post-apocalyptic settlers who somehow decide to settle in the Nevada desert 8,000 years from now won't suffer from a 0.1% increased risk of cancer because they don't know what the ☢ symbol means?
But since society seems devoted to storing the waste For As Long As It Takes, it's worth noting that it's much easier to design a structure to last 500 years than 20,000. But the whole issue is kinda moot because plutonium emits alpha radiation, which is easily shielded (unlike the gamma radiation from many shorter-lived isotopes). So, you can safely sleep atop a drumful of plutonium, just don't ingest it, or grind it into dust and breathe it, or grind it into dust and dump it in your drinking well.
As for isotopes with very long half-lives (over 100,000 years or so), the crucial thing to understand is that radiation is emitted only when atoms actually decay; therefore, for a given number of atoms, radioactivity is inversely proportional to half-life. As the half-life gets longer, the danger level decreases: all else being equal, something with a half-life of one year is 10,000 times more radioactive than something with a half-life of 10,000 years. Therefore, things with extremely long half-lives aren't scary; it's the intermediate half-lives (10 to 10,000 years) that are problematic, as they are both long-lasting and radioactive enough to potentially cause cancer.
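The inverse proportionality follows directly from the decay law, activity A = λN = ln(2)·N/T. A minimal Python check of the factor quoted above:

```python
import math

YEAR_S = 3.156e7  # seconds per year

# Activity (decays per second) of n atoms with half-life T:
#   A = lambda * n = ln(2) / T * n
# so for equal atom counts, activity scales as 1 / half-life.
def activity(n_atoms: float, half_life_s: float) -> float:
    return math.log(2) / half_life_s * n_atoms

a_short = activity(1e20, 1 * YEAR_S)      # half-life: 1 year
a_long = activity(1e20, 10_000 * YEAR_S)  # half-life: 10,000 years

print(a_short / a_long)  # 10,000x more decays per second from the same atom count
```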
P.S. A one-ton drum of plutonium contains as much usable energy as 3,000,000 tons of coal, so I'd just like to add that nuclear waste is awesome compared to other kinds of waste. Because it's tiny.
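That coal ratio checks out on the back of an envelope (Python sketch; ~200 MeV per fission and ~24 MJ/kg for coal are standard ballpark figures, so treat the answer as order-of-magnitude):

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

# Energy from fully fissioning 1 kg of Pu-239 at ~200 MeV per fission:
atoms_per_kg = AVOGADRO / 0.239        # molar mass of Pu-239: 239 g/mol = 0.239 kg/mol
e_pu = atoms_per_kg * 200 * MEV_TO_J   # ~8e13 J/kg

e_coal = 24e6  # J/kg, a typical heating value for coal

ratio = e_pu / e_coal
print(f"{ratio:.1e}")  # ~3.4e6: one tonne of Pu ~ a few million tonnes of coal
```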
Coal ash might be stored in open ponds or giant buildings that occasionally split open and dump their entire contents into rivers: https://appvoices.org/coalash/disasters/ — but because nuclear waste is so small, it's practical to encase it in concrete so that it never harms humans, wildlife or even plants. Why didn't the activists who worry that containment isn't 100% perfect spend the last 60 years fighting coal instead of nuclear?
Good point!
A geologist friend of mine worked on Yucca Mountain and he called it a sham. They were being asked to guarantee there would be no major earthquakes or volcanoes for 10,000 years. According to him that was impossible to state with high confidence.
But even the reason your geologist friend was asked to ensure that for 10,000 years was a sham. It's totally not necessary. Even high level waste becomes less radioactive than a granite countertop within a few hundred years.
And there are plenty of rocks that haven't been touched in millions of years! Pick any one rock that hasn't moved for 10 million years, drill a hole into it, and drop the waste in. Honestly the whole storage "problem" is mostly regulatory theater.
I've never understood that requirement. Like, why are we *planning* for a dystopian future in which humanity has degenerated so much it can't exercise competent stewardship of a waste respository?
We also have 5,000 nuclear weapons lying around. Do we need to weld instructions for their care and responsible storage to them, written in pictograms at a 4th grade level, just in case the future turns out to be stupid?
Yeah honestly even the fact that we're planning to ensure that this waste repository is safe for the post-apocalyptic humans who might stumble upon it is...simply not a standard we hold literally anything else to.
Yes? Just since 1950 there have been dozens of accidental near detonations of nuclear weapons.
https://www.atomicarchive.com/almanac/broken-arrows/index.html#:~:text=Since%201950%2C%20there%20have%20been,been%20lost%20and%20never%20recovered.
And on your second question: yes? That was part of the planning for Yucca Mountain
https://www.bbc.com/future/article/20200731-how-to-build-a-nuclear-warning-for-10000-years-time
What the heck is an "accidental near detonation?" Is that like "not a detonation at all" after someone dropped one by accident? If an airman bumps into a W80 in a warehouse and clonks it with his wrench, and nothing happens, is that an "accidental near detonation?" I'm deeply unimpressed.
Anyway, the fact that a maximum of 20,000 weapons has been supervised by a passel of 19- to 25-year-olds with at most an HS diploma for 75 years without a single notable slipup is proof enough to me that it can be done. At this point I think the burden of proof is on the people who think future American government will screw up something as obvious as digging an irrigation channel right next to the big yellow warning sign that says YOU WILL DIE IF YOU DIG HERE.
Even with the school board's actions, it appears that most likely there were no negative health effects to the people who lived near Love Canal. Source: Wikipedia.
I remember Love Canal. At the time I thought the reaction was hysterical, and I still think that. Perhaps I need to learn something, but my experience was that in the 70s public attitudes towards random assorted chemicals in the environment shifted radically -- in the 1960s after we got done cleaning a carburetor with a bowl of gasoline, we just threw it onto the ground. Heck, people changed the oil on cars and threw it on the ground. Turpentine after painting? Down the sink. Same with business. If a gas station leaked 1000 gallons of gasoline into the soil, big deal, nobody much cared.
All of that changed in the 70s. All of a sudden we got really upset about those things, started suspecting them of causing cancer and allergies and whatnot. (And some of that may have been true; after all, cancer has to come from somewhere.) Love Canal was just a symptom of the changing times.
So that would be a better argument if we were really careless about nuclear waste *now* and the future might get a lot more uptight and be pissed at us. But we're talking the other way around -- we're uptight *now* and, strangely, feeling like we need to assume the future will be slobs. But that has never happened before, and it seems strange to assume.
75 years < 150 < 500 < 1,000
It’s massive recency bias to think the next 75 years will be just as stable as the last 75.
Look, I support nuclear power in general, but I also know humans are very bad at assessing risk, especially over very long timelines.
I’m old enough to remember the Berlin Wall falling and a tremendous amount of entrepreneurial dynamism and overall optimism in the 1990s. America was the dominant hyper power and most of the world *liked* it that way.
But then a few dozen followers of Osama Bin Laden changed everything.
It’s foolish to assume something like that won’t happen again.
What do you mean by "changed everything"? I don't see that anything changed at all except for the greater clampdown on the tiny amount of terrorism. Or are you talking about the war in Iraq & Afghanistan, which I'm pretty sure was small compared to wars of previous generations?
In any case, the ~3,000 people that died on 9/11 is to me a very small number when you're talking about the entire U.S. over a long time span. I mean, there have been about 200 times as many deaths due to COVID.
Well, no, it's *logical* to assume the next 75 years will be rather like the previous 75. Unless you have evidence to the contrary, or at least a good working hypothesis. I don't mind a *certain* amount of caution, e.g. it's definitely prudent to put your nuclear waste out in the desert, far from anywhere even remotely habitable, and mark it plainly, big warning signs and such. But assuming that within the next century we're going to be in some Thunderdome situation where nobody reads English any more and the wild kids dig up the shiny metal bits to make into necklaces -- this goes well beyond prudence.
Look, the whole point of the future is that they're almost certainly going to know more than us (technologically speaking), the same way we have known more than the people in the past, for the past 1000 years or so. So why try to do their job? Do ours. Leave them a growing economy, healthy happy new generation of people, some new inventions -- and yes, a few inherited problems with which they're expected to cope. After all, our parents handed us nuclear weapons and the Cold War, and we've done OK with that.
People seem to manage extensive hours of training, constant practice, and a focus on situational awareness and response while driving. Are you saying flying is intrinsically much harder? Or that you wouldn't expect flying to become a primary mode of transportation, so it would be harder to maintain the practice?
Flying is intrinsically much harder than driving. It's tricky to get an apples-to-apples comparison because the units of measure are different (fatalities per passenger mile vs fatalities per hour flight time), but general aviation is at least 10x more deadly than driving. And that's in our current regime, where people who are doing the flying are self-selected to be more focused/situationally aware/skilled than your average random car driver. There's obvious reasons for this - mechanical issues in a car generally mean you pull over and wait for a tow truck. Mechanical issues in a Cessna mean you do everything exactly right or you die.
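The apples-to-apples trouble is easy to make concrete (Python; both accident rates and both speeds below are illustrative ballpark assumptions, not official statistics):

```python
# Ballpark rates (illustrative assumptions, not official statistics):
ga_fatal_per_hour = 1 / 100_000           # GA fatal accidents per flight hour
drive_fatal_per_mile = 1.1 / 100_000_000  # road deaths per vehicle-mile

# Per hour of exposure (assuming ~40 mph average driving speed):
drive_fatal_per_hour = drive_fatal_per_mile * 40
per_hour_ratio = ga_fatal_per_hour / drive_fatal_per_hour

# Per mile traveled (assuming ~150 mph cruise for the airplane):
ga_fatal_per_mile = ga_fatal_per_hour / 150
per_mile_ratio = ga_fatal_per_mile / drive_fatal_per_mile

print(round(per_hour_ratio), round(per_mile_ratio))  # roughly 23x per hour, 6x per mile
```

Whichever denominator you pick, flying comes out worse under these assumptions; the per-hour gap is just much larger than the per-mile one.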
I have a PL, IFR, and High Performance rating. Flying an airplane is intrinsically much harder because of the procedures involved, but I may seem like a heretic to other aviators for saying this: I don't believe these are obstacles that couldn't be overcome by the public at large, given more and better automation and inherent safety protocols built into the systems.
I absolutely could see a future where the general public had access to and used flying as their primary *individual* form of transportation, but in that form a huge amount of the system routine and implementation would have to be prohibited to the owner/user.
99% of aircraft incidents are still (again) pilot error, and that is with operators that are trained and certified above the level of the general public. For the general public to have it as their primary form of transport, even more freedom would need to be limited.
Think "Minority Report" self-driving rail-pods, but airborne.
yeah, but how much automation was available up until, say, 10 years ago? Suppressing flying cars in the 70s might not have been such a bad idea.
+1 Yes, flying today's airplanes is too hard, but a market for flying cars would put pressure on the control system design to make it simpler. Using "cockpits are too confusing for normal people" as an excuse is 1st order thinking.
Carl Pham below points out that "we shouldn't do it because it's too dangerous" means that we never begin the process of learning how to make it better.
Air taxis would make a ton of sense, though, right? People who fly all day can afford extra training; that we never got air taxis supports the thesis of excessive regulations.
...though it occurs to me that many people would want to hire a taxi to fly downtown, where airstrips might be scarce, even assuming you can build them directly on top of the normal roadways. So another explanation is chicken-and-egg: you need a lot of taxi traffic to justify the airstrips, but lots of airstrips including downtown ones to justify the taxis. Another popular destination should be airports, but air traffic control wouldn't want small aircraft encroaching on its airspace. I doubt this is insurmountable tho.
A third explanation is risk to third parties. Air taxis may be safer than cars (certainly per mile traveled), but a fatal crash will occasionally kill a random person on the ground. This is mostly true of car crashes too! However, while a car crash may kill a random person on a road or sidewalk, an air taxi could crash into a house, which probably makes them feel psychologically more dangerous even if they're not.
A fourth problem: noise pollution. Not only are aircraft louder than cars, but the walls put up on highways to reduce noise wouldn't work on them.
Maybe to get past choked highways and reduce infrastructure costs. But the thought of then weaving between skyscrapers gives me goosebumps. I'd keep them out of downtown or any residential neighborhood. That would make them more like an airBus anyway, i think.
Both, I'd imagine.
I am a pilot, and flying is much harder than driving. Maneuvering in three dimensions is part of it, but bigger parts are cross-coupling of controls (where every input does at least two different things), a higher level of critical multitasking, and the part where you can't pull over to the side of the road or even slow down if things get temporarily overwhelming or the hardware glitches. Also, all-weather flying (necessary for routine travel) becomes a complex exercise in real-time systems management and requires training *out* some deeply instinctive behavior. In my particular case, that makes me safer in the sky than on the ground because flying effectively commands my attention but driving leaves me too easily distracted, but I'm not exactly typical.
The average person can learn to fly a light plane safely if you insist on it. And we sort of do, for people who want to fly. But the difficulty, expense, and tedium of the necessary training and the non-trivial fraction of the population that will wash out and be aeronautically disenfranchised, will likely make this politically infeasible at flying-car levels of utilization. The remaining hope for "flying cars" is increased automation, but safe autopilots are a classic 80/20 problem where, for this application, we'd really need that last 20%.
I'm reading the book because of this review, and one point Hall makes is that the autogyro from the 1920's has a reverse risk profile from an airplane. It's unstable on takeoff, but will glide to a stop on landing, even if engines fail. So the "can't slow down and can't pull over" might be less of a problem with another technical model.
Nice!! I want one now. One thing Hall doesn't answer well is why US regulations blocked progress across the world. The other review raises that issue as well - financial and health regulation has slowed things down, but not stopped us in our tracks. Are flying cars just on the wrong side of the cost disease? Or is there some other issue (like, say, physics)?
"People seem to manage extensive hours of training, constant practice, and a focus on situational awareness and response while driving."
And yet *look at how dangerous driving is.* Even if flying drivers were no more prone to accidents than terrestrial drivers, there would still be too many accidents, and they'd probably be more deadly.
Self-flying cars seem to be an easier problem than self-driving cars, so maybe there's hope there.
Self-flying cars are an easier problem now, but that would quickly devolve if even 5% of the US population had them and used them as their primary form of transport. Granted, working an integrated Traffic Collision Avoidance System (TCAS) for objects moving in 3 dimensions instead of 2 is slightly easier, but if you see my comment above, it would be a catastrophe if the car/system gave the choice to the pilot/operator on how to avoid a collision as opposed to the linked computers making that choice.
If you want to see a horror story of just how confusing human/computer/system interfaces can be vis-a-vis collision avoidance, watch this simulation of the Flight 2937 mid-air collision over Switzerland.
https://www.youtube.com/watch?v=iYJWWngRxus
As a matter of handling traffic, I imagine flying cars would be restricted to long-distance travel, between or just outside cities, and restricted to wheeled travel within them. Also, freight trucks would probably remain wheeled, given the weight, and given that shippers already have the option of freight airplanes and aren't using them.
That's pretty much what Heinlein was figuring 70 years ago for domestic transport: the wealthy and important government agents would have city cars that can quickly convert into aircraft in the countryside, then land in the hinterlands of the destination and roll into town.
Why does the *average* person need to be able to do it well for *anyone* to be allowed to do it at all? This seems indicative of the weird current hyperfocus on equity over opportunity which I mentioned elsewhere. None of us can have flying cars until all of us can? Why?
Is flying a flying car so hard that we can't, say, take the 10% best drivers in the nation (about 20 million people) and let *them* fly around? What would that do to boost their productivity? Maybe there are a ton of really smart people from whom we are not getting X new inventions a year because they are freaking stuck in traffic.
I think because on one hand, the gains are kind of speculative: "spend less time in traffic -> more inventions" seems like a stretch, especially considering how many other ways there are to solve that problem: remote presence being the obvious 2020 solution, public transit or a private driver to make 'commuting' time more productive, etc.
And the potential downsides are obvious and specific - the potential for loss of life and destruction is pretty clear (and not just for the people behind the wheel!) in case of accidents, to say nothing about the potential deliberate acts of destruction.
Right. And I get this. As I said elsewhere, we have just become across-the-board more timid about things. "But is it safe? What could go wrong?" tends to trump "But what marvels might we unlock if it goes right?" pretty much all the time.
There are arguments both ways, of course, and either the "safe at any speed" or the "damn the torpedoes" extreme is unwise. But whatever choice we make comes with costs. If we are much more concerned about what will go wrong, then we will be generally less entrepreneurial, take fewer risks, and our social progress will increasingly resemble NASA's progress on human space flight -- very, very slow, but also very, very safe.
"we have just become across-the-board more timid about things."
Can't this be seen as a reflection of the increased opportunity cost associated with various forms of risk taking? As measures of expected life satisfaction/life expectancy substantially increased over the 20th century, the costs of risk taking in the form of foregoing expected life increased.
I think it's a good argument in general, but I'm not sure the data support it in this case. Life expectancy in the US has only gone from 70 to 79 since 1960. That's nothing to sneeze at, but it's hard to see it having that big an effect on risk-taking. I don't think life satisfaction has increased at all, and indeed the rising rates of middle-aged suicide, and middle-aged drug use and overdose, would kind of suggest the opposite. Heck, even the teenagers are having less sex, apparently. Four hours of World o' Warcraft doesn't compare to getting it on with the fox from Algebra, so I can't see how *they're* happier.
Good points.
'Life satisfaction' was a poor choice of words on my part. Maybe it would be more accurate to say video games/the internet has largely solved the "problem" of boredom. What equivalent way of wasting time by yourself did teenagers in the 50s and 60s have? Comics? Playing records? Seems like there's no real equivalent to the video game slob stereotype we have now. I wonder if a lot of early-life entrepreneurship (broadly speaking) is just a way to avoid boredom.
We already do let a few highly trained people fly around. Furthermore, we allow them to take passengers, so the rest of us can have the benefit of their skill. They're called airline pilots.
I said 10% not 0.1%.
We don't need 10%. A little tweaking and we'd have enough capacity on our planes.
Not sure I follow. I proposed 10% of drivers could drive flying cars. We do not have 10% of people flying airplanes.
We don't need 10% of people flying airplanes. A little tweaking and we'd have enough capacity already.
Also, I expect that 10% is far in excess of the amount of the population capable of being trained as adequate pilots.
I think a lot of the problem is that there isn't a clear Schelling point for regulation in between "The average person should be able to get a license without it being unduly burdensome" and the current very cautious regime where it takes hundreds of hours of expensive training to get a pilot's license.
Regulations for cars can't get that strict because the average person expects to be able to drive and the population as a whole wouldn't tolerate that changing, but once you accept that only a fraction of unusually skilled people can do it safely the standards might tend to err on the side of caution because of the incentives on regulators.
Well it's a good thing the automobile wasn't invented last year then, isn't it? "You're going to let 16-year-olds sit at the controls of a two-ton metal machine capable of accelerating to 100 MPH? Insane!"
If ten percent of the population is flying freely above the crowded freeways at 100+ mph on their daily commute, the remaining ninety percent are going to really notice this. And be jealous. And vote.
Right, well, you could say the same thing about absolutely any manifestation of wealth. It's not like people don't already notice private planes, yachts, gated communities, private schools, expensive condos with beautiful views in ski resorts.
The way we traditionally deal with this is social mobility. People have the *opportunity* to get rich themselves, through hard work and talent, and then they join the ranks of the 10%. People are smart enough to understand that if everyone has to have the same stuff, it's going to be a low level of stuff, and so they're generally willing to instead enter the lottery of who ends up in the 10% -- as long as they believe it isn't pure chance that produces the winning ticket.
They also usually tend to believe that what the 10% have usually moves down after a while anyway, as long as economic growth continues -- at one point you had to be decently well off to afford a car at all, then everyone had one car but being a two-car family was pretty tony, and now these days it's buying a Tesla for the 3rd car that marks you out as in the 1%.
I agree such attitudes are less common today, though, and there are many more like what you're saying: "Only 10% of us will ever fly above the Blade Runner dystopian squalid streets below, and it's going to be some connected/powerful/aristocratic 10% that will never include me or my kids, so screw that." But the existence of that curious change, and wild speculation on its origin, is kind of the point of the reviewed book.
I think a lot of the military research successes have come because they have goals of making a practical technological application as their first objective. Academic scientists are frequently driven by raw curiosity and/or obsession with the function of some very specific part of the natural world. There's nothing morally wrong with that, but it's not optimal for finding practical applications. They stretch the words in their grant proposals to try to convince funding sources that their work will have some practical benefit--and every so often one of the many projects does--but fundamentally their interests are usually anchored to the thing they're studying rather than a solution to a practical problem.
In contrast the military's objectives are to increase real world combat effectiveness. That's a practical goal, and therefore they're much more willing to sideline or abandon research avenues that are less likely to result in new solutions to practical problems and to do so sooner.
> Academic scientists are frequently driven by raw curiosity and/or obsession with the function of some very specific part of the natural world
Worse than this, I think a lot of academic scientists aren't even all that interested in the very specific thing they study, they're just kinda stuck with it.
Here's the way it works: your PhD supervisor was the world expert on the subject of tantalum oxide. When you came along he racked his brain for five minutes and assigned you a project looking at the ever-so-slightly-different-and-more-obscure subject of tantalum sulphide. You work hard for five years and sure enough, you're one of the world's top experts on the subject of tantalum sulphide. You don't really care about tantalum sulphide. It turns out that tantalum sulphide is completely boring and unimportant. But you _can_ think of a dozen possible tantalum sulphide related projects that you could potentially get some grant money to work on, and you don't know enough about anything else. So you start churning out grant proposals on the subject of tantalum sulphide, starting with "Tantalum sulphide has potential applications in X, Y and Z".
I feel like this is kinda the case for at least half the academic scientists I've met; they're stuck in some kind of dead end doing research that they're not especially interested in but they suspect they can get funding for.
Yeah, the 'just stuck with it cause it's what your advisor had to bequeath upon you' is why I left the academic research field. Naively I originally went in with some very specific research goals and did so specifically because i felt that basically no one was working on them. I eventually found that the fact no one is working on them already, means it's nearly impossible to start doing so within the framework of academia and grant funding. You need to either be rich enough to self-fund, or you need to be fortunate enough to discover an exciting breakthrough in your target direction while still a student--and even then it probably needs to be monetizable to work out. Even if you do one of those it's incredibly hard to start down the new direction in academia because there may only be a handful of people in the world who'd be a good fit as your advisor and there's no guarantee you'll be accepted to (or want to attend) their particular institution.
> As for government science not being worth a damn, that may be true now, as I'm told the funding process has become completely corrupted. However, there are notable successes like GPS and the internet
I'm old enough to remember the exact same criticisms of government science being made decades ago, while those things were being researched. I don't think there's been any major change, just that as always the kind of research that governments fund is unsexy and far from the point at which it would be implemented, so it will only seem valuable in retrospect
My pat answer to the flying car question has always been "We do have flying cars, they're called helicopters".
Now of course there's a good reason why most of us don't own helicopters; they burn a lot of fuel, they need a lot of expensive maintenance, they require a lot of specialised training in order to fly them, they're relatively dangerous, and they're so loud that you're not allowed to land them in most places. But all of those problems are intrinsic to hovering transport, so they preclude your "flying car".
An electric automatically-piloted quad-copter might be able to solve most of these problems to some extent, though.
We also have jumpjets, which have the same problems, but even worse.
Does a quadcopter help with the noise part?
Or an autogyro? I wonder if they could be modified to act more like a VTOL. https://www.youtube.com/watch?v=Etcq3lfqIbs
hard disagree. couple hours in a sport plane and in a robinson helicopter plus regular motorcycle rider. of the three, plane is the easiest and motorcycle the most difficult/dangerous. yes, crashing the plane would be bad, but after takeoff and setting it on course and trimming it out, i could practically take my hands off the controls (extremely low-inertia 600 pound aircraft with no autopilot or stabilization aids). it’s much less stressful as you’re making fewer inputs/decisions per minute. as far as take off/landing, the amount of computing power in the average tesla is probably more than enough to automate those anyway esp if vtol as the book proposed.
On the other hand, the first time a flying car breaks down and crashes into a kindergarten, there will be no end of additional restrictions made about who can fly what over built-up areas.
"I've never heard anyone before say government-funded science was bad for science!" Ayn Rand has a character in Atlas Shrugged who was a brilliant physicist before he got government funding. Then he became useless. Nassim Taleb (I forget which of his books) also argues that many of our significant inventions came from outside academia.
Funny that the author uses a machine learning analogy, as ML is definitely an invention of academics.
I don't think it would be hard to imagine that there is an *appropriate* level of government funding for science and technology, and that at levels that are too high it sucks all the oxygen out of the room -- and then all your science becomes done in the way you do government science, which is not the only way to do it, and in some cases not the best way to do it.
It is certainly the case that 50 years ago there was a far more vigorous realm of basic R&D outside of government and academia. You think of Bell Labs, or IBM Yorktown, Xerox PARC, Exxon Annandale -- these places attracted absolutely first-rank talent, and in their day invented amazing technologies. But corporate R&D has been eviscerated, and it isn't *completely* out of the question that part of that is the entry of the 800 lb gorilla -- government-funded academic research. From the point of view of the new researcher, there is much to like (initially) about the government/academic model: you don't have nearly as strong a deadline/results pressure, and you can often work on more abstract problems. And for some areas of research that is an excellent shift. But...there are areas of research where a certain amount of bottom-line and practical focus *is* good for results, and can even be quite beneficial to the individual, in the era when it was possible to become very handsomely rewarded by commerce.
I think there's a very active world of corporate research in all things programming, and we can thank it for many of the brilliant innovations in computational stuff. And to some extent we still see that in biology, although perhaps not as much as we'd hoped 30 years ago -- biotech is still very tightly tied to academia, and the bio giants (e.g. Big Pharma) don't seem to be *expanding* their basic R&D programs. If Utopia could be created just by brilliant programming, we'd be in good shape, but alas it needs progress in things made of actual stuff also.
I'm pretty sure a lot of private research money is gone because companies realized that groundbreaking research usually doesn't actually pay off for the company funding it.
Leo Szilard wrote a short story in *Voice of the Dolphins* where a character predicted pathologies of government-funded science. IIRC it was written in the late 40s or early 50s, satirical in the form of proposing NSF-style funding as a way of sabotaging an enemy society.
(Szilard was a physicist who proposed the fission chain reaction in the 30s and clashed with Manhattan Project management.)
Thanks for that. I read the Szilard biography, "Genius in the Shadows" years ago and loved it.
I've heard it before. A number of years ago I met a very smart person whose name escapes me (an associate of current MIRI researcher Eliezer Yudkowsky) at a SENS research conference on aging who brought it up during a Q&A. Most of the attendees get their paychecks via grant money, so his question was laughed off, but at the time I recognized his name and went to talk to him about it afterwards. We had an interesting discussion about historical funding of research, but most of it was about pre-industrial-revolution research, where discoveries in the sciences were mostly made by rich or patronized-by-rich people and no one had even considered government funding going through a bureaucracy to be distributed to researchers.
It's not that nothing is ever discovered when research is funded via government grants, but it shouldn't be a surprise to anyone that most of the funding goes to things which enhance the prestige of the researchers of yesteryear who are now in charge, rather than to people who intend to make progress in completely new directions. I'm not sure I'd attribute ML primarily to academics either--the math of it is a very old idea. Rather, computers have recently reached a point in all their capacity types where the technique is useful. Academics working in ML are primarily "just" fiddling with the variables of layer counts and compounding systems to get new results. It's important work, but not fundamentally a new paradigm.
I don't have a particularly strong opinion for/against public research funding, but I think dismissing the idea without a lot more discussion is exactly the kind of failure that "Where's My Flying Car?" argues we already suffered from. Similarly labeling the idea "libertarian", and especially associating it with Ayn Rand, makes it more political, and therefore more controversial, than it needs to be and wrongly biases a lot of people against the idea. We'd need much better evidence that the bureaucratic funding model benefits ML research more than a private funding model would before I'd find its use in the review ironic.
"I'm not sure I'd attribute ML primarily to academics either--the math of it is a very old idea. Rather computers have recently reached a point in all their capacity types where the technique is useful."
Those facts are correct, but the old math was originally invented by academics as well. So it is certainly attributable primarily to academics.
As a side note on this idea of government funding in historic periods...The idea of trying to separate 'the wealthy' such as the random Lord, Duke, Earl, and sundry types of lesser nobleman from the idea of government is invalid in my view when talking about that era.
The rich and powerful people funding or lending their power to various researchers, artists, and merchants in the enlightenment era were either directly members of the government through their aristocratic positions or were themselves strongly beholden to support from such nobles.
The idea of trying to say this funding wasn't run through large government bureaucracies in the 1700s doesn't make sense as such structures by and large did not exist at the time with the major government actions being tax collection, the military, and running a fairly simple court system to arbitrate disputes and process criminal charges. I don't think there was an NIH or NAS type body at the time and the main equivalent would have been the nascent University sector and the not-so-private actions of nobles with governmental authority to individually fund whomever caught their interest.
It sounds like a coloured view of history seen through a modern lens which ignores how people lived at that time. I'm obviously reading into that position from only a few words, but it sounds fairly anachronistic to me.
You're understating the case for ML. The fact is that there have been major advancements in ML. GANs, Transformers, reinforcement learning, just really large networks, RNNs, etc... are all serious innovations in ML. And most of them have come out of private labs (in fact most of them have come from FB and Google...).
I was about to say the same thing. Arguments against government funded/controlled science go back to the 1950s.
And the thing is - they kinda have a point. Instead of coming up with new and innovative methods, we end up taking years to publish papers that are mostly forgotten and ignored.
It also freezes in the horrible bachelor-master-PhD requirement. Let's get real: I can train anyone to be a good experimental biologist in about 100 hours or less. 12 years' effective apprenticeship? Get. Real.
Nah. You can train someone to be a half-way decent technician in that period.
I've yet to get to the point in my PhD at which they sit me down and tell me how to think independently; I personally think you just have to learn it from experience.
> And the thing is - they kinda have a point. Instead of coming up with new and innovative methods, we end up taking years to publish papers that are mostly forgotten and ignored.
Would we suddenly get a lot better at coming up with new and innovative methods if government funding were removed, though?
> Let's get real: I can train anyone to be a good experimental biologist in about a 100 hours or less
You can train someone to do biology experiments, yes; the part that requires an expert is understanding which experiments are worth doing.
(Or, less pithily, the job of the expert is planning out a full research program that will advance a particular sub-sub-field of biology, assembling a team of 100-hour doofuses aka PhD students to actually do it, and then properly communicating these results to the other experts in the field.)
Yes, you do need that extreme knowledge - to _plan a full research program_. Which is something you only start planning late into your postdocs.
The cycle at the moment is:
- Go to 4 year undergraduate course and learn a bunch of theoretical stuff
- Forget it
- Use your grades to get into a PhD program
- Start learning practical skills
- Start seeing the use of theoretical stuff from your practical skills
- Start learning a bunch of theoretical stuff...
Wouldn't it make a lot more sense to start with the practical skills, and then build the theoretical knowledge on top of that?
I wish. The main job of the PI is to raise money. It's like running a start-up in the VC stage forever.
This depends a lot on how government funding works. In the US and UK, yes. In most of the rest of Europe, no. It is part of the job, but by far not the main part.
In most universities in Germany or France, you can get by without raising any external money at all. You will have fewer PhD students, and you won't be a superstar, but even superstars do not spend the main chunk of their time allocating money.
This does have downsides. Whether it is overall a good or a bad thing, this is really complicated.
"In most universities in Germany or France, you can get by without raising any external money at all. You will have fewer PhD students, and you won't be a superstar, but even superstars do not spend the main chunk of their time allocating money."
In France, the current annual base funding of a researcher is about 2000 euros. It is almost impossible to do anything without grants, at least for experimentalists.
Well, in principle you can do it in the US once you have tenure, but...you'll never be promoted, and you'll get assigned to all kinds of painful committees, get shit teaching assignments, that kind of thing. The university really loves its overhead :(
In fact, that's the corruption I'd most like to see rooted out of the system. When research overhead can make up twice as much of the university budget as tuition, the incentives are pretty screwy.
If it weren't for the 12 years of apprenticeship AND the fact that said process effectively locks you into researching some specific side project of your advisor's specialty, biology research would be my current occupation. The current paradigm actively turns away people who want to go in the directions that historically attracted the least interest, and that's inherently going to bias results against real breakthroughs.
"A survey and analysis performed by the OECD in 2005 found, to their surprise, that while private R&D had a positive 0.26 correlation with economic growth, government-funded R&D had a negative 0.37 correlation!"
It seems wildly unlikely to me that this strong negative correlation is causal. Has the -0.37 correlation by any chance been calculated by mixing developed countries (high public R&D, low economic growth) and developing countries (low public R&D, high economic growth)?
I find causality believable. When I left my job as a bioinformatician, I had in mind to do a bioinformatics start-up. I eventually gave that idea up, because nearly all of the bioinformatics software used in the US is developed using government grants, and made available for free. It's difficult to find anything in the bioinformatics space where you don't have to compete with people who are getting paid by government grants and giving away their software. But that free software isn't easy to use, and usually isn't supported or maintained.
It seems to me that your observation suggests that government money makes producing well-maintained products more difficult, but it really doesn't seem obvious to me that well-maintained software is a key to innovation.
Firstly, if you use economic growth correlations and/or practical applications as measures for how well science is being done, something is wrong - especially in what concerns fundamental research.
What should be analysed is how public funding criteria have often created an environment where risky projects are disincentivised, especially as more and more people got PhDs and exhausted what little slack existed for trying wacky stuff without fear of being outcompeted by conventional incremental stuff.
Anyway, since private companies will do R&D in search of profit regardless, it seems logical that public funding should go primarily to that research with no obvious profit routes. The question is how that funding can be more efficiently allocated in today's highly competitive academic world.
If you haven't ever wondered whether government funding of research was bad for science, you haven't done research with government funds.
I have so many horrible stories that I don't know where to begin. But let's start with Congress. Congress understands that we need basic research, but we also need that research to lead to profitable businesses, and to solve urgent technological problems. So, in a lot of basic research funding, they've decided to kill 3 birds with one stone by requiring that basic research must also be applied research, and must be the basis for a profitable start-up business. In the SBIR program, which I'm most-familiar with, your grant proposal must explain how it is basic research, how it is applied research, and what clients you have interested in the product that will come out of this "basic research". If you get to Phase 2, which you must get to at least 1/4 of the time in order for SBIR grants to be profitable, your Phase 2 grant proposal must have a client lined up and committed to provide part of your funding. To get to Phase 2, you must focus on your business plan, and spend about 90% of your budget on producing a cool-looking Phase 1 demo.
The result is that few SBIR grants do much basic research. In some years, I've looked at most of the unclassified "research" grants offered by the US government, and concluded that the only actual basic research being done by any government agency was by DARPA. DARPA is the very best agency we have for doing basic research.
When I submitted, won, and ran a DARPA contract, the COTR (Contracting Officer's Technical Representative) in charge of my $100,000 project was also managing a project which, if I recall, had a budget of $100 million, so that it was literally not worth his time to read any of the reports I wrote, or to answer any of my emails or phone calls. He had some underlings inform me of this. My team was one of 5 or 6 teams awarded a Phase 1 contract. Near the end of the contract, after I'd worked 2 months of 12-hour days and weekends preparing the final report and demo, I got an email telling me not to bother completing it, because the COTR had already decided who he wanted to award the Phase 2 grant to. It was given to the only team which appeared, from its slide presentations, not to have produced anything other than slide presentations.
That was one of the most-successful government research projects I ever worked on--the software I developed did eventually make the company a lot of money--for the simple reason that, although no one was interested in the results, at least no one on the project wanted it to fail.
Contrast this with the NASA/FAA grants I worked on. Back around 1970, Congress ordered the FAA to use NASA engineers for airspace research projects, in order to avoid suddenly firing all those engineers after the moon landing. So air transportation research projects were managed by NASA, and carried out by government contractors (so those NASA engineers got fired anyway). NASA was monitored by the FAA, which saw NASA as a bunch of pointy-headed nerds with no practical experience who should stop telling them what to do. Plus the FAA didn't really want to automate air traffic control, because while it would save lives and a lot of fuel, it would put FAA employees out of work.
Or contrast it with the government-funded bioinformatics work I did for a genome research institute, where I was supposed to automate the work of the genome annotators, who were supposed to help me with the program and approve it when it was ready, after which they would be fired. That worked about as well as you'd expect it to.
The most-successful project I ever ran was a NASA project. As often happens, the original COTR who wrote up the project solicitation had been rotated out before the project entered Phase 2, and no one else in NASA or the FAA had the slightest interest in the project. Even the new COTR, whose job is to ensure that I carry out the work approved in the contract rather than repurposing it to some other objective, encouraged me to repurpose it to some other objective that someone actually cared about. So I did; and that project made the company a lot of money, and saved NASA $40 million, though it will probably never be used for its intended purpose (to automate air traffic control).
But we haven't even begun to talk about the main reasons government research grants waste money. One is that government funding centralizes funding, so for every agency there's somebody in a room in Washington DC who's responsible for $10 billion of grant funding every year. They'd much rather manage ten $1 billion projects than ten-thousand $1 million projects, even though the 10,000 $1 million projects would be much, much, MUCH more efficient.
(I once read an NIH blog post describing a study of the relative efficiency of NIH research grants. They compared the output of grants of between $1 million and $20 million by counting the number of research papers produced per project. They concluded, IIRC, that projects costing more than $10 million were almost twice as productive as projects costing less than $5 million. But they forgot to divide project output by project cost. The blog post summarizing the report was written by the director of an Institute, managing hundreds of millions of dollars worth of grants yearly, who DID NOT KNOW YOU NEED TO DIVIDE OUTPUT BY COST to compute productivity, because Institutes aren't incentivized to check on overall monetary efficiency.)
Most of the money in government contracting goes to pay for reputation. Big agencies prefer to award contracts of $50 million and up, because otherwise they have too many projects to manage. So some COTR has to award a $100 million contract. She's gonna award it to some big company, like Raytheon or IBM, because most big contracts fail, and she won't get fired if IBM fails, but she will get fired if Name-You-Don't-Recognize fails. BigCorp will try to line up a bunch of subcontractors to do the work. The competing subcontractors each line up famous experts who claim they'll support the work.
So with a big contract, you end up with a hierarchy, with BigCorp at the top, subcontracting corporations below them, and PIs at the bottom who manage the productive work. BigCorp's job is to vouch for the reliability of the subcontractors, for which they take about a 50% cut. Each subcontractor's job is mostly to vouch for the reliability of the PI, for which they take a more than 50% cut. The famous experts might get a 10-20% cut of the remaining money, usually do approximately nothing, and are being paid for the use of their names, like the famous people on a company's Board of Directors.
None of this is irrational, given the premise that projects must be big. Big projects fail so often that taking a 50% cut at each reputation level in order to put some credible reputation on the line is worth it.
Do the math, and you'll see that little of the money on big projects is left to do work. Whereas with small projects, there's more money left to do work, but most of it is put into making a cool demo (think the MIT Media Lab), and (rough guess) 90% of small government projects are killed or thrown away without anybody ever using them, either because they threatened someone's job, or because nobody really wanted them in the first place.
This was a very enlightening write-up, thank you.
Thank you. From the not selected book reviews you might like, "Scientific Freedom: Elixir of Civilization".
Where can I find the not selected book reviews?
Item 3. here,
https://astralcodexten.substack.com/p/open-thread-169
This was excellent. And a bit of a shame it's buried in the substack comments. I know at some point in future I'll fruitlessly look for "That ACX comment on science funding stuff".
This needs one or another of Scott's epistemic status notes, but anyway, here's a story for you: One of the "A.I. Winter" events happened after a period in which the Department of Defense had decided that Artificial Intelligence was a priority, but then professors at America's most venerable universities had perfected the art of getting DoD grant money by laying out mumbo-jumbo that promised the moon and delivered nothing. Somebody at one of the big private research labs (probably Bell Labs or IBM) eventually collected some convincing enough evidence that this shitshow was what was going on. DoD responded by just turning off the money spigot. Then for a decade or so everything in AI research got very quiet - though some smart people may have been laying the groundwork for the more productive research directions that eventually emerged later.
As one of those 1980s AI researchers, I think all of the AI researchers I knew were sincere, though a few had crackpot theories. Mostly, symbolic AI never worked out as well as people had hoped. Expert systems were hard to deploy because they were by definition designed to replace experts, and most fields with "experts" have regulations about who is and is not an expert. That turns out not to include computer programs.
The DoD had its own problem with AI. I won't claim this was endemic, but it did happen more than once: The military wanted automated "red teams" for training exercises, and would periodically award grants to people who had state-of-the-art (symbolic) AI systems to control the red team. Then the product would be deployed, and the red team had to be programmed by some Spec 3 technician who /might/ know how to program computers, and that went poorly. Eventually the military would give up and look for something simpler. When they got the simpler thing, it was too simple to be a good red team, and someone would say "We should use more AI!", and the cycle would begin again.
I'll even hop in and say it!
The utter lunacy of this book quite aside, flying cars are around the corner and we have development of greentech to thank: electric cars beget better batteries that can do the job.
can you expand on why better batteries are pivotal?
Not the person you responded to but the simple explanation is increasing energy density.
That's simply nonsense; the best batteries have about 2 orders of magnitude lower energy density than gasoline.
I'm comparing newer, better batteries to older, worse batteries, not to gasoline.
How is that enabling flying cars? You can run them off gas (or other similar fuel) much easier.
Agreed, as Tom explained (much better) below me "Energy density. We could have built flying cars off gasoline a long time ago". We seem to have read Matej's question differently. I took their question to mean "why do we need better batteries before they can be used for flying cars", not "why are batteries are better than gasoline".
Check my substack, I have a podcast interview with an EVTOL battery expert that goes into detail: Cell Siders episode 8.
Energy density. We could have built flying cars off gasoline a long time ago, but they would have been extremely polluting (helicopters get around two miles per gallon). Now, the distinction between flying car, helicopter, and airplane is mostly just, like, the branding, but basically nobody is going to release something called a "flying car" and not have it be emissionless today, so that means batteries.
Energy density matters because a flying car starts to run into a rocket equation kind of problem, where the more range you want, the more batteries you need to carry, which means a greater mass, which decreases your range. This means that more batteries doesn't really solve your problem. So to get a longer range flying car, you need higher energy density batteries.
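That flattening can be sketched with a toy range model (every number below is an illustrative assumption I've picked for the sketch, not a figure from this thread):

```python
# Illustrative sketch: range of a battery VTOL as battery mass grows.
# All parameter values are rough assumptions for illustration only.

def range_km(battery_kg,
             empty_kg=800.0,         # assumed airframe + payload mass
             wh_per_kg=250.0,        # assumed pack-level energy density
             wh_per_km_per_kg=0.25   # assumed energy cost to move 1 kg 1 km
             ):
    """More battery adds energy linearly but also adds mass,
    so range flattens out instead of growing without bound."""
    energy_wh = battery_kg * wh_per_kg
    total_kg = empty_kg + battery_kg
    return energy_wh / (wh_per_km_per_kg * total_kg)

for kg in (100, 400, 800, 1600, 3200):
    print(f"{kg:5d} kg battery -> {range_km(kg):6.1f} km")
```

Each doubling of battery mass buys less range than the last, and the range can never exceed the ceiling `wh_per_kg / wh_per_km_per_kg` no matter how much battery you bolt on -- which is exactly why higher energy density, not more batteries, is what moves the needle.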
Is there a plausible pathway towards batteries with vastly better energy density?
Oh absolutely, the theoretical limits could be insanely better than we have now. But this is one of those "it's actually just really hard" problems.
For lithium ion, we're near the limit. Tesla's 4680 batteries are probably going to be within 30% of the theoretical limit for lithium ion chemistries. I don't know that number for sure but it's what I recall talking with battery experts I work with.
Solid state chemistries can theoretically do a lot better but there's a reason they're taking a very long time. The history of the battery industry is kind of like fusion power, always 10 years away or something. You should ignore all news that you hear about a "battery tech breakthrough" until it's in a consumer product that you could theoretically buy.
Side point - if we can work from home and in virtual reality and do not need to commute anywhere (for even 20-30% of the population)...do we need flying cars anymore? Might the technological innovation which allows us to work virtually reduce traffic and the need to live near our workplaces or have any kind of commute...might that innovation outpace the battery technology needed for flight? Flying with an electric car requires clearing quite a high bar of innovation and production which could still be 15+ years away from a cheap consumer model. While the battery technology for ground transport is essentially already or very soon to be a consumer grade product and getting better and cheaper over time.
It is sort of like the strange move toward 'smart thermostats', which are a short-term stopgap until we can build better houses, such as Passivhaus designs, which passively self-regulate: a small up-front investment reduces the running costs of a building dramatically over its lifespan.
A flying car is cool and fun, but I don't see it being more than a toy like an ATV or Jet Ski for a very long time, in the 30+ year range, until a person at home might 'call for a flying car automated taxi' to take them somewhere. Unless that home is a large mansion, in which case they can already afford to hire a human helicopter pilot right now.
I would say that flying cars are finally becoming available today not just because of dramatically improved battery technology, but also because of dramatically improved control software (and the hardware to run it, of course).
The computer science problem is trivial compared with the chemical engineering problem.
I think they are both quite difficult. Sure, it's possible to design the optimal control system on paper; but we have only recently gained the power to actually implement one that is small enough to fit into a flying car, and fast/smart enough to be usable by the average person (as opposed to a trained helicopter pilot). Doing so required massive advances in computer hardware as well as machine learning.
Not really. The reason that the computer science problem appears to be being solved isn't that it was easy to do, but that computers are getting so fast and so parallel that we don't have to shrink away from brute forcing it so much. Really this goes for a lot of things. Most changes in software stacks today aren't to enable new capability but are instead to make the work of software developers slightly less tedious, and to allow less-smart people to successfully accomplish software development tasks, at the cost of program performance (since we have increasing performance to spare).
Problem: we cannot get enough batteries to run 'green' - namely, wind, solar - energy. It can't be done.
If you're serious about stopping carbon emissions, back nuclear.
It's so aggravating. Like a level 1 engineering analysis would say "oh, renewables are intermittent? Well a large capacitor (battery) can solve that problem."
A level 2 analysis is like "well, do we have enough batteries?" The automotive industry is making incredible strides here. The problem is that all of those batteries are going to be used by the automotive industry. Vehicle-2-grid is not actually a good idea, people tend to want their car to be charged when they go to use it. Battery production MIGHT just scale extremely well now that demand is of no consequence, but it's just weird to see so many people (who are acting out of a very respectable abundance of caution over climate change) betting the future on the assumption that battery production will scale like semiconductors did. Do I hope that's going to happen? Absolutely, in fact my career is trying to make that happen.
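As a toy version of that "level 2" analysis, here's a back-of-envelope sketch of grid storage measured in EV-pack-equivalents (the demand figure, backup duration, and pack size are all rough assumptions chosen for illustration):

```python
# Back-of-envelope: how many EV-pack-equivalents to back up one
# calm, dark night of US grid demand? Illustrative assumptions only.

avg_us_demand_gw = 470   # assumed average US electric demand, GW
hours_to_cover = 12      # one windless night
ev_pack_kwh = 75         # assumed typical EV pack size, kWh

need_gwh = avg_us_demand_gw * hours_to_cover
packs = need_gwh * 1e6 / ev_pack_kwh   # GWh -> kWh, then packs
print(f"{need_gwh:,} GWh ~ {packs / 1e6:.0f} million EV packs")
```

Tweak the inputs however you like; the answer stays in the tens of millions of pack-equivalents, which is the scale problem being described -- those packs are the same ones the automotive industry wants.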
But we should build nukes.
What you or I "back" is irrelevant. Fission nuclear is too expensive and too slow and a huge expansion simply isn't going to happen. Internet debates are one thing, but the brute economics and politics are another. Nobody wants to deal with the many tricky problems associated with radioactive waste and you're not going to force them to. It's time to move on.
I'm strangely optimistic about fusion, which I think is a reasonable possibility within the decade. But fission is EOL.
D-T fusion still makes radioactive waste, although you have better control over what it is (there's a rather long list of elements you can't use in a wasteless fusion reactor because of (n,γ) or (n,p) reactions creating something with an annoying half-life -- "annoying", here, being "too long to just put it in a pond for a couple of years, short enough to make significant radiation").
"Fission nuclear is too expensive and too slow and a huge expansion simply isn't going to happen."
Did you read the review at all? It had giant sections about just why fission is expensive and it had nothing to do with the underlying technology.
Also, I was interested in fusion (especially after hearing about MIT's SPARC), but then I read a bit more about its energy densities and... they're dismal. Back to fission.
It has to do with regulation forcing companies to pay the costs that they would otherwise externalize, which makes the projects uneconomic. The risk of the underlying technology is what makes it so expensive. The only thing the book has to add is radiation denialism, which is not super useful.
Batteries are well and good, but how are these things going to get around?
My limited experience with VTOL drones of a size appropriate to carry humans makes me highly skeptical that we have the technology to automate key system functions (primarily take-off and landing, as well as collision avoidance) well enough that people could fly them with a level of training equivalent to what we give drivers.
For example: https://news.usni.org/2021/04/27/mq-8b-fire-scout-crashes-into-littoral-combat-ship-uss-charleston-on-deployment
Landing on a ship (which is always moving) is much more difficult than landing on the ground (which doesn't move). This is like saying that we don't allow cars to dock with trucks in motion so we shouldn't allow drivers to be licensed at all.
While landing on ships can be harder than landing on a fixed runway, that isn't the problem here:
https://www.navytimes.com/news/your-navy/2020/11/18/fire-scout-drone-crashes-at-california-base/
hmm. the book’s framing would say that contemporary battery/energy storage tech is finally on the level of 40s-era petrol tech, thus limited flying cars with likely similar range/speed/cost as a pitcairn autogyro.
We shouldn't have flying cars until people are willing to require that only computers may fly them. There isn't enough airspace above a city for humans to avoid crashing into each other.
"The public is wrongly terrified of nuclear energy, but they shouldn’t be. Radiation killed 0 people at Fukushima"
You really lost me with the Fukushima minimization. As if deaths at the time of the incident are the only relevant concern. How much land exactly is contaminated forever?
Well, the worst places in the Fukushima area are giving out about 90 mSv a year right now. That's about twice what we allow a radiation worker to be exposed to every year, and given that these limits are conservative, it's not clear that there's anything wrong with living there now. It's slightly smaller than the smallest yearly dose that has been linked to cancer, and given that we let people smoke, have wood stoves in cities, etc., we should probably let people live in Fukushima right now. I wouldn't move there myself until levels are down to the 10 mSv/year of Ramsar, where we know the people aren't getting more cancer than usual. But that already describes most of the Fukushima exclusion zone.
That is no small thing. And who is to say the next nuclear disaster doesn't create a far worse contamination problem. For a looong long time. To not even mention that aspect of the issue in a blithe dismissal of concerns about nuclear energy is just crazy to me. This is a real pattern in the arguments of nuclear apologists I've noticed, and doesn't exactly inspire confidence.
I don't mean to come across as blithe. I wouldn't endorse someone smoking even 1 cigarette each day, eating bacon for breakfast every day, or other risks in roughly the same range. I wouldn't live in Fukushima right now myself. But it's still not a huge risk and if other people have more tolerance for risk than I do I think that they should be allowed to smoke or eat bacon or live in Fukushima. And it isn't forever. Different isotopes decay at different rates. One with a half-life of 1 year is roughly 10 times more radioactive than one with a half-life of 10 years so the most radioactive isotopes tend to decay fastest, though the least radioactive ones will be with us for a long time. Still, I think that in a few decades when the worst parts of Fukushima are merely as radioactive as Denver it will be unreasonable to worry about the radiation levels and at that point I will actually move my attitude to blithe dismissal.
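The half-life arithmetic above checks out: for a fixed number of atoms, activity is N·ln(2)/t½, so a tenfold shorter half-life means tenfold higher activity. A quick sketch (atom count is an arbitrary hypothetical):

```python
import math

# Specific activity scales inversely with half-life: A = N * ln(2) / t_half.
N = 1e20  # same (hypothetical) atom count for both isotopes

def activity(n_atoms, half_life_years):
    """Decays per year for n_atoms of an isotope with the given half-life."""
    return n_atoms * math.log(2) / half_life_years

short = activity(N, 1.0)   # half-life of 1 year
long_ = activity(N, 10.0)  # half-life of 10 years
print(short / long_)       # ratio ~10: ten times more radioactive
```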
Yes, I meant the author was being blithe (I assume that is not you). Your clarification was quite helpful, thanks. My use of "forever" was of course not literal, but along the lines of "ruined for human use for decades/centuries"
Yes, that's certainly a serious cost to the accident. But it's also the case that many other kinds of power generation have even higher environmental costs, on average. For my part I'd say that once we finish getting rid of every fossil fuel power plant we should start getting rid of fission but I'd hold off on it until then and I wish we'd built more in the past before solar became cheap.
I'm certainly willing to entertain a cost/benefit comparison of nuclear with other options. That is why I find it frustrating when I am presented with what seems to be a deliberately incomplete or even misleading one such as a we find in this review.
Who can say there are no aliens walking around the Earth? Can you prove there are none? Should we formulate policy based on that premise?
The fact that you are calling people apologists is not a sign of good-faith intent to weigh the risks here. All electricity generation comes with risks. Solar panel manufacturing involves toxic chemicals which have a half-life of infinity years (non-radioactive chemicals don't decay). The silica sand refinement process releases tiny silica particulates into the air, which leads to silicosis, responsible for tens of thousands of deaths a year. More people will die this year falling off rooftops installing solar panels than will die building nuclear power plants. Are you an outspoken advocate against solar panel manufacturing?
You shouldn't be. Energy, on net, saves lives and improves global welfare. Why would anybody just be a nuclear apologist? Have you considered that so-called nuclear apologists genuinely believe that it's the safest form of energy generation?
Pretending that day-of-meltdown casualties are the only relevant metric of nuclear power risk is ridiculous to the point that anyone who employs such a metric can, in good faith, fairly be called an apologist.
I think a lot hinges on your phrase "only relevant metric." Those are words you have attributed to the author that the author did not write, yet you are using them to cast aspersions on his intentions and on everyone else who may genuinely think this is a good policy decision. I urge you to have greater patience and a more open mind.
That is the only metric he or she uses, to the exclusion of all the other relevant metrics I mentioned. It was their decision to make that strange choice, not mine just because I pointed it out.
You keep mentioning "day-of-meltdown" casualties. How big a deal do you think the other casualties were?
The point is casualties aren't the only relevant metric - there is also the issue of land contamination
To be uncharitably pedantic, exactly zero land is contaminated "forever" - the radioactive contaminants will, eventually, decay to undetectable levels.
But if you're asking the more reasonable question of "how much land is contaminated to the point of uselessness on a timescale longer than a couple decades"...well, it honestly doesn't seem like there's very much at all outside of the plant itself. The exclusion zone, such as it is, has pretty much shrunk to a couple towns in the immediate vicinity of the plant, and that's *with* the Japanese government's conservative-bordering-on-paranoid safety regulations.
It's also important to note the discharge of contaminated wastewater into the ocean adjacent to Fukushima, which can concentrate in sea animals, including food animals.
The isotope in question is tritium, which is constantly created by cosmic rays in the upper atmosphere, has a half-life of 12 years, and is NOT concentrated by sea animals.
Umm, no: A study published in the journal Science in August 2020 found traces of several other radioactive isotopes in the Fukushima wastewater, many of which take much longer to decay than tritium.
Some of that radioactive material may have already made its way into local wildlife; In February, Japanese media reported that shipments of rockfish were halted after a sample caught near Fukushima was found to contain unsafe levels of radioactive cesium.
Assuming you're talking about this study (https://science.sciencemag.org/content/sci/369/6504/621.full.pdf), the amount of other radioisotopes is utterly negligible. There's 500,000 Bq/liter of tritium radioactivity and around 10 Bq/liter for the other isotopes. For comparison, your body contains 8000 Bq of radioactivity, or about 100 Bq/liter. That is, if you took the tritium out of the wastewater and drank the rest, you'd probably lower the average radioactivity of your body.
As an aside, the 500,000 Bq/liter of tritium becomes 100 Bq/liter if you dilute it by a factor of 5000. Dumping it into the ocean is a good way to do that.
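The dilution arithmetic from the two comments above, as a quick sanity check (all the figures — 500,000 Bq/L tritium, ~10 Bq/L other isotopes, ~100 Bq/L body activity — are taken from the comments, not from an independent reference):

```python
tank_tritium = 500_000   # Bq/liter of tritium in the treated wastewater
other_isotopes = 10      # Bq/liter from all other isotopes (per the Science study)
body = 100               # Bq/liter, rough human body activity (~8000 Bq over ~80 L)

# The non-tritium isotopes are ~4 orders of magnitude below the tritium,
# and a factor of ~10 below normal body levels:
print(tank_tritium / other_isotopes)  # → 50000.0

# Dilution factor needed to bring the tritium down to body-like levels:
print(tank_tritium / body)            # → 5000.0
```
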
seafood: https://pubmed.ncbi.nlm.nih.gov/22863967/
Tritium
I don't want to dismiss the concern of waste leakage into the ocean...but it's not established that the waste leakage into the ocean is going to cause significant ecological damage. I know that may sound like a crazy claim to make, but radiation is extremely not intuitive.
Most scientists expected Pripyat (outside Chernobyl) to be some kind of toxic, uninhabitable wasteland for a hundred years. Instead, wildlife there is thriving beyond anybody's expectation. Vegetation growth even exceeds what we would expect if there was no meltdown. It's quite possible that Chernobyl-like radiation levels are harmful to large mammals like humans but good for smaller life forms like bacteria and plants. Nature is antifragile.
Now that is absolutely not to say that we should just go dumping nuclear waste into the ocean because it might be good for it, but it's not obviously a catastrophe.
You're talking about radiation hormesis, which has a substantial literature:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2477686/
I don't think so. Even with a standard "linear no threshold" model of radiation damage, the chance that wildlife could notice the difference is ~zero; I expect a flourishing of plants and wildlife purely as a result of the missing humans.
Specifically, it looks like a typical radiation exposure in Pripyat is 0.7 µSv/hr or 6 mSv/year as of 2009 (without any attempt at environmental cleanup AFAIK). For comparison, "In Europe, average natural background exposure by country ranges from under 2mSv annually in the United Kingdom to more than 7mSv annually in Finland." http://www.chernobylgallery.com/chernobyl-disaster/radiation-levels/
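The unit conversion behind those two figures is simple, assuming continuous exposure at the quoted dose rate:

```python
rate_usv_per_hr = 0.7        # quoted Pripyat dose rate, microsieverts per hour
hours_per_year = 24 * 365    # 8760

annual_msv = rate_usv_per_hr * hours_per_year / 1000  # convert µSv to mSv
print(round(annual_msv, 1))  # → 6.1
```
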
I'm sure human scientists have methods sensitive enough to notice a tiny increase in cancer rates due to exposures substantially below 100 mSv — but wildlife is more concerned with how to find its next meal. (Btw, this figure of 7 mSv is the highest I've ever heard and, if true, ought to make Finland a good place to do a study on radiation risks.)
Regarding your link: beware the man of one study, but especially beware the study by one man! I don't know how best to read the radiation-risk debate, but it's clear there *is* a debate, i.e. evidence for the effects of low-dose radiation seems inconclusive (apart from universal agreement that the effect, whatever it is, is small), and making this especially hard is the near-impossibility of finding healthy human subjects exposed to intermediate levels of radiation (10 to 100 mSv per year). An illustrative paragraph from a study shows this:
"Most of the previous ecological studies investigating associations between childhood leukaemia and naturally occurring sources of ionising radiation have found positive associations for radon (13-15) while for gamma radiation and cosmic rays results have been inconsistent (16-22). Early case-control studies of the association between natural sources of radiation and childhood leukaemia were underpowered and have reported mixed results (23-26). The largest of these, the UK Childhood Cancer Study, included over 2000 cases of childhood cancer and reported weak evidence of a negative association between childhood leukaemia and measured radon concentrations (25) but no evidence of an association with measured gamma dose rates (26). However, the proportion of eligible subjects participating in the measurements was low and varied by socio-economic status. Because exposure to these sources is ubiquitous and variation in cumulative doses received by children of similar age are small, large sample sizes are needed to detect the small predicted risk. Given the rarity of childhood cancer, the only way to achieve such sample sizes is by combining data over long periods of systematic cancer registration." https://boris.unibe.ch/135621/15/Mazzei_JRadiolProt_2019_AAM.pdf
In the same paragraph we see childhood leukaemia associated positively and (in the largest study) negatively with radiation from radon. Also mentioned is the difficulty of measuring the effect (it's especially difficult because cancer is normally caused by something other than radiation, so the effect we're looking for could easily be swamped by other things that affect cancer rates.)
In general this is the problem with the "precautionary principle": if something is banned for being potentially harmful, it becomes extremely difficult to clearly show that it IS harmful.
My understanding of the effect of the relatively low dose of radiation outside Chernobyl on plants and animals is that they are thriving despite clear negative effects of radiation (increased frequency of abnormalities, decreased growth rings in trees, etc.), because the absence of human pressure more than compensates for the effect of radiation. But given our current knowledge, it seems to me extremely unlikely that radiation is good for any known living organism.
"But given our current knowledge, it seems to me extremely unlikely that radiation is good for any known living organism." -- A study here suggests that some amount of radiation isn't just good, it's required: https://www.pbs.org/wgbh/nova/article/life-without-radiation/
At some point, you have to say what form of radiation you're talking about. You know that sunlight is radiation, right? I assume you agree that sunlight is good for life.
Radio waves are radiation. If you're using wireless internet, you're using radiation. Sound is radiation.
As with anything else in science, numbers matter, and what amount of radiation you're talking about is crucial. But it's clear that the idea that all radiation in any amount is bad for life is just ignorant.
It was perfectly clear what kinds of radiation Emma was talking about.
It's not clear to me, then: what kinds?
Thank you, I also thought it was obvious from the context.
"At some point, you have to say what form of radiation you're talking about. "
I thought it was obvious that when I said "the relatively low dose of radiation outside Chernobyl" I meant α and β radiation due to radioactive contamination, not sunlight!
"A study here suggests that some amount of radiation isn't just good, it's required: https://www.pbs.org/wgbh/nova/article/life-without-radiation"
This certainly does not demonstrate that (hard!) radiation is necessary. The only well-established result is that if you hit bacteria with a high dose of radiation, they survive better if they had previous exposure to a low dose. That is a far cry from showing that some (hard!) radiation is necessary for life. α particles really do a lot of damage to complex molecules; it seems extremely unlikely that this damage could become necessary.
" But it's clear that the idea that all radiation in any amount is bad for life is just ignorant." Obviously true if you include lower energy radiation. But I maintain that "given our current knowledge, it seems to me extremely unlikely that (hard!) radiation is good for any known living organism".
I didn't know if you meant just alpha and beta, or also gamma radiation.
I think the Nova article says (or implies) that the scientists restricted cosmic rays and maybe natural radiation from heavy isotopes. Cosmic rays are protons and other nuclei (like an alpha particle, I guess); I don't know what the other consists of, but probably neutrons, and alpha and beta particles.
I was wrong to say this radiation is necessary for life, but the experiment suggests that hard radiation is beneficial to life. That contradicts your statement that radiation is not good for any known living organism. Am I missing something here?
Aside: I believe that radiation in some form, probably electromagnetic, is required for all life, because we need it to get mutations in DNA, and we need mutations in DNA to get evolution. It could be that DNA polymerase could've evolved to be more error-prone to make up for less radiation.
Whoa, so the ecological impact of the worst civilian nuclear disaster in history is exceeded by the prior ecological impact of humans simply living there? That definitely lowers my estimate of the ecological consequences of civilian nuclear power.
I am not sure. The impact of humans "simply living there" on plants and animals is huge! Imagine an alien civilization landing on Earth and colonizing it for their own purposes, without noticing or caring about us; the impact on us would probably be extremely high.
Sure, it's a real paradise there in Pripyat/Tschernobyl ...
A couple of questions still remain:
Why are we, the European nations, paying Ukraine to build a new "Sarcophagus" on top of the old one?
Why are European countries still routinely measuring radiation levels in food (mushrooms, boar, deer, vegetables, ...) and destroying those exceeding radiation thresholds? (https://www.lgl.bayern.de/lebensmittel/chemie/kontaminanten/radioaktivitaet/ue_2018_radioaktivitaet.htm)
The findings are gradually decreasing (it has only been about 35 years), and it was probably just bad luck that it is only 1,400 km away (800 miles might sound close to US citizens, but for Europeans there are 4 countries in between).
For the same reason they are banning/overregulating GMO - they are pretty paranoid, European nations.
Well, none of it is contaminated "forever," but the current restrictions on residence apparently apply to about 371 km^2. You may want to bear in mind that industrial pollution routinely affects much larger swathes of living area, and routinely causes far more deaths. For example, China's Huai River policy encouraging the use of coal north of the river has been argued to cut 5 years of life expectancy off the 500 million people who live there:
https://www.pnas.org/content/110/32/12936
Industry pollutes, and if we choose to live technological lives we always run health risks (particularly when industrial plant is combined with things like earthquakes and tsunamis). It's not possible to run zero risks without living in caves and hunting antelopes with spears. So perhaps the focus should be on *relative* risks, and trade-offs. Being blindingly Pollyanna-enthusiastic about nuclear power is dumb. But so is being "terrified."
I thought it was clear that I wasn't using "forever" literally. As I said elsewhere, I'm certainly willing to entertain a cost/benefit comparison of nuclear with other options. That is why I find it frustrating when I am presented with what seems to be a deliberately incomplete or even misleading one, such as the one we find in this review.
Well, maybe try not to respond to oversimplification with an oversimplified critique?
My critique was not oversimplified. I just used a term loosely in such a way that the meaning was still obvious. Neither did the author oversimplify. They lied by omission. So maybe don't misrepresent the whole exchange?
I don't agree that your oversimplification was obviously not literal, while the author choosing deaths as the comparison criterion is a lie. If you're not willing to be charitable to the author, you shouldn't expect others to be charitable to you.
Really? You thought that by "forever" I meant that when the sun turns into a black hole Fukushima will still be irradiated? Come on. That could not be more different from choosing a deeply misleading metric.
I think someone should enforce a dictum:
EVERY DISCUSSION OF RADIATION RISKS MUST INCLUDE A DIRECT COMPARISON TO COAL!
To pick just one factor: land made uninhabitable. And mountaintop removal?
And don't forget that some of the pollutants associated with burning coal are themselves radioactive.
Speaking of which, the "meltdown world" in which every single nuclear reactor suffers a Fukushima-style meltdown seems clearly safer than the coal-heavy world we actually live in: https://www.reddit.com/r/nuclear/comments/jtm6hm/how_bad_is_meltdown_world/
I had the same reaction to the Chernobyl minimization. If only 43 people had died, that'd be one thing, but glossing over the health impacts of the disaster on hundreds, if not thousands, of other people seems dishonest to me. I'm pro-nuclear power, and while I think when it fails it fails due to human error and not scientific error, the costs of failure really are very, very high.
You really lost me with the "forever" bit. Yes, yes, you didn't mean it literally. But either this is irrational blind fear, or it's a fundamentally quantitative problem. If it's irrational blind fear, then there's nothing for it but to try and point the stupid people somewhere else. If it's a fundamentally quantitative problem, then you don't address it by saying "how much?" and then introducing a spurious infinity term for rhetorical effect.
About 300 square kilometers near Fukushima have been turned into a de facto nature preserve for the next few decades. Turns out nuclear-power-plant levels of radioactive contamination are a pretty good deal for most animals, because it poses relatively low risk over their natural lifespan but is enough to drive off their chief natural predator and habitat-paver-over.
An analysis I did on SSC or DSL a year or so ago, suggests that if the human race generated 100% of its electric power using nuclear power plants built to 20th-century standards (but no new atom bomb factories), the rotating nuclear nature preserve would at any point be equivalent to IIRC Macedonia or Haiti in size, with individual zones rotating in and out every fifty years or so. Or, if you prefer human habitats to the wildlife sort, you can build your power plants to more modern designs and cut that down.
Compare and contrast to the amount of land we'd have to turn into e.g. solar farms, where the sun never shines and nothing grows. No, it's not enough to just put solar panels on the roofs of existing buildings, and we're not even going to talk about solar roads.
I'm open to the possibility that radiation from meltdown concerns can be addressed. But the author didn't even acknowledge them, let alone make a case as you have here
If somebody proposes to do something useful, like providing abundant cheap clean energy, then the burden of making the case ought to fall on the person saying "no you're not allowed to do that because it's too dangerous". You stepped up to the plate and completely failed to make that case, offering only a rhetorical question including an objectively false assumption.
And that's pretty much par for the course in this business. Lots of people believe that they know that everybody knows that nuclear power is "unacceptably dangerous" and that simply alluding to that "fact" is a slam-dunk win. Almost nobody actually makes the case.
That is not at all what I said, you are dancing with a strawman.
I'd wager it's a lot less land than will be rendered arid by global warming as a result of not having enough nuclear reactors.
I think the real reason for the turn against nuclear was that the public was used to thinking of fallout in terms of being downwind of thermonuclear groundbursts, where it meant dying puking your guts out in hours or days, rather than a theoretical increase in your risk of cancer. Order-of-magnitude comparisons are made hard by the fact that radiation is invisible. I wrote about the matter [on my blog](http://hopefullyintersting.blogspot.com/2019/06/sometimes-you-need-new-word.html) at more length. I already see a change of attitude as the generation who grew up in the shadow of the mushroom cloud dies off and those of us who grew up with reactor meltdowns as our image of fallout take their place, so I'm optimistic about that aspect of the future.
At the core of The Green Religion is something I call the environmentalist's habitat paradox: if you really like the environment, a natural first-order desire would be to live in a cottage deeply secluded in nature, far from civilization. But this is either unscalable (and therefore antisocial) as you cannot allow too many others to indulge in the same lifestyle, or you DO proselytize this lifestyle and it becomes environmentally catastrophic. The paradox is that if you love the environment (as in truly want to protect it), you must live in a city.
Eco-pragmatism needs better branding. We need extremely lush, literally-covered-in-plants cities powered by cheap nuclear.
Part of the branding problem is a lack of good definitions, which leads many to lump serious thought on the cost/benefit of various levels of environmental protection in with the dumbest environmental protesters--chained to trees while using their iPhone to tweet about how a plant has the same moral value as a human. Worse, any attempt at serious problem-solving or compromise in public debate very quickly devolves into extremes shouting at each other, because government control via regulation makes things winner-take-all at the entrepreneurial-investor level and a status signal of tribal politics and virtue at the level of regular voters.
Well I think it's more the same problem which exists with everything else (and why I'm a rationalist) which is that people choose the right answer with their feelings. If you're someone that really likes the aesthetics of environmentalism, well the last place you want to live is a concrete jungle.
Enter Derek Jensen, an anti-civilization advocate who lives in the remote wilderness with bears. He thinks civilization, cities, and industrialization were horrible ideas, and everyone should give them up and go live "more natural lifestyles" where they hang out with bears in the woods. Never mind that there are probably fewer than 300,000 bears alive in the world, so, at best, each bear would have to befriend tens of thousands of people. Not only would this certainly not be good for the bears (not sure what the Dunbar number is for bears, but I think it's safe to assume it's less than ten thousand), but Jensen himself would almost certainly not want to share his bear friends with ten thousand other people. I know countless environmentalists/activists who would say that Jensen's lifestyle is idyllic.
I'm a person that just naturally has to think through what the natural consequences of things are. Apparently most people don't do that; they just think "I'd like to live in the woods," and that's the end of any kind of consequential analysis. And so, with these ideologies, when people say "we should all abandon civilization and live in the woods," somewhere in there either nature gets absolutely destroyed beyond the likes of anything we have ever seen (which doesn't sound like their goal), or colossal numbers of humans vanish somehow. And so I am extremely skeptical of these ideologies.
Looking at some very rough numbers, there is enough forest in the world for everyone to live on a little more than one acre of forest. Maybe we could live in small communities of 50-100 people on 50-100 acres, and maybe some people would be happier living on the savannah or other open grasslands, freeing up space for the rest of us to have 1.5-2 acres or whatever.
Some Googling did not get me an answer to how much forested land a person needs to live in a sustainable way, but I know it's a lot more than an acre. That's especially true for the many many people who would have to live in Siberia and other inhospitable forests, where heating fuel for the winters would be a big issue.
On the bright side, the billions of deaths in that first year would certainly help with the "colossal numbers of humans vanish somehow" problem!
I'm quite sympathetic to Jensen's goal, but I think there may be much more sustainable options available. For instance, I choose to live in an area where there are so many forests around that I can literally see at least one from any vantage point within 50 miles. There's several hundred thousand people in that range, mostly clustered in a series of small towns and surrounded by farmland.
I don't think it's intellectually responsible to be sympathetic to Jensen's goals. Climate change and sustainability have always been human problems. We have no direct business case for reducing emissions, but we know we have to, can we make ourselves do it? We can hardly get people to stop feeding wild deer, raccoons, or ducks. Somehow we're supposed to expand the average person's sphere of altruistic concern to contain the ecosystems of all of the "forest[s] of the world" and give them a scientifically rigorous understanding of how to achieve that?
This is a fantasy, perhaps worth thinking about, debating, and taking seriously back in the 1960s when it emerged in force, but given humanity's track record since then we have no reason to believe such a psychological/sociological/educational stunt can be performed, and it's frankly dangerous to continue entertaining it. Green hippies are in their bubble, waiting for everyone to come join the drum circle, while mainland America is still rolling coal and ICEing EV charging stations.
To be sympathetic is not to actually support. I'm "sympathetic" in the sense that most humans (and incidentally also environmentalists) have been misled by their ignoring the numbers in favor of policy ideas based on feelings, and who can blame them for just doing the usual human thing?
Now, when I point out the numbers to someone and they're like "you red tribe bastard!" and I'm like "I'm not red tribe" and they're like "whatever I'm outta here", that's when my sympathy dries up.
Yes. We don't have to live in cities of a million persons though. We can live in 100 dense pedestrian pockets connected by silent inconspicuous hyperloop.
I don't think that it's obvious the ideal size is 10,000 people and afterwards you see diminishing returns to scale for agglomeration. Even if it were, a 10 billion human planet would require one million such small cities. A quick google search tells me that there are currently 10,000 "cities" worldwide. We should absolutely not want to find 100x as many locations around the world for more cities. That would necessarily mean fewer nature reservations.
Call it what you want, but not green.
Pockets of 10,000 persons.
It seems to me that we've moved on to trickier problems to solve that are mostly based on coordination rather than simply maximizing consumption. For instance, imagine a 40-story office building with about 2,000 workers where each one commuted by flying car. How many landing strips do you need? Remember that unlike parking spots, you can't stack them -- each one needs to be open to the sky -- and you probably need *minimum* five minutes' clearance between cars. If everyone arrived between 8am and 9am, that means that each landing strip can serve a dozen employees, so you'd need 166 total just for this one building. From a perspective of land use and of time spent getting from your parking spot to your destination, this just sounds terrible. So we should be happy we don't have flying cars, because the societal equilibrium they'd put us in would be terrible.
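The landing-strip arithmetic above works out like this (numbers straight from the comment; the 166 is 2000/12 rounded down, so strictly you'd need one more strip):

```python
workers = 2000
window_minutes = 60      # everyone arrives between 8am and 9am
clearance_minutes = 5    # minimum spacing between landings on one strip

landings_per_strip = window_minutes // clearance_minutes  # 12 per strip per hour
strips = workers / landings_per_strip                     # ≈ 166.7
print(int(strips))  # → 166  (167 if you round up)
```
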
One could argue that this particular problem is specific to transportation technologies, but social media has amply demonstrated that it's possible for many technologies to lead to bad equilibrium outcomes.
All this is not to say that technological stagnation isn't a problem, but when people talk *specifically* about flying cars, I discount their arguments specifically for the above reasons.
I'm not really sympathetic to this kind of argument because you could probably make similar sounding arguments about a lot of technologies we take for granted today.
Imagine if you had to propose making, say, the national power grid in today's climate if it didn't already exist. You are going to make a giant country-spanning grid of copper wires that makes a circuit into every single household? The wires themselves are ugly, and you want to have them on every street? And these wires are dangerous if anyone fiddles with them or damages them. And if they are buried, what happens when someone hits them with a spade? Wouldn't that hurt them? And then you have the wiring in the walls of your wooden houses? Wouldn't that be a fire risk? And the sockets themselves have a potentially deadly voltage just sitting there on the wall, where any kid could stick in a fork and kill themselves.
How could you ever possibly scale this?
If something is useful on an individual level, then it will be slowly adopted and then we will find a way to scale it later.
I don't think the argument I'm making is about "how do you scale this" -- spatially inefficient transportation technologies have negative returns to scale. If you have the first car? Sure there aren't any gas stations, but it's still awesome (especially compared to the first telephone) because you can get anywhere 5x faster than anyone else. But if everyone has a car, it clearly makes your car go slower.
So in fact, I would make exactly this argument about technology (the car) that we do have today. How come scientists haven't solved traffic? It's because road space is a special kind of resource where the richer you get, the scarcer it gets.
If you are saying that your argument about Flying Cars applies equally to regular cars, then how do you explain the fact that Cars are widely in use today?
Doesn't that imply that Flying Cars would still be widely in use even with the challenges that you foresee?
You mentioned a bad equilibrium before and compared it to social media, but that implies that regular cars are in a bad equilibrium too and that we would be better off without them? I don't think I can agree with that.
I do think that regular cars in the way currently used in the US are a bad equilibrium! If you look at Europe or Asia, the number of car trips per capita is about half of what's in the US, and it's not because they're behind in car technology. Similarly, I don't think that making cars better technologically solves the problems we have with them. Teslas can accelerate much faster than internal-combustion cars, but do they actually get you where you're going any faster?
Yeah, the way cars are used in the US seems like a nightmare for many French people...
Counterargument: The most appealing argument for flying cars was reduced reliance on public infrastructure, i.e. highways. Also, flying around buildings is dangerous. Restrict flighted travel to outside cities. Continue using urban parking garages designed for wheeled vehicles.
Heinlein's "The Puppet Masters" has a lot on how flying cars would be a boon in the rural Midwest, but his seem dangerous in NYC or even low rise DC.
Honestly, this is what we should have said with ground-based cars. Driving around people is dangerous. Restrict >20mph travel to outside cities. Continue urban transport using small and/or dense vehicles like bikes, feet, and streetcars.
And the noise!
Stand back a few paces from this discussion, and ask yourself honestly: Do you really believe there will ever be ten million flying cars buzzing around the US?
For me it is just ludicrous to imagine that.
For comparison, can you imagine millions of persons moving around the US at 300 mph in silent hyperloops?
I sure can.
I don't think the argument from personal incredulity is especially strong here; I find both scenarios equally easy/hard to imagine.
> so you'd need 166 total just for this one building. From a perspective of land use and of time spent getting from your parking spot to your destination, this just sounds terrible
I mean, if you had a 2000-person building where everyone commutes by non-flying car then you'd need a pretty big parking structure too. In order to just maintain the same 166-space footprint that you already think is too big, you'd need a twelve-storey parking structure, which is very large.
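And the parking-structure comparison, on the same footprint assumption:

```python
cars = 2000
spaces_per_floor = 166   # same footprint as the 166 landing strips

floors = cars / spaces_per_floor  # ≈ 12.05
print(round(floors))  # → 12
```
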
From a land use point of view, flying cars (idealised flying cars anyway) would allow us to reclaim all the space currently devoted to streets and roads.
Also if we had (idealised) flying cars we probably wouldn't bother having 40-storey office buildings anyway, our cities would be less dense because you could travel greater distances with ease.
One nit to pick: general (private) aviation was not done to death by regulation as much as it was by product liability torts. Lawyers somehow got extremely good at convincing juries that the aviation accident equivalent of "16-year-old who just got their driver's license buys a Ferrari, drives it at 120mph on a twisting mountain road at night in a rainstorm, and predictably winds up dead after careening off a cliff" was somehow Ferrari's fault, and awarding the idiot's family millions of dollars in damages. Given that Ferraris are already a low-volume market, it doesn't take too many such lawsuits to drive the cost of buying a new one through the stratosphere.
(The actual scenario would be that a rich retired athlete or businessman would buy an expensive, complex high performance airplane, do the minimum amount of training required, then fly off into bad weather in unfamiliar areas - which they should have known not to do if they had been paying attention in flight school - and run into a mountain, or building, or just plain crash. And their widow would then sue the airplane manufacturer, and usually win.)
In 1994, Congress passed https://en.wikipedia.org/wiki/General_Aviation_Revitalization_Act, which was supposed to fix this. Lawyers just switched targets from the manufacturers to the mechanics who work on planes, with the predictable result that airframe & powerplant mechanics refuse to sign off on an airplane's annual inspection unless everything is perfect, increasing cost of ownership for private airplanes.
All that being said, as a private pilot, the idea of having to share the skies with several orders of magnitude more aircraft, being flown by the equivalent of your average automobile driver who can't be bothered to use their turn signal or put down their phone while driving, is terrifying.