Wednesday, 26 November

09:00 EST

Texas Buys $5 Million In BTC ETF As States Edge Toward First Government Crypto Reserves [Slashdot]

Texas has purchased $5 million worth of BlackRock's bitcoin ETF as an initial step toward creating the first state-level bitcoin reserve in the U.S. "[O]ther states have previously invested in such funds with public-employee retirement money," notes CoinDesk. "Michigan has been building such an investment, and Wisconsin sold its $350 million pension-fund stake in the BlackRock ETF in May." From the report: A few weeks ago, Texas moved past its deadline to "capture the industry's best practices so it can utilize these practices in the implementation and management" of its bitcoin (BTC) reserve, according to its formal request for information issued in September. Entities across the industry provided input on how it could set up and manage the stockpile conceived of in the Texas Strategic Bitcoin Reserve and Investment Act. Last week, the state comptroller's office moved to secure $5 million in BlackRock's iShares Bitcoin Trust (IBIT) as a placeholder, a spokesman for the Texas Comptroller of Public Accounts told CoinDesk on Tuesday. It's an opening move as the state continues to work toward a contract with a custodian, he said, which will take place after it develops its formal request for proposal.

Read more of this story at Slashdot.

07:00 EST

How the turkey trotted its way onto our Thanksgiving tables — and into our lexicon [NPR Topics: News]

One of the two national Thanksgiving turkeys, Waddle and Gobble, presented to journalists in the Willard Room of the Willard InterContinental on November 24, 2025, in Washington, D.C., for the 78th annual Turkey Pardoning at the White House.

Much of the turkey's early history is shrouded in uncertainty, historians and etymologists say — which is particularly true of how the bird got its name.

(Image credit: Anna Moneymaker)

Pope Leo XIV to visit Turkey and Lebanon on first foreign trip [NPR Topics: News]

Pope Leo XIV leaves after a Mass for the Jubilee of the Choirs in St. Peter's.

Pope Leo XIV is embarking on his first foreign trip, a pilgrimage to Turkey and Lebanon, amid Mideast tensions and the media glare that will document history's first American pope on the road.

(Image credit: Alessandra Tarantino)

The origins of your dog's unique look may be older than you think [NPR Topics: News]

Modern dogs come in all shapes and sizes. A new study finds they started evolving much of that physical diversity thousands of years ago.

A new study finds that about half of the physical variation seen in modern dogs existed during the Stone Age.

(Image credit: Stephanie Keith)

The U.S. produces a lot of food waste. This place wants to address it [NPR Topics: News]

A compost facility in Staten Island, New York, aims to repurpose the city's food waste.

Even amid rising grocery prices and increased sensitivity to environmental issues, Americans still trash once-edible food at alarming rates.

(Image credit: Keren Carrion)

06:00 EST

Study Claims To Provide First Direct Evidence of Dark Matter [Slashdot]

An anonymous reader quotes a report from the Guardian: Nearly a century ago, scientists proposed that a mysterious invisible substance they named dark matter clumped around galaxies and formed a cosmic web across the universe. What dark matter is made from, and whether it is even real, are still open questions, but according to a study, the first direct evidence of the substance may finally have been glimpsed. More work is needed to rule out less exotic explanations, but if true, the discovery would go down as a turning point in the decades-long search for the elusive substance that is said to make up 27% of the cosmos. "This could be a crucial breakthrough in unraveling the nature of dark matter," said Prof Tomonori Totani, an astrophysicist at the University of Tokyo, who reported that gamma rays emanating from the centre of the Milky Way appeared to bear the signature of the substance. [...] To search for potential dark matter signals, Totani analysed data from Nasa's Fermi Gamma-ray Space Telescope, which detects the most energetic photons in the electromagnetic spectrum. He spotted a pattern of gamma rays that appeared to match the shape of the dark matter halo that spreads out in a sphere from the heart of the galaxy. The signal "closely matches the properties of gamma-ray radiation predicted to be emitted by dark matter," Totani told the Guardian. Details are published in the Journal of Cosmology and Astroparticle Physics. If Totani has seen dark matter at work, the observations suggest it is made from elementary particles 500 times more massive than the proton. But far more work is needed to rule out other astrophysical processes and background emissions that could explain the signals. Totani said the "decisive factor" would be detecting gamma rays with the same spectrum from other regions of space, such as dwarf galaxies. According to Prof Justin Read, an astrophysicist at the University of Surrey, the lack of significant signals from such galaxies strongly argues against Totani having seen gamma rays emitted from dark matter particle annihilation. Prof Kinwah Wu, a theoretical astrophysicist at UCL, urged caution: "I appreciate the author's hard work and dedication, but we need extraordinary evidence for an extraordinary claim," he said. "This analysis has not reached this status yet. It is a piece of work which serves as an encouragement for the workers in the field to keep on pressing."

Read more of this story at Slashdot.

03:00 EST

China Launches An Emergency Lifeboat To Bring Three Astronauts Back To Earth [Slashdot]

China launched an uncrewed Shenzhou 22 spacecraft to serve as an emergency lifeboat for three astronauts aboard the Tiangong space station after a docked return craft was found to have a cracked window, likely caused by space debris. "A Long March 2F rocket fired its engines and lifted off with the Shenzhou 22 spacecraft, carrying cargo instead of a crew, at 11:11 pm EST Monday (04:11 UTC Tuesday)," reports Ars Technica. "The spacecraft docked with the Tiangong station nearly 250 miles (400 kilometers) above the Earth about three-and-a-half hours later." From the report: Chinese engineers worked fast to move up the launch of Shenzhou 22, originally set to fly next year. On November 4, astronauts discovered one of the two crew ferry ships docked to the Tiangong station had a damaged window, likely from an impact with a small fragment of space junk. [...] Now, 20 days after the saga began, the Tiangong outpost again has a lifeboat for its long-term residents. Astronauts Zhang Lu, Wu Fei, and Zhang Hongzhang will return to Earth on the Shenzhou 22 spacecraft next year, soon after the arrival of their three replacements. The Tiangong astronauts will head outside the station on a spacewalk to inspect the damaged window on Shenzhou 20. Eventually, Shenzhou 20 will depart Tiangong and reenter the atmosphere with cargo. Assuming a smooth landing, Chinese engineers will have an opportunity to get a closer look at the damage on the ground to inform the design of future spacecraft. A preliminary assessment indicates the crack is in the outermost layer of heat-insulating glass in Shenzhou 20's porthole window, according to Chinese state media. Engineers on the ground conducted simulations and wind tunnel ablation tests to determine whether the window might fail during reentry. "The results showed that the cracks would still propagate further," reported CCTV, China's government-run television network. "We held a review meeting, and everyone agreed that ensuring the safe return of the astronauts was too risky with the glass damaged," Zhou said. While this crew is just one month into their planned six-month expedition, an emergency could force them to leave the station and return home at any time. Although remote, another collision with space junk, a major systems failure, or a medical emergency involving one of the astronauts could trigger an evacuation. That's why Chinese officials wanted to quickly launch Shenzhou 22 to give the crew a ticket home. The International Space Station follows the same policy, with SpaceX's Dragon spacecraft and Russian Soyuz ships serving as lifeboats until their crews' scheduled return to Earth.

Read more of this story at Slashdot.

Italy now recognizes the crime of femicide and punishes it with life in prison [NPR Topics: News]

Activists perform on the occasion of the International Day for the Elimination of Violence against Women, in Rome, on Tuesday.

Italy's parliament on Tuesday approved a law that punishes femicide with life in prison. The vote coincided with the International Day for the Elimination of Violence Against Women.

(Image credit: Roberto Monaldo)

Why AI Safety Won't Make America Lose The Race With China [Astral Codex Ten]

If we worry too much about AI safety, will this make us “lose the race with China”[1]?

(here “AI safety” means long-term concerns about alignment and hostile superintelligence, as opposed to “AI ethics” concerns like bias or intellectual property.)

Everything has tradeoffs, regulation vs. progress is a common dichotomy, and the more important you think AI will be, the more important it is that the free world get it first. If you believe in superintelligence, the technological singularity, etc, then you think AI is maximally important, and this issue ought to be high on your mind.

But when you look at this concretely, it becomes clear that the effect of AI safety policy on the race is too small to matter - so small that even the sign is uncertain.

The State Of The Race

We can divide the AI race into three levels: compute, models, and applications[2]. Companies use compute - chips deployed in data centers - to train models like GPT and Claude. Then they use those models in various applications. For now, those applications are things like Internet search and image generation. In the future, they might become geopolitically relevant fields like manufacturing and weapons systems.

Compute: America is far ahead. We have better chips (thanks, NVIDIA) and can produce many more of them (thanks, TSMC). Our recent capex boom, where companies like Google and Microsoft spend hundreds of billions of dollars on data centers, has no Chinese equivalent. By the simplest measure - total FLOPs on each side - we have 10x as much compute as China, and our advantage is growing every day. A 10x compute advantage corresponds to about a 1-2 year time advantage, or a 0.5-1 generation advantage (eg GPT-4 to GPT-5).
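
The conversion from a compute ratio to a time lead depends on how fast frontier training compute grows. A minimal sketch of the arithmetic, assuming - my assumption, not a figure from the post - that frontier compute grows roughly 3-5x per year:

    import math

    def years_of_lead(compute_ratio: float, annual_growth: float) -> float:
        # Years until the trailing side's compute reaches the leader's current
        # level, i.e. how long a lead the ratio represents if both sides'
        # compute grows at the same exponential rate.
        return math.log(compute_ratio) / math.log(annual_growth)

    for growth in (3.0, 4.0, 5.0):  # assumed annual growth factors
        print(f"10x gap at {growth}x/year growth ~ {years_of_lead(10, growth):.1f} years")
    # ~2.1 years at 3x/year, ~1.7 at 4x, ~1.4 at 5x - consistent with "about 1-2 years"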

Models: The quality of foundation models - giant multi-purpose AIs like GPT or Claude - primarily depends on the amount of compute used to train them, so America's compute advantage carries over to this level. In theory, clever training methods and advanced algorithms can make one model more or less compute-efficient than another, but this doesn't seem to be affecting the current state of the race much - most advances by one country are quickly diffused to (or stolen by) the other. Despite some early concerns, neither DeepSeek nor Kimi K2 - the most prominent Chinese models - provides strong evidence of a Chinese advantage in computational efficiency (1, 2).

Applications: This is where China is most likely to dominate[3]. They already outdo America at most forms of advanced manufacturing and infrastructure deployment (eg solar, high-speed rail). And as a command economy, they have more ability to steamroll over concerns like job loss, intellectual property, et cetera.

China knows all of this and is building their AI strategy around it. The plan, which some observers have dubbed “fast follow”, goes like this:

  1. Work hard to catch up with US chip production. They are very far behind here, but also have a long history of catching up to the West on things when they put their mind to it, so they feel up to the challenge. They estimate this will take ten years.

  2. For the next ten years, accept that they may lag somewhat behind America in compute, and therefore on models. But if they can smuggle in chips and steal US technological advances, they can keep this to a manageable 1-2 year gap, rather than a disastrous 4-5 year gap.

  3. Leverage their applications advantage as hard as possible. They imagine that sure, maybe America will have AI that’s 1-2 years more advanced than theirs. But if our smarter AI is still just sitting in a data center answering user queries - and their dumber AI is already integrated with tens of thousands of humanoid robots, automated drones, missile targeting systems, etc - then they still win.

This is a very practical strategy from a very practical country. The Chinese don't really believe in recursive self-improvement or superintelligence[4]. If they did, they wouldn't be so blasé about the possibility of America having AIs 1-2 years more advanced than theirs - if our models pass the superintelligence threshold while theirs are still approaching it, then their advantage in humanoids and drones no longer seems so impressive.

What is the optimal counter-strategy for America? We’re still debating specifics, but a skeletal, obvious-things-only version might be to preserve our compute advantage as long as possible, protect our technological secrets from Chinese espionage, and put up as much of a fight as possible on the application layer.

The State Of AI Safety Policy

It’s worth being specific about what we mean by “AI safety regulation”.

The two most discussed AI safety bills of the past year - California's SB 53 and New York's RAISE Act - as well as Dean Ball's proposed federal AI safety preemption bill - all focus on a few key topics:

  • The biggest companies (eg OpenAI, Anthropic, Google) must disclose their model spec, ie the internal document specifying what their models are and aren't allowed to do.

  • These companies should come up with some kind of safety policy and disclose it.

  • These companies can’t retaliate against whistleblowers who report violations of their safety policy.

  • These companies should do some kind of evaluation to see if their AIs can hack critical infrastructure, create biological weapons, or cause other mass-casualty events.

  • If they find that the answer is yes, they should tell the government.

  • If one of these things actually happens during testing, they should definitely tell the government.

These are relatively cheap asks. For example, the evaluation to see whether AIs can hack infrastructure will require hiring people who can conduct the evaluation, allocating compute to the evaluation, etc. But on the scale of an AI training run, the sums involved are tiny. Currently, two nonprofits - METR and Apollo Research - do similar tests on publicly-available models. I estimate their respective budgets at $5 million and $15 million per year. Nonprofits can always pay lower salaries than big companies, so it may cost more for OpenAI to replicate their work - for the sake of argument, $25 million. Meanwhile, the cost to train GPT-6 will probably be about $25 - $75 billion, with a b. So the safety testing might increase the total cost by 1/1000th. I asked some people who work in AI labs whether this seemed right; they said that most of the cost would be in complexity, personnel, and delay, and suggested an all-things-considered number ten times higher - 1% of training costs.
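
The back-of-envelope ratio is easy to check. A minimal sketch, using the post's illustrative guesses (the $25 million eval cost and the GPT-6 training range are assumptions, not measured figures):

    safety_testing = 25e6               # assumed in-house cost to replicate METR/Apollo-style evals
    for training_cost in (25e9, 75e9):  # the post's guessed range for a GPT-6-class run
        print(f"${training_cost / 1e9:.0f}B run: overhead = {safety_testing / training_cost:.3%}")
    # 0.100% at $25B, 0.033% at $75B - about 1/1000th of the total.
    # The labs' all-in "ten times higher" estimate brings this to roughly 1%.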

But all activists start out with small asks, then move up to larger ones. Is there a risk that the next generation of AI safety regulations will be more burdensome? From what I hear, if we win this round beyond our expectations, the next generation of AI safety asks is third-party safety auditing and location verification for chips. I don’t know the exact details, but these don’t seem order-of-magnitude worse than the current bills. Maybe another 1%.

What about extreme far-future asks? Aren’t there safetyists who want to pause AI progress entirely?

Most people who discuss this want a mutual pause. The most extreme organization in this category, Pause AI, has this on their FAQ:

Q: If we pause, what about China?

A: […] We are primarily asking for an international pause, enforced by a treaty. Such a treaty also needs to be signed by China. If the treaty guarantees that other nations will stop as well, and there are sufficient enforcement mechanisms in place, this should be something that China will want to see as well[5].

When we look at concrete AI safety demands, they aren’t of the type or magnitude to affect the race with China very much - maybe 1-2%.

So Is It Impossible For Regulation To Erode The US Lead?

Running the numbers: we started with a 10x compute advantage over China.

Safety legislation risks adding 1-2% to the cost of training runs.

So if we were able to train a model 10x bigger than China’s best model before safety legislation, we can train a model ~9.8x bigger than China’s best model after safety legislation.
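
As a quick check of that arithmetic - a sketch that treats the advantage as the compute a fixed budget buys, so a cost increase divides straight through:

    advantage = 10.0        # assumed US:China compute ratio before the legislation
    cost_increase = 0.02    # upper end of the 1-2% estimate
    # Each training dollar now buys 1/1.02 as much compute, so the ratio shrinks accordingly.
    print(advantage / (1 + cost_increase))  # ~9.80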

Does that mean that America’s chip advantage is so big that no regulation can possibly lose us the race?

Not necessarily. Consider AI ethics regulations like the Colorado AI Act of 2024. It mandates that any institution which uses AI to make decisions (schools, hospitals, businesses, etc) must perform yearly impact assessments evaluating whether the models might engage in "algorithmic discrimination", a poorly-defined concept from the 2010s that doesn't really make sense in reference to modern language models. Anyone who could possibly be affected by an AI decision (students, patients, employees, etc) must be notified about the existence of the AI and its inputs and methods, and given an opportunity to appeal any decision which goes against them (for example, if a business used AI when deciding not to hire a job candidate).

In the three-part division that we discussed earlier, the Colorado act most affects the application layer. Instead of imposing a fixed per-training run cost on trillion-dollar companies that don’t care, it places a constant miasma of fear and bureaucracy over small businesses and nonprofits. Some end users might never adopt AI at all. Some startups might be strangled in their infancy. Some niches might end up dominated by one big company with a good legal team that establishes itself as “standard of care” and keeps customers too afraid of regulatory consequences to try anything else. None of this is easy to measure in compute costs, nor does a compute advantage necessarily counterbalance it.

China is relying on this. They know they can't compete on the compute and model layers in the near-term[6], so they're hoping to win on applications. They imagine America having a slightly better model - GPT-7 instead of GPT-6 - but our GPT-7 is sitting in a data center answering user questions and generating porn, while their GPT-6 is helping to run schools, optimize factories, and pilot drones. America's task isn't micro-optimizing our already large compute and model advantages - gunning to bring the score to GPT-7.01 vs. GPT-6. It's responding to the application-layer challenge that China has set us.

AI safety only tangentially intersects the application layer. There’s no sense in which schools and hospitals need to be doing yearly impact assessments to see whether they have created a hostile superintelligence. Aside from the AI companies themselves, our interest in end users is limited to those who control weapons of mass destruction - biohazard labs, nuclear missile silos, and the like. These institutions should harden themselves against AI attack. All our other asks are concentrated on the model layer, where China isn’t interested in competing and the American position is already strong.

But What If I Really Care About A 1% Model-Layer Gap?

One might argue that every little bit helps. Even though I claim that AI safety regulation only increases training costs by 1%, maybe I’m off by an order of magnitude and it’s 10%, and maybe there will be ten things like that, and when you combine them all then we’re getting to things that might genuinely tip close races. What then?

Here it’s helpful to zoom out and look at the scale of other issues that affect the US-China AI balance, of which the most important is export controls.

America’s biggest advantage in the AI race is our superior chips, which provide the 10x compute advantage mentioned above. Until about 2023, we had few export controls on these. China bought them up and used them to power their own AI industry.

In 2023, the US realized it was in an AI race with China and slashed chip exports. Chinese access to compute dropped dramatically. They began accelerating onshore chip development, but this will take a decade or more to pay off. For now, the Chinese AIs you’ve heard of - DeepSeek, Kimi, etc - are primarily trained on a combination of stockpiled American chips from before the export controls, and American chips smuggled in through third parties, especially Singapore and Malaysia.

Institute For Progress has a great report analyzing the stakes. They project how much compute each country will add in 2026.

Because our compute advantage keeps growing, we look better in flows than stocks: in a world without smuggling, America adds 31x more compute than China next year. But if China can continue to smuggle at their accustomed rate, the lead collapses from 31x to 10x.

If the US knows about Chinese chip smuggling strategies, why can't it crack down? The main barriers are a combination of corporate lobbying and poor funding. That is, chip companies want to continue to sell to Singapore and Malaysia without too many awkward questions about where the chips end up. And the Bureau of Industry and Security, the government department charged with countering smuggling, gets about $50 million/year for chip export enforcement, which experts say is not enough to plug all the holes. To put that number in context, Mark Zuckerberg recently made job offers as high as $1 billion per AI researcher. If America cared about winning the race against China even a tenth as much as Mark Zuckerberg cares about winning the race against OpenAI, we would be in a much better position!

It gets worse. NVIDIA, America’s biggest company, constantly lobbies to be allowed to sell its advanced chips to China. It’s not afraid to play dirty, and stands accused of trying to get China hawks pushed out of government for resisting; Steven Adler reports “widespread fear among think tank researchers and policy experts who publish work against NVIDIA’s interests”. Foundation for American Innovation fellow David Cowan goes further, saying that “NVIDIA is a national security risk”.

All of this lobbying has paid off: the administration keeps proposing changing the rules to allow direct chip sales to China. So far cooler heads have prevailed each time, but the deal keeps popping back onto the table. NVIDIA tries to argue that the chips being proposed for export are only second-rate hardware that won't affect the compute balance, but this is false - last month's talks involved the most price-performant chip on the market. Here's IFP's calculation for how caving on this issue would affect the AI race:

It would decrease our compute advantage from 10-30x to about 2x. You can read the report for more scenarios, including one where aggressive chip exports actually give China a compute advantage.

Commentators have struggled to describe how bad an idea this is. Some say it would be like selling Russia our nukes during the Cold War, or selling them our Saturn V rockets during the space race. The problem isn’t just that Russia gets free rockets. It’s also that every rocket we sell to Russia is one that we can’t use ourselves. We’re crippling our own capacity in order to enrich our rivals.

Yet some of the loudest voices warning against AI safety regulation on “race with China” grounds support NVIDIA chip exports! For example, White House “AI and crypto czar” David Sacks, a strident opponent of AI safety regulation, has been instrumental in trying to dismantle export controls and anti-smuggling efforts. According to NYT:

Mr. Sacks disliked another Biden administration rule that controlled A.I. chip sales around the world. He also questioned Washington’s consensus that selling A.I. chips abroad would be bad for the United States.

Some people argue that giving China our chips prevents them from learning to make their own. I think this is historically naive: has giving China our advanced technology ever worked before? “Maybe letting China access our technology will open up new markets for American goods” is the “maybe the stripper really likes you” of international trade. We have tried this for decades; every time, China has stolen the tech and made their own, better versions. China is obsessed with autarky - the idea that after a “century of humiliations”, they shouldn’t depend industrially on any outside power. They aren’t going to give up on chip manufacturing, a vital dual-use technology. We shouldn’t blow our entire present-day AI lead in the hopes that China will do the thing which it has never done once in history and which its entire industrial culture is centered around not doing. If we give them chips, they’ll both use our chips and develop their own (remember, China is a command economy, and they don’t have to stop developing their own chips just because there’s a lower-cost option). Then they’ll use their AIs, built with our chips, to compete with American AIs on the international market.

Others argue that chip sanctions just encourage China to be smarter and more compute-efficient, and that we’ll regret training them into a scrappy battle-hardened colossus. I think this is insulting to American and Chinese researchers, who are already working maximally hard to discover efficiency improvements regardless of our relative compute standing. More important, it doesn’t seem to be true - Chinese AIs are no more compute-efficient than American models, with most claims to the contrary being failures of chip accounting. I’m not even sure the people making this argument believe their own claims. When I play devil’s advocate and ask them whether America should perhaps pass lots of AI safety regulations a hundred times stricter than the ones actually under consideration - since that would increase training costs, reduce the number of chips we can afford, and cripple us in the same way that chip sanctions cripple China - these people suddenly forget about their bad-things-are-good argument and go back to believing that bad things are bad again.

A final argument for chip exports: right now, chip autarky is something like China’s number five national priority. But if our AI lead becomes too great, they might increase it to number one, and catch up quicker. If we allow some chip exports to China, we can keep our lead modest, and prevent them from panicking and working even harder to catch up. This is too 4D chess for me - we have to keep our lead small now so it can be bigger later? But again, if you support keeping our lead small to avoid scaring China, you can’t turn around and say you’re against AI safety regulation because it might shrink our lead!

Absent galaxy-brained takes like these, reducing our 30x compute advantage relative to China to a 1.7x compute advantage is extremely bad - orders of magnitude worse than any safety regulation. So why do so many of the same people who panic over AI safety regulation - who call us “traitors” for even considering it - completely fail to talk about the export situation at all, or engage with it in dumb and superficial ways?

I don't think this combination of positions comes from a sober analysis of the AI race. I think people have narratives they want to tell about government, regulation and safetyism, and the AI safety movement - which has “safety” right in the name! - makes a convenient villain. The topics that really matter, like export controls, don't lend themselves to these stories equally well - you would have to support something with “controls” right in the name![7] - so they get pushed to the sidelines.

But the people who care most about the race against China focus most of their energy on export controls, some on application-layer regulations like the one in Colorado, and barely any on AI safety at all.

It’s Too Early To Even Know The Sign Of AI Safety Regulations

Narratives about regulation stifling progress are attractive because they are often true. A time may come when the Overton Window shifts to a set of AI safety regulations strong enough to substantially slow American AI. Perhaps this will happen at the same time that China finally solves its own chip shortage - likely sometime in the 2030s - and America can no longer rely on its compute advantage for breathing room. Then the threat of Chinese ascendancy will be a relevant response to concerns about safety. Perhaps people raising these arguments now believe that they are protecting themselves against that future - better to cut the safety movement out at its root, before it starts to really matter. Be that as it may, their public communications present the case that AI safety regulation is already a big threat. This is false, and should be called out as such.

But the argument also rests on the flawed idea that the only way AI safety can affect the race with China is to slow us down. I've already argued that the magnitude of any deceleration is trivial. But I'll go further and say it's too early even to know the sign of AI safety regulations - whether they might actually speed us up relative to China.

First, safety-inspired regulation is leading the way in keeping data centers secure. Secure data centers prevent hostile AIs from hacking their way out, but they also prevent Chinese spies from hacking their way in. The safety-inspired SB 53 is the strictest AI cybersecurity regulation on the books, demanding that companies report “cybersecurity practices and how the large developer secures unreleased model weights from unauthorized modification or transfer by internal or external parties.” So far, no other political actor has been equally interested in the types of measures that would prevent the Chinese from stealing US secrets and model weights; this is a key factor in developing a model-layer lead.

Second, safetyists are pushing for compute governance: tags on chips that let governments track their location and use. This would be a key technology for monitoring any future international pause, but incidentally would also make it much easier to end smuggling and prevent the slow trickle of American chips to Chinese companies.

Third, China is having its own debate over whether it can prioritize safety without losing the race against America! See for example TIME - China Is Taking AI Safety Seriously. So Must The US. If America signals that it takes safety seriously, this might give pro-safety Chinese factions more room to operate on their side of the ocean, leaving both countries better off.

Finally, small regulations now could prevent bigger regulations later. In the wake of a catastrophe, governments over-react. If something went wrong with AI - even something very small, like a buggy AI inserting deliberate malware into code that brought down a few websites, or a terrorist group using an AI-assisted bioweapon to make a handful of people sick - the resulting panic could affect the AI industry the same way 9/11 affected aviation. If safety regulations halve the likelihood of a near-term catastrophe at the cost of adding 1% to training runs, it’s probably worth it.

More generally, industry leaders tend to play up how much they want to win the race with China when it’s convenient for them - for example, as a way of avoiding regulation - then turn around and sell China our top technology when it serves their bottom line. Safetyists may have some other priorities layered on top, but we actually want to win the race with China, because a full appreciation of the potential of superintelligence produces a natural reluctance to let it fall into the hands of dictators. A recent Washington Examiner article pointed to “effective altruists” in DC as responsible for some of the strongest bills aimed at preserving American AI supremacy, both during the last administration and the current one.

When the wind changes, and the position of industry leaders changes with it, you may be glad to have us around.

[1] For purposes of this post, I am accepting the race framework as a given; for challenges to it, see eg here.

[2] This section comes mostly from personal conversations, but is pretty similar to the conclusions of Nathan Barnard and Dean Ball.

[3] Especially in hardware applications. The US has a good software ecosystem, and more advanced models might let us keep an edge in AI-enabled software applications like Cursor.

[4] With the notable exception of Liang Wenfeng, CEO of DeepSeek. This is maybe not so different from the US, where tech company CEOs believe in superintelligence while the government tends towards more practical near-term thinking. But in America, companies are more influential relative to government than in China. In particular, DeepSeek is much poorer than the American tech giants and has little access to VC funding. So where the US tech giants can engage in massive data center buildup on their own, a similar capex push in China will need to be led by the government.

[5] It's more complicated than this, because the US is in a stage of the race where it's mostly working on building AIs, and China is in a stage of the race where it's mostly working on developing chips. If a treaty bans both sides from building AI, China can still develop its chips, and be in a better place vis-a-vis the United States when the treaty ends than when it began. A truly fair treaty would have to either wait until China had finished developing its chips and was also in the building-AI stage of the race (5-10 years), or place restrictions on Chinese chip development, or otherwise compensate the US for this asymmetry.

[6] It will take until about 2035 for China to be able to seriously compete on compute. After that, they most likely end up with a large compute advantage due to their superior manufacturing base, energy infrastructure, state capacity, and lack of NVIDIA profit margins (see footnote 7 below). If America doesn't have superintelligence by then, we are in trouble. I don't know of anyone who has a great plan for this besides trying to improve on all these fronts, and I also don't have a great plan for this.

[7] Future developments may threaten these people's China hawkery even further. NVIDIA has a 90% profit margin on every advanced chip sold in the US. China is still working on developing advanced chips, but once they get them, the government will make Huawei sell at minimal profit margins, to support the national interest of winning the AI race. That means that at technological parity, US chips will cost 10x Chinese chips, and it may become a live topic of debate whether the US government should force NVIDIA to cut its own profit margins. I can only vaguely predict who will take which side of these debates, but I bet it won't line up with current levels of China hawkery.
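
The step from "90% margin" to "10x price" is just the margin identity: price = cost / (1 - margin). A minimal sketch (the near-zero Huawei margin is the footnote's hypothetical, not a reported figure):

    def price_multiple(gross_margin: float) -> float:
        # margin = (price - cost) / price, so price = cost / (1 - margin)
        return 1 / (1 - gross_margin)

    print(price_multiple(0.90))  # 10.0x unit cost - NVIDIA at a 90% margin
    print(price_multiple(0.05))  # ~1.05x - a seller pushed to minimal margins
    # At equal unit costs, the first chip sells for roughly 10x the second.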

02:00 EST

Trump spares turkeys — but not his political opponents — at annual pardoning ceremony [NPR Topics: News]

President Donald Trump and first lady Melania Trump stand next to national Thanksgiving turkey Gobble during a pardoning ceremony in the Rose Garden of the White House, on Tuesday, in Washington.

President Trump brought grievances to the Thanksgiving pardoning ceremony on Tuesday. He joked about sending the turkeys to a prison in El Salvador, and claimed that last year's turkey pardons were invalid.

(Image credit: Evan Vucci)

National Parks to raise fees by $100 for international tourists to popular U.S. parks [NPR Topics: News]

Tourists flock to Mather Point at Grand Canyon National Park, Oct. 1, 2025, in Grand Canyon, Ariz.

The National Park Service says it is going to start charging international tourists an extra $100 to enter popular parks. They will be left out of fee-free days, reserved for American residents.

(Image credit: Ross D. Franklin)

Tuesday, 25 November

23:00 EST

Britain Plots Atomic Reboot As Datacenter Demand Surges [Slashdot]

The UK is seeking to fast-track new atomic development to meet soaring energy demands driven by AI and electrification. According to a new report published by the government's Nuclear Regulatory Taskforce, excessive regulation has made Britain the most expensive place in the world to build nuclear projects. The report is calling for a sweeping overhaul to accelerate reactor construction -- everything from "streamlining regulation" to relaxing environmental and safety constraints. The Register reports: The document outlines 47 recommendations for the government, which come under five general areas: providing clearer leadership and direction for the nuclear sector; simplifying the regulatory approval process for atomic projects; reducing risk aversion; addressing incentives to delay progress; and working with the nuclear sector to speed delivery and boost innovation. Among the recommendations is that a Commission for Nuclear Regulation should be established, becoming a "unified decision maker" across all other regulators, planners, and approval bodies. The report also talks of reforming environmental and planning regimes to speed approvals, echoing the government's earlier decisions to streamline the planning process to make it easier for datacenter projects to get built. It recommends amending the cost cap for judicial reviews and limiting legal challenges to Nationally Significant Infrastructure Projects (NSIPs), while indemnifying nuclear developers against any damages they might incur as a result of proceeding with their project while a judicial review is still being decided. Another recommendation that may be cause for concern is that the government should modify the Habitats Regulations to reduce costs. These are rules created to protect the most important and vulnerable natural sites and wildlife species across the UK. The report also states that radiation limits for workers are overly conservative and well below what could be appropriately considered "broadly acceptable," claiming that they are many times less than what the average person in the UK normally receives in a year.

Read more of this story at Slashdot.

22:00 EST

Plex Is Now Enforcing Remote Play Restrictions On TVs [Slashdot]

Plex is beginning to enforce new restrictions on remote streaming for its TV apps, requiring either a Plex Pass or the cheaper Remote Watch Pass to watch media from servers outside your home network. How-To Geek reports: Plex is now rolling out the remote watch changes to its Roku TV app. This means that you will need a Plex Pass or Remote Watch Pass for your Plex account if you want to stream media from a server outside your home. If you're only watching media from your own server on the same local network as your Roku device, or the owner of the server you're streaming from has Plex Pass, you don't have to do anything. Plex says this change will come to the other TV apps in 2026, such as Fire TV, Apple TV, and Android TV. Presumably, that will happen when the redesigned app arrives on those platforms. Roku was just the first TV platform to get the new app, which caused a wave of complaints from users about removed functionality and a more clunky redesign. Plex is addressing some of those complaints with more updates, but adding another limitation at the same time isn't a great look. The Remote Watch Pass costs $2 per month or $20 per year, but there's no lifetime purchase option. You can also use a Plex Pass, which normally costs $7 per month, $70 per year, or $250 for a lifetime license. However, there's currently a 40% off sale for Plex Pass subscriptions.

Read more of this story at Slashdot.

21:00 EST

HP To Cut About 6,000 Jobs By 2028, Ramps Up AI Efforts [Slashdot]

HP plans to cut 4,000-6,000 jobs by 2028 "as part of a plan to streamline operations and adopt artificial intelligence," reports Reuters. From the report: HP's teams focused on product development, internal operations and customer support will be impacted by the job cuts, CEO Enrique Lores said during a media briefing call. "We expect this initiative will create $1 billion in gross run rate savings over three years," Lores added. The company laid off an additional 1,000 to 2,000 employees in February, as part of a previously announced restructuring plan. Demand for AI-enabled PCs has continued to ramp externally, reaching over 30% of HP's shipments in the fourth quarter ended October 31.

Read more of this story at Slashdot.

Louisiana's top health official, a critic of the COVID vaccine, will be CDC deputy [NPR Topics: News]

Dr. Ralph Abraham spoke at a Trump re-election rally in Lake Charles, La., on Oct. 11, 2019. At the time, Abraham was a Republican congressman running for governor of Louisiana. He lost that race, but was named the state's surgeon general.

Louisiana's surgeon general Dr. Ralph Abraham, who has praised Robert F. Kennedy Jr.'s tenure as health secretary and called COVID vaccines "dangerous," will become the second-highest ranking official at the CDC.

(Image credit: Gerald Herbert)

Snowing! [Pharyngula]

The first big snowstorm of this winter has arrived, in time to give us a white Thanksgiving.

20:00 EST

Google Maps Will Let You Hide Your Identity When Writing Reviews [Slashdot]

An anonymous reader quotes a report from PCMag: Four new features are coming to Google Maps, including a way to hide your identity in reviews. Maps will soon let you use a nickname and select an alternative profile picture for online reviews, so you can rate a business without linking it to your full name and Google profile photo. Google says it will monitor for "suspicious and fake reviews," and every review is still associated with an account on Google's backend, which it believes will discourage bad actors. Look for a new option under Your Profile that says Use a custom name & picture for posting. You'll then be able to pick an illustration to represent you and add a nickname. Google didn't explain why it is introducing anonymous reviews; it pitched the idea as a way to be a business's "Secret Santa." Some users are nervous about publicly posting reviews for local businesses, as the reviews may be used to track their location or movements. Anonymity may encourage more people to contribute honest feedback to the platform, for better or worse. Further reading: Gemini AI To Transform Google Maps Into a More Conversational Experience

Read more of this story at Slashdot.

Warner Music Group Partners With Suno To Offer AI Likenesses of Its Artists [Slashdot]

Warner Music Group has reached a licensing deal with Suno that will let users create AI-generated music using the voices and likenesses of artists who opt in. WMG says participating artists will have "full control" over how their likeness and music are used. "These will be new creation experiences from artists who do opt in, which will open up new revenue streams for them and allow you to interact with them in new ways," Suno says, adding that users will be able to "build around" an artist's sounds "and ensure they get compensated." WMG is also dropping its previous lawsuit accusing Suno of scraping copyrighted material. "Along with the licensing agreement, Suno is planning to use licensed music from WMG to build next-gen music generation models that it claims will surpass its flagship v5 model," adds The Verge. "It will also start requiring users to have a paid account to download songs starting next year, with each tier providing a specific number of downloads each month." Further reading: First 'AI Music Creator' Signed by Record Label. More Ahead, or Just a Copyright Quandary?

Read more of this story at Slashdot.

19:00 EST

There may not be a safe off-ramp for some taking GLP-1 drugs, study suggests [Ars Technica - All content]

The popularity of GLP-1 weight-loss medications continues to soar—and their uptake is helping to push down obesity rates on a national scale—but a safe, evidence-based way off the drugs isn’t yet in clear view.

An analysis published this week in JAMA Internal Medicine found that most participants in a clinical trial who were assigned to stop taking tirzepatide (Zepbound from Eli Lilly) not only regained significant amounts of the weight they had lost on the drug, but they also saw their cardiovascular and metabolic improvements slip away. Their blood pressure went back up, as did their cholesterol, hemoglobin A1c (used to assess glucose control levels), and fasting insulin.

In an accompanying editorial, two medical experts at the University of Pittsburgh, Elizabeth Oczypok and Timothy Anderson, suggest that this new class of drugs should be rebranded from “weight loss” drugs to “weight management” drugs, which people may need to take indefinitely.


Poland Probes Apple Again Over App Tracking Transparency Rules [Slashdot]

Poland has launched a new antitrust investigation into Apple's App Tracking Transparency rules, questioning whether Apple misled users about privacy while giving its own apps a competitive advantage over third-party developers. AppleInsider reports: On November 25, Poland's UOKiK started another investigation into App Tracking Transparency, and whether Apple had restricted competition in mobile advertising. Reuters reports that, to the anti-monopoly regulator, ATT may have limited advertisers' ability to collect user data for advertising purposes while simultaneously favoring Apple's ad program. This is not the first time that Poland has looked into ATT rules. In December 2021, the regulator held a similar probe following criticism from advertisers. It's not clear what that complaint determined, or if it is still ongoing. Regardless, in the new complaint, the logic is that Apple had a competitive advantage since its own apps were not subject to ATT rules, but third-party apps did have to deal with ATT. Since Apple didn't visibly ask for consent for its first-party apps in the same way, there is a presumption that Apple's rules only applied to other companies. This is despite Apple's repeated insistence that it doesn't use the same kinds of collected data in its own apps and services for marketing purposes, as well as its stance on privacy in general. In short, Apple apps don't use the data, so it doesn't pop up a dialog box asking the user if the app can use the data. There is also the argument that, in setting up an account with Apple, users are providing blanket consent to the company. Implementing ATT on its own apps would therefore be a waste of time, since that consent was already granted. Apple said that it will work with the regulator on the matter, but warned that the probe could force it to withdraw the feature "to the detriment of European consumers."

Read more of this story at Slashdot.

Feeds

Feed | RSS | Last fetched | Next fetched after
0xADADA | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
AI Daily News by Bush Bush | XML | 00:00, Thursday, 27 November | 12:00, Thursday, 27 November
Ars Technica - All content | XML | 01:00, Thursday, 27 November | 02:00, Thursday, 27 November
art blog - miromi | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
Astral Codex Ten | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
Blog - Ethan Zuckerman | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
Cool Tools | XML | 01:00, Thursday, 27 November | 02:00, Thursday, 27 November
Explorations of Style | XML | 01:00, Thursday, 27 November | 01:00, Friday, 28 November
Geek&Poke | XML | 00:00, Thursday, 27 November | 12:00, Thursday, 27 November
goatee | XML | 23:00, Wednesday, 26 November | 05:00, Thursday, 27 November
Hacker News | XML | 01:00, Thursday, 27 November | 02:00, Thursday, 27 November
Joho the Blog | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
LESSIG Blog | XML | 00:00, Thursday, 27 November | 12:00, Thursday, 27 November
Matthew C. Nisbet | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
Notes From the North Country | XML | 01:00, Thursday, 27 November | 01:00, Friday, 28 November
NPR Topics: News | XML | 01:00, Thursday, 27 November | 02:00, Thursday, 27 November
Pharyngula | XML | 23:00, Wednesday, 26 November | 05:00, Thursday, 27 November
Philip Greenspun’s Weblog | XML | 01:00, Thursday, 27 November | 03:00, Thursday, 27 November
Philosophical Disquisitions | XML | 01:00, Thursday, 27 November | 03:00, Thursday, 27 November
quarlo | XML | 00:00, Thursday, 27 November | 12:00, Thursday, 27 November
Rhetorica | XML | 14:00, Wednesday, 26 November | 14:00, Friday, 28 November
Science-Based Medicine | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
Slashdot | XML | 02:00, Thursday, 27 November | 02:30, Thursday, 27 November
Stories by Yonatan Zunger on Medium | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
Study Hacks - Decoding Patterns of Success - Cal Newport | XML | 20:00, Wednesday, 26 November | 04:00, Thursday, 27 November
tinywords | XML | 00:00, Thursday, 27 November | 04:00, Thursday, 27 November
W3C - News | XML | 01:00, Thursday, 27 November | 02:00, Thursday, 27 November