Thursday, April 17, 2025

Friday Links!

Leading off this week, if you think rugby is safer than football, think again: France rugby star Chabal does not remember ‘a single second’ of his career due to concussion

A terrific read: The physics of bowling strike after strike.

Once again, crows are incredible: Corvid intelligence: study shows crows understanding geometry

From Steve W., and the rest of the Dwarf Fortress documentary series is now available!
The Origins of Dwarf Fortress - (Series Episode One)
How Dwarf Fortress Evolved over 16 Years of Development - (Series Episode Two)
How Dwarf Fortress Coming to Steam Changed Everything - (Series Episode 3)
Life After the Success of Dwarf Fortress - Dwarf Fortress Series (Episode 4)

From Wally, and I wouldn't blame them: Op-Ed: “No More Worldcons in the United States?” Nixon's resignation and how it wasn't the end of the nightmare: “Our long national nightmare is over.” A long and interesting essay: We're sorry we created the Torment Nexus. It's not the content. It's how evil people interpret the content to suit their world view: The big idea: will sci-fi end up destroying the world?

From C. Lee, and it's tragic: Inside Elon Musk’s Gleeful Destruction of the Government. Another: Trump Wants to ‘Terminate’ Legal Immigrants’ Social Security Numbers. And more: US fires Greenland military base chief for 'undermining' Vance. An excellent review: To the Success of Our Hopeless Cause: The Many Lives of the Soviet Dissident Movement. Fantastic scientific research: In one dog breed, selection for utility may have selected for obesity. Bizarre: People named "Null" are being punished by computers in the weirdest ways. I had no idea: Why Did We Stop Using The Magnetic Card Stripe? | Secret Genius of Modern Life | BBC Earth Science. Related: How Microchips Made Bank Cards Safer | The Secret Genius of Modern Life | BBC Earth Science. I always wondered: How Sailors Went to the Loo in the Age of Sail. This is very clever: An Oxford comma walked into a room.

Jelly 2

Eli 23.8's iPhone was on its last legs. 

And had been for over two years. 

He'd been delaying replacing it because he's frugal (good), but now charging it was a random process of accidentally inserting the charging cable in exactly the right place with roughly a 10% chance of success (bad).

He'd also been talking about spending too much time on his phone, which for him is about two hours a day.

His friend had a small phone, very small, and said it made him spend less time on his phone because it wasn't as convenient. Eli liked this idea.

Enter the Jelly 2.

He pitched it to me with great enthusiasm. Roughly the size of a credit card and slightly thicker. Very inexpensive (<$150). Perfectly functional, but the small screen would translate into less phone usage. "I ordered it," he said, "and then I remembered the size of my hands." He laughed.

If you remember, when he was in high school I posted a photo of him holding six (seven?) tennis balls in one hand. For a 6'0" person, his hands are enormous. Not many people that height can easily palm a basketball, like he can.

He was still hyped for his tiny phone, though. It was being delivered the next afternoon.

Early in the day, I sent him this:

[AI-generated image]

It was well-received.

Now that he has the phone, he loves it. And it's cut his phone time in half.

Wednesday, April 16, 2025

Claiming Identity

Eli 23.8 is writing about identity in his master's thesis.

Specifically, original DDR programs (Disarmament, Demobilization, Reintegration) focused on the economic side of reintegration. It was assumed that social identity would follow if an economic identity was established, but this turned out not to be true. Identity doesn't work that way.

Being in war forces you to assume a new identity. It's not possible to be the person you were before. And after war, you can't just go back to being who you were, because that person doesn't exist anymore. Neither identity is you.

The problem is that you can't reintegrate into a peaceful society without shedding the identity you created in war.

Even in countries without civil war, it can be extraordinarily difficult to reclaim your identity after serving in the military. I've had friends who left the military and never reclaimed a separate identity, even decades later. They never reintegrated into society, not really.

You would think this would be a powerful incentive to stop sending people to war. Sadly, it hasn't worked out that way.

His thesis is due April 29 and I'm hoping he'll allow me to share it with you. I've read sections and it's a real attempt to contribute something significant to the field.


Tuesday, April 15, 2025

Wondering

I saw a man in a store wearing what looked like a Napa Auto Parts letterman jacket. 

How do you letter in auto parts? Can you just be on the team, or do you need some kind of All-District honor? And do you retain your NIL rights?


Monday, April 14, 2025

AI (your email)

The email was quite passionate. It's clear that those of you opposed to AI don't want to read about it, so I'll label AI posts the way I do political posts (how long has it been since I mentioned that jack-booted thugs are running the country now?).

Some excerpts from your email:

1.
I’d be more comfortable with the conversations around AI if we just called it what it ACTUALLY is - plagiarism engines. 

The reason I want to use that term is that dialogue and debate about "AI" obfuscates the trade off we are making by embracing or legitimizing these tools... By calling it "AI" we are invoking some sci-fi thing where the "consequence" is "maybe it gets smarter than us someday", which is not how LLMs or any of this actually work. Progress in all forms typically comes with consequences for segments of the populace but being realistic about what those consequences are is important.

I don't agree that "maybe it gets smarter than us someday" is not how an LLM works. Those terms are impossible to define, which makes the claim impossible to evaluate either way. What can be said, with absolute certainty, is that these models have made stunning leaps forward in the last 24 months, increasing their utility substantially. Does that mean they'll ever be smarter than humans? I don't know. Do they need to be?

2. 
I do want to question your assertion that there's no putting it back. I don't think that's true at all. There are lots of technologies that we, as a society, have decided to put back once their harms have become clear. Leaded gasoline, asbestos--there have been lots of cases where we as a society have decided that the harms of something outweigh the benefits, and have regulated those technologies very closely or eliminated them altogether (again, through regulation).

And you might think something like leaded gasoline is a silly comparison, but I actually think there's a real comparison to be made there. In particular, LLMs have huge negative externalities, ones which in my opinion very much outweigh the benefits they provide. A lot of that is environmental, yes, although it remains to be seen whether some of the much scaled-down models people have been toying with have value. But there's also the negative impact of the destruction of the creative commons, as well as the pollution of our public spaces (through spam, inauthentic content, etc.). We have restricted technologies in the past because those costs were unacceptable, and I authentically believe that that is true of large-scale LLMs.

Leaded gasoline, and asbestos, to me, are not the best examples to use, because in both cases there were clear and well-defined health risks. The health risks of LLMs, through increased use of electricity, are more difficult to define. And there's no guarantee that energy usage doesn't go down in the future, given that personal computers will be able to run these models locally at some point.

The destruction of the creative commons is a much more significant objection. There is no question that this will change the production of creative content in enormous ways. However--so was the printing press. So were computers. So were digital art and digital editing tools for photographs. Those all altered the creative commons, but didn't manage to destroy it. LLMs will also result in alteration, not obliteration.

On the other hand, not everyone was upset about AI:

3.
I don't want to say that the various fears / qualms / dire warnings about A.I. are baseless -- because I don't think that they are -- but that this is yet another chapter in a very thick book called "Progress - Like It or Lump It". Has mankind ever even tried to evaluate the long-term effects of adopting a technology, much less predicting those effects anywhere near accurately, and then turned away? I can't think of any examples thereof.

Every tech advance in history has displaced workers, because it lowered expenses. People don't have jobs tilling the fields these days, so much, or raising oxen, smithing horseshoes, manufacturing horse carriages, photographic film, cameras, etc. It's not that this is good or bad, per se: you can argue about if it's good or bad, if you like, but that doesn't change the fact that this is what has happened in the past, what is happening now, and what will happen in the future. Complaints about it strike me as similar to the "kids these days" comments from Plato.

And in-between:
4.
From my perspective (using it since it came out, more and more, and seeing the steady progress), I think it's encompassing too many fields (basically, any field) to create constraints for its use. It drastically changes our societies, as we speak. What is a university grade worth today? Why hire a junior to fill out an Excel sheet when an AI agent can do this instantly? It is an amazing tool for us who lived a time when information was somewhat rare, because that led us to have brains that are optimized to search and be curious. It is not an amazing tool for generations who grew up with an infinite supply of internet and videos, because their brains haven't been used to focus and look for something. They're saturated, all the time. If there's a constraint, it should be with young adults but I imagine that it is not happening anytime soon.

This is hugely important: what will these models do to the ability of young people to think? Does anyone ponder anything anymore, or just reflexively look it up? It already happens with the Internet, and LLMs are the same effect writ large. I've written about the danger of social media and the Internet stifling our ability to create and instead turning us into absorption machines. There's a real danger that LLMs make this worse.

Does that mean we can stop it? No. And it doesn't mean there still won't be incredible original, transformative creative works. It just means we know this will have an effect, and we're not sure how profound that impact will be.

Thursday, April 10, 2025

Friday Links!

 Leading off this week, a terrific story from Eurogamer about people using games to effect positive change in their lives: From virtual to reality: the people who reshaped their lives thanks to video game simulators.

From Dave Y., and it's for curling fans: Broomgate 2.0: New sweeping controversy comes to a head at WFG Masters.

From DQ Story Advisor John Harwood, and it's delightful: Interstellar Docking Scene – Recreated in LEGO // Blender Animation

From Wally, and it's the first of many (in reference to board games): Tariffs Are Driving Up Game Prices Now

From Leo M., and it's staggeringly beautiful: Tuesday Telescope: Does this Milky Way image remind you of Powers of 10? Also: New MeerKAT radio image reveals complex heart of the Milky Way

From C. Lee, and it's a deep dive: Is The U.S. About To Go To War With Iran? A thoughtful analysis: The powerful force behind Trump’s tariffs. Unsurprising: Trump Supporter Mel Gibson Will Have Gun Rights Restored: Report. We're a kleptocracy now: Big brands are spending small sums on X to stay out of Musk’s crosshairs. One of many possible nefarious uses: AI Can Now Make Fake Receipts—Restaurants and Retailers Beware. Excellent: A guide to the 4 minerals shaping the world’s energy future. Concerning: What's in that bright red fire retardant? No one will say, so we had it tested. There's no compelling reason for this to happen: Toxic dust on Mars would present serious hazard for astronauts. This is incredibly unfortunate: “They curdle like milk”: WB DVDs from 2006–2008 are rotting away in their cases. So true: Nah, Man: It's not Nintendo's fault, but 2025 sure is a time. The Digital Antiquarian comes through yet again: The CRPG Renaissance, Part 5: Fallout 2 and Baldur’s Gate. This is incredibly kind: Nurse opens hotel catering to terminally ill patients. I agree; we are no longer considered reliable partners: The American Age Is Over.

AI (part three)

I'd planned on having a collection of your email today, but I received one this morning from someone who brought up an excellent rabbit hole to explore. What follows is lightly edited for clarity:

I work at one of the big ad agencies and we aren't allowed to use AI for client work unless it's only utilizing a database of assets (photos/videos/graphics) that the client actually owns. The concern is that if we use AI that pulls from everywhere, we haven't licensed the creative it comes from. We open ourselves to being sued if someone can prove their art was used for something that made it to broadcast or print. In our industry we are leaning very conservative out of great fear of plagiarism lawsuits.

The creative work coming through our agency has taken a bit of a dive due to MidJourney. The creative teams can use it internally to generate images to go along with the concepts they are coming up with to sell to clients. This speeds up the process, but it's been a detriment to the quality of ideas. Before, creatives would come up with an idea and create an image for the concept in Photoshop (which would take hours). During that time they would think about the idea/concept and refine it - sometimes improving it because they had to live with the idea for a period of time (also sometimes tossing it out because it wasn't good enough). Now they spend 5-10 minutes and move on. While it's increased the volume of ideas, the quality is lessened as is their understanding. When I ask (as a Producer) "How does this work?" they have no idea.

Here's the thing about creativity: it's time-driven. While ideas often happen out of "nowhere," they've been churning in the background of our minds for much longer. Also, so much of creativity is lateral thinking, not vertical. What AI art-generation programs seemingly do is remove the lateral-thinking aspect.

Much is lost.

When I write, I often don't come up with the right phrase or idea until I've gone through 5+ drafts (not infrequently, 10+). Not everyone works that way, but I do. Without that time spent, the text lacks dimension, and dimension is what gives it vibrance.

What AI can do is generate an infinite amount of content, but without the reflection and iteration necessary to drive it to a higher level. For now, anyway.


Wednesday, April 09, 2025

AI From an Artist's Perspective

I asked DQ Artist Fredrik Skarstedt about AI and whether his views had evolved over the last year, and this was his response: 
I have experimented with AI tools quite a bit, and I am still of two minds about it. I find it amazing to not have to rely on search engines anymore to find answers to questions like "how do I write an unreal script that triggers an animation when a player steps on an object" or "explain the changes in the 2025 tax code to me in bullet points". Those are phenomenal uses for AI. We have also started working on using AI at work (pathology) and I think there's a bright future for it there. There needs to be humans involved every step of the way, but the ability for a computer to point out "hey... these cells look like cancer... a doctor should look at this" is fantastic. I wouldn't trust it as far as I could throw it without having humans look at things, but it will ease the workload of doctors everywhere when it starts to emerge (pretty soon). 

I have tried using generative art tools for my game development and it's frustrating. In pretty much all instances they just can't make the thing I need and instead create something that's either plain wrong or something that is kind of close to it, but doesn't work for me. I think AI is a neat tool to generate ideas, but it's irritating and frustrating for real asset creation. So I do it myself? I don't know. It's not going anywhere and I try to keep an eye out for interesting tools, but so far nothing art related... well... one tool that I use a lot is the sharpness tools for raw photographs that Adobe introduced lately. It's marvelous. Is it AI? Who knows. All I know is that it works. 

There's also the whole copyright thing. If I generate a building in an AI 3D generative tool, how do I know that it's not just something the AI scraped off the internet? Am I using someone's textures and mesh without them getting paid for it? It feels icky and wrong.

I don't think artists will go anywhere anytime soon. I think depending on what you are working on, AI and LLM are tools that can be used to generate background things, but there's nothing out there, right now, that beats the eye of an artist.

Tomorrow: your email, which was quite passionate (and relatively evenly divided).


Tuesday, April 08, 2025

A One-Day Break from the A.I. Discussion For a Screed about UPS/the UPS Store

Mom 95.0 sent me a loaf of P. Terry's banana bread for my birthday. 

If I could eat only one thing in the world, it would probably be this.

My sister bought the bread, took it to the UPS Store, and shipped it overnight with delivery by noon on Friday. It was expensive (so expensive, in fact, that I'm not letting her do it anymore, even though I appreciate it very much).

There was a "weather event" on Thursday stretching from Texas to Michigan. Heavy, heavy rain, plus tornadoes in some places. On the tracking page, it noted that the weather would be responsible for a one-day delay in delivery.

It happens.

The package arrived in the local warehouse (ten minutes from me) on Friday at 9:18 a.m.

It was delivered Monday at 11:52 a.m.

I wanted Mom to get her money back. Even allowing for a one-day delay, the additional two days were sheer incompetence on the part of the warehouse.

The UPS Store customer service representative (after I'd been on hold for half an hour) said there would be no refund, because, in the event of a "weather event," they're entirely released from any obligation for a delivery date.

"So, theoretically," I asked, "if there's a weather event that UPS says will result in a one-day delay, and then they deliver the package three weeks later after it sat in the local warehouse for twenty days, UPS has no responsibility?"

"No."

So a one-day delay from a "weather event" (how that's defined, who knows?) gives the carrier an infinite exemption from responsibility unless they lose the package.

It gets crazier. I called FedEx, because surely they couldn't possibly have a policy that stupid, right?

They do.

This is why we need consumer protection laws, because this is hot garbage.

I'm so old, I remember when UPS and FedEx were good.


Monday, April 07, 2025

A.I. Discussion (part one)

Sean's email (Thursday's post) was thoughtful and interesting.

If I understand his arguments, they are both ethical (plagiarism, job destruction, data center energy usage) and philosophical (A.I. is soulless). Let's take them in turn.

Is A.I. plagiarism? I don't know if that's the right word. It's the sum of everything it's ever been exposed to, which is not dissimilar to humans. It's certainly imitative, which I think is more strictly accurate, and perhaps it's not possible for A.I. to create a truly groundbreaking creative work.

I don't think we know yet. How could we?

Will it destroy jobs in many industries? Yes. So, so many jobs. Does it use an inconceivable amount of energy? Also yes.

Are these reasons not to use it? No. Every disruptive technology--and this is highly disruptive--has resulted in higher energy usage (computers, as just one example) and huge job losses (factory automation, also as one example).

"Soulless" is the philosophical objection. It's not unfair in the least, but (in the music world) this charge was also leveled at any form of digital editing software ever used. When Pro Tools came out, it was absolute anarchy. When digital editing software came out for images, same thing. Digital sound effects for films? Same. Now all of these tools (and many others) are standard in the entertainment industry.

We're not stopping A.I. Period. That battle was over as soon as the first LLM was introduced. Too many people will make too much money to stop their use. That's how it's always been with a new technology, stretching back for centuries. It's not going to change now.

He closes with this:

Reasonable people can disagree about the extent to which A.I. tools can be used ethically and effectively, but I don't think anyone can argue that there's any way to use these generative tools in particular without causing at least some harm.

This is where I think the argument breaks down, because it's an impossible standard to meet. Nothing has ever been invented that didn't cause harm to at least someone.

The question, for me, is not whether the A.I. toothpaste can be stuffed back into the tube. It can't. The question is whether we can create constraints for its use. This is where it gets tricky, because the profit incentive is potentially so high that it will be very difficult to draw an effective line. 

I don't want this to sound like I don't respect Sean's argument, because I do. It's a thoughtful email, and he raises entirely fair points. I just think the discussion at this point might turn from whether A.I. should be used to how we can use it to make our lives better.

Thursday, April 03, 2025

Friday Links!

I'm 64 today and have temporarily accumulated enough physical problems in the last 30 days to make an 80-year-old blush. We have a massive links drop this week.

Leading off this week, a riveting analysis: ‘My patient was happy with her partner of 25 years – then started a torrid affair’: a psychotherapist on why people cheat

A tragic tale: ‘There’s a dangerous epidemic in boxing’: the tragic, cautionary tale of Paul Bamba.

Public service: Everything you need to know about bird flu.

If you want to know how an incompetent fool determines tariffs (that aren't actually in any way related to tariffs), I've got you: Trump’s ‘idiotic’ and flawed tariff calculations stun economists

From DQ Iditarod correspondent Meg McReynolds: Chasing the Iditarod Through the Wilds of Alaska. Also, a very good girl: Meet Muppy, the World’s Smallest Sled Dog.

From Wally, and this is quite stupid: People Making AI Studio Ghibli Images Are Now Producing Fake Legal Letters to Go With Their Fake Art. Fantastic: Let Britain’s magical, mythical creatures inspire a patriotism untainted by politics. Award-nominated SF stories you can read: Analytical Laboratory Finalists

C sent this to me and it's genuinely stunning: Excess Mortality Rate in Black Children Since 1950 in the United States: A 70-Year Population-Based Study of Racial Inequalities

From Ken P., and we live in a police state now: Surveillance shows Tufts graduate student detained. It's dangerous and we're stuck with him, for now: What Was the Plan Behind This Fake CDC Website? Leading, as always: Texas is poised to make measles a nationwide epidemic, public health experts say. This is concerning (because of the model's origins): DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI. Not today, clowns: I won't connect my dishwasher to your stupid cloud. Amazing! Three Hundred Years Later, a Tool from Isaac Newton Gets an Update. This is mildly encouraging: Facebook to stop targeting ads at UK woman after legal fight. Very cool: New Portal pinball table may be the closest we’re gonna get to Portal 3

From C. Lee, and refer to my previous police state comment: Weekslong lockups of European tourists at US borders spark fears of traveling to America. Useful knowledge: Here’s what you need to know about your rights when entering the US. An excellent read (or listen): How empathy came to be seen as a weakness in conservative circles. This is incredible: DOGE to Fired CISA Staff: Email Us Your Personal Data. This is helpful: Trump’s ‘climate’ purge deleted a new extreme weather risk tool. We recreated it. Confessions: Did I Really Do That? Well, that's aggressive: Impaling for Love ― Bull-Headed Shrike. A fascinating bit of history: See you in the funny papers: How superhero comics tell the story of Jewish America. A terrific read: Inside RGG Studio: Ryosuke Horii and Eiji Hamatsu Share How the Like a Dragon Series Is Developed Quickly Without Sacrificing Quality.

AI (your email)

I received this from Sean R. after my AI post earlier this week:
As a longtime reader of your blog, I'm saddened to see you posting so much A.I. content of late. 

Respectfully, there are in fact many reasons not to use A.I., from the blatant theft that powers so much of what they do (Miyazaki himself is opposed), to the ways this tech is destroying the livelihood of the creative class (of which I am a member), not to mention the environmental harms their giant data centers cause, and the ways in which they routinely hallucinate answers which can have serious consequences, especially if you're using them for medical purposes.

Beyond that, there's an inherent soullessness in even the best A.I. art and "writing," as it is the product of statistical regurgitation offering (at best) second-hand imitations of true insight and actual human experience. 

Compare the image you shared with any frame of any Studio Ghibli film; while the art style is similar, what is being conveyed about the characters and their world is night and day. Empty calories compared to a home-cooked feast.

Reasonable people can disagree about the extent to which A.I. tools can be used ethically and effectively, but I don't think anyone can argue that there's any way to use these generative tools in particular without causing at least some harm.

I don't agree with everything Sean says, but I think he raises some substantial points, and I'm going to write about them next week.

Wednesday, April 02, 2025

Oh, the Humanity

Nintendo had their Switch 2 showcase today.

Wii? I knew it was going to be great from the first moment I saw it.
Wii U? I had a bad feeling. A very bad feeling.
Switch? I was on the hype train from minute one. 

Three for three, in other words.

My initial reaction to the Switch 2? A disaster.

Besides the hardware changes (which were not insignificant but also not revolutionary):
1. Paid upgrades for better graphics on certain high-profile Switch 1 games you already own.
2. Game prices of $70-80 for first-party Nintendo titles.
3. Console price of $449 in the U.S.

If the console had been announced at $349, almost everyone would have been pleased. At $449, it's going to be a very, very hard sell.

Incredibly, it could have been worse.

In Europe, it's even worse: the price in both the UK and the European Union converts to roughly $510 USD.

I have an enormous amount of affection for Nintendo because of the hundreds of hours I spent playing games with Eli 23.8. It's a wonderful company. 

And they've screwed up.


Tomorrow

Sean R. sent me an excellent email in response to Monday's post about ChatGPT 4.0, and I'm going to use it tomorrow. It raises some interesting and thoughtful points. The only reason I'm not doing it today is because of Nintendo's Switch 2 Showcase.
