Collapsing AI hype may offer an opportunity for a nuclear renaissance …

… but not for the reasons you may have heard lately.

Let me first define today's topic: AI – Artificial Intelligence.
Since every piece of software that contains an IF statement is
called AI these days, we should dig a wee bit deeper to back up our thesis.

When considering AI, most people today will think of LLMs (Large Language Models), but that is a narrow view.

Computer scientists generally consider neural networks to be AI. A neural network is basically an adaptive filter. They have been around since the 1960s.
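
To make the "adaptive filter" claim concrete, here is a minimal sketch of a single adaptive neuron trained with the least-mean-squares (LMS) rule of Widrow and Hoff, which dates from exactly that era. The target coefficients, step size and iteration count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single "neuron": output = w . x, adapted with the 1960s
# least-mean-squares (Widrow-Hoff) rule. It learns to imitate an
# unknown target filter from input/output samples alone.
n_taps = 4
w = np.zeros(n_taps)                       # adaptive weights
target = np.array([0.5, -0.3, 0.2, 0.1])   # unknown filter to imitate
mu = 0.05                                  # adaptation step size

for _ in range(2000):
    x = rng.normal(size=n_taps)            # input sample vector
    d = target @ x                         # desired (observed) output
    y = w @ x                              # the neuron's current output
    w += mu * (d - y) * x                  # LMS update: nudge w towards d

print(np.round(w, 3))                      # converges to ~[0.5 -0.3 0.2 0.1]
```

Modern networks are, at heart, this adaptation loop stacked in layers and scaled up by many orders of magnitude.
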
There are several reasons why they became a hype only recently:

  1. Since neural networks were first conceived, computers have become powerful, interconnected, and stuffed to the brim with digitised data. So for the first time in history, usable data in sufficient quantities and compute power reached the critical mass needed to produce something non-trivial.
  2. Marketeers picked them up, hoping to create the next killer app.
  3. Their outputs are not just information on a computer screen, but may activate real-life appliances and systems, such as payments, car steering, traffic light control, and even control of the entire power grid. By the way, the idea of having AI control the power grid is another mishap, originating from the destabilising intrusion of intermittent power sources (i.e., sun and wind). Brigid does not need such hacks.
  4. AI companies are pumping up stock values based on a self-fulfilling prophecy.
  5. Companies are investing massive amounts of money in fear of missing out. Corporate FOMO: it’s a thing.

Useful applications for neural networks do exist: handwriting recognition, speech recognition, image classification, and yes, even language models
producing natural language text and speech. What they all have in common:

  1. monkey-level creation: classical algorithms for these applications do exist, but are hard to code
  2. outputs are not required to be very rigid, i.e., some inaccuracies and even sheer errors are tolerated

What are neural networks not? A dozen or more things:

  1. backed by a mathematically well-founded technology, allowing proofs of stability and objective performance criteria. This means you do not want this controlling your car. If you do, you are suicidal.
  2. intelligent. An ant is smarter and more versatile. Do not fall for unlabeled exponential graphs showing how much smarter AI is compared to an ant. These graphs are PowerPoint lies.
  3. stable. Neural nets are by definition adaptive, and the adaptation algorithm has no mathematically proven stability properties. This means the network’s entropy may rise at every update.
  4. knowledgeable. In spite of what AI companies tell you, spitting out unreliable data is a fundamental property of a neural network. We should no longer call it hallucinations, as if it were a small fault that will ultimately be solved. It is fundamental, as a neural network just spits out the most probable
    filter match of the input (i.e. prompt) against stored path weights; a toy sketch after this list makes that concrete. There is no connection with confirmed facts. Whatsoever. Even if an AI were to check its output against the internet, how could it trust and judge that information, when part of it is unlabeled AI slop itself?
  5. efficient. The energy consumption of a neural network is many, many orders of magnitude larger than that of a biological brain, while delivering nowhere near the same intelligence or versatility. Take one look at a common fruit fly (Drosophila melanogaster, remember your biology class) and draw your own conclusion: if your assignment were to build a neural network as functional as that fly's, how would you even start? Google AI claims it has 2,500 neurons or 150,000 neurons (so which is it, Google? Make up your mind. Oh, darn, you don't have one…)
    Anyway, that tiny fly brain is sufficient to see, fly, crawl, find food, procreate, shelter, and dodge your fly trap.
  6. generally useful. Generating good-looking non-facts, trivialities and slop is worth neither the investment nor the energy. It is a well-known paradox that many useful applications do not require large neural networks.
  7. honest. Illegally using copyright-protected content, and generating content without flagging it as artificial, is cheating.
  8. getting better all the time through scaling up, longer training, and more data. Convergence towards a better configuration cannot be proven, and using AI content to train AI engines is obviously a destructive feedback loop. The laws of thermodynamics seem to be violated here. But hey, these laws are just axioms, so who cares?
  9. accountable. If (when) damage is caused by AI, who will be held accountable? Who forged the chip? Who wrote the software stack? Who designed the system architecture? Who supplied the training data? Who trained the network? Who sold it on the market? Who insured it? Who financed it? Who used it and for which reason? And the list goes on and on and …
  10. new technology. Neural nets are based on a totally outdated sixties model of a biological brain. It was about as naive as the belief that DNA analysis would suffice to cure any cancer by 2020.
  11. understood. Do not think for a minute that AI developers know what they are doing. Architecture, parameters, training data, benchmarks: it is all trial and error.
  12. financed. Every company that earns money on AI is paid by companies that don't. These are burning seed capital, funding and loans supplied by governments and by the companies that do earn money. This circle cannot go on as if it were a perpetuum mobile. In Belgium, Lernout and Hauspie ended up in jail for a similar construction.
  13. conscious. They're designed to fake it. We don't even have a clue what consciousness is.
  14. promoted by very smart people who understand how all this works while we don't. Neither proposition is true.
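
To make item 4 above concrete: the final step of a language model is literally a probability-weighted pattern match, with no fact check anywhere in the loop. A toy sketch, in which the vocabulary and the logit scores are made up for illustration (no real model is queried):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy final step of a language model: the network emits one score
# ("logit") per vocabulary word, softmax turns scores into
# probabilities, and the output is sampled from them. Nothing in
# this step consults facts; it is pattern matching on weights.
vocab = ["Paris", "Rome", "Berlin", "Madrid"]
logits = np.array([3.0, 1.5, 0.3, 0.1])   # hypothetical scores for
                                          # "The capital of France is ..."
probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax

for word, p in zip(vocab, probs):
    print(f"{word:>7}: {p:.2f}")          # Paris 0.74, Rome 0.17, ...

print(rng.choice(vocab, p=probs))         # sampled answer, right or wrong
```

With these made-up scores, roughly one answer in four names the wrong capital, and no amount of extra training changes the nature of the step itself.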

Disappointing progress after the initial releases of LLMs and AI agents forced the AI industry to prospect a new goal: Artificial General Intelligence (AGI), which is to surpass human brains. (GAI would be a more logical name, but it was rejected by the marketing guru.) So compare current neural network developments to a human brain, which uses 20 W of power to perform like, well, like a human. Including full body control (unless you're a bit drunk). Good luck.
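
Taking the article's own figures (20 W for a brain, gigawatts for a training campus), the ratio speaks for itself; the exact campus draw below is an assumed round number for the sake of arithmetic:

```python
# Back-of-the-envelope comparison, using the figures quoted in this
# article: a human brain runs on ~20 W, a large AI training campus
# draws on the order of 1 GW (assumed round number).
brain_w = 20.0
campus_w = 1.0e9

print(f"brain-equivalents per campus: {campus_w / brain_w:,.0f}")
# -> 50,000,000: one campus burns fifty million brains' worth of power
```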

So people are now (rightly, IMHO) concluding that AI will fail to meet its marketing promises, and consequently its projected financial return. Most people debunking AI, however, still use financial arguments, not technical or moral ones.

That being said, we now arrive at the Brigid connection.
Why do people now state that AI hype may offer an opportunity for a nuclear renaissance?

The reason is energy consumption. Since neural networks are trained in mega computer centres consuming gigawatts of electrical power, AI companies with deep pockets are considering reviving old PWRs, ordering SMRs, and even investing in fusion companies to power their data crunchers.
The original idea of using sun and wind obviously led to reliability issues, hence nuclear reactors.
If governments hesitate on the development of nuclear reactors, industry takes over, but for its own purposes, not to power society.
Brigid elaborated earlier on the wacko idea of distributing SMRs across the countryside, so we will not dwell on it here.
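
For a sense of scale, note that a campus drawing a continuous gigawatt consumes about as much electricity per year as one large PWR produces. A rough check, in which the unit size and capacity factor are typical values assumed for illustration:

```python
# Rough sizing: nuclear capacity needed for a GW-scale data centre
# campus. Unit size and capacity factor are typical industry values,
# assumed here for illustration.
campus_gw = 1.0                  # continuous electrical demand
hours_per_year = 8760

demand_twh = campus_gw * hours_per_year / 1000                  # ~8.8 TWh/yr

pwr_gwe = 1.1                    # one typical large PWR unit
capacity_factor = 0.9            # typical nuclear availability
output_twh = pwr_gwe * capacity_factor * hours_per_year / 1000  # ~8.7 TWh/yr

print(f"campus demand : {demand_twh:.1f} TWh/yr")
print(f"one PWR output: {output_twh:.1f} TWh/yr")
# -> roughly one full-size reactor per gigawatt-scale campus
```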

So here’s Brigid’s view on the matter.

AI companies will soon discover they have to write off their investment without any ROI, since the projected lifetime of contemporary AI chips is five years, but AI developments require faster refitting than that. Brigid will collect superfluous data centre hardware to expand our computer
infrastructure for simulating our reactors and power plants. We may even re-write Quantum Physics, with a bit of luck. Not by using the AI features, but by using the matrix-vector multiplications for some hard-core physics simulations. There are plenty of physics riddles waiting to be reviewed using some massive compute power.
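
What "using the matrix-vector multiplications" could look like in practice: the dense linear algebra that AI accelerators are built for is exactly the workhorse of classic physics solvers. A minimal sketch, with a toy 1-D diffusion problem standing in for real reactor physics (the grid size and coefficients are invented):

```python
import numpy as np

# Toy 1-D diffusion solve: each explicit time step is one
# matrix-vector multiply, the operation AI accelerators are
# optimised for. Grid size and coefficients are illustrative.
n = 100                        # interior grid points
alpha = 0.4                    # dimensionless diffusion number (<= 0.5)

# Explicit update operator A (second-difference stencil): u_new = A @ u
A = np.eye(n) * (1 - 2 * alpha)
A += np.diag([alpha] * (n - 1), 1) + np.diag([alpha] * (n - 1), -1)

u = np.zeros(n)
u[n // 2] = 1.0                # initial spike in the middle

for _ in range(5000):
    u = A @ u                  # one matrix-vector multiply per step

print(f"peak after diffusion: {u.max():.6f}")
```

The same multiply, batched across thousands of accelerator cores, is what repurposed data centre hardware would spend its cycles on.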

Instead of building crappy SMR nuclear power plants to power data centers to generate AI slop, the data centers will be used to both develop and monitor state-of-the-art MSR power plants to
power society.

In this sense, the collapse of the AI bubble can indeed contribute to the nuclear revival.