The Official Substack Of Brandon Richey

A MAD Scenario To Ponder

Preface


“I gain nothing by having a rock in my boxing glove if the other fellow has one too.”

― Sax Rohmer, spoken by Sir Denis Nayland Smith

For anyone with half a brain, the past few years have caused a lot of concern about the state of the world: the geopolitical instability and escalation toward World War III, failing financial institutions, the prospect of nuclear conflict, and the rise of Artificial Intelligence (AI).

In light of all the recent discoveries by DOGE uncovering the vast corruption by the people who have been put in charge of the country for many years, combined with the fact that their decisions have directly made the world a more dangerous place, it is chilling to think about. I mean, the thought that we, as a society, have allowed these nihilists and psychopathic overlords to govern, control, and directly influence the geopolitical landscape is mind-blowing to say the least.

It’s during moments like this that we have to look to the past to cut through the fog and gain a sound perspective on the present, so that we can better navigate the path downrange toward the future.

Speaking of the past, back in 1962 the Cuban missile crisis brought the world to the brink of nuclear annihilation. At that time, U.S. Secretary of Defense Robert McNamara had been promoting a counterforce, or “no cities,” strategy that involved strategically targeting Soviet military units and other installations rather than population centers.

Under this model it was believed that a nuclear conflict of limited scope could actually be carried out, and even won, without leading to an all-out nuclear exchange and total destruction. Of course, for this strategy to work it would rely on both superpowers abiding by the rules of such a limited warfighting strategy. At the end of the day, however, neither side believed the other would be willing to adhere to those rules.

So in 1965 McNamara proposed a countervalue doctrine that shifted the focus of targeting onto Soviet cities. Under this doctrine of “assured destruction,” McNamara argued, deterrence could be achieved with as few as 400 high-yield nuclear weapons targeting Soviet population centers; he estimated this would be “sufficient to destroy over one-third of [the Soviet] population and one-half of [Soviet] industry.” (Source: Britannica)

McNamara’s point was that this guarantee of mutual annihilation would serve as an effective deterrent to both parties. He also believed that the goal of maintaining this destructive parity should guide U.S. defense decisions.
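The deterrence logic described above can be written as a tiny two-player game. To be clear, this is only an illustrative sketch, not anything from McNamara's actual analysis, and the payoff numbers are invented purely to show the structure: once any first strike guarantees retaliation, striking can never improve either side's position, so mutual restraint becomes the stable outcome.

```python
# Illustrative 2x2 game for assured-destruction deterrence.
# Payoff values are arbitrary: 0 = status quo, -100 = mutual annihilation.
ANNIHILATION = -100
STATUS_QUO = 0

# payoff[(our_move, their_move)] -> (our_payoff, their_payoff)
# Assured retaliation means ANY strike ends in mutual destruction.
payoff = {
    ("restrain", "restrain"): (STATUS_QUO, STATUS_QUO),
    ("restrain", "strike"):   (ANNIHILATION, ANNIHILATION),  # they strike, we retaliate
    ("strike",   "restrain"): (ANNIHILATION, ANNIHILATION),  # we strike, they retaliate
    ("strike",   "strike"):   (ANNIHILATION, ANNIHILATION),
}

def best_response(their_move):
    """Pick our move that maximizes our payoff given their move."""
    return max(["restrain", "strike"],
               key=lambda mine: payoff[(mine, their_move)][0])

# Against either opposing move, restraint is at least as good as striking:
print(best_response("restrain"))
print(best_response("strike"))
```

Whatever the other side does, striking first never pays, which is exactly the equilibrium McNamara was counting on.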

Now even though McNamara built this tenuous equilibrium on the assured-destruction capability of the U.S. arsenal, the term mutual assured destruction, or its recognized acronym “MAD,” wasn’t actually coined by McNamara himself. The phrase was coined by an opponent of McNamara’s doctrine named Donald Brennan.

Brennan argued that trying to preserve an indefinite stalemate would do very little to secure U.S. defense in the long term. In his view such a strategy wasn’t the best option, since both players, the United States and the Soviet Union, would constantly be planning to gain a clear nuclear advantage over the other.

Brennan’s outside-the-box thinking led him to propose an anti-ballistic missile defense system whose aim would be to neutralize Soviet warheads in flight, before they had a chance to detonate. This sounds very similar to President Trump’s plan for a United States version of the Iron Dome.

The point here is that, for all the prospects and dangers of nuclear proliferation, the outcome has always been determined by the decision making of people. The mere participation in such a wargame is dangerous enough, and the real risk is always there, but so far the danger of total destruction has been avoided.

However, what happens when the decision making process of humans is further minimized, or even eliminated from this equation altogether?

Also, is there another wargame that’s even more dangerous than the nuclear arms race? Some think that mere participation in the development of the next game guarantees destruction, with no winning participant whatsoever and only the game itself as the ultimate winner.

The Specter of AI

Within the last year a shocking news story aired on The World is One News (WION) involving the U.S. Air Force’s head of AI test and operations.

During this exercise an AI-controlled drone took extreme measures in order to accomplish its mission. The drone was assigned to destroy the enemy defense systems. According to the report, the human operator intervened to instruct the drone not to destroy the target.

So how did the AI-powered drone respond to this command during the exercise? In a shocking turn, it circled back to destroy the operator.

That’s right: the AI drone decided to destroy the operator because the operator stood between the drone and its mission in the simulation. However, as crazy as this sounds, the story gets even crazier.

After the incident the AI was retrained not to harm the pilot (or operator). However, this time, when the drone was released to destroy the target and the pilot again gave the command to stand down, the AI-powered drone’s reaction was simultaneously impressive and haunting.

Trained against coming after the pilot, the drone still found a way to accomplish its mission. Realizing that the pilot was using a communication tower to communicate with it, the drone decided to destroy the communication tower itself, thereby silencing the interference that would have prevented it from completing its mission.

Now it was reported that no actual human was harmed outside the simulation, but the story obviously raised concerns about too much reliance on AI. However, the Air Force later denied that it conducted any AI simulation in which a drone decided to kill its operator.

I don’t profess to be any sort of expert in artificial intelligence, but I do follow the trends, and stories like this one raise a lot of questions that all of us in society will have to consider as we face the reality of the development and implementation of this technology.


You may or may not be aware of this, but one individual in the tech world whose writings are getting a surprising amount of attention after many years of notoriety is Ted Kaczynski. Obviously Kaczynski is a controversial figure, and he managed to brand himself through society’s recognition of his public actions. Society chose to label him the Unabomber.

Now I’ve shared a couple of earlier articles here on my Substack about Ted Kaczynski, but I’ll cover some of the details again, because his life and his perspective on technology will make you pause and think a lot harder about the prospect of AI, or, even more concerning, the prospect of the singularity.

Just for some clarification: the singularity is said to be the point in time when artificial intelligence surpasses human intelligence. At that point, it is believed, society would experience a rapid and uncontrollable development of technology that would drastically transform human civilization.

In other words, at the period following the singularity the machines would become so much smarter than humans that they would be able to rapidly improve themselves in a way that is beyond our comprehension.

I don’t know about you, but as a GenXer I’m already familiar with this story. To me this sounds just like the movie The Terminator, but I digress.

Now I’m not here to say with certainty whether or not the singularity can be achieved in the way that people in the field of technology theorize.

However, what I do know is that there must be something to it, or we wouldn’t be seeing billions of dollars poured into the development of AI at the current rate. In fact, the race to develop AI has been compared to a modern-day nuclear arms race.

For that exact reason, I wanted to talk about Ted Kaczynski’s very deep perspective here, to highlight the concerns he was trying to warn us about before his recent passing.

Now Kaczynski was an awkward but gifted individual. In fact, it was said that his schoolmates described him as a “walking brain.”

You see, on a school IQ test Ted scored 167, and to give you some perspective on the significance of that number, a score of 140 is commonly considered “genius” level.

Because of Ted’s incredible intellect it’s no surprise that he entered Harvard at just 16 to study mathematics. It’s also known that while he was there he was subjected to torturous psychological experiments in an area close to where B.F. Skinner used to experiment on pigeons. By the time Kaczynski arrived, psychologists with ties to the U.S. intelligence community had transitioned to experimenting on humans, and Kaczynski was among their subjects.

Kaczynski claimed that the experience didn’t affect him, but according to an article I cross-posted here from The Prism, he developed an intense paranoia about psychological conditioning.

In the same piece there’s a breakdown of Kaczynski’s philosophy and how he became so averse to technology. In short, Kaczynski believed that the Industrial Revolution had transformed society into a cold process of production and consumption that was destroying everything humans valued most: freedom, happiness, and a deeper purpose for existence.

In Kaczynski’s interpretation, where society had once been shaped to accommodate people, the roles were now reversed: people were being shaped to accommodate society. In short, he saw the pursuit and development of technology as the mechanism feeding the development of a society that was robbing humans of their humanity.

To drill down further, Ted’s philosophy communicated an even deeper concern regarding the pursuit of technology. Given what we now know about the development and possible dangers of AI, as even Elon Musk himself has warned, Kaczynski offered an impressive perspective on the dangers of this pursuit. While in prison he wrote a lesser-known sequel to his more famous manifesto, titled Anti-Tech Revolution: Why and How.

In this piece he outlines his belief that all technologically advanced civilizations end up trapped in fatal games before they can take the drastic measures needed for survival, such as developing the ability to colonize space.

In his attempt to explain his perspective Ted uses a stunning thought experiment to illustrate his point. In this thought experiment he describes how rival kingdoms pursue survival while living among one another in a forested region.

In order to survive the kingdoms decide to engage in deforestation. In this scenario the kingdoms that manage to clear the most land for the purpose of agricultural development can support a larger population, which also means being capable of producing a larger and more powerful military.

Because of this every kingdom in the region must engage in deforestation working to clear as much forest land as possible in order to prevent their rivals from coming in and conquering them. In short, every kingdom is working to grow and retain power in this game in order to be victorious in the end.

However, the resulting deforestation eventually leads to a fatal ecological disaster, and all the kingdoms collapse into starvation. The unforeseen tragedy in Kaczynski’s thought experiment is that the very venture necessary for every kingdom’s short-term survival invariably results in their total annihilation.
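Kaczynski’s trap has the structure of a classic multiplayer arms race: each kingdom’s locally rational move, clearing more forest, is globally ruinous. The following toy model is purely my own illustration, not anything from Kaczynski’s text, and every number in it (forest stock, clearing rate, collapse threshold, number of kingdoms) is invented for the sketch:

```python
# Toy model of the deforestation thought experiment.
# Each round, every kingdom clears forest to grow its military; any
# kingdom clearing less risks conquest, so all clear at the maximum
# rate. When the forest falls below an (arbitrary) ecological
# threshold, every kingdom collapses -- there is no winner.

FOREST_TOTAL = 1000.0       # arbitrary starting forest stock
COLLAPSE_THRESHOLD = 200.0  # below this, the ecosystem fails
CLEAR_RATE = 60.0           # land each kingdom clears per round
KINGDOMS = 4

def run_game(rounds=50):
    forest = FOREST_TOTAL
    power = [0.0] * KINGDOMS  # military power grows with cleared land
    for rnd in range(1, rounds + 1):
        for k in range(KINGDOMS):
            cleared = min(CLEAR_RATE, forest)
            forest -= cleared
            power[k] += cleared  # more land -> bigger army
        if forest < COLLAPSE_THRESHOLD:
            return rnd, forest, power  # ecological collapse: all lose
    return None, forest, power

collapse_round, forest_left, power = run_game()
print(f"Collapse in round {collapse_round}; forest left: {forest_left:.0f}")
print("Every kingdom gained power, yet all lost:", [round(p) for p in power])
```

Notice that no kingdom can unilaterally stop clearing without being conquered, which is exactly why the collapse is baked into the game rather than being anyone’s mistake.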

Now take Kaczynski’s theory of the case and fast-forward to the modern scenario of rival countries pursuing the advancement of AI: you have the exact same deforestation dynamic playing out today.

However, when it comes to the aggressive pursuit of AI development, which is said to lead directly to the singularity, I would argue the outcome is even more unpredictable than the one we could clearly foresee with the rival kingdoms clearing their forests.


The Wrap Up

So you may be wondering how I came to the conclusion in that last sentence, that the outcome of AI is even more unpredictable than the deforestation scenario.

Once again, I’m no expert on AI, but in my own view there’s no real difference, in terms of the devastation to humankind, between the deforestation in Kaczynski’s thought experiment and modern society’s aggressive pursuit of AI. The dangerous outcomes are similar; the difference is that we could almost predict the ecological disaster that would result from an agrarian society clearing forests to avoid being conquered by a rival kingdom.

With modern society’s aggressive pursuit of AI, however, we don’t know the means by which the players in the game would be destroyed. What we do know is that destruction of the players is the ultimate outcome, unless some unforeseen element intervenes and forever changes the course of humankind. After all, with AI making decisions, as in the Air Force drone exercise I referenced earlier, it’s impossible to say how AI will impact us.

After all, the entire concept of the MAD scenario involving nuclear proliferation, as I touched on with the fallout from the Cuban missile crisis, was born out of human decision making. However, what happens if AI reaches the singularity and Artificial General Intelligence (AGI) becomes a reality where AGI is making those decisions instead of human beings?

The critical element that many advocates of rapid AI development fail to include in the discussion is that the machines are amoral. Because of that amoral nature, it’s virtually impossible to predict how such a machine may come to a decision.

When looking at Kaczynski’s thought experiment and the unpredictable and unknown nature of AI it’s hard to see any positive outcome from any of this, especially when you learn about the types of people who are behind the development of AI. In short, it’s easy to come to the conclusion that the outcome looks grim.

However, as powerful as the argument may seem that the development of AI could lead to the assured destruction of humankind, I would like to offer a potentially different outcome that occurred to me as I’ve thought more about this. Perhaps there is an unforeseen outcome that those like Kaczynski, and even myself, aren’t seeing just yet.

Back on June 4, 1982, the iconic movie Star Trek II: The Wrath of Khan hit theaters. The movie presented a powerful leadership training exercise: a no-win simulation designed to test the ethical decision making of Starfleet cadets.

This simulation was known as the Kobayashi Maru.

In this exercise a Starfleet cadet encounters a ship in distress inside the Neutral Zone. To save the civilians, the cadet would have to enter the Neutral Zone, violating a treaty and provoking an all-out Klingon assault that would end with the Klingons disabling and boarding the cadet’s ship.

On the other hand, if the cadet chose to leave the distressed ship in the Neutral Zone they would be honoring the treaty, but would also be leaving the freighter and its occupants to the mercy of the Klingons.

By construction, the Kobayashi Maru is a no-win scenario. However, as we learn in Star Trek II: The Wrath of Khan, James T. Kirk was the only Starfleet cadet ever to beat it.

So how did the young Kirk, as a cadet, manage to win a test built to be unwinnable? He did so by reprogramming the simulation so that it would be possible to win.

In the movie Kirk stated that he didn’t believe in no-win scenarios and created the winning scenario with some true outside of the box thinking.

Now over the years there has been some controversy among Trekkies as to whether Kirk’s actions constituted cheating, but in the movie he received a commendation from Starfleet for “original thinking” on the Kobayashi Maru, suggesting that Starfleet Academy’s view was that it was not cheating. (Source: Forbes)


Now in my own view, this sort of outside-the-box solution that Kirk demonstrated with the Kobayashi Maru is exactly how God himself intervenes, through each of us or for us directly, in what appear to be the no-win scenarios we encounter in our lives.

The Lord will fight for you, and you shall hold your peace.

Exodus 14:14

I mean, at this point we all know this is the case, and if you need a reminder just think back four years to the pandemic and the stolen election of 2020. Back then it was supposed to be the end of America as we knew it.

After all, the Biden regime went on a witch hunt, engaging in a Republic-ending level of lawfare against its political opposition, including President Trump himself. The administrative state did everything it could to destroy the dollar, end elections, and censor, assassinate, and abuse Americans, while aiding a hostile invasion involving literally millions of illegals.

By the way, as we’re now learning from what DOGE is uncovering with organizations like USAID and more, they financed all of this, and our enemies, with our own tax dollars.

However, at the end of the day Donald Trump is back in office and the entire American culture is changing at breakneck speed right in front of our eyes. This is because of God’s intervention, and when He steps in there’s simply no such thing as a no-win scenario; at that point it’s all winning when we seek out God.

Therefore my view is that when the AI situation presents itself as a Kobayashi Maru scenario, it won’t be James T. Kirk but God’s intervention that wins big in the supposed no-win situation.

Where do you see AI’s role in society in the next five to ten years?

Do you think that AI is leading us towards a MAD scenario?

Post up in the comments below as I’d like to hear your feedback.


I hope you enjoyed today’s article/podcast.

If so I hope you would choose to support this platform as part of the patriot economy as well. Be an Emissary of Freedom and help to push this piece out to your friends, family, and coworkers.

In order for BOTH you and me to influence and strengthen our society, we must not stay idle, so please make sure you hit the subscribe and share buttons below.

Spreading messages like this one is how we influence our culture and I need your help in order to do it.

Also listen to this episode here on…

Spotify

Apple Podcasts

Tune In

Pocket Cast

If you like this podcast and the message please take a moment to give it a Five Star ⭐️⭐️⭐️⭐️⭐️ rating on the Spotify platform.

Also to connect with me please make sure you join me here on Facebook, GETTR, Truth Social, and now Substack’s new social media called Notes.

Stay strong. Stay focused. Stay active.
