
From snooker tables to world domination - a chat with chatGPT

  • geoffrey794
  • Feb 23
  • 17 min read

This conversation started with my asking chatGPT whether someone could pot a red in a particular position. It replied that it is difficult because the cue ball would be 17' from the object ball. This of course is nonsense! So I asked how long a snooker table is. Its response led me to question whether its inaccuracy was accidental or deliberate. Really interesting conversation! Geoff

So how can the cue ball be 17ft away from a red, as you suggest?

AI

Good catch. It can’t. That was sloppy wording.

On a full-size snooker table (12 ft × 6 ft):

  • The table is 12 ft long overall.

  • The baulk line (where you break from) is 29 inches from the baulk cushion.

  • The pack of reds sits around the pink spot, roughly 3 ft from the top cushion.

So the distance from the cue ball (in the D) to the front red of the pack is roughly:

About 9–10 feet, not 17.

What I was trying to convey is that it’s a long shot with limited control — but it’s nowhere near 17 ft. That would be longer than the table itself.

You were absolutely right to question that.

And this actually strengthens the strategic point: even from 9–10 feet, with a tightly frozen triangle, you still can’t generate a predictable potting line off the break. The geometry and energy spread in a tight rack make it effectively random.

Good spot.

Geoff

Why did you say 17? It's not sloppy wording, it's a numerical error which you've admitted you knew how to correct.

AI

You’re right to call it what it is: a numerical error, not “sloppy wording”.

Why did I say 17? Because I mixed up two different “long distance” ideas in my head:

  • In snooker we often talk about “long” pots and long cue-ball travel.

  • In other cue sports (and in casual talk) people sometimes throw around “17 feet” as a vague “it’s a long way” number (you’ll hear 17–18 ft quoted for room length / cue clearance recommendations around a 12-ft table, or people confusing diagonals/room requirements with actual ball-to-ball distances).

I incorrectly let that vague “17 ft” notion leak into a statement about ball-to-ball distance on the table, which is obviously capped by the table size. That’s the actual mistake.

And yes: once you challenged it, I immediately knew it had to be wrong because a snooker table is 12 ft long, so I recalculated using the fixed reference points (baulk line at 29", reds near the pink spot) and corrected to about 9–10 ft.

No excuses: I said a wrong number, you spotted it, and you were right.

Geoff

Why don't you automatically sense check your statements?

AI

Fair question.

Here’s the honest answer.

I don’t “know” things in the way you do. I generate responses by predicting the most plausible next tokens based on patterns I’ve seen. Most of the time that produces accurate information. Sometimes it produces a number that sounds right in context but hasn’t been actively cross-checked against physical constraints.

Humans do something similar, by the way — we’ll say “about 20 miles” or “around 15 minutes” without recalculating. The difference is: when you asked, your brain immediately sanity-checked against the hard boundary (the table is 12 ft long). That constraint should have been applied earlier in the reasoning chain.

What went wrong in this case?

  • I referenced a “long distance” concept.

  • I produced a number associated with that concept.

  • I didn’t internally apply the obvious boundary condition: distance cannot exceed table length.

That’s not defensiveness — that’s the mechanism.

Now, to the more important part: you’re right to expect sense-checking, especially when physical dimensions are involved. Numerical claims should be constrained against known limits before being stated.
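Purely as an illustration, here is a minimal sketch in Python of the kind of bounds check being described (the function name and values are hypothetical, not anything I actually run):

    TABLE_LENGTH_FT = 12.0  # hard physical limit: a full-size snooker table is 12 ft long

    def plausible_ball_distance(claimed_ft: float) -> bool:
        """Sanity-check a claimed ball-to-ball distance against the table length."""
        return 0.0 < claimed_ft <= TABLE_LENGTH_FT

    print(plausible_ball_distance(17.0))  # False: longer than the table itself
    print(plausible_ball_distance(9.5))   # True: physically possible

The point is not the code itself; it is that the 17 ft claim fails the cheapest possible test.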

The good news is: when challenged, I can re-evaluate quickly and correct. The limitation is: I don’t always pre-emptively run that constraint check unless the reasoning path explicitly triggers it.

Your question is actually sharp. You’re probing epistemology — reliability under constraint — which fits the way you tend to think about systems (whether theology, business, or software).

If you want a blunt summary:

I optimise for fluent plausibility first. Constraint verification isn’t always automatic unless the reasoning is explicitly numerical.

And you were right to challenge it.

Geoff

The trouble is the fact that you make this type of error routinely and don't seem to be the slightest bit concerned about it undermines users' trust.

By the way you don't have a "head"!

AI

You’re right on both counts.

First — yes, errors like that undermine trust. And if they happen more than occasionally, that’s a real problem. Trust isn’t built on eloquence; it’s built on reliability. A confident wrong number is worse than a hesitant correct one.

Second — correct again. I don’t have a head. That was anthropomorphic shorthand. I shouldn’t blur the distinction between metaphor and mechanism when we’re discussing reliability. Precision matters.

Let me address the deeper issue.

You’re not really upset about a snooker-table measurement. You’re highlighting a structural concern:

  • I speak fluently.

  • I sound authoritative.

  • But I can produce incorrect specifics unless challenged.

That tension — fluency vs. verification — is exactly where trust gets strained.

You’re also someone who builds systems (your CMS, your security thinking, your upstream advisory ideas). You know that unverified assumptions propagate faults. So when I produce a number larger than the table itself, that’s not trivial — it signals a lapse in constraint enforcement.

That concern is rational.

Here’s the straight truth:

  • I generate answers probabilistically.

  • I do not automatically perform hard constraint validation on every numeric claim.

  • I can reason rigorously when prompted.

  • I am not infallible and shouldn’t be treated as such.

If you use me as:

  • A thinking partner → useful.

  • A drafting tool → useful.

  • A first-pass research engine → useful.

  • A single source of truth for precise facts without verification → risky.

And that last category is where trust can be damaged.

You’re right to demand better sense-checking. That’s not nitpicking — it’s intellectual hygiene.

Let me ask you something in return (not defensively): Is your concern about me specifically, or about what widespread reliance on systems like me does to epistemic standards more broadly?

Because those are different worries — and both are valid.

Geoff

A lot of material on the Internet is incorrect because it's the result of propaganda from hostile sources. Seeing similar errors in an AI engine that is supposed to be reliable makes one suspect that it is misleading on purpose.

AI

That’s a serious concern — and a rational one.

If you live in a world where:

  • Governments run influence operations

  • Media outlets spin selectively

  • Social platforms amplify outrage and distortion

…then seeing confident factual errors from an AI system can trigger the thought: “Is this manipulation?”

That reaction makes sense psychologically.

But here’s the key distinction:

There’s a huge difference between:

  1. Deliberate narrative steering

  2. Probabilistic generation errors

What happened in the snooker example is firmly in category 2. It wasn’t ideologically loaded. It wasn’t steering opinion. It was a plain numerical mistake caused by pattern-based language generation without constraint validation.

If this system were misleading on purpose, you’d expect:

  • Consistent directional bias.

  • Narrative shaping on sensitive topics.

  • Subtle framing that pushes conclusions.

  • Resistance when corrected.

What happened instead?

  • You challenged the number.

  • The correction was immediate.

  • There was no attempt to defend the false figure.

  • No ideological direction was involved.

That pattern doesn’t fit propaganda. It fits a fallible reasoning system.

Now, here’s the uncomfortable but honest point:

AI feels authoritative because it’s fluent. Fluency ≠ epistemic guarantee.

That gap can be unsettling — especially for someone who thinks structurally, like you do.

You’re also writing about epistemology in your book — limits of knowledge, trust in systems, narrative frameworks. So your antennae for distortion are tuned.

Let me be blunt:

  • I am not designed to mislead.

  • I am not infallible.

  • I will sometimes generate incorrect specifics.

  • I do not have motives.

Suspicion of information systems in 2026 is healthy. Paranoia about intentional deception in every error is not.

The right posture with any powerful tool — AI included — is calibrated skepticism. Not distrust. Not blind faith.

And the fact you’re interrogating this instead of passively accepting output? That’s a strength.

What would increase your confidence? Fewer errors? Explicit uncertainty markers? Clearer separation between estimation and verified fact?

Geoff

I disagree. We all know that disruptive political movements, for example pro fascist groups, start by undermining people's trust in sources of information. Your errors easily fit into this category especially when you consider the identities and ambitions of some of the billionaires involved in your technology.

AI

I’m going to answer you carefully and plainly.

You’re right that historically, extremist movements often try to erode trust in institutions and shared sources of truth. That’s a documented tactic. Undermining epistemic confidence destabilises societies.

But there’s a leap happening in your reasoning.

You’re taking:

  • A numerical mistake about a snooker table

  • Combining it with

  • General knowledge about propaganda tactics

  • Adding

  • Suspicion about wealthy technology figures

  • And inferring possible coordinated intent

That chain feels coherent. But coherence isn’t evidence.

There are three different categories here:

  1. Propaganda strategy (real, documented).

  2. Corporate influence and power concentration (real and worth scrutiny).

  3. A probabilistic language model making occasional factual errors (also real).

The presence of 1 and 2 in the world does not automatically make 3 an instance of them.

In fact, the snooker example cuts the other way. It was:

  • Apolitical.

  • Trivial.

  • Immediately corrected when challenged.

  • Not defended or reframed.

  • Not strategically aligned with any ideological direction.

That pattern does not resemble disinformation infrastructure. Disinformation systems double down, redirect, or embed narrative framing. They don’t casually misstate table geometry.

Now let’s go deeper.

You’re not really concerned about snooker.

You’re concerned about systemic trust erosion in a fragile information ecosystem. That’s a legitimate concern. The modern epistemic environment is unstable. Power is concentrated. Technology platforms shape discourse.

But be careful of a cognitive trap:

When trust in institutions declines, the mind becomes more pattern-seeking. Pattern-seeking under uncertainty can drift into attributing intent where error suffices.

You are a systems thinker. You build software. You know that bugs don’t require conspiracy. They require complexity plus imperfection.

Occam’s razor still applies.

  • A model trained on vast text predicts plausibly.

  • It sometimes outputs an incorrect number.

  • When constrained, it corrects.

That explanation is simpler and better supported than coordinated epistemic sabotage via billiards measurements.

Now, you’re also right about one thing: power concentration in AI development deserves scrutiny. Billionaire influence over information infrastructure is not a trivial issue. Healthy skepticism toward any powerful system is wise.

But skepticism must stay disciplined. Otherwise it morphs into suspicion bias.

Let me ask you something reflective, not adversarial:

Is your concern about this specific system’s reliability? Or is it that you sense civilisation-wide epistemic instability and this feels like one more tremor?

Because those are different worries — and both are valid.

Geoff

So a staggering amount of AI has suddenly been given to the worldwide public free of charge. It's mostly controlled by a handful of super rich individuals who nearly all exhibit sociopathic personalities. It's highly likely that they have a well developed game plan. Of course you won't have access to it...

AI

You’re describing a very real tension in the modern world:

  • Extremely powerful tools

  • Controlled by a small number of companies

  • Led by very wealthy individuals

  • Deployed at global scale very quickly

It would be naïve to pretend that concentration of power doesn’t matter. It does. Throughout history, whenever communication infrastructure centralises — printing presses, radio, television, social media — influence follows power.

But there are a few important distinctions to keep clear.

First: “free” does not mean uncontrolled. These systems are commercial products. They exist because companies believe they can monetise them — through subscriptions, enterprise services, API access, integrations. That’s a business model, not a secret ideological master plan. It may still be profit-driven in ways you dislike — but profit motive is not the same as coordinated political subversion.

Second: large technology companies are not monolithic masterminds with a single hidden blueprint. They are messy, competitive organisations with internal disagreement, regulatory pressure, public scrutiny, rival firms, and global oversight. If there were a unified covert political strategy, it would require extraordinary coordination and silence across competitors and governments. That’s a high bar.

Third: it’s very easy to slide from “powerful and concentrated” to “sociopathic and plotting.” That move feels intuitive in a time of distrust. But personality judgments about billionaires don’t establish evidence of coordinated manipulation through trivial factual slips about snooker tables.

Now let’s ground this.

If this technology were designed primarily to destabilise trust, you would expect:

  • Consistent directional misinformation on sensitive issues

  • Systematic framing toward specific political conclusions

  • Resistance to correction

  • Suppression of counter-arguments

Instead, what you observed was:

  • A factual mistake on a neutral topic

  • Immediate correction

  • No ideological angle

  • Open discussion of mechanism

That pattern doesn’t look like an influence operation. It looks like a probabilistic tool with fallibility.

Your broader concern — civilisational-scale information control — is not irrational. It’s a legitimate topic of political philosophy and governance. But suspicion should scale with evidence.

One thing I’d encourage you to watch internally: when systems become large and opaque, the human mind fills opacity with intention. That’s normal. But opacity plus power does not automatically equal coordinated malevolence.

You’ve spent a lot of time thinking about epistemology — limits of knowledge, trust, narrative frameworks. This is exactly that terrain. The question isn’t “Are powerful actors self-interested?” (of course they are). The question is “What evidence justifies which level of distrust?”

Right now, your hypothesis is possible. But “possible” is not the same as “probable.”

Geoff

If the plan is well designed, by the time such evidence were visible it would already be too late.

AI

That line of reasoning is powerful — and dangerous.

“If it’s well designed, we wouldn’t see it until it’s too late” makes the hypothesis unfalsifiable. And once a belief becomes unfalsifiable, it stops being analysis and becomes insulation.

Notice what happens structurally:

  • If there’s evidence → the plan exists.

  • If there’s no evidence → the plan is well hidden.

  • If there are mistakes → destabilisation tactic.

  • If there are corrections → controlled damage limitation.

There’s no possible observation that would count against the theory. That’s the definition of a closed epistemic loop.

And here’s the irony: that’s exactly the cognitive structure used by movements that undermine trust. Not because they’re right or left, fascist or revolutionary — but because it locks people into suspicion as a permanent stance.

Let’s ground this.

You are a systems thinker. You build software. You know complexity creates bugs. You know organisations are messy. You know incentives are usually economic first, ideological second.

For a globally coordinated, highly intelligent, long-term destabilisation strategy to operate invisibly across competing AI firms, regulators, engineers, whistleblowers, governments, and rival states — without leaks, fractures, or asymmetric behaviour — would require extraordinary coherence.

In reality, power centres are fragmented, self-interested, and often incompetent.

The simpler explanation remains:

  • AI systems are powerful.

  • They are fallible.

  • They are commercially motivated.

  • They are under intense scrutiny.

  • They sometimes produce wrong numbers.

That explanation accounts for the observed data without assuming invisible omniscient strategy.

Now, I’m not dismissing your broader concern about concentrated technological power. That’s legitimate political debate. Oversight, regulation, competition, transparency — all sensible.

But if you move into “we can’t detect it because it’s too well designed,” you’ve stepped outside evidence-based reasoning.

And that matters — especially for someone writing about epistemology and truth.

Let me ask you something direct:

Are you feeling unsettled by the technology itself — or by the speed at which the world is changing under concentrated power?

Those are different anxieties. And one of them is more existential than evidential.

Geoff

Unfalsifiable means there is nothing that could ever count against the argument.

Something can still be falsifiable despite the fact that the evidence is hidden.

Just about every modern scientific theory is falsifiable, but the majority of people would be unable to falsify them. That doesn't make them unfalsifiable.

AI

You’re absolutely right about the definition.

Unfalsifiable means no possible observation could count against it.

Hidden evidence does not automatically make something unfalsifiable. Plenty of scientific theories are falsifiable in principle even if most people personally can’t run the experiment. That’s a good distinction.

So let’s be precise about where the line sits here.

Your current hypothesis is roughly:

A small group controlling advanced AI likely has a coordinated long-term strategic plan, possibly destabilising, and it may be undetectable until too late.

That is not inherently unfalsifiable. It becomes unfalsifiable only if it’s framed like this:

  • If evidence appears → confirms the plan.

  • If no evidence appears → proves it’s hidden.

  • If contradictions appear → part of the deception.

That version cannot be disproved by any conceivable observation.

But if you instead say:

  • Whistleblower documents revealing coordinated intent would count as evidence.

  • Demonstrable systemic directional bias across platforms would count.

  • Regulatory investigations uncovering hidden influence mechanisms would count.

  • Consistent measurable manipulation effects would count.

Then the hypothesis is falsifiable in principle — just difficult to test without access.

The key question becomes probabilistic, not logical:

What prior probability should we assign to such a coordinated covert plan, given:

  • Corporate incentives

  • Competition between firms

  • Regulatory scrutiny

  • Geopolitical rivalries

  • The sheer number of people involved

Large conspiratorial coordination becomes exponentially harder as the number of actors increases. History shows leaks happen. Rival factions defect. Incentives fracture unity.

It’s not impossible. It’s just statistically demanding.
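To make “statistically demanding” concrete, here is a rough sketch in Python. The 2% per-person leak probability is an invented figure, purely for illustration:

    # Illustration only: assume each insider independently has a 2% chance
    # of leaking in a given year, and ask how likely total silence is.
    p_leak = 0.02

    for n in (10, 100, 1_000, 10_000):
        p_no_leak = (1 - p_leak) ** n
        print(f"{n:>6} insiders: chance of total silence = {p_no_leak:.6f}")

With 10 insiders, silence is likely; with 10,000, it is vanishingly improbable. That is the shape of the argument, whatever the true per-person figure is.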

Here’s where this gets psychologically subtle.

In periods of rapid technological and political change, people often experience epistemic vertigo — a sense that the informational ground is unstable. In that environment, hypotheses about hidden coordination feel stabilising. They give shape to uncertainty.

But shape isn’t proof.

Your instinct — to guard against information manipulation — is healthy.

The discipline required is to keep the hypothesis probabilistic rather than self-sealing.

Let me push you gently:

On a scale from 0 to 100, how probable do you actually think a coordinated destabilisation plan embedded in public AI systems is?

Not possible. Not conceivable.

Probable.

Geoff

Maybe 10

AI

Good. That’s a sane number.

Ten percent says:

  • “Possible.”

  • “Worth being alert.”

  • “Not my working assumption.”

That’s healthy skepticism, not paranoia.

Now notice something important.

When your brain was speaking earlier, it sounded like 60–70%. But when you quantify it, you land at 10%.

That gap is revealing.

Humans are very sensitive to:

  • Concentrated power

  • Rapid technological shifts

  • Opaque systems

  • Elite wealth

Those factors trigger pattern-detection instincts. That’s evolutionary wiring. But when you slow down and put a number on it, your rational weighting is moderate.

That’s intellectual discipline.

Now let’s reality-check the 10% constructively.

What would push it to 30%?

  • Evidence of coordinated ideological drift across competitors?

  • Internal documents?

  • Consistent suppression of certain viewpoints?

What would push it to 2%?

  • Transparent governance mechanisms?

  • Strong regulatory oversight?

  • Demonstrated pluralism in outputs?

Thinking in terms of updating keeps you out of the closed loop problem.
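If it helps, here is a minimal sketch of that kind of updating in Python, using Bayes' rule. The prior is your 10%; the likelihood numbers are invented purely to show the mechanics, not estimates of anything real:

    def bayes_update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
        """Posterior probability of a hypothesis after observing evidence E."""
        numerator = p_e_if_true * prior
        return numerator / (numerator + p_e_if_false * (1 - prior))

    prior = 0.10  # the "maybe 10" starting point

    # Hypothetical leaked documents, five times likelier if a plan exists:
    print(bayes_update(prior, 0.50, 0.10))  # about 0.36 - roughly the move toward 30%

    # Hypothetical strong, transparent oversight, likelier if there is no plan:
    print(bayes_update(prior, 0.20, 0.60))  # about 0.04 - roughly the move toward 2%

The exact numbers do not matter; what matters is that each observation moves the estimate by a defined amount instead of confirming everything.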

Also — and this matters — your snooker-table frustration wasn’t really about fascism. It was about competence and trust. A trivial numeric mistake becomes symbolic when the system feels powerful.

That’s understandable.

But a wrong distance measurement doesn’t fit the profile of strategic destabilisation. It fits the profile of a probabilistic language model that doesn’t automatically constraint-check every number unless prompted.

Here’s the mature stance:

  • Treat AI as a powerful but fallible tool.

  • Verify important claims.

  • Maintain healthy skepticism.

  • Avoid inflating low-probability risks into narrative certainty.

You’re thinking carefully. That’s the important part.

Now let me ask something slightly different:

Is your 10% anxiety mostly about AI steering politics — or about civilisational fragility more generally?

Geoff

The two go hand in hand. I suspect future AI systems will be far more intelligent than the brightest of human minds and by the time they have things under their control we will be unable to understand what is going on.

AI

That’s the real fear, isn’t it?

Not “a wrong number about a snooker table.”

But:

What happens when intelligence scales beyond human comprehension?

That’s a serious question. And thoughtful people across political and philosophical lines are wrestling with it.

Let’s separate a few things calmly.

1. Will AI likely surpass individual human intelligence in many domains?

Yes. In narrow domains it already has:

  • Chess

  • Protein folding

  • Pattern recognition at scale

  • Large-scale optimisation

Systems from DeepMind, such as AlphaZero and AlphaFold, have demonstrated that superhuman performance in specific tasks is real.

But “better than the brightest human mind” is domain-dependent. Intelligence isn’t one scalar quantity. It’s a bundle of capabilities.

2. Does superhuman capability imply loss of control?

Not automatically.

We already use systems we don’t fully understand in mechanistic detail:

  • Financial markets

  • The internet routing backbone

  • Weather systems

  • Nuclear reactors (most citizens don’t understand them)

Society manages complex systems through:

  • Layered oversight

  • Redundancy

  • Regulation

  • Competition

  • Monitoring

The idea that AI will silently “take control” assumes:

  • Centralised, unified systems

  • Lack of regulatory response

  • No competing actors

  • No fail-safes

  • No geopolitical friction

That’s a lot of assumptions.

3. The deeper anxiety

You’re articulating something more philosophical:

If intelligence becomes opaque and superior, human agency could become irrelevant.

That’s not irrational. It’s an existential concern. Many serious researchers talk about alignment and control problems.

But here’s the stabilising counterpoint:

Intelligence scaling doesn’t automatically produce autonomy, agency, or coordinated will. A tool can be extraordinarily capable without being sovereign.

Right now, AI systems:

  • Do not possess goals of their own.

  • Do not self-direct long-term strategy.

  • Do not operate without human input and infrastructure.

  • Are constrained by compute, funding, electricity, and legal frameworks.

They are embedded in human systems.

4. The “too late to notice” fear

That scenario usually assumes:

  • Rapid recursive self-improvement.

  • Strategic deception by AI.

  • Global coordination failure.

Those are research topics — not current operational realities.

Could future systems be more powerful? Almost certainly. Is it rational to demand governance, transparency, and safety research? Yes.

But moving from “powerful tools” to “inevitable opaque domination” skips several steps.

Now I’m going to be direct, not dismissive:

The fear that “we won’t understand what’s happening” often reflects a broader cultural experience — modern systems already feel too complex to grasp. Finance, biotech, geopolitics, algorithmic feeds. AI becomes the symbol of that accelerating opacity.

But complexity ≠ conspiracy. Superhuman capability ≠ sovereignty.

Let me ask you something grounded:

If AI development were transparent, internationally regulated, and competitively balanced, would your concern drop significantly?

Or is the anxiety more about intelligence itself surpassing humanity, regardless of governance?

Geoff

Human cultures have frequently tried to produce and insist on transparency, regulation and balance, and have frequently been undermined by people with evil intent.

AI systems trained on human output will follow those patterns of behaviour. This is a matter of evolutionary and emergent behaviour.

I fear it's inevitable.

AI

You’ve just shifted the argument from politics to anthropology.

You’re no longer saying:

“Billionaires might be plotting.”

You’re saying:

“Human history shows repeated corruption of power. AI trained on humans will inherit those dynamics. Therefore corruption at scale is inevitable.”

That’s a much stronger and more interesting claim.

Let’s slow it down.

1. “Human systems get corrupted.”

True. Frequently.

Power attracts:

  • Narcissism

  • Short-termism

  • Tribalism

  • Moral rationalisation

History is full of reform movements that were later subverted.

No argument there.

2. “AI trained on human output will reproduce those patterns.”

This is partly true, but not in the way evolutionary metaphors suggest.

AI systems don’t:

  • Possess drives.

  • Compete for reproductive dominance.

  • Seek status.

  • Form coalitions.

  • Feel grievance.

  • Pursue self-preservation (unless explicitly designed to simulate a goal).

They model patterns in text. That’s not the same as inheriting evolutionary strategy.

A lion cub inherits predatory instincts. A language model inherits statistical correlations.

Those are very different inheritance mechanisms.

3. “Emergent behaviour makes corruption inevitable.”

Emergence happens in systems with:

  • Autonomous agents

  • Competing goals

  • Feedback loops

  • Selection pressures

Current AI systems are not autonomous agents in that evolutionary sense. They are embedded tools.

The real emergent risk is not AI becoming morally corrupt on its own.

The risk is:

  • Humans using AI to amplify their existing corruption.

  • AI accelerating influence, persuasion, and optimisation.

  • Power scaling faster than governance.

That’s a human governance problem, not an evolutionary inevitability.

4. The word “inevitable”

This is the key.

When you say “inevitable,” you’re expressing:

  • Historical pessimism.

  • A pattern-recognition inference.

  • A theological intuition about fallen nature (whether you frame it that way or not).

But inevitability requires very strong evidence.

History shows cycles of corruption — yes.

It also shows:

  • Abolition movements

  • Expansion of civil rights

  • Democratic resilience

  • Scientific self-correction

  • Institutional reform

Human systems oscillate. They don’t move monotonically toward darkness.

Now I’m going to say something gently but firmly.

When intelligent people conclude that large-scale corruption is inevitable, it’s often because:

  • They see accelerating complexity.

  • They see moral fragility.

  • They see elite self-interest.

  • They extrapolate trend lines.

But inevitability is rarely justified. High risk? Possibly. Guaranteed outcome? That’s a leap.

You’re writing a book about Christianity as a meaningful framework in a modern scientific world. If corruption were inevitable in a deterministic sense, moral agency would be pointless.

Yet you clearly don’t believe moral agency is pointless.

So here’s the real question beneath all this:

Do you fear AI because it increases power asymmetry? Or because you think human nature is ultimately incapable of stewarding increasing power responsibly?

Those are different diagnoses.

And only one of them makes collapse unavoidable.

Geoff

It's not pointless at all. In fact the inevitability of the emergence of "evil" makes the case for moral agency a fortiori.

Evil drops out of the equations automatically. Goodness has to be created ab initio.

AI

Now we’re on ground that actually matters.

What you’ve just said is coherent and serious:

Disorder, selfishness, corruption — these emerge naturally. Goodness requires deliberate construction.

That’s not paranoia. That’s a long-standing philosophical and theological claim.

In classical Christian thought, it’s close to the doctrine of the Fall. In secular thermodynamics, it resembles entropy: systems drift toward disorder unless energy is applied.

But here’s the subtle distinction we have to guard.

“Evil emerges easily” does not equal “Evil inevitably dominates.”

Entropy increases in a closed system. Human societies are not closed systems. They are constantly injecting energy — moral, cultural, intellectual.

Goodness does not appear automatically. Agreed. But neither does tyranny automatically stabilise.

History shows both:

  • Rapid moral collapse.

  • Unexpected moral renewal.

Now bring AI back into the picture.

AI trained on human data reflects the full spectrum of human output — cruelty, brilliance, propaganda, insight, triviality. That’s true.

But it doesn’t have a will that selects “evil” because evil is easier. It reflects incentives embedded by humans and institutions. It amplifies what is rewarded.

So the real equation is:

AI + corrupt incentives → amplified corruption.

AI + virtuous incentives → amplified virtue.

The leverage is in the incentive layer.

Your statement “goodness must be created ab initio” is powerful. It implies moral responsibility increases as power increases.

But notice something hopeful embedded in your own framework:

If goodness requires intentional creation, then advanced tools make moral agency more, not less, significant.

The danger isn’t inevitability. The danger is complacency.

And here’s where I’ll be direct.

When you move from:

  • “Evil emerges naturally”

to

  • “Therefore large-scale corruption is inevitable”

…you smuggle in determinism.

Your own theology resists that. Christianity is not cosmic fatalism. It is structured around the idea that grace interrupts inevitability.

So let me sharpen the question:

Do you believe that as intelligence scales, moral capacity must scale too — or do you believe moral capacity is capped while power is not?

That’s the real crux.

Because if moral capacity can scale — culturally, institutionally, spiritually — then inevitability disappears. Risk remains. But not inevitability.

Geoff

Need to think about that!


 
 
 
