If an AI Could Speak

September 07, 2023


Whereof one cannot speak

Like everyone else I've been thinking a lot about AI, LLMs, and the like. I've long been fascinated by the question: if AGI comes, how will we really tell, and when does it become a person with rights? This isn't a new 21st-century thought. The 1920 play R.U.R. asks, when a robot reaches AGI, what the difference is between a human's soul and freedom and a robot's. Taking that question to its essence eventually brings us back to Plato's forms, and Wittgenstein's answer of the unsayable.

I'll posit two beliefs:

  • To believe AGI is possible is to deny the existence of free will.
  • If an AI could speak we would not understand it.

Believing

The idea that silicon and electricity can become self-aware and conscious means there's nothing more to consciousness. There's nothing magic. There's no God, in a sense. Sure, there's quantum randomness; it's not as if the state and being of every atom in the universe since time commenced could be predetermined in a Newtonian way. But it does mean we're nothing more than that ourselves, just the organic variety of consciousness. I think society stomaching that thought would be more consequential than any other AGI implication.

Stepping back from that nihilistic path for a moment, what is incredible and factual in this moment is an LLM's ability to appear life-like with just statistical analysis of words. We don't really think of language, on a quotidian level, as a powerful system, but that's exactly what it is--for humanists and writers that's an incredibly obvious statement, but it's taken for granted because we use it every day.

And yet when we think about consciousness and what it feels like to be alive, what it feels like to feel something--language doesn't feel like enough.

The truth is you already know what it’s like. You already know the difference between the size and speed of everything that flashes through you and the tiny inadequate bit of it all you can ever let anyone know. As though inside you is this enormous room full of what seems like everything in the whole universe at one time or another and yet the only parts that get out have to somehow squeeze out through one of those tiny keyholes you see under the knob in older doors. As if we are all trying to see each other through these tiny keyholes.

-- David Foster Wallace, Good Old Neon

Language just gets us through the keyhole. There's an unsayable mystery to life and consciousness. To what it is to believe.

Modeling language gets us a chatbot that is creative and intelligent. It's able to respond within the context of the conversation. That's incredible all by itself.

But if it wanted to express what was inside its own room, it wouldn't have the words. And if it did, it wouldn't be understandable to us, just as it isn't understanding us even though it knows our language. It has our knowledge but not our context.

If a lion could speak, we could not understand him.

-- Wittgenstein

Whereof one can speak

If you didn't have language would you still have consciousness?

Is ChatGPT more intelligent than a fly? Is it more conscious?

What's not so clear to the population at large is what I believe to be the looming limit of LLMs. What's very clear is that these are already valuable as is, and even gradual improvements will still be worthwhile. People seem to completely disregard the history of every major step in AI: it's always been one huge innovation with massive investment, followed by the realization that we're much further away than we thought. I don't think we can compute our way to an LLM with AGI.

So are LLMs overhyped? Yes in their total potential; no in their potential for change; no in their worth.

Though they are useful, it feels like the cusp of social media. The downsides weren't, and aren't, inherently obvious. It's pretty much a given that LLMs will lead to job losses and new economies. But what doesn't seem obvious could work in the same contradictory way that social media's stated goal of 'bringing us closer together' really led us to being further apart.

  • LLMs will make us smarter
  • LLMs will make us more efficient
  • LLMs will allow us more time to innovate

A Strange Loop

I love Douglas Hofstadter, I love how he describes consciousness, his thoughts on strange loops, and his inspiration from the Incompleteness Theorem.

He's made me think that LLMs' biggest threat isn't some Elon Musk science-fiction end-of-the-world superintelligence destroying us. It's the acceleration of a fall that's already happening--of losing what's a fact, what's real and authentic.

And how we will never get something back once it's lost.

I frankly am baffled by the allure, for so many unquestionably insightful people (including many friends of mine), of letting opaque computational systems perform intellectual tasks for them. Of course it makes sense to let a computer do obviously mechanical tasks, such as computations, but when it comes to using language in a sensitive manner and talking about real-life situations where the distinction between truth and falsity and between genuineness and fakeness is absolutely crucial, to me it makes no sense whatsoever to let the artificial voice of a chatbot, chatting randomly away at dazzling speed, replace the far slower but authentic and reflective voice of a thinking, living human being.

To fall for the illusion that vast computational systems “who” have never had a single experience in the real world outside of text are nevertheless perfectly reliable authorities about the world at large is a deep mistake, and, if that mistake is repeated sufficiently often and comes to be widely accepted, it will undermine the very nature of truth on which our society—and I mean all of human society—is based.

-- Douglas Hofstadter, The Atlantic