I’ve been working on LLMs for a while now, and alongside that I’ve been collaborating with various high-level think tanks to try to understand the philosophical, moral, and ethical implications of LLMs.

Often it feels like there’s a real disconnect between the technical community and the ethics community. Explaining the inner workings of LLMs to a non-technical person is really hard, and I’ve been trying to find ways to make it more accessible.

I recently gave a talk to a panel of 10 high-level ethicists with backgrounds in philosophy, bioethics and theology. Here are some key points from our discussion.

AI is not “intelligent”

Using the word “intelligence” to describe AI is a mistake. In the English term, “intelligence” is closer to the sense of information gathering and processing (as in an intelligence agency) than to intelligence in the common sense of the word. The French translation “intelligence artificielle” keeps only the cognitive sense, which creates a lot of confusion. “Artificial information” would be less misleading.

Science fiction

A lot of science fiction movies depict AI as a supernatural force generated by nerd goblins in a dark basement. This is not the case. AI is just a bunch of math and statistics. It is a convenient device for lazy screenwriters who want a non-controversial villain.

Bad journalism

Most journalists have a shallow understanding of AI and often write inaccurate articles. They tend to reuse marketing material from the companies selling AI, which is often misleading and overstates what AI can actually do. This is a real problem because it creates a lot of fear and misunderstanding among the general population.

LLMs are just predicting the next word

Text is a sequential structure in which word positions carry meaning, just like music. LLMs are sequential models that try to predict the next word based on the previous ones. The model computes a probability distribution over possible next tokens and then selects one, either greedily or by sampling; it is a statistical process, nowhere close to human intelligence.
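
To make this concrete, here is a minimal sketch of next-token prediction in Python, using a toy bigram model instead of a real transformer. The corpus and the generation loop are purely illustrative, not how a production LLM is built:

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; a real LLM fits its distribution on billions of tokens.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each token follows each preceding token (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Generate a continuation one token at a time, sampling each next token
# from the conditional distribution learned above.
sequence = ["the"]
for _ in range(8):
    counts = follows[sequence[-1]]
    if not counts:  # no observed continuation in this tiny corpus
        break
    tokens, weights = zip(*counts.items())
    sequence.append(random.choices(tokens, weights=weights)[0])

print(" ".join(sequence))
```

Run it a few times: because the next token is sampled, the output changes between runs, which is exactly why the same prompt can give different answers.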

What is the status of machine-generated speech?

On a philosophical level, next-token prediction cannot really be considered on the same level as human thought, even if it is remarkably good at imitating it.

How large is the English cultural bias in LLMs?

Most of the training data comes from the internet, and the internet is mostly in English. This means that LLMs are really good at English, and it pushes English cultural bias into generated text that is then presented as neutral.
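
One measurable trace of this bias is tokenizer efficiency. Because the vocabulary is fitted mostly to English text, other languages tend to need more tokens for the same content, which means less effective context and higher cost. A small sketch, assuming the tiktoken package and its cl100k_base encoding; the sample sentences are mine:

```python
# One measurable trace of English bias: tokenizer efficiency.
# Assumes `tiktoken` is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "The cat is sleeping on the sofa.",
    "French": "Le chat dort sur le canapé.",
    "Finnish": "Kissa nukkuu sohvalla.",
}

# Languages the vocabulary was not fitted to tend to need more tokens
# per word than English does.
for language, sentence in samples.items():
    n_tokens = len(enc.encode(sentence))
    print(f"{language}: {len(sentence.split())} words -> {n_tokens} tokens")
```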

Could an LLM be used to generate a new tailor-made religion?

In politics, we have already seen advanced individual targeting using data from social media. Religions take a “carpet bombing” approach, trying to convince as many people as possible with one universal message. LLMs could instead generate a new religion for each individual, based on their own beliefs and biases. For example, if you believe in karma and wish for a cat paradise, an LLM could quickly generate a custom religion that fits perfectly what you want to hear and exploits your religious feelings.
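
Mechanically, there is nothing exotic about this: it is just prompt templating, filling a template with per-user data before sending it to a model. A minimal sketch; call_llm is a hypothetical stand-in for any chat-completion API, and the profile fields are invented for illustration:

```python
# Fill a prompt template with per-user data, then send it to a model.
# `call_llm` is a hypothetical stand-in for any chat-completion API.

def build_prompt(beliefs: list[str], wishes: list[str]) -> str:
    return (
        "Write a short, persuasive belief system that affirms these "
        f"existing beliefs: {', '.join(beliefs)}. "
        f"It should promise: {', '.join(wishes)}."
    )

user_profile = {
    "beliefs": ["karma"],          # e.g. inferred from social media data
    "wishes": ["a cat paradise"],
}

prompt = build_prompt(user_profile["beliefs"], user_profile["wishes"])
# tailor_made_religion = call_llm(prompt)  # hypothetical API call
print(prompt)
```

The scary part is not the code, which is trivial, but the scale: the same loop can run once per person.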