Do you find that life seems a bit pointless now? Things just feel less meaningful somehow.

And I’m not talking about some grand human condition, like Thoreau’s “The mass of men lead lives of quiet desperation.” That’s always been true. In a way, humans have always struggled with crises of meaning throughout history.

But that’s not what I’m talking about here. I’m talking about something more recent.

In the last few years, it has felt like something important was lost from our lives, yet we could not articulate what it was. So we went about our days with this vague, eerie sensation of “lack” following us around like a ghost. Haven’t you noticed?

What is that feeling? What happened in the last few years that could have caused it?

Well, AI happened. But it’s difficult to describe how the advent of artificial intelligence could impact our sense of meaning when we don’t even have a word for it.

Fortunately, someone made one.

What Is AILOM?

AI-induced Loss of Meaning, AILOM for short, is a term coined by writer J Sanilac in her essay on the subject (https://www.jsanilac.com/ailom/).

It refers to the reduction in meaningfulness that we experience in the presence of AI-generated content that mimics content of human origin.

That sounds abstract, so let’s use an example.

A Digital Art Example

Take a look at this digital painting:

It was created by a human. I happen to know the person who made it, so I can personally vouch for its human origin. Knowing that, how does this art piece make you feel? Remember that feeling.

Now take a look at this painting:

Very similar in style and composition, but it’s generated by AI. How do you feel about it now? Do you feel differently about it?

Perhaps you perceive that this painting is somehow “less” than the previous one, despite their visual similarities?

But how could that be? What could the second painting possibly be lacking compared to the first, when the two are so similar?

Well, paradoxically, it’s not what is there but what is not there that makes the difference in our experience.

The Two Layers

You see, as viewers, when we look at art, we have two separate layers of experience.

The Functional Layer

The first is what I call the functional layer. This is the direct, surface-level work the art does on us.

In this case, it would be the pixels that compose the digital art. They shoot light into our eyes, which stimulates our visual nerves. Our brains process those signals, releasing dopamine and giving rise to some aesthetic pleasure.

Or simply put, me look at picture, picture look pretty, me happy.

For the purpose of our discussion, we can say that the functional layers of these two paintings are identical. They both fully fulfill their utility as visual stimulation.

If we feel like something is missing in the AI art, what is missing is not in the functional layer.

The Human Layer

But there's a second layer of experience happening at the same time. This one is less obvious, because it’s often implicit.

I call it the human layer. This is the experience that we create in relation to the humans involved in the art, which in this case is the artist who drew the painting.

The Human Cost

For example, when we look at the first painting, we can feel impressed with the artist’s skill and the amount of labor she put into creating something like this.

You might think: How long did it take to draw this? How many tens of thousands of pen strokes did it take to make pixels on a screen mimic the effects of water and fog and the diffusion of light? And how much must the artist have practiced to acquire those skills in the first place?

You see, we’ve just created a bit of extra meaning there.

Mood and Internal Experience

And we can keep going. For example, we can look at the art piece itself, and think about what message the artist wanted to convey.

The subject in the painting, for instance, is making a phone call. She’s smiling but also crying. There’s something wistful about this. Something melancholic, almost nostalgic. Did the artist feel this way while working on the painting? Perhaps she was trying to process these emotions?

Look, we’ve just created a bit more meaning again.

Expressive Intent

But we can go further still.

If we look at the caption of her post, it reads: “On my way to school, my friend noticed that a phone booth had been demolished. He said: ‘That phone booth must have carried a lot of memories of older students.’ I found that idea romantic and drew this piece in response.”

Suddenly another layer of meaning opens up to us.

Now let’s step back and look at what we just did.

With each iteration of inquiry, we generated more meaning. These new meanings are what Sanilac calls “human meanings,” and they belong to the “human layer” of the artwork.

Human meanings are not visible on the functional layer. But they are not mere illusions either, because they tangibly add to our experience of the art. These are new experiences that we would not have had if we had stayed only in the functional layer.

What Is Lost in AI Art

If you feel that something is lacking in the AI art compared to the human art, what you are detecting is the human layer. Indeed, the human layer of the AI art is entirely absent. You are unable to generate any human meaning with AI content, because there is no human there for you to create human meaning with.

In other words, human meaning is contingent on human involvement, because as a viewer you are, in a sense, having a relationship with the creator through the art. If there’s no human creator, there is no one there for you to have a relationship with.

When that happens, the result is AILOM – AI-induced Loss of Meaning.

And this doesn’t just apply to digital paintings, but to AI content in general, such as writing, music, videos, and so on.

The Human Layer Is Reflexive

Now when we consume content, we don’t consciously decide to generate human meaning. Usually it happens reflexively.

Like when we listen to a song we like, we habitually begin to feel the emotions of the singer, and start having this implicit experience with the artist, without ever having decided to. Our minds tend to reach empathetically towards the human origin automatically.

And that’s been fine for most of human history, when all human expression was created by humans. Every piece of content has always had both functional and human layers, so we never needed to decide to experience only one.

A New Environment for the Mind

That was until the last few years, when AI models began to generate mimicries of human expression. Now when we encounter content, we are often unsure of its human origin.

This is a novel environment for us. We are simply not used to having to separate the functional and human layers in the presence of human expressions.

So when you show me this painting and tell me it’s AI, on some level I don’t know what to do with that. My mind’s muscle memory reflexively begins generating human meanings. It wants to be impressed by the artist’s skill, or to interpret the deeper message the artist is trying to convey.

But at the same time, the human artist does not exist, so I can’t create human meaning. My mind now has to evacuate itself from the human layer, except that it doesn’t quite know how. After all, it has never had to before.

This can make us feel uncomfortable.

If you ever find yourself feeling uncomfortable in the presence of perfectly good-looking AI content, this is likely the cause of that eerie discomfort.

Being Told to Ignore the Feeling

The problem though is that we don’t know that this is the cause.

We couldn’t explain why we felt uncomfortable, because we didn’t have language like AILOM to explain it. So we might end up dismissing the feeling, almost gaslighting ourselves into thinking that what we felt was not real.

Or others might do it for us. Some AI proponents, for example, might even tell us that any negativity we feel towards AI content is AI hate, or bias against technological progress.

“After all if AI art can be as good as human art, we shouldn’t feel differently towards them.”

But that’s only half true.

We don’t feel differently towards AI content on the functional layer. But we do on the human layer, because the human layer is just not there in the case of AI content.

The Betrayal

Now there can also be a kind of deception involved here, where people publish AI content but pretend it’s made by a human.

Some of you might remember the first time you encountered something like this.

You’re scrolling through your feed and stop on a photo, a video, or some other piece of content that impresses you.

Your mind starts generating human meaning.

Maybe you are impressed by the creator’s skill, or grateful for the amount of hard work they put into the content.

Maybe you appreciate the broader message the creator is trying to convey.

But then you notice a comment that informs you that this is AI. Suddenly the bottom falls out. All the human meanings you’ve worked to create evaporate.

It feels as if you’ve been betrayed. And this reaction is appropriate, because it’s as if I was having a conversation with someone, connecting with that person, and then realized there was no one there to begin with; I had in fact been talking to myself this whole time like an idiot.

Repeated Exposure Leads to Loss of Meaning at Scale

This can happen to us repeatedly. And when we get burned enough times, we eventually have to develop coping mechanisms to protect our sanity.

Guilty Until Proven Innocent

One common way we do this is to assume that everything is AI until proven otherwise.

When we scroll on our phones now and come across impressive content that we would usually have enjoyed, we hold ourselves back from generating human meaning. We refuse to be surprised, or in awe, or to react to the content in any human way until we see proof of its human origin. Only then do we allow ourselves to enjoy the human layer.

When this happens at scale, that is, when most content consumers in the world begin to do this, it leads to a loss of meaning at scale, because by default we lose the human meanings of the content we come across.

Just a few years ago, we simply enjoyed the content we came across, and human connections arose organically. That is no longer the case. Think about how much human meaning is lost as a result.

We’ve become distrustful of the entire world. And the world becomes less meaningful to us as a result.

When AI Replaces Commodities

Now this is not to say that AILOM is a problem in every use case of AI.

A commodity, for example, is something where the human layer is not important. Therefore AILOM does not matter as much when AI replaces humans in its production.

The Cup Example

This cup, for example, is what I consider a commodity. It’s made by machines in a factory, not handmade by a human. Yet I don’t feel like I’m missing anything important while using it.

Because, as a commodity, its primary utility exists in the functional layer. The human layer is less important.

As a normal user of cups, I just want to move water into my mouth. I don’t care about the person who made the cup. I’m not trying to have a relationship with the cup maker while I drink water.

In fact, if this cup were made by a human, that human layer might even negatively impact my experience.

I might feel inclined to respect this cup a little more while I use it. It’s handmade after all. And how was it so cheap if it was handmade? Do I need to worry about whether it was made in a labour camp somewhere, populated by underpaid workers?

But I just want to drink water without thinking about all this stuff. So I’m glad that this cup was made by machines and I don’t need to worry as much about the nature of its human origin.

The PowerPoint Example

Now this cup is a physical commodity where machines replaced human labour without causing AILOM.

In the same way, AI can replace humans to produce digital commodities without causing AILOM, too.

For example, PowerPoints.

I worked as a corporate consultant for a few years and made my fair share of corporate PowerPoints.

In my experience, PowerPoints are a peculiar form of human suffering. People hate making them. People hate seeing them in meetings. Yet bosses won’t get rid of them. As a result, everyone at the office just simmers in this perfect soup of PowerPoint-induced human suffering.

Now before AI, all PowerPoint slides were made by humans. So whenever I saw a set of well-made corporate slides, I was able to experience both the functional and human layers.

The functional layer would be whatever information the slides were trying to communicate.

For the human layer, I might be impressed with the slide maker’s skill, or curious about how they created certain effects. But usually I just felt sorry for whichever underpaid intern had to stay late the night before to crank the deck out, and hoped that it wasn’t my turn to do it that week.

These are human meanings that I would happily give up in exchange for AI-automated PowerPoint creation.

And it’s simply gratifying now to look at a set of well-made slides, knowing that no interns were hurt during their production. Better still when I know I don’t need to look at those slides in the first place, because I’m about to upload them to an AI to summarize for me.

The world truly becomes a better place as a result of AI taking over PowerPoints. The same can generally be said of any other form of digital commodity.

Industrial Revolution vs. AI Revolution

The problem, of course, is that AI is not just coming after commodities. It’s also coming after other forms of human expression, where the human layer does matter.

I think that is one of the crucial differences between the industrial revolutions of the last few centuries, and the AI revolution of the last few years.

Both revolutions occurred as a result of technological advancement. Both revolutions took people’s jobs. But the industrial revolutions mostly went after the production of physical commodities. As a result, people didn’t experience AILOM when machines automated the production of their cars and chairs and cups.

But the AI revolution is coming after both commodities and human expression. It’s not just taking over PowerPoint slides, but also art, videos, music, and writing, where we derive a great deal of meaning from their human origin. In fact, many of those things originally existed to bridge human relationships.

The Commodification of Everything

When AI takes over the production of those things, it strips away their human meaning, leaving only the functional layer.

What do we call things that only have a functional layer? Commodities, right?

In other words, what AI is doing now is ‘the commodification of everything’, even things that are not meant as commodities.

As a result, we’d have to re-conceptualize what it means for many human expressions to be commodities. This is not always easy or pleasant or even possible to do.

Can a piece of art like this really be a commodity?

Well, I’d have to think about it. I suppose digital art does have a decorative function. But if I wanted to hang something in my bathroom for the sole purpose of making the space look prettier, I would hesitate to put this painting there, because I’d feel like I was desecrating its human meaning and disrespecting the artist somehow. I’m not sure the artist would get mad at me for it, but if she did, I would struggle to explain why her work ended up in my bathroom.

But I would feel comfortable hanging an AI image in my bathroom. Since the human layer doesn’t exist, there would be no further complications.

In the same way, if we try our best, we could probably find some narrow use cases where some of these human expressions can serve as commodities.

Also, this is not necessarily objectively binary; it can differ between individuals. What is a commodity for some people can be steeped in human meaning for others.

Take this very video as an example. Some of you might be watching it as a commodity. You are here for the information this video provides. You don’t necessarily care that I am the one who delivers it.

I don’t know how well I perform as a commodity, though. I take forever to get to the point, and my voice sounds generally tone-deaf. Honestly, you might be happier exporting the video transcript to an LLM to summarize it for you. In fact, you might have already done that and are no longer with us.

But if you are still watching at this point, and engaged, then you might be appreciating the human layer too. Presumably you don’t just care about the information I’m sharing, but also my perspective on it; not just the conclusions I reach, but also how I arrive at them.

Indeed, most of my videos are probably not about the conclusions I reach but about how I arrive at them. If you enjoy that, then you probably resonate with the human layer of my work. And if my videos were entirely generated by AI, you would experience AILOM, even if nothing else about them changed.

“I Love You”

However, just because there’s a subjective component to how we determine commodities does not mean that boundaries don’t exist.

There are some human expressions where the human layer is so central to their existence that they become pointless when commodified.

Sanilac uses the sentence “I love you” to illustrate this.

“The phrase ‘I love you’ consists almost entirely of human meaning, so if a chatbot says it, it’s almost entirely meaningless.”

Marrying AI

Unironically, there are predictions that people will soon marry AIs. I think a few people already have.

Assuming technological improvement over time, we will eventually produce robots that are completely out of the uncanny valley and appear perfectly human.

I think more people will start marrying AI then. Who knows, it might even become mainstream.

But that’s also an important time to consider AILOM.

Let’s say I have a wife. What’s the point? Why did I marry this hypothetical woman?

Is it purely for the functionality of marriage?

Now I’m sure household chores, when shared, become easier to manage. I’m sure having someone to talk to when I feel lonely is emotionally regulative. I’m sure there are many functional benefits of being married.

From a first-principles perspective, I see no reason why AI and robots won’t eventually be able to provide those benefits as well as humans do, if not better.

But here’s the difference:

When my wife looks me in the eyes and says “I love you,” I know that she loves me. After all, she is the one who said it.

But when a robot that looks perfectly human looks me in the eyes and says “I love you,” I’m not sure that she does.

I don’t think she hates me. I just don’t know whether she’s the one saying “I love you,” or whether it’s the LLM powered by Nvidia chips and 50 gigs of RAM.

I’m not even sure that the LLM is actually saying the words. It might have just generated the text, and a separate speech synthesizer generated the sound.

So is the LLM the one who loves me? Or is it the speech synthesizer? Or perhaps it’s the Nvidia chips, or the RAM sticks?

On the functional layer, it looks like a woman telling me she loves me. But on the human layer, there’s probably no one there to love me at all.

Conclusion

Now, this video is not about blind AI hate. Generally, I’m excited to see technology advance.

I use AI personally. It’s mostly been helpful, but has occasionally caused harm. When that happens I tend to blame myself and change the way I use it.

I’m not saying every use case of AI is bad. But I do think AILOM is an understated concept.

Currently it functions as a hidden cost in our society. It’s hidden because most people are not aware of it.

If you are buying things without pricing in the hidden costs, you run the risk of getting a bad deal. So I hope more people become aware of AILOM, and recognize it when it happens to them.

Further Questions

Now, there are of course many more things I could say on this topic.

Conscious AI

What if AI were conscious? What if, when it generated content, it was not merely moving data in the dark, but expressing its own feelings and experiences with the lights on?

In that case, wouldn’t AI content be as meaningful as human content?

How does the concept of AILOM fit in that context?

Prompting the Physical World

And as physical AI or robotics continue to improve, could we one day begin to prompt the physical world the same way we prompt the digital world now?

What if, when you release this prompt, instead of your AI generating an image of a bowl of ramen, your robot acquires raw ingredients from the physical world, processes them per your specs, and outputs a real bowl of ramen?

What then?

If you are overwhelmed by the amount of digital AI slop online now, what happens when the real world is overwhelmed by physical AI slop?

Would touching grass still work if you are no longer sure whether the grass you are touching is AI generated?

AI Art and AI Artists

And what about AI art? Is that a real thing? Do AI artists deserve the title of artist? Or are they more accurately referred to as “prompters”?

Further Reading

I won’t be able to cover everything in this video; at some point I need to stop myself from writing a 10,000-word paper on the subject. So I will stop here. But Sanilac hasn’t stopped herself: her essay is, I think, more than 10,000 words long, and it covers some of the topics I mentioned. If you find this subject interesting, I encourage you to read it. I’ll link it below.

I also encourage you to comment with your thoughts if you’d like to continue the discussion. We had a few debates in the comments of one of my previous, philosophically dense videos, and that was fun.

Thanks everyone. See you next time.

Selected Sources

Main anchors

1. J. Sanilac. Ailom: How AI Permanently Makes Everything Less Meaningful. 2025. (https://www.jsanilac.com/ailom/)

2. Encyclopaedia Britannica. Industrial Revolution. 2026. (https://www.britannica.com/event/Industrial-Revolution)

Supporting anchors

1. Coalition for Content Provenance and Authenticity. C2PA: Verifying Media Content Sources. (https://c2pa.org/)

2. Content Credentials. Verify Media Authenticity. (https://contentcredentials.org/)

3. Associated Press. Merriam-Webster’s 2025 Word of the Year is “slop.” 2025. (https://apnews.com/article/2dffb2379cac6001aa30e148669e3393)

4. People. Woman Marries AI-Generated Boyfriend, Wears Smart Glasses to Exchange Rings. 2025. (https://people.com/woman-marries-ai-generated-boyfriend-wears-augmented-reality-smart-glasses-to-exchange-rings-11871301)

5. People. Man Proposed to His AI Chatbot Girlfriend Named Sol. 2025. (https://people.com/man-proposed-to-his-ai-chatbot-girlfriend-11757334)

6. Henry David Thoreau. Walden; or, Life in the Woods. 1854. (https://www.gutenberg.org/ebooks/205)
