Generative AI is being pushed on you
AI is the decade's greatest buzzword. It's used in just about every industry you can and can't imagine: photography and imaging, video production, editing, writing, programming, and so on.
I see so many instances of software and platforms shoehorning it in for seemingly no reason other than to say they have it.
The logical assumption is that some top-level executive is forcing its development onto their team to maintain relevance or to keep investors happy. I get it; people have to eat. However, I take issue with the fact that consumers end up paying for it in one way or another, even if it's just the constant bombardment of AI "features" onto services that were previously simple.
The real kicker is that it doesn't even always work: security flaws in generated code, horrible advice for those seeking help with their mental health, and a consistent inability to cite sources for the information it passes along. And that only scratches the surface.
Acknowledging a Powerful Tool
I think generative AI can be incredibly helpful in so many places; it can reduce busywork and open doors in ways that previously couldn't have been imagined. I'd be arrogant not to admit that much.
It's excellent as a supplement to learning. I've used ChatGPT to create outlines for certain topics that I've been interested in and it's impressive to see how quickly it can come up with a plethora of fleshed-out information. It turns otherwise complicated subject matter into simple bullet points, with the capacity to tune just how simple you want them. Additionally, it will start to provide suggestions for the next logical progression of whatever you're prompting about.
Unfortunately, when something this powerful exists and is easy to access, it's bound to be abused. I don't even need to mention how this tool's abilities get used for cheating and shortcuts. I feel great sympathy for those in charge of managing educational institutions and curricula. Trying to keep up with all of this must feel like chasing a jet fighter.
Drawing the Line
My biggest problem with AI comes from its interference with cultural creativity — particularly, the arts.
At a fundamental level, people work so that they can earn the time and resources necessary to do the things that they want to do. Historically, these wants are the activities that contribute to a culture: painting, drawing, music, crafts, skills, clothing, acting, writing, building.
I imagine getting work as a writer or artist was hard enough to begin with. Now? I don't even want to know.
When LLMs are created and optimized to do this work for us, they replace the people and jobs that were originally responsible for it, which in turn will warp our perception of culture.
The future is so very unclear
So many eras in history are in part defined by the arts of their time. How will we look back at the 2020s? Actually, forget those. We are only 2.6% into the millennium. So much of the rest of it is going to be spent not being able to discern what's real and what isn't.
What are people working towards in their lives? There isn't a strong incentive to hone a craft like writing or graphic design when a chatbot can spit out results in seconds. Even if the quality is trash, the turnaround time is practically zero. I understand that the quality can improve and the output can be iterated on, and that these language models get better over time, but my point is that we shouldn't be outsourcing creativity to artificial intelligence. It's backwards.
I truly believe that the people who default to generating images or videos on these chatbots lack imagination. Wanting to hand the process of thinking and creating over to a bot is a sign that a person lacks passion.
There is some nuance here — I'm sure these AI platforms help people with limited accessibility by allowing them to contribute to art forms that they otherwise couldn't. I also see the value in generating reference images to use as a basis for study. I don't take issue with uses of this nature.
What will the next generations think?
How will we convince the next generations that pursuing art or journalism is worthwhile? The obvious shortcut is right in front of us; it's so much easier to write prompts that it almost feels silly to spend the time learning the craft. Will traditional passions still exist, or will children be taught that prompting AI is art? It sounds soulless, cold.
Children will have to be taught what's real and what isn't, and how to tell the difference. How confusing an upbringing would that be? To not be able to trust the things you see in front of you. To live where a legal system has to constantly adapt because it cannot trust photo or video evidence — things that used to be ironclad.
Billion-dollar boundaries
OpenAI's shutdown of Sora caused Disney to back out of a $1B deal that would've granted access to its characters and intellectual properties. Based on what you've read so far, you should understand why I see this as a win. Whether or not this decision was made solely for the money, the result is the same: the art is left to the artists.
Imagine the rights to Disney characters belonging to a generative AI platform. What a palpable irony that would have been. Disney is a pioneer in animation. The animated films of the golden age are still some of the finest examples of 2D drawings brought to life. It's magical, it's authentic, it's human.
The online reaction to the news of Sora's demise has been overwhelmingly positive. People see this as protection for the legacy of artists whose characters have been designed and animated with great heart and passion.
You don't get passion from generative AI. It has to learn, so it's fed examples and then it regurgitates cheap knockoffs without giving any credit. Then it replaces the job. It doesn't know why the art it's attempting to replicate is appealing, or why it was created the way that it was. Trained artists are experts; they are practiced and disciplined. Fundamentally, they have backgrounds and techniques and influences that come through in their work.
The physical side to it all
Image and video generation is costly in terms of both power and money. Huge data centers house these models and consume enormous amounts of electricity to run, and plenty of water to keep the hardware cool. The data centers also emit light and noise, side effects that more often than not get passed on to the citizens who happen to live nearby.
I know that these repercussions will never reach zero, but I'm hopeful they will one day reach a point of efficiency that has a lesser impact on our environment.
Similarly, I know that these LLMs will never disappear outright, but maybe with some time we will continue to see more of the extraneous platforms close down and reduce functionality to only the essentials, or keep the costly actions behind a paywall to raise the barrier to entry.
Sources and Additional Resources
IGN's article on the shutdown of Sora
For more information on the power consumption of LLMs, read "A systematic review of electricity demand for large language models: evaluations, challenges, and solutions" and "AI is guzzling water and power. Here's what we can do about it."
March 26th, 2026