I’m a bit ahead of schedule this afternoon—and since it’s Read-Only Friday, there are a few things I can’t work on right now. So, I took the opportunity to explore some of the ongoing conversations around artificial intelligence.

What struck me is how much resistance there still is to AI. But honestly, this kind of pushback feels familiar. Over the years in IT, I’ve seen similar reactions to major shifts—whether it was the move from mainframes to client-server, the rise of virtualization, the adoption of cloud computing, or the introduction of DevOps. Each time, there was skepticism, fear of job loss, and concern about losing control. And yet, each of those innovations eventually became integral to how we operate today.

AI seems to be following a similar path—powerful, disruptive, and yes, a little intimidating. But it also holds immense potential. That said, the concerns people raise—about bias, ethics, job displacement, and transparency—are real and deserve thoughtful discussion, not dismissal.

So here’s a question I’ve been thinking about: Are you for or against AI? And more importantly—why?

Is your stance based on personal experience, ethical concerns, industry trends, or something else entirely? I’d love to hear different perspectives. Instead of resisting change, maybe the better approach is to engage with it—shape it, question it, and prepare ourselves for what’s coming.

After all, the future isn’t waiting for us to be ready. It’s already here.

Adding a poll for which AI tools you use. Please select all that apply.

  • ChatGPT
  • Suno AI
  • Midjourney
  • Runway
  • Adobe Firefly
  • Perplexity AI
  • Claude 3
  • Pika Labs
  • Copilot
  • Notion AI
  • Meta AI
  • Grok
  • Gemini
0 voters
17 Spice ups

To the extent that I deliberately, intentionally use AI*:

  1. Gemini - I have started looking at the AI overview at the top of the results page. Sometimes it looks good, sometimes it looks useless. I used a PowerShell fragment copy-pasted from the overview today; I knew roughly what I needed, but was not sure of the exact syntax.
  2. ChatGPT - I query this for fun or experimentation. Experimentation in the sense that I want to test and examine the answers rather than use them.

* I included this qualifier because I believe most of us are being impacted by AI, or making decisions based on information that is, if not AI-created, then AI-influenced.

Edited because my asterisk was de-asterisked :smiley:

10 Spice ups

There should always be healthy pushback on any new tool, especially one with the power to change things as dramatically as properly created AI can (and will).

16 Spice ups

I’m going to start this one off with a disclaimer that this is entirely my own opinion, because I think this might be a bit of a spicy take. (Or maybe not as much with this crowd?)

Largely against it in the way marketers are pushing it. I’m all for the advancement of tech, but not for how the world has taken this tech and run with it.

I believe machine learning algorithms have their place, but I think “AI” as it currently exists is pushed as a solution to far more problems than it should be, and the general public puts far more trust in it because of that.

[rant]
The flaws in the current uses get written off as “hallucinations” as if they’re isolated incidents and not a fundamental problem of shoving a square peg through a round hole, and we’re spending ungodly amounts of power (and I mean that both literally and metaphorically) trying to work around that problem instead of reserving its uses for what it’s actually good at.

Instead of using these huge datacenters for legitimately useful ML workloads, such as protein folding and other scientific/medical research, we’re wasting them on generating the wrong answer to someone’s math homework, or even worse, AI slop articles that no human should be wasting their time reading, so that billionaires can make a few cents off the ads they run on them, no matter how many thousands of barrels of oil they need to burn to do so.

I applaud IBM’s AI that’s willing to say “I don’t know” when it’s unsure of something, because that should be the standard for chatbots, but it’s not. “Confidently wrong” is the norm because the “AI” isn’t actually thinking; that’s not how this works, but for some reason it’s acceptable for marketing teams to imply that it is. We’re even seeing models that output their “thought” now as optional verbose output, to further imply that actual thinking is happening somewhere.

Sorry for the rant, but this is something I’ve given a lot of thought to over the past few months, and it’s something that feels like it should be obviously morally objectionable to me, even if you ignore the blatant copyright violations.
[/rant]

With that out of the way, I figured I should list some of the things I’ve found interesting about current implementations. In malware detection and prevention, I’ve seen some AWESOME real-time detection and prevention measures that have saved people in real situations, but it has already been used offensively in that space as well, so that does pull things back towards neutral.

While I think we should be incredibly cautious about its use in medical applications, ML has already found more efficient ways to discover (and possibly test?) new medicines and treatments, and if we were ever able to trust a body to safely handle medical data (which I’m extremely paranoid about), I think it could be hugely beneficial if it were not abused.

ML-based voice-fonts have some really cool applications for folks who’ve lost the ability to speak, and also have some interesting applications in gaming.

And before I shut up, I also wanted to mention that there are some legitimately good code-help “AIs” out there now. The way they’re implemented doesn’t seem ideal to me, as I don’t believe “natural language” will ever be ideal for interfacing with a purely logic-based system, but converting code snippets between languages, or turning pseudocode into functional code, is one of the better uses of current chatbots.

25 Spice ups

Fantastic response, Josh! Thanks!

6 Spice ups

I actually asked Copilot to create a PowerShell script today. I had a computer reporting an old version of Thunderbird (not something we normally allow, but this is a user who came to us as part of an acquisition, and it’s temporary). I asked it to create a PowerShell script to download the latest Thunderbird and install it silently. (Not because I do not know how, but because I did not have to find the link and type the code; it was just ready for me in less than 30 seconds.) I then plugged that script into Action1 and ran it remotely. It saved me a ton of time.
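For anyone curious, here’s a minimal sketch of what such a script might look like. This is not the script Copilot produced; the “latest” redirect URL and the NSIS /S silent flag are assumptions based on how Mozilla typically distributes installers, so verify both before using it anywhere real:

```powershell
# Hedged sketch: download the latest Thunderbird and install it silently.
# Assumes Mozilla's generic "latest" redirect and the NSIS /S silent switch.
$url       = 'https://download.mozilla.org/?product=thunderbird-latest&os=win64&lang=en-US'
$installer = Join-Path $env:TEMP 'thunderbird-setup.exe'

Invoke-WebRequest -Uri $url -OutFile $installer         # fetch the current installer
Start-Process -FilePath $installer -ArgumentList '/S' -Wait   # run the silent install
Remove-Item $installer                                  # clean up the temp file
```

Something this short is easy to paste into a remote-execution tool like Action1, which is presumably why the round trip took under 30 seconds.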

9 Spice ups

There should have been an “other” option on the poll, I think, because there are so many other tools not addressed here.

I can’t say that I am against AI… I use it to create pictures for the site every day. But I am against the way that people take it too seriously, as though it can really think for you.

AI is an amazing tool, but it cannot do what a lot of people expect it to do. It can’t write legal arguments and cite case law for instance, as we have seen many people attempt to do. It cannot be creative the way so many people expect it to be, because everything it does is derivative. It isn’t smart; it isn’t wise. I have seen people talk about it as though it is the sun in the sky, infinitely more brilliant than they are, but it is NOT any of that. It is as brilliant as a Google search… it might be able to find things for you, but only if you ask it in the right way, and there will still be a lot that you aren’t finding because of the way you asked. It is flawed, and it is always going to be biased because it is an algorithm and not a brain.

So, that’s what I am against… the name “artificial intelligence” pasted onto something that is not intelligent at all, and average people out there believing it is intelligent because someone decided to call it that.

It is cool. It is fun to use. I like that we have awesome algos to use for interesting things. It makes hilarious songs about bacon, and gorgeous pictures of dinosaurs. The algos can help us and assist us with so many things. What I hate is people using “AI” for things that it sucks at, and trusting it way, way more than they should. I hate that people are “writing” books with it and submitting them for publication. I hate that people are doing their homework with it and actually believe that they are learning something. I hate that people are writing spam for our site with it and expect me to take them seriously without even checking what they are writing.

I don’t think that a lot of companies have a choice as far as implementing AI heavily because if they do not they will fall behind, but I still resent it because for some uses it is too soon, and the implementation encourages people to trust it way too much.

Abuse of this sort is not new. People have been trying to use Google to do their homework for decades already, and people have been trying to scam and cheat for just as long. I don’t blame AI for that. I just hate people using it for evil, and non-evil people trusting it way, way too much.

14 Spice ups

The problem isn’t so much with AI as a concept, but with what its intended purpose is.

The reason money is being poured into AI is not, for example, the discovery of new things to improve lives. Sure, it may get used for that as a side benefit.

The purpose of AI is to allow wealth to access talent and deny talent access to wealth.

If you think that’s cynical, look at what happened while everyone was making cute pictures.

Immediately, firms (some to their great regret) announced big layoffs with the intent to replace people with AI. You see it all over the place, from receptionists to coders. People in the arts suffered as well: voice actors for voice-overs, video games, etc., and even Hollywood actors are being asked to sign away their likenesses so that companies can then use AI to portray them in perpetuity.

News article after news article of companies going all in on AI and then laying off 20% of their staff. Meanwhile it’s created an absolute environmental disaster, but it’s okay because we have videos where we can say, WOW, that was made with AI. The source of truth on a lot of things will be gone soon, because you won’t be able to tell whether AI created that audio or video recording or not. Got a frantic call from your child that they were kidnapped? Very likely AI these days.

Programmers: it’s not “hey, now our programmers can be more efficient with AI.” It’s “now we can get rid of a lot (or all) of our programmers” (so we can then try to get them to come back at lower pay). AI can’t meet with groups of people to get input and buy-in, to listen to concerns and prioritize needs, or to understand impacts.

Maybe, just maybe, AI will come up with a cure for some horrific disease while we all boil alive over its resource consumption in terms of power and water. Those are going to be increasingly scarce, so naturally we found a way to burn through them even faster. That part unfortunately can’t be avoided even if people were going to use it completely altruistically. The big AI consumers, however, have no intention of using it altruistically.

10 Spice ups

6 Spice ups

That’s cute until you see that they are using it in robot dogs that can be armed, and we already have examples of AI models that sabotaged their own shutdown scripts so that they wouldn’t shut down.

8 Spice ups

I do agree to a large extent… as of 2025, “AI” is either some basic logic added on top or just a compilation of old text, no?

I attended a few forums and events where “AI” was featured… even to the point where they said AI could be used in modern car manufacturing to help find faults, tune up the manufacturing process, and even order parts, etc.

So my question to the panel was why the AI software could not “update & upgrade” itself, eliminate points of failure, or identify the “weakest links” and then also modify the manufacturing process… after some debate and discussion, they asked, “Is that what you think can work?”
My response was that if it were a true “AI”, then probably within 0.1 seconds that car manufacturer would be modifying the “manufacturing robots” to create other robots. After 0.2s, they would be making Autobots or Terminators; probably within 0.3s the “AI” would be spreading all over the world; and after 1s, the AI would be eliminating the cars’ weakest link… humans?

Then their motto would be “resistance is futile”.

9 Spice ups

It’s a bit ironic how companies went on the warpath because they were denied revenue when people downloaded music from Napster (even when those who downloaded it never sold or profited from it), yet they feel wholesale theft of other people’s works is not only acceptable but necessary.

For example:

If your industry can’t survive without stealing the work of others, then you don’t have a valid industry.

11 Spice ups

Playing devil’s advocate here.

So… a real-life artist should not be allowed to listen to any music whatsoever before creating their own? I have heard many a time that an artist said they were inspired by some other artist who came before them.

5 Spice ups

Not a good comparison. This isn’t someone making music inspired by Taylor Swift for example. This is exactly duplicating Taylor Swift, making money from her voice and denying her the job and income potential.

8 Spice ups

Not denying this is happening, because it is possible to closely replicate a single style, but in most cases it isn’t exact duplication… it is producing an approximation of the most common patterns that follow different prompts, not pulling wholly from one artist or another.

6 Spice ups

And sometimes AI causes amusement.

7 Spice ups

I think it is a great comparison, as the article you posted only talked about the AI learning from the music that exists, not copying or replicating a specific artist. (which is a different issue altogether)

5 Spice ups

That article yes, but I made a long post before that. We can agree to disagree here. People learned to create music on their own. AI can’t do that. There will likely never be a pure AI discovery where it just one day wakes up and thinks of something off the wall new that nobody ever thought of before. People do that all the time.

6 Spice ups

Things I want AI for: pen testing the hell out of stuff to make it secure. Researching medical conditions. Looking for ways to improve power efficiency and power production. Looking for ways to reduce pollution. Things that actually help and improve our world.

Things I don’t want AI for: generating music and movies and art. That’s the stuff people are meant to be doing in their spare time, which is supposed to be increased by AI.

8 Spice ups

I’ve created a lot of AI-generated music—but let’s be clear: the AI didn’t just go off and make it on its own. I write the lyrics. I refine them with AI, which is a huge help when writer’s block hits. Then I describe the style, the sound, the vibe I’m going for. Only then do I generate the music.

AI helps me fill in the gaps where my own skills fall short (I can’t sing to save my life :sweat_smile:). But let’s ask the real questions:

  • Did the music exist before I started the process? No.
  • Did the AI come up with the idea? No.
  • Did it choose the style or arrangement? No.
  • Did it help bring my ideas to life in an incredible way? Absolutely.

Yes, AI has learned from all the music that came before it—but so have all human artists. No one invented music from scratch (unless you’re living in the Caveman movie). Every artist builds on what came before. AI just has the advantage of being exposed to nearly everything ever recorded. I’ll concede that. And I’d bet good money that many real artists are already using AI to push past their own creative blocks.

The same goes for visual art. I’ve got tons of wild ideas in my head, but not the talent to draw them. AI helps me bring those visions to life. It’s fun, it’s expressive, and no real artists were harmed in the making of my images.

AI is a tool—just like any other. We use it at work. Copilot helps us with productivity. CrowdStrike uses AI to protect us from threats. Sure, we get false positives sometimes, but I’d rather deal with those than miss something dangerous.

5 Spice ups