Have you tried turning AI off and on again? I’m wondering if that will one day become a standard IT catchphrase…

According to Pew Research, most Americans already interact with AI at least once a day.

When AI is used this frequently, whether via chatbots, generative AI tools, or AI integrated into the solutions people use in everyday life, you know users are eventually going to bring their AI tech problems to IT. But has that day already come?

Have you been asked to troubleshoot someone’s issues with AI yet? If so, what tool did they need help with, and how did it go?

Vote in our poll, then share your stories in the comments!

Have you had to troubleshoot users’ AI issues yet?

  • Yes
  • No
  • Unsure / Don’t know
0 voters
8 Spice ups

So, troubleshooting Windows issues, do I answer “yes” to this question?

Of course I jest. As for Copilot, ChatGPT, and other “AI tools”, I haven’t had to troubleshoot anything yet. The vast majority of our users don’t even know what “AI” is, though they’ve heard of it.

11 Spice ups

DUH…

Forget about troubleshooting.

After ripping off foreign workers, a bunch of the big U$ Corps are now trolling for 'muricans to do the same laborious 8+ hours a day of manually IDing all manner of good and horrific things to train AI.

5 Spice ups

No. Most of them are too embarrassed to admit they use AI, and I’m like, “We already know you used or downloaded it. As a consolation, when you sent this ticket, that was AI. That company-wide email I sent? That was polished and proofread by ChatGPT. The original draft was too vulgar.”

5 Spice ups

I know you used AI to raise this ticket; everything was spelled correctly…

8 Spice ups

Instead of “have you tried rebooting it?”, the mantra will become “sorry, the AI is lying to you. Try again.”

4 Spice ups

I haven’t had to “fix” one, but I did get a ticket that said, “ChatGPT said that I could enable this feature in our EMR and told me how to do it, but I couldn’t figure it out. Can you help me?” Yes, it did tell her how, but it prefaced those instructions with “if the feature is available in your environment”, and it is not. That’s why she couldn’t enable it. And without renegotiating our contract to add this cool new feature, we’ll never have it. A bit of research on the feature and a chat with our account rep would have told her that. “AI can make mistakes” is what the header on ChatGPT reads, but what if the user is the one making the first mistake? :open_mouth:

3 Spice ups

Yeah…but is the feature worth attempting to get?

1 Spice up

Do we get to reset it in an EVA suit and have the computer sing “Daisy, Daisy”?

Happy Canada Day, Eh!

1 Spice up

I will confirm that AI (Grok) makes mistakes on technical questions, even when it sounds absolutely certain about its troubleshooting advice. It just misses critical things occasionally. I tell it its mistake and it immediately searches out the reason it was incorrect and modifies its troubleshooting story. What it doesn’t know can lead it right astray, without any indication of uncertainty. That leaves me feeling that there is no intelligence there; it is not “thinking”. It’s just a crazy amount of data that it can reproduce in conversational speech. Still a computer, not a brain yet.

5 Spice ups

That’s it exactly!

Also, welcome to the community!

2 Spice ups

Since my employer doesn’t allow AI, there’s nothing to troubleshoot.

4 Spice ups

According to Pew Research, most Americans already interact with AI at least once a day.

I consider automated phone systems to be AI. I’ve interacted with AI before most Americans have.

3 Spice ups

Since my employer doesn’t allow AI, there’s nothing to troubleshoot.

Was going to say the same thing, but I’ll count uninstalling Copilot after every update as troubleshooting AI :grin::joy:

5 Spice ups

What your employer allows (or not) and what Microsoft allows are very different… Considering Copilot is now baked into everything, even replacing the front page of office.com at sign-in, chances are it’s being used somewhere. And that’s just Copilot!

2 Spice ups

Agreed that, for now, AI is no miracle cure-all. You really do need to give it a lot of guidance. But once you give it exactly the right prompt, generative models can return amazing results, especially for images, music, and video. They’ve worked out a lot of the early glitches.

Still, humans need the creativity and the know-how to get the most out of AI… and to be able to call it out when it gives you slop. Blindly trusting AI can be dangerous if you can’t tell whether it’s returning false info. But in the hands of someone with some prior expertise, it’s a very useful tool.

5 Spice ups

We kind of had to troubleshoot Copilot because it wouldn’t load. Turned out to be just an access issue: the firewall was blocking it because users were being routed to the Brazil servers. Not sure that exactly qualifies, though.
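
For anyone who hits the same symptom, the diagnosis boils down to two checks: what DNS actually resolves for the Copilot endpoint, and whether the firewall lets you reach it on 443. Here’s a minimal Python sketch of those checks; copilot.microsoft.com is just an example hostname, so swap in whatever endpoints Microsoft’s published list gives for your tenant.

```python
import socket

# Example endpoint only -- check Microsoft's published endpoint list
# for the hosts your tenant actually uses.
HOST = "copilot.microsoft.com"

# Check 1: see which IPs DNS hands back. A geo-routed answer
# (e.g. a Brazil POP) can land on ranges your firewall doesn't allow.
for family, _, _, _, sockaddr in socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP):
    print("resolved:", sockaddr[0])

# Check 2: confirm the firewall actually lets you reach it on 443.
try:
    with socket.create_connection((HOST, 443), timeout=5):
        print("TCP 443 reachable")
except OSError as err:
    print("blocked or unreachable:", err)
```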

2 Spice ups

You should document what you did to ‘fix’ it, reverse-engineer it so you can intentionally point to the wrong servers, then package it as a script to run anywhere. Not saying that’s the killswitch we all need, but it could be!

1 Spice up

Blindly trusting AI can be dangerous if you can’t tell whether it’s returning false info.

Comes to mind.

Shout out to @HulkSmash for posting this months ago:
Woman files for divorce due to AI - Water Cooler - Spiceworks Community

2 Spice ups

You do have a point, Jay. Certainly, I have asked questions in Bing, and sometimes it will involve Microsoft Copilot in answering them, which I have often found useful.

But in other ways it hasn’t gone well. Things like ChatGPT, Google’s Gemini, etc. have been blocked, and I can understand why my employer blocked them. Someone at work (the name wasn’t given, so I don’t know who it was) copied and pasted a lot of patient data into ChatGPT in order to have it help them produce a report. That was very foolish and potentially illegal. It was also a surprise, because we all go through HIPAA training on a regular, recurring basis, so this person should have known better. My employer was understandably upset.

However, for me the problem is that the security office tends to take the nuclear option whenever something like this comes up. Rather than offer training on how to use GenAI properly, their choice is to block AI from being used by anyone (if they can). In the long run, this draconian attitude hurts everyone, because we’re not allowed to learn how to properly use AI to do our jobs. Worst case scenario, in two years the expected skill set for any knowledge-based role will be knowing how to use some GenAI to generate reports, craft better emails or posts, analyze data, write and review code, etc. So we’ll be stuck, unable to move on to other positions elsewhere. I am probably overstating what will happen in the near future, but it concerns me nonetheless.

3 Spice ups