I’m going to start this one off with a disclaimer that this is entirely my own opinion, because I think this might be a bit of a spicy take. (Or maybe not as much with this crowd?)
I’m largely against it in the way marketers are pushing it. I’m all for the advancement of tech, but not for how the world has taken this tech and run with it.
I believe machine learning algorithms have their place, but I think “AI” as it currently exists is pushed as a solution to far more problems than it should be, and the general public puts far more trust in it because of that.
[rant]
The flaws in the current uses get written off as “hallucinations,” as if they’re isolated incidents and not a fundamental problem of shoving a square peg through a round hole, and we’re spending ungodly amounts of power (and I mean that both literally and metaphorically) trying to work around that problem instead of reserving its use for what it’s actually good at.
Instead of using these huge datacenters for legitimately useful ML workloads, such as protein folding and other scientific/medical research, we’re wasting them on generating the wrong answer to someone’s math homework, or, even worse, on AI-slop articles that no human should waste their time reading, just so billionaires can make a few cents off the ads they run on them, no matter how many thousands of barrels of oil they need to burn to do so.
I applaud IBM’s AI for being willing to say “I don’t know” when it’s unsure of something; that should be the standard for chatbots, but it’s not. “Confidently wrong” is the norm because the “AI” isn’t actually thinking (that’s not how this works), yet for some reason it’s acceptable for marketing teams to imply that it is. We’re even seeing models that output their “thoughts” now as optional verbose output, to further imply that actual thinking is happening somewhere.
Sorry for the rant, but this is something I’ve given a lot of thought to over the past few months, and it’s something that feels like it should be obviously morally objectionable to me, even if you ignore the blatant copyright violations.
[/rant]
With that out of the way, I figured I should list some of the things I’ve found interesting about current implementations. In malware detection and prevention, I’ve seen some AWESOME real-time detection and prevention measures that have saved people in real situations, but the same tech has already been used offensively in that space as well, so that does pull things back towards neutral.
While I think we should be incredibly cautious about its use in medical applications, ML has already found more efficient ways to discover (and possibly test?) new medicines and treatments, and if we were ever able to trust a body to safely handle medical data (which I’m extremely paranoid about), I think it could be hugely beneficial, provided it isn’t abused.
ML-based voice-fonts have some really cool applications for folks who’ve lost the ability to speak, and some interesting uses in gaming as well.
And before I shut up, I also wanted to mention that there are some legitimately good code-help “AIs” out there now. The way they’re implemented doesn’t seem ideal to me, as I don’t believe “natural language” will ever be ideal for interfacing with a purely logic-based system, but converting code snippets between languages, or turning pseudocode into functional code, is one of the better applications of current chatbots.
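Just to be concrete about the kind of task I mean, here’s a made-up toy example (mine, not output from any particular tool): comment-style pseudocode on top, and the straightforward Python any decent code-help chatbot should be able to produce from it.

[code]
# Pseudocode: walk a list of numbers, keep a running total of the even ones
def sum_evens(numbers):
    """Return the sum of the even numbers in `numbers`."""
    total = 0
    for n in numbers:
        if n % 2 == 0:  # even check
            total += n
    return total

print(sum_evens([1, 2, 3, 4, 5, 6]))  # prints 12
[/code]

Nothing fancy, but that’s kind of the point: mechanical translation like this, where there’s a clear right answer to check against, is where these tools actually earn their keep.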