An old-fashioned conversation for modern executives

Now, look… before we get carried away, let me start by saying something that might surprise ya.

I like Artificial Intelligence. I do.

There. I said it.

It's a remarkable thing, really. It can read, write, draw pictures, diagnose problems, and in some cases, even give advice that would make your senior staff look twice. It’s like hiring 10,000 interns who never sleep — except they don’t drink your coffee, steal your staplers, or clog up the parking lot.

BUT.

—and this is important, so sit up, put your phone down a second—
There are places where you just shouldn’t use it.

That’s right. For all its glitz, its glamour, its "change-the-world" promises — there are places where AI does not belong. And if you jam it in there anyway, well… you’re just buying yourself tomorrow’s headache at today’s discount.

So today, pull up a chair. Let’s talk, just between us, about where NOT to use AI—and why.


1. Don’t Use AI to Make Life-Altering Decisions About People

You know what I’m talking about here. The big stuff.

When you’re talking about whether to approve a mortgage, grant parole, terminate someone’s employment, approve life insurance, or allocate medical treatments — these decisions affect people. Real people. Their families, their kids, their whole lives.

Now sure — AI can process applications fast, even spot patterns in fraud better than most human auditors. But here’s the rub: AI doesn’t have judgment.
It doesn’t understand context. It doesn’t hear the tremble in a voice, see the look in someone’s eye, or feel the gut instinct that tells an experienced manager:

“Something about this isn’t right.”

The machine sees numbers. You see people.

When regulators eventually come sniffin’ around (and they will, my friend), you don’t wanna be standing there holding a printout that says:

“The algorithm made the decision.”

Use AI to support human judgment, not replace it.


2. Don’t Use AI Where You Don’t Understand the Black Box

Lemme tell you a story.

There was this fella I knew years ago — ran a manufacturing outfit in Jersey. Real sharp guy. He bought one of those early expert systems to optimize his supply chain. Thing worked like magic at first — orders got filled faster, warehouse space opened up, profits went up. He thought he was a genius.

Until one day, the thing crashed.

Nobody — and I mean nobody — could explain what it was doing or how it worked. The vendor went bust. The engineer who installed it moved to Thailand. And there my friend stood, staring at a machine making decisions no one understood.

AI can be like that, especially these new deep learning models.
They’re not rules-based. They’re pattern-based.
You feed them a mountain of data, and they do their thing.
And when something goes wrong? Good luck asking it to explain itself.
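Here’s the difference in miniature — a toy sketch, mind you, with made-up names and thresholds, not any real lending system. The rules-based check hands you a sentence. The pattern-based score hands you a number.

```python
# Toy illustration only — hypothetical thresholds and weights, not a real system.

def rules_based_credit_check(income, debt):
    """Rules-based: every decision comes with a human-readable reason."""
    if income <= 0:
        return False, "Declined: no verifiable income."
    if debt / income > 0.4:
        return False, "Declined: debt-to-income ratio above 40%."
    return True, "Approved: debt-to-income ratio within policy."

# Pattern-based: the same inputs get reduced to learned weights.
# These numbers are invented here; in a real model, nobody picked them.
LEARNED_WEIGHTS = [0.00003, -0.00008, 0.52]

def pattern_based_score(income, debt):
    """The score comes out. The 'why' does not."""
    return (LEARNED_WEIGHTS[0] * income
            + LEARNED_WEIGHTS[1] * debt
            + LEARNED_WEIGHTS[2])

approved, reason = rules_based_credit_check(60000, 30000)
print(reason)                              # a sentence you can put in a letter
print(pattern_based_score(60000, 30000))   # a number you can't explain to anyone
```

Real deep learning models have millions of those weights instead of three — which is exactly why "what was it thinking?" has no good answer.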

If you can’t explain it to a regulator, auditor, or angry board member — don’t build your business on it.


3. Don’t Let AI Run Unattended in High-Stakes, High-Variance Environments

AI’s great with averages. Fantastic with patterns.
But when you’re dealing with rare events — black swans, so to speak — AI can get you into trouble.

Let’s say you’re running a power grid.
Or an air traffic control system.
Or an emergency response network.

AI’s not built for rare edge cases.
It’ll optimize for 99% of scenarios — but that 1%? That’s where disasters live.

You remember that little mishap in 2010 — the "Flash Crash" on Wall Street? Automated trading algorithms ran amok. Nearly a trillion dollars in market value evaporated in minutes, and it took regulators months to piece together why.

“The machines just went nuts.”

When things are humming along, AI looks like a hero.
When things go sideways — you want a human with their hand on the wheel.


4. Don’t Use AI to Replace Trust

I’m gonna say this nice and plain: Your customers don't trust machines.
They trust you.

When they call your bank, they don’t wanna talk to a bot.
When they file a claim after a car accident, they don’t wanna upload documents into the void.
When they’re facing a health scare, they don’t want a "conversational AI" telling them their odds.

They want a person.

AI might save you a few bucks in headcount — but don’t confuse cost-cutting with customer service.
The quickest way to lose brand equity is to replace real empathy with machine simulation.

Sure, use AI to help the rep — pull up records, summarize policies, suggest next steps — but keep the human in the loop.

You don't want your brand to be known as

“That company where you can’t reach a person.”


5. Don’t Use AI to Create Content You Haven’t Vetted

AI can write. My God, can it write.

Blogs. Newsletters. Reports. Marketing copy.
But listen carefully: just because it can write, don’t mean you should hit “publish” without reading it.

AI is like that intern who’s very enthusiastic but occasionally confuses France with Canada and makes up quotes from people who never existed.

These systems don’t know what’s true. They only know what’s likely. That’s how they work.

Garbage in — plausible-sounding garbage out.
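You want to see "likely, not true" in miniature? Here’s a toy sketch — an invented three-line "training corpus" and a completion picker. The company and the dates are made up for illustration; real models are vastly more sophisticated, but the principle is the same in spirit.

```python
# Toy demonstration of "likely beats true" — all data here is invented.
from collections import Counter

# The model only ever sees text. It never sees facts.
corpus = [
    "our flagship product launched in 2019",
    "our flagship product launched in 2019",  # the wrong year, repeated often
    "our flagship product launched in 2021",  # the true year, seen once
]

def most_likely_completion(prompt, corpus):
    """Pick the most frequent continuation — frequency, not truth."""
    continuations = [line[len(prompt):].strip()
                     for line in corpus if line.startswith(prompt)]
    return Counter(continuations).most_common(1)[0][0]

print(most_likely_completion("our flagship product launched in", corpus))
```

The majority answer wins, whether or not it’s correct. Scale that up a few billion parameters and you’ve got a very confident intern who repeats whatever the internet said most often.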

You wanna trust your company’s brand, reputation, or regulatory filings to that?

Use AI as a first draft, not a final product.


6. Don’t Use AI Where the Cost of a Mistake Is Catastrophic

There’s an old pilot saying:

“If you mess up at 30,000 feet, you don’t get a second chance.”

The same rule applies to certain AI applications.

Medical diagnosis?
Structural engineering?
Nuclear plant management?
Autonomous weapons? (Don’t even get me started.)

These are systems where a single glitch isn’t an inconvenience—it’s a funeral.

AI doesn’t carry the moral weight of consequences. Humans do.

Until you can sit comfortably knowing that you or your board would take full legal and ethical responsibility for every AI decision — don’t put it in charge of life-and-death systems.


7. Don’t Use AI to Avoid Dealing with People Problems

This one’s sneaky.

I see companies reaching for AI as a way to sidestep uncomfortable realities.

“We’ve got bias in hiring? Let’s use AI to screen resumes.”
“We’ve got harassment issues? Let’s use sentiment analysis on Slack.”
“Our managers play favorites? Let’s let algorithms set raises.”

No.

AI won’t fix your culture. It won’t solve your leadership vacuum.
It’ll just automate your existing problems at scale.

And guess what happens when a journalist, regulator, or activist group finds out?
Yep. Front page.

Fix the people. Then use AI to help the people do better.


8. Don’t Use AI as a Substitute for Strategy

I’ll say it straight: AI is a tool, not a plan.

I hear this a lot from executives:

“We need an AI strategy.”

No, you need a business strategy.
AI might be part of how you execute that — but it ain’t the goal.

Too many companies pour millions into AI pilots, dashboards, predictive models… and then realize they were chasing shiny objects instead of solving real business problems.

Start with the customer. Start with the need. Start with the outcome.

Then ask:

“Can AI help us achieve this faster, cheaper, or better?”

If the answer’s yes — great.
If the answer’s no — move on.


9. Don’t Use AI in Ways That Simply Piss People Off

Now, if you’ll pardon my French, lemme talk plain here — because your customers sure will.

There are ways AI gets used today that don’t save money, don’t improve service, and don’t solve any problem. All they do is irritate the living daylights out of your customers.

A few of the classics:

The Useless Customer Service Bot

You call because your internet is down.
You’re on hold for 10 minutes.
Finally, the AI bot comes on:

“I see you’re having trouble with your internet connection. Have you tried visiting our online support site?”

No, genius. I’m calling because the fiber optic cable is snapped. The internet is down. The website you’re recommending doesn’t exist for me right now.

The Overly Cheerful Airline Chatbot

Your flight’s been canceled. You need to rebook.
The bot responds:

“I’m so sorry to hear that! I understand how frustrating that can be. You may find answers to your questions on our website FAQ!”

That’s not help. That’s mockery.

The HR Resume Screener That Ghosts Good Candidates

AI dings applicants for pandemic gaps, or odd career paths, or foreign names.
Good people get blocked before a human ever sees their resume.
Meanwhile you're wondering why you "can't find good people."

The Predictive Ad Engine That Gets Creepy

You mention something sensitive in private, and suddenly you're getting ads for lawyers, doctors, or bankruptcy services.
You've crossed from personalization into paranoia.

The Voice Bot That Can’t Handle Accents

AI struggles with regional dialects, heavy accents, or second-language speakers.
By the fourth failed attempt, your customer’s ready to torch your brand.

The Punchline

Now listen: I’m not here to scare you off AI.

Frankly, any executive not seriously exploring AI is already behind.

But I am here to give you a friendly tap on the shoulder — the way an old friend would do over a drink — and remind you:

Not every job is a nail.
And AI is a very big, very powerful hammer.

Use it wisely.

Because at the end of the day, you don’t wanna be the one standing in front of the board saying:

“Well boss, it seemed like a good idea at the time…”


One Last Story Before You Go

There was an old tailor I knew in Astoria, Queens.
Every time I’d order a suit, he’d tap the fabric, check the pattern, and say:

“Measure twice. Cut once.”

That, my friend, is how you should approach AI.

Not with fear. Not with blind faith, but with care. With understanding. With respect for what it can do—and for what it can’t.

That’s how you lead. That’s how you stay out of the papers.
And that’s how you build something that lasts.

Next Step:

🗓️ Schedule a discovery call
Let’s talk through the issues and opportunities in your current systems before you commit.