Sign up for AI insights
Every few days, Asher sends sharp insights about what's happening with AI in growing companies. Reality checks you can read in under a minute.
Samples of previous emails
"What about resistance that just protects comfort?"
Yes. Even lazy resistance tells you something. Maybe your culture punishes mistakes, or people feel overworked, or there's no trust in leadership.
That's still intelligence. It just might not change your decision.
Some resistance protects value (workflows, expertise, relationships). Accommodate that.
Some resistance protects comfort (routines, avoiding decisions). Acknowledge it but move forward.
Your job isn't to eliminate resistance or automatically defer to it. It's to read what resistance is telling you, then decide what's worth preserving.
My teen daughter told me she "hacked the algorithm" on social media.
2.7 million views on a single post.
I said: "Do it again."
"Dad, that's not how it works."
Exactly.
"So what did you learn?"
She learned the algorithm rewards authenticity over strategy. That timing matters more than optimization. That what works once rarely works twice.
That successful rollout? Stop asking how to replicate it. Ask what it taught you about your organization.
The intelligence isn't in the success. It's in understanding why it worked here but won't work there.
"This AI will do your analysis" creates resistance.
"This AI will give you superpowers for analysis" creates interest.
Same technology. Viscerally opposite response.
People embrace amplification. They resist elimination.
Frame your next AI initiative accordingly.
Companies need AI champions. But the people most qualified to lead are often the biggest resisters.
Why? They see what leadership doesn't. The data that will take 6 months to clean. The workflows that will break.
Instead of labeling them "change resistant," ask: "What are you protecting?"
The answer might save you 6 months and $300K.
OpenAI announced special pricing for federal agencies: $1 per agency for a full year of ChatGPT Enterprise.
They're not selling technology anymore. They're selling adoption.
But if $1 isn't cheap enough to solve adoption, what does that tell you about the real barriers?
The hardest part of AI isn't the technology or the price - it's getting people to use it.
Pilots use willing volunteers, clean data, and leadership's full attention.
Production has reluctant users, messy data, and competing priorities.
Design for the latter on day one.
Story from MIT's new State of AI in Business report: A law firm bought a $50K AI contract analyzer. Their lawyers still use ChatGPT instead.
Why? The expensive tool summarized contracts perfectly. But the lawyers didn't need summaries - they needed help drafting contracts.
They bought what the vendor was selling, not what their lawyers actually needed.
Solve. The. Problem.
Simple test for your AI investment: If it disappeared Monday morning, who would complain by noon?
Not IT (they'd be relieved). Not management (they're checking adoption metrics). The actual users - would they care?
In my experience, 70% of AI tools could vanish without anyone noticing for weeks.
That's not an adoption problem. It's a value problem.
Microsoft adds AI to Office. Google adds it to Workspace. Suddenly every standalone AI tool looks expensive.
Why pay for separate AI when it's bundled with tools you already have?
This is great news for mid-market. The AI wars are driving prices down and integration up. In 12 months, AI won't be a separate purchase decision. It'll just be there, like spell-check.
Your job shifts from "choosing AI tools" to "actually using what's already included."
The winners won't be companies with the best AI strategy. They'll be companies that actually turn the features on.
Your employees are using ChatGPT on personal accounts. IT calls this "Shadow IT" and wants to shut it down.
I call it market research.
Every unauthorized tool = feedback about gaps between what you provide and what people need.
What would you learn if you mapped shadow tools instead of blocking them?
The other 46% are probably still in the pilot phase.
"Harder than expected" reveals the assumption: that technology is the hard part.
Technology works fine. It's the humans.
Frame your next AI initiative as enhancement, not replacement. See what changes.
Stanford and MIT researchers just published a framework for enterprise AI that says AI tools basically do three things:
- Search (finds and summarizes information)
- Act (executes tasks and workflows)
- Solve (creates new solutions or code)
Here's the pattern they found: Most vendors are selling Search tools. Most companies actually need Act or Solve capabilities.
We're buying expensive ways to find information when what we need is something that actually does the work.
That's why your AI investment feels useless. It's answering questions nobody asked instead of doing work nobody wants to do.
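If you want to make that gap concrete, here's a rough sketch in Python. The tool names, categories, and the "needs" list are hypothetical placeholders, not anything from the report; the point is just to tally what your stack does against what your teams keep asking for.

```python
# Rough sketch (hypothetical tool names and needs, not data from the report):
# tag each tool in your stack as search / act / solve, tag what teams actually
# ask for, and compare the two tallies.
from collections import Counter

stack = {
    "contract-summarizer": "search",   # finds and summarizes information
    "meeting-notes-ai": "search",
    "enterprise-chatbot": "search",
    "invoice-automation": "act",       # executes tasks and workflows
}

# What teams keep requesting (from interviews, shadow-IT usage, support tickets)
needs = ["act", "act", "solve", "search"]

have, want = Counter(stack.values()), Counter(needs)
for category in ("search", "act", "solve"):
    print(f"{category:>6}: have {have[category]} tools, requested {want[category]} times")
# If "have" is heavy on search while "need" leans act/solve, you're buying
# answers to questions nobody asked instead of work nobody wants done.
```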
Your sales team uses personal ChatGPT while your $50K enterprise AI platform collects dust.
This isn't rebellion. It's organizational intelligence.
Shadow tools reveal what your organization actually needs versus what you officially provide. That gap is a precise map of the distance between reality and leadership's assumptions.
Every unauthorized tool is a vote. Every workaround is feedback.
Stop fighting shadow IT. Start reading what it's teaching you.
I vibecoded a fitness tracking app for my wife.
She wanted to see her workout each day so she could rewrite it on our whiteboard.
What I built: Custom workouts, user authentication, progress tracking, achievement badges, AI-powered recommendations.
Guess how she uses it? To see her workout each day so she can rewrite it on our whiteboard.
Here's the thing: I'm a software developer AND an AI consultant. I should know better.
But I still got caught up building features instead of solving her actual problem.
Makes me wonder how many AI tools get built the same way.
Solve the problem.
A reader wrote: "Do we really need a data scientist? We barely have an IT department."
One of the cleanest, most successful AI adoptions I've seen was at a brick-and-mortar supply company. No data scientists. No AI team. Just an office manager who automated their invoicing.
She didn't understand machine learning, but she did understand that manually entering 200 invoices per week sucked.
Technical sophistication will come in time. Focus on problem clarity first.
"What if I make a mistake and AI suggests something wrong?"
"What if I rely on this and then it breaks?"
"What if using AI makes me look lazy to my boss?"
"What if it learns my job and they don't need me anymore?"
These very real questions rarely make it into your adoption meetings.
Your training sessions cover features and workflows. But they don't address the real fears.
Fear of looking incompetent.
Fear of being blamed.
Fear of becoming dependent.
Fear of being replaced.
Until you address the fears, the features don't matter.
In last week's meeting, a VP of Ops said this about the new AI initiative: "I don't trust it."
They meant: "I don't understand how it works, so I don't know when it might fail."
They meant: "I don't know what data it's using, so I can't verify its recommendations."
They meant: "I don't control it, so I can't fix it when something goes wrong."
"Trust" isn't about the technology. It's about confidence and control.
You can't train people to trust AI. But you can give them transparency.
Trust comes from clarity, not compliance.
When newspapers mistakenly reported Mark Twain's death, he quipped that the reports were greatly exaggerated.
I'm reminded of that line whenever I hear "AI will replace all knowledge workers."
AI isn't replacing people. It's replacing tasks.
Your marketing coordinator isn't being replaced. They're being freed up from first drafts to focus on strategy.
Your customer service team isn't being replaced by chatbots. They're being freed up from the same 20 tier 1 questions to handle more complex issues.
Yes, the fear of replacement is real, but it's usually misplaced.
If your AI tool(s) disappeared tomorrow, how many people would notice?
If the answer is "not many," you don't have an adoption problem.
You have a value problem.
Most companies measure logins instead of dependency.
Which are you measuring?
Every AI tool sends the same notification: "Your weekly usage report is ready."
I haven't met a soul who reads these.
What if they sent notifications about problems solved instead?
"AI helped you avoid 3 late deliveries this week."
"You saved 4 hours on proposals."
Would people pay more attention to impact than activity?
Most tools measure what's easy to count (logins, clicks, queries) instead of what really matters (time saved, problems solved, stress reduced).
What if your AI tools kept score of what really counts?
What if you did?
That manager who keeps pushing back on your AI initiative? The one everyone calls "difficult"?
They remember the ERP disaster from 2018. The CRM that lost your biggest client. The automation that created cascade failures.
Their resistance is institutional memory. They're protecting workflows that prevent expensive mistakes.
Stop trying to overcome them. Start asking what they're protecting.
Resistance is intelligence. Listen to it.
Let's do this
Every few days, Asher sends sharp insights about what's happening with AI in growing companies. Reality checks you can read in under a minute.