This week, I had a consulting call with someone who genuinely wants to use AI to improve his business. I found that refreshing — a person ready to learn, embrace the risk, explore what’s useful, and not get swept away by fear or hype.
I want to share the core insight from that call, because it might help you or your business too.
Plus, I hope you like the Pixel AI Art 🤖
The context:
My client works with large companies that generate massive amounts of information — accounting systems, CRMs, and other business tools.
His idea? Train an AI agent to generate the right reports from all that data.
A smart ambition — but one that needed a small, crucial shift.
What I said:
“Imagine your AI agent or tool as a real assistant.
Would you trust one person to produce an accurate report… completely out of the blue?”
Then I asked:
How would you know the report is accurate?
How would you test for errors?
That simple analogy — AI as a fallible assistant — changed everything for him.
And it might shift your perspective, too.
The smarter approach we outlined together:
First: design the process.
Lay out the steps needed to consolidate data and produce useful reports.
Second: test it manually.
Use traditional tools (like Excel, BI dashboards, etc.) to check the logic and confirm the output.
Third: simulate errors.
Intentionally insert broken or inconsistent data, and see if the system still works (there’s a small sketch of this kind of check right after these steps).
Only then: automate.
Once the process is clear and tested, use automation tools (like Power Automate or Zapier) to scale it. Most of these already include AI features to help non-technical users set them up.
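To make the error-simulation step concrete, here is a minimal sketch in Python of the kind of check I mean. Everything in it is made up for illustration: the file name, the column names (invoice_id, amount), and the rules themselves. The point is simply that you feed it deliberately broken rows and confirm every one gets caught before any AI or automation touches the data.

```python
# A minimal sketch of the "simulate errors" step, assuming a hypothetical
# CSV export with columns: invoice_id, client, amount, date.
# Before trusting any automated report, run the same data through the
# simple checks a careful assistant would do by hand.
import csv

def check_export(path):
    problems = []
    seen_ids = set()
    with open(path, newline="", encoding="utf-8") as f:
        # Start counting at 2 because row 1 is the header.
        for row_number, row in enumerate(csv.DictReader(f), start=2):
            # Missing values are the most common "broken data" case.
            if not row.get("invoice_id") or not row.get("amount"):
                problems.append(f"Row {row_number}: missing invoice_id or amount")
                continue
            # Duplicates silently inflate totals in a report.
            if row["invoice_id"] in seen_ids:
                problems.append(f"Row {row_number}: duplicate invoice_id {row['invoice_id']}")
            seen_ids.add(row["invoice_id"])
            # Non-numeric or negative amounts are inconsistent data.
            try:
                if float(row["amount"]) < 0:
                    problems.append(f"Row {row_number}: negative amount")
            except ValueError:
                problems.append(f"Row {row_number}: amount is not a number")
    return problems

# Deliberately feed it a file with broken rows and confirm each one is flagged.
for issue in check_export("accounting_export.csv"):
    print(issue)
```

If a problem you planted on purpose slips through, you have found a gap in the process, not in the tool. That is exactly the kind of thing you want to know before you automate.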
This might sound less “magical” than training a powerful AI agent to do everything from scratch.
But it works.
And more importantly, it builds trust, because you’ll know how and why your reports are accurate.
If there’s one thing that can truly trigger fear, resistance, and even emotions like greed or competitiveness, it’s technology.
That’s why I believe this experience is deeply connected to our theme of growing fearless.
New technologies spark our imagination.
But our tendency to anthropomorphize — to project human traits onto tools — can lead to misunderstandings and false expectations.
Whenever a new tool arrives, we see a familiar pattern:
Suddenly, there are “experts” everywhere.
New services pop up.
Innovation gets hyped.
And that’s all fine — as long as we stay conscious of what’s really going on.
I’m an early adopter by nature.
But I’ve also gotten older.
Now I try to take new trends with a grain of salt.
I explore. I learn. I form my own views.
New tech moves quickly through waves of fear, resistance, hype, and hidden agendas.
What we see in the market often reflects those layers more than the actual potential.
So a bit of healthy skepticism is necessary.
But even more importantly, we need to watch our fears — because they can block us from seeing opportunity altogether.
So the question becomes:
How can we face new technology — and get the best from it — without being ruled by fear?
Here’s what I recommend:
Pay attention to practical applications of the technology that make real sense to you.
Identify the skills needed to use those applications well, and begin developing them.
Focus on your whole self, not just your area of specialization. Think across disciplines.
Remember this: AI — or any tech — can’t dream, fear, love, or feel enthusiasm.
It doesn’t have a body. So whatever you create bears the imprint of your unique life. Your creativity isn’t replicable.
Want an example?
Let’s say I’m a freelance graphic designer.
Today, I can use workflow automation tools to:
Process orders
Send invoices
Schedule meetings
Manage client communication, and
even send automatic reminders for unpaid invoices (there’s a small sketch of that last one below)
All of that runs quietly in the background — freeing me to focus on what I love:
creating.
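Just to show how unglamorous that last automation really is, here is a rough sketch in Python with made-up invoice data. Tools like Zapier or Power Automate wrap this logic in a visual editor and connect it to your real invoicing tool, but conceptually it comes down to a few lines.

```python
# A rough sketch of the "reminder for unpaid invoices" automation.
# The invoice records below are hypothetical; in a real setup they would
# come from your invoicing tool's export or API.
from datetime import date, timedelta

invoices = [
    {"client": "studio-a@example.com", "number": "2024-031", "due": date(2024, 5, 1), "paid": False},
    {"client": "agency-b@example.com", "number": "2024-032", "due": date(2024, 6, 15), "paid": True},
]

def reminders_due(invoices, today=None, grace_days=7):
    """Return invoices that are unpaid and overdue by more than the grace period."""
    today = today or date.today()
    return [
        inv for inv in invoices
        if not inv["paid"] and today - inv["due"] > timedelta(days=grace_days)
    ]

for inv in reminders_due(invoices):
    # In a real workflow this step would send an email; here we just print.
    print(f"Reminder to {inv['client']}: invoice {inv['number']} is overdue.")
```

That is the whole trick: a clear rule, checked on a schedule, with the actual sending handled by whichever tool you already use.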
And one final reminder:
Don’t always trust your first instinct or emotional reaction.
There’s often truth in it, yes — but look deeper.
You may uncover fear at the root of that discomfort.
And if you do, you might also find clarity waiting behind it.
With love,
Jose.
Some relevant content:
Don't be afraid of AI
I’ve been researching this topic for months, and I want to share with you what I found useful. Maybe my perspective as an IT professional brings some value.
This is a great piece on AI use. Thank you for sharing.
Agree with your thoughts fully. I love the pixel art analogy. Not sure if you intended it like that, but the 80s graphics represent very well the current level of human-AI co-existence and co-operation: crude, grainy, but inspiring...
In my lectures on AI I always emphasize that we as social animals are almost too prone to project sentience onto everything (from a child playing with an inanimate toy up to having a robot AI girlfriend, which seems to be just around the corner), and we really need to be careful. The second part is that AI in its current form and deployment (i.e. driven not primarily by scientific institutions but by profit-oriented companies) is, like it or not, designed to upgrade from capturing our attention to capturing our inner emotional world. So ALWAYS remembering to use it as a tool only is not just practical; it also keeps us safe. This is the one form of fear that is there for a very good reason.
Love the fact that you emphasized testing for errors: this is a must in any kind of software development, doubly so when AI and its hallucinations are involved. I need to look into Zapier a lot more. Found it daunting and huge, so I kept putting it off.
All the best, Janez