Working at the forefront of delivery and projects on the vendor side, juggling multiple priorities, I’m used to having a “busy mind”. It’s the norm, and honestly, it’s what I love about my role: no two days are ever the same, and even your best-laid plans may end in a pivot.
And then… AI entered the chat.
What started as a simple way to take my busy brain dumps and turn them into something more cohesive, decluttering my thoughts, quickly became a rapid way of fitting more into my day. My initial reaction, of course, was “this is fantastic, I feel more efficient and can get far more done than ever before”.
Fast forward a year, and I’ve found myself pausing. That reflection is what led me here; this thought piece draws on a few observations from the front line.
Productivity gains aside, for those with busy minds operating across a multitude of priorities, decisions and programs, already producing a high volume of outputs before AI, it got me questioning whether there’s actually a threshold where this starts to do more harm than good.
It was this thought that took me back to a book, Thinking, Fast and Slow, in which Daniel Kahneman describes two systems that drive human thought:
- System 1: fast, intuitive, emotional and automatic.
- System 2: slower, deliberate, logical, but energy intensive.
Which then sparked a conversation with a colleague far more versed in AI than me: does AI push us further into System 1? If so, are there thresholds, particularly for already busy minds producing a high volume of outputs, often adding more to the plate without realising it? And what are the potential long-term implications if that becomes our default?
What began as a simple use case for AI, speeding up email comms and organising scattered thoughts, has, for me, highlighted something else entirely: the need to be far more intentional about carving out time for deep thinking.
I started to notice it in small ways. The 20 to 30 minutes I used to spend writing an email that needed a strategic lens and deeper thought than the day-to-day made me pause. Was that actually unintentional deep work? A moment in my busy day where I stopped to slow down and process my thoughts more critically?
Now, I note this may not sound like the highest-value task in isolation, but it sparked an important question for me: as someone operating largely in System 1, was this an unnoticed but necessary shift into System 2 thinking?
With AI accelerating that process, allowing me to produce more and think faster, I found myself operating even more in System 1 through the day, which meant consciously reintroducing space to engage System 2 elsewhere.
This isn’t a critique of AI; I genuinely like to approach most things with an inquisitive, seek-to-understand mindset, and personally I think it’s a net positive in so many ways. But in a world where many of us already run “busy”, both in work and life, AI can easily make us busier if we’re not careful. And I still ponder what happens if we stop using and training our own System 2, only ever training the AI bot to produce the System 2 outputs.
Perhaps that’s where the opportunity lies: not just in using AI to move faster, but in being deliberate about when we slow down. While I don’t have the answers by any means, I can apply critical reflection to the way I work and, as AI continues to rapidly change the landscape, commit to continuous learning and adaptation. I’m curious to know how others are using AI to complement their System 2 thinking, rather than replace it.
Without a balance between working in our System 1 and System 2 states, I can’t help but question whether we risk slipping into cognitive ease, letting bias take over, and defaulting to quicker, less considered decision-making. Again, it leaves me with a few more questions than answers about what this means for the future.
AI can be quite the rabbit hole, and no doubt for some non-technical people like me, a little daunting when it comes to where to start. But once you scratch the surface, it can be equally engaging, sparking deeper curiosity.
The Guardian recently published an article, “Bosses say AI boosts productivity – workers say they’re drowning in workslop”, piquing my interest in how all of this interlinks and, more importantly, how we can learn from it.
How are you using AI to support deep thinking, not replace it?
We’d love to continue the conversation. Get in touch with us to explore how teams can adopt AI thoughtfully, without losing space for critical judgment.