It started as a throwaway comment mid-conversation.
I was working through a session with my AI system — security audits, emails, the usual — when I mentioned I had a Muse EEG headband sitting around. One of the original 2014 models. Bought it years ago, used it a few times, and let it collect dust. I’d just finished charging it and figured I’d mention it.
What happened in the next ninety minutes is the kind of thing that still makes me stop and think about what these systems are actually capable of.
(A note upfront: this post skips most of the technical details. There were a lot of them — library patches, Bluetooth timing issues, sensor calibration — but they’re not the interesting part. The interesting part is what it felt like to watch the whole thing come together through conversation.)
Step One: Figure Out What I Even Had
I told the system I wanted to get connected to live EEG data while wearing the headband. The immediate response: what model is it, and what do you want to do with the data?
I didn’t actually know the model. I emailed myself a photo from my phone. The system pulled it directly from my Gmail, decoded the attachment, and looked at the image: Muse 1, 2014 model. Bluetooth Low Energy. Four electrodes: two on the forehead, two behind the ears.
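(Pulling an attachment like that is a handful of lines against the Gmail API. A sketch, assuming an already-authorized client and known IDs; `service`, `msg_id`, and `att_id` below are hypothetical placeholders, not anything from the actual session.)

```python
import base64

# Assumes `service` is an authorized googleapiclient Gmail client and the
# message/attachment IDs are already known -- placeholders for illustration.
att = service.users().messages().attachments().get(
    userId="me", messageId=msg_id, id=att_id
).execute()

# Gmail returns attachment bytes as URL-safe base64.
image_bytes = base64.urlsafe_b64decode(att["data"])
with open("headband.jpg", "wb") as f:
    f.write(image_bytes)
```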
One photo, thirty seconds. We had a starting point.
Step Two: The Part Where Nothing Works
The Muse 1 doesn’t pair like a normal Bluetooth device. It never shows up in System Settings; it has to be connected through code. Most people don’t know that, so they assume their headband is broken.
There were several layers of problems: Bluetooth was off, then the headband kept dropping before a connection could form, then the connection would time out. Each one was diagnosed and patched through back-and-forth conversation. I’d describe what the LED was doing. “It’s just going back and forth.” The system would interpret that, adjust the approach, and try again.
Eventually: Connected. Streaming EEG.
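(If you want to try something similar, the core pattern is scan, connect, subscribe to notifications. A rough Python sketch using the bleak BLE library; the GATT UUIDs and the start command are the ones documented for later Muse models, and our 2014 unit needed library patches, so treat them as assumptions rather than gospel.)

```python
import asyncio
from bleak import BleakScanner, BleakClient

# UUIDs documented for later Muse models -- an assumption for the 2014 unit.
CONTROL_UUID = "273e0001-4c4d-454d-96be-f03bac821358"
EEG_TP9_UUID = "273e0003-4c4d-454d-96be-f03bac821358"

def on_eeg(_, data: bytearray):
    # Each notification is a packet of raw samples; the 12-bit unpacking
    # and microvolt scaling are omitted here.
    print(f"EEG packet: {len(data)} bytes")

async def main():
    devices = await BleakScanner.discover(timeout=10.0)
    muse = next((d for d in devices if d.name and "Muse" in d.name), None)
    if muse is None:
        raise RuntimeError("No Muse found; is the headband awake and in range?")
    async with BleakClient(muse.address) as client:
        await client.start_notify(EEG_TP9_UUID, on_eeg)
        # 'd' = start streaming in the later-model protocol (assumption).
        await client.write_gatt_char(CONTROL_UUID, bytearray(b"\x02d\n"))
        await asyncio.sleep(30)  # stream for 30 seconds, then disconnect

asyncio.run(main())
```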
Step Three: The Sensor Problem
Getting connected and getting clean data are two different things. The system was reading all four channels and immediately flagged that one sensor — the right forehead electrode — was showing values wildly out of range compared to the others. Noise, not signal.
We tried cleaning it, repositioning the headband, pressing it firmly against skin. The left forehead sensor cleaned up immediately after a wipe with a damp paper towel. The right one didn’t budge even under direct pressure — which meant it wasn’t a contact problem, it was a degraded electrode. Six years in a drawer will do that.
The fix: exclude that sensor entirely and recalculate using the three remaining clean channels. About sixty seconds later, the numbers looked like an actual awake brain.
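For the curious, the logic is the kind of thing you could sketch in a dozen lines: measure each channel's variability, drop anything wildly out of range, and average what's left. The 200 µV threshold below is a placeholder, not a calibrated value.

```python
import numpy as np

def average_clean_channels(window: np.ndarray,
                           max_std_uv: float = 200.0) -> np.ndarray:
    """Average a window of EEG across channels, dropping any channel
    whose variability is wildly out of range (noise, not signal).

    window: (n_channels, n_samples) array in microvolts.
    max_std_uv: illustrative threshold, not a calibrated value.
    """
    stds = window.std(axis=1)
    good = stds < max_std_uv
    if not good.any():
        raise ValueError("No clean channels in this window")
    return window[good].mean(axis=0)
```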
Step Four: The Dashboard
I wanted to see my brainwaves in real time. Not a number in a terminal — actual live bars I could watch move.
The system built it. Five vertical bars — one for each brainwave frequency band — color-coded and updating multiple times per second. Delta in purple. Theta in blue. Alpha in green. Beta in amber. Gamma in red.
The browser opened automatically. I was looking at my own brainwaves within about thirty seconds of asking for the dashboard.
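Under the hood, each bar is just the power in one slice of the frequency spectrum, recomputed on every window of samples. A minimal version of that computation, assuming the Muse 1's published 220 Hz sample rate and one common convention for the band edges (both vary by source):

```python
import numpy as np
from scipy.signal import welch

FS = 220  # Muse 1's published sample rate
BANDS = {
    "delta": (1, 4),    # purple
    "theta": (4, 8),    # blue
    "alpha": (8, 13),   # green
    "beta": (13, 30),   # amber
    "gamma": (30, 44),  # red
}

def band_powers(signal: np.ndarray) -> dict[str, float]:
    # Welch's method gives a smoothed power spectral density estimate;
    # each bar is the total power inside one band's frequency range.
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
        for name, (lo, hi) in BANDS.items()
    }
```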
Step Five: The First Thing I Tested
This is where it got genuinely strange and exciting.
I’d just gotten the dashboard running. The bars were moving. I wanted to see if the system was actually responding to me. On a whim, I bit my hand.
The gamma bar — the red one, highest frequency — spiked.
Gamma is associated with intense sensory processing and rapid cross-region brain activity. Sharp physical input drives it (so, in fairness, does the electrical noise from clenched jaw muscles, which a hand bite involves). My EEG showed it in real time, within half a second of the bite.
I don’t recommend biting your hand as a wellness practice. But as proof that this feedback loop is real and responsive? Hard to argue with.
That’s as far as I’ve gotten. I’ve barely scratched the surface — the rest of the bands are sitting there waiting to be explored. But the foundation is working, and the next session should be interesting.
What This Actually Means
Ninety minutes. A six-year-old headband. A live brainwave dashboard built entirely through conversation — no prior code, no prior plan.
I didn’t write any of it. I described what I wanted and responded to what the system found. The technical work — and there was a lot of it — happened in the conversation.
This is the part people underestimate: it’s not just that AI can write code. It’s that it can troubleshoot in real time, adapt when things don’t work, read the error, change the approach, and keep going until the thing is actually running. That’s a different capability than autocomplete.
—
Applied Intelligence builds AI systems for small businesses. If you want something like this — or something far less weird — let’s talk.
Ready to put this to work in your business?
Applied Intelligence helps San Diego and Southern California businesses automate workflows, reduce manual work, and grow without adding headcount. The first conversation is free and takes 20 minutes.
Book a Free Discovery Call →