Some of the best conversations in enterprise IT happen at small, practitioner-level events — not the big conference keynotes where vendors present aspirational slides, but the rooms where CIOs and CISOs sit around a table and talk about what's actually hard.

The Florida CIO & CISO Community at Gartner C-Level Communities was that kind of room. A day of sessions with senior IT and security leaders comparing what's working, what isn't, and what they're watching heading into the second half of the year. Here's my honest take on what came out of it.

Photos: FL CIO & CISO Community at Gartner C-Level Communities, Florida, May 2025 — Christian Merkel with fellow CIO and CISO leaders, and the R.I.S.E. Framework presentation.

1. Identity-first is the floor, not the ceiling

The security conversation kept coming back to identity — least privilege access, continuous authentication, provenance tracking for AI-generated actions. Not as emerging concepts but as baseline expectations. CISOs who haven't fully closed the identity layer are operating with an open surface area that keeps expanding as AI agents proliferate.

What struck me: the organizations that have done this well didn't build it because they anticipated AI agents. They built it because their Zero Trust architecture required it. The work they did on conditional access policies, identity governance, and MFA hygiene three or four years ago is now the prerequisite for deploying agentic AI safely. The organizations that skipped or deferred that work are now discovering it was never optional.

Identity-first isn't a CISO priority. It's the infrastructure that determines what every other priority is allowed to do.
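As a sketch of what an identity-first gate for agent actions might look like, here is a minimal default-deny check with provenance logging. The agent name, scope strings, and log format are illustrative assumptions, not any particular vendor's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    name: str
    scopes: set  # explicitly granted permissions only

@dataclass
class ProvenanceLog:
    entries: list = field(default_factory=list)

    def record(self, agent: str, action: str, allowed: bool) -> None:
        # Every agent action is logged, allowed or not, with a timestamp.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "allowed": allowed,
        })

def authorize(agent: AgentIdentity, action: str, log: ProvenanceLog) -> bool:
    """Default-deny: an action runs only if its scope was explicitly granted."""
    allowed = action in agent.scopes
    log.record(agent.name, action, allowed)
    return allowed

log = ProvenanceLog()
reporting_bot = AgentIdentity("reporting-bot", scopes={"read:tickets"})

authorize(reporting_bot, "read:tickets", log)    # within grant -> allowed
authorize(reporting_bot, "delete:tickets", log)  # never granted -> denied
```

The point of the sketch is the shape, not the code: the grant list is explicit, the default is deny, and every decision leaves a provenance trail an auditor can replay.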

2. 10-minute wins compound. Moonshots mostly don't.

There was a consistent thread through the practitioner discussions: the AI and automation wins that are actually sticking aren't the large-scale transformation projects. They're the incremental, specific, measurable improvements that stack on each other over quarters.

Reduce one manual step in a workflow. Automate one approval chain that was running through email. Build one dashboard that replaces a weekly report someone was assembling by hand. These are unglamorous. They don't make good keynote slides. They also compound — each one frees capacity that gets redirected to the next thing.

The organizations with the most impressive AI deployment stories weren't the ones who launched the biggest projects. They were the ones with the highest discipline for shipping small things completely before starting the next one. Big-bang bets are visible. Compounding outcomes are what actually changes an organization.
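As a back-of-envelope illustration of the compounding claim: assume each quarter's shipped improvement frees a fixed 5% of the remaining manual effort. The rate is purely illustrative, but the shape of the curve is the argument:

```python
def remaining_manual_effort(quarters: int, win_rate: float = 0.05) -> list:
    """Track normalized manual workload as small wins compound per quarter."""
    effort = 1.0  # start at 100% of today's manual workload
    history = []
    for _ in range(quarters):
        effort *= (1 - win_rate)  # each quarter's small win frees capacity
        history.append(round(effort, 3))
    return history

print(remaining_manual_effort(8))
# after 8 quarters, roughly two-thirds of the original manual effort remains
```

A third of the manual workload gone in two years, with no single project large enough to make a keynote slide.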

3. Practical AI means knowing where it fails

The AI conversation at practitioner events sounds different from the AI conversation at vendor events. Practitioners aren't talking about capability ceilings. They're talking about failure modes — where the model hallucinates, where the output requires human review before it can be acted on, where the training data had gaps that didn't surface until production.

The executives who are deploying AI effectively are watching the dark sides: drift in model behavior over time, user over-trust in automated outputs, the gap between what the AI says it can do and what it actually does when the edge cases hit. These aren't reasons to slow down AI deployment. They're the operational disciplines that make AI deployment sustainable.

Practical AI isn't about limiting the ambition. It's about building the feedback loops that catch problems before they scale.

4. Automation should close the loop

One of the clearest frameworks from the sessions: the difference between useful automation and dashboard proliferation is whether the system takes action, not just reports state. Data → decision → action, in a closed loop. Dashboards that require a human to interpret and then manually trigger a response are not automation. They're expensive notification systems.

The organizations getting the most value from their monitoring and analytics infrastructure are the ones who've connected the data output directly to a decision rule — and then directly to an action. Auto-remediation. Auto-escalation. Auto-provisioning. The human stays in the loop for the exceptions; the routine cases don't need one.
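The data → decision → action loop described above can be sketched in a few lines. The metric names, thresholds, and remediation actions here are illustrative assumptions, not a reference implementation:

```python
def decide(event: dict) -> tuple:
    """Decision rule: map a telemetry event to an action, or do nothing."""
    if event["metric"] == "disk_used_pct" and event["value"] >= 90:
        return ("auto_remediate", "expand_volume")
    if event["metric"] == "failed_logins" and event["value"] >= 100:
        return ("auto_escalate", "page_oncall")
    return ("ignore", None)

def run_loop(events: list) -> list:
    """Data -> decision -> action: act directly, record what was done."""
    actions = []
    for event in events:
        verdict, action = decide(event)
        if verdict == "auto_remediate":
            actions.append(f"remediated:{action}:{event['host']}")
        elif verdict == "auto_escalate":
            # humans stay in the loop only for the exceptions
            actions.append(f"escalated:{action}:{event['host']}")
    return actions

events = [
    {"host": "db-01", "metric": "disk_used_pct", "value": 93},
    {"host": "web-02", "metric": "disk_used_pct", "value": 40},
    {"host": "vpn-01", "metric": "failed_logins", "value": 250},
]
print(run_loop(events))
# db-01 is remediated, vpn-01 is escalated; web-02 needs no action at all
```

The contrast with a dashboard is the return value: the loop emits actions taken, not charts for a human to interpret and act on manually.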

This is where Enterprise Engineering work on telemetry, observability, and event-driven architecture pays off — not as infrastructure for its own sake, but as the foundation for automation that actually closes the loop instead of just surfacing more noise for humans to sort through.

5. Physical and cyber risk need one picture

The convergence theme in the security discussions was real: the separation of physical security and cybersecurity risk into siloed functions is creating blind spots. The attack surfaces overlap. An unauthorized physical access event can be a precursor to a network intrusion. A credential compromise can enable physical access. The organizations treating these as separate risk domains risk missing correlation signals.
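One hedged illustration of what a unified picture enables: flagging a network login that follows a denied badge swipe by the same user within a short window. The event shapes, field names, and the 30-minute window are assumptions made for the sketch:

```python
from datetime import datetime, timedelta

def correlate(badge_events: list, login_events: list,
              window_minutes: int = 30) -> list:
    """Pair denied badge events with same-user logins inside the window."""
    window = timedelta(minutes=window_minutes)
    alerts = []
    denied = [b for b in badge_events if b["result"] == "denied"]
    for b in denied:
        for login in login_events:
            same_user = login["user"] == b["user"]
            delta = login["ts"] - b["ts"]
            if same_user and timedelta(0) <= delta <= window:
                alerts.append((b["user"], b["door"], login["host"]))
    return alerts

badge = [{"user": "jdoe", "door": "server-room", "result": "denied",
          "ts": datetime(2025, 5, 1, 9, 0)}]
logins = [{"user": "jdoe", "host": "core-switch",
           "ts": datetime(2025, 5, 1, 9, 12)}]
print(correlate(badge, logins))
# [('jdoe', 'server-room', 'core-switch')]
```

Neither event is alarming in its own silo; the signal only exists when the two feeds land in one picture.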

There's no one-size-fits-all answer here — the right governance model depends on the organization's structure, regulatory environment, and risk profile. But the direction is clear: the executive conversation is moving toward a unified risk picture, even when the operational teams managing each domain remain distinct.

6. Routines and values keep teams steady. Frameworks help.

The leadership track conversation was the one I came back to most. How do you keep a globally distributed IT organization stable through rapid change — new AI tools, restructuring, shifting priorities, security incidents?

The answer that resonated most: routines and values, not just communication. Teams that have clear operating rhythms — regular touchpoints, predictable decision-making processes, consistent criteria for what gets escalated and what gets handled at the team level — absorb turbulence better than teams that rely on leadership announcements to stay oriented.

I talked about the R.I.S.E. framework we use internally — Resilience, Impact, Speed, Empowerment — as the values architecture that helps teams navigate ambiguity without constant direction from above. The framing that landed: a team that knows its values can make good decisions independently. A team that only knows its tasks needs a manager to function.

That's the leadership investment that compounds. Not faster escalation paths. Fewer things that need to escalate in the first place.

The best takeaway from any practitioner event isn't a slide or a framework. It's the conversation after the session — what someone is actually wrestling with, what's working in their context, what they tried that didn't work. That knowledge doesn't show up in vendor documentation or analyst reports. It lives in the rooms where practitioners talk to each other.

Why events like this matter

The value of practitioner-level gatherings isn't the formal content. It's the density of candid conversation compressed into a short window. You learn more in a day of honest peer exchange than in weeks of reading industry reports — because the reports describe the aspiration and the practitioners describe the reality.

What I took away from Florida: the organizations that are ahead aren't ahead because they have better technology. They're ahead because they have better operating discipline around the technology they have — clearer identity governance, more rigorous experimentation cycles, tighter feedback loops between data and action, and leadership teams that have invested in the foundations that make all of it possible.

The technology is accessible to everyone. The discipline is not.