The first Internet Identity Workshop (IIW) was launched in 2005. Twenty years and 40 workshops later, we stand at the beginning of a generational fight for the future of our personal data. It's clear we are entering an era where our digital identities are about to be redefined by artificial intelligence.
Imagine a future where autonomous AI agents act on your behalf—filtering, negotiating, and even shaping your digital presence. I call this vision “autonomous self identity”. These agents will not only model our hopes, dreams, and desires but also serve as dynamic representations of who we are in the digital realm.
Yet, without robust privacy-preserving foundations, these digital twins risk being confined to “digital jails”. Today, social media and ad networks continually test variations of ads or posts on us, using rapid-fire simulations to predict our behavior. Extrapolate that to AI agents conducting years’ worth of psychological profiling in seconds. The potential for misuse is enormous, and the implications for personal freedom are profound.
In March 2024, I wrote an internal memo forecasting three critical needs as AI agents grow increasingly autonomous (we ultimately made this vision public under an umbrella term called Verifiable AI or vAI):
- Know Your Agent (KYA): Just as we have KYC (Know Your Customer) and KYB (Know Your Business), we need robust mechanisms to verify and credential AI agents.
- Granular Permissioning: Current technologies lack the granularity required to manage the permissions these agents will need.
- Secure AI Wallets: Beyond facilitating payments, these wallets must store identity credentials and verifiable authorizations for both humans and AI.
The most viable way this would happen is if AI apps or “agents” had wallets — not only to make payments securely, but also to store identity credentials from the humans they act on behalf of, and from the humans (or AIs!) that created them.
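To make the memo's three needs concrete, here is a minimal sketch of what an agent wallet holding a delegation credential might look like. This is a hypothetical data model of my own construction, not any real standard (though it loosely echoes W3C-style verifiable credentials); all names and fields are illustrative assumptions.

```python
# Hypothetical sketch of an AI agent wallet: it stores delegation
# credentials recording which human the agent acts for (KYA), what it is
# permitted to do (granular permissioning), and until when.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DelegationCredential:
    issuer: str             # the human (or AI) delegating authority
    subject: str            # the agent receiving the authority
    permissions: frozenset  # granular scopes, e.g. "calendar:read"
    expires: int            # unix timestamp

@dataclass
class AgentWallet:
    agent_id: str
    credentials: list = field(default_factory=list)

    def is_permitted(self, action: str, now: int) -> bool:
        """True if any unexpired credential issued to this agent grants the action."""
        return any(
            action in c.permissions and now < c.expires
            for c in self.credentials
            if c.subject == self.agent_id
        )

wallet = AgentWallet("agent:alice-assistant")
wallet.credentials.append(DelegationCredential(
    issuer="did:example:alice",
    subject="agent:alice-assistant",
    permissions=frozenset({"calendar:read", "email:draft"}),
    expires=1_900_000_000,
))

print(wallet.is_permitted("calendar:read", now=1_800_000_000))   # True
print(wallet.is_permitted("bank:transfer", now=1_800_000_000))   # False
```

The key design point is that authority is scoped and expiring rather than all-or-nothing: an agent can draft your email without being able to move your money.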
What I didn’t know when I wrote that memo was how this would ever work without an industry standard for how AI apps interfaced with the “outside” world. In the past four months, the centre of gravity has shifted significantly as advancements such as the Model Context Protocol (MCP) and the Agent-to-Agent Protocol (A2A) slot in the final pieces of the puzzle for making “autonomous self identity” possible. (In case you missed it, check out this detailed perspective on how MCP is pioneering trust in the age of AI.)
Despite these breakthroughs, at IIW 38 only one session touched on AI and identity; at IIW 39 there were only three. A complete redefinition of how our personal data and identity are handled online is not a distant threat! At IIW 40, maybe only 10-20% of the finest digital identity experts in the world had even heard of MCP. There were perhaps fewer than 10 sessions about how to tackle the new challenges that the onset of autonomous self identity will bring (three of those were run by me!).
I don’t think our industry has gotten the memo on how crucial it will be to create a practical path forward for the oncoming generation of autonomous self identity apps. This new generation of AI-powered apps will be incredibly addictive and simple to use, sneaking into our lives much the way Facebook, Instagram, and TikTok did. And unless we do something about it, they’ll be extremely hungry for our personal data.
And so it worries me that the kinds of solutions and thinking I saw at IIW 40 do not really fathom how quickly the world is changing, or that privacy-first approaches CANNOT mean subpar user experiences. We cannot seriously be telling people to scan three different QR codes or to care about the minutiae of different DID methods and protocols.
It’s probably fair to say that the companies that defined the Era of Big Tech and Social Media — Facebook, Twitter et al (all founded around the time IIW began) — won the battle on what “normal” feels like for privacy and confidentiality of our personal data.
But the last time this fight happened, we didn’t have the plethora of privacy-preserving technologies that we now have at our disposal. It would take another four years after that first IIW for Bitcoin to launch, and ten before a programmable blockchain like Ethereum was realised.
The reason I bring up those two milestones is that the technological advances that enabled them also sparked fundamental advances in cryptography, bringing to life privacy-preserving technologies such as zero-knowledge proofs, selective disclosure, fully homomorphic encryption, BBS+ signatures, zk-SNARKs (like PLONK) and zk-STARKs.
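Selective disclosure is the most tangible of these to demonstrate. Below is a simplified illustration of the salted-hash approach used by schemes like SD-JWT: the issuer commits to hashes of salted claims, and the holder later reveals only the (salt, value) pairs they choose, so the verifier learns nothing about withheld claims. This sketch omits the signatures a real scheme places over the commitments; the function names are my own.

```python
# Simplified salted-hash selective disclosure: commit to all claims,
# reveal only some. Real schemes (e.g. SD-JWT) sign the commitments.
import hashlib
import json
import secrets

def commit(claims: dict):
    """Issuer side: salt each claim and compute a hash per claim.
    Returns (public digests, private disclosures kept by the holder)."""
    disclosures = {
        name: (secrets.token_hex(16), value) for name, value in claims.items()
    }
    digests = {
        name: hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()
        for name, (salt, value) in disclosures.items()
    }
    return digests, disclosures

def verify(digests: dict, name: str, salt: str, value) -> bool:
    """Verifier side: recompute the hash for one revealed claim."""
    digest = hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()
    return digests.get(name) == digest

digests, disclosures = commit({"name": "Alice", "age": 34, "city": "Wellington"})
# The holder chooses to reveal only their age; name and city stay hidden.
salt, value = disclosures["age"]
print(verify(digests, "age", salt, value))   # True
print(verify(digests, "age", salt, 21))      # False: tampered value
```

The per-claim random salt is what makes this privacy-preserving: without it, a verifier could brute-force low-entropy claims (like age) from the hashes alone.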
What if the next generation of digital interactions is not mediated by faceless algorithms, but by AI agents that truly understand — and protect — our identities? Can we envision a future where every digital footprint is secured by cryptographic assurances, and every interaction is governed by user consent rather than corporate convenience? Will we continue to let our privacy be shaped by legacy systems that encourage data exploitation, or will we embrace a paradigm where personal control and trust are built into the very fabric of the digital economy? The choices we make now will determine the kind of digital society we build for tomorrow.
A Call to Action
We have witnessed transformative breakthroughs in cryptography and digital identity over the past two decades. With AI now accelerating at an unprecedented pace, every builder in the privacy and identity space must act immediately.
This is not about any specific company. This is not about any specific project or initiative. This is not about any specific technology or protocol. It’s about ensuring that the new era of autonomous self identity is built on robust, user-centric, and privacy-preserving foundations. The Era of Hyperpersonalisation is upon us, and we have one chance to get it right.