
20 Years On: The oncoming battle for privacy in the Era of Hyperpersonalisation

The first Internet Identity Workshop (IIW) was held in 2005. Now, 20 years and 40 workshops later, we stand at the beginning of a generational fight for the future of our personal data. It’s clear: we are entering an era where our digital identities are about to be redefined by artificial intelligence.

Imagine a future where autonomous AI agents act on your behalf—filtering, negotiating, and even shaping your digital presence. I call this vision autonomous self identity. These agents will not only model our hopes, dreams, and desires but also serve as dynamic representations of who we are in the digital realm.

Yet, without robust privacy-preserving foundations, these digital twins risk being confined to “digital jails”. Today, social media and ad networks continually test variations of ads or posts on us, using rapid-fire simulations to predict our behavior. Extrapolate that to AI agents conducting years’ worth of psychological profiling in seconds. The potential for misuse is enormous, and the implications for personal freedom are profound.

In March 2024, I wrote an internal memo forecasting three critical needs as AI agents grow increasingly autonomous (we ultimately made this vision public under an umbrella term called Verifiable AI or vAI):

  • Know Your Agent (KYA): Just as we have KYC (Know Your Customer) and KYB (Know Your Business), we need robust mechanisms to verify and credential AI agents.
  • Granular Permissioning: Current technologies lack the detail required to manage the permissions these agents will need.
  • Secure AI Wallets: Beyond facilitating payments, these wallets must store identity credentials and verifiable authorizations for both humans and AI.
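
To make those three needs a little more concrete, here is a minimal TypeScript sketch of the kinds of data structures they imply. All of the type names, fields, and scope strings below are hypothetical and purely illustrative, not a reference to any existing standard or SDK.

```typescript
// Hypothetical "Know Your Agent" (KYA) credential: who built the agent,
// who it acts for, and proof that it has been authorised.
interface AgentCredential {
  agentId: string;          // e.g. a DID identifying the agent itself
  controllerId: string;     // DID of the human (or AI) that created it
  actsOnBehalfOf: string;   // DID of the human the agent represents
  issuedAt: string;         // ISO 8601 timestamps
  expiresAt: string;
  proof: string;            // signature from the issuer/controller
}

// Granular permissioning: scoped, constrained, and revocable grants
// rather than an all-or-nothing API key.
interface PermissionGrant {
  scope: string;            // e.g. "calendar:read" or "payments:send"
  constraints?: {
    maxAmount?: number;     // e.g. a per-transaction spending cap
    validUntil?: string;
    audience?: string[];    // which counterparties the grant applies to
  };
  grantedBy: string;        // DID of the granting human
  proof: string;
}

// A secure AI wallet: holds identity credentials and verifiable
// authorisations alongside the means to pay.
interface AgentWallet {
  credentials: AgentCredential[];
  permissions: PermissionGrant[];
  paymentKeyIds: string[];  // references to signing keys held in secure storage
}
```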

The most viable way this would happen is if AI apps or “agents” had wallets: not only to hold secure means of making payments, but also to store identity credentials from the humans they were working on behalf of and from the humans (or AIs!) that had created them.

What I didn’t know back when I wrote that memo was how any of this could work without an industry standard for how AI apps interface with the “outside” world. In the past four months, the centre of gravity has shifted significantly: advancements such as Model Context Protocol (MCP) and A2A (Agent-to-Agent Protocol) slot in the final pieces of the puzzle for making “autonomous self identity” possible. (In case you missed it, check out this detailed perspective on how MCP is pioneering trust in the age of AI.)
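
To show where a Know Your Agent check could sit in an MCP- or A2A-style integration, here is a rough sketch of a tool-call handler that verifies the calling agent before doing anything else. It reuses the hypothetical AgentCredential and PermissionGrant types from the sketch above; the verification helpers are placeholders, not part of either protocol specification.

```typescript
// Sketch of a KYA gate in front of a tool call. The shape of the request
// and the helper functions are assumptions for illustration only.
interface ToolRequest {
  tool: string;                       // e.g. "bank.transfer"
  args: Record<string, unknown>;
  agentCredential: AgentCredential;   // from the earlier sketch
  permissions: PermissionGrant[];
}

async function handleToolCall(req: ToolRequest): Promise<unknown> {
  // 1. KYA: is this agent who it claims to be, and is its credential still valid?
  if (!(await verifyCredential(req.agentCredential))) {
    throw new Error("Unverified agent: refusing tool call");
  }

  // 2. Granular permissioning: does a grant cover this specific tool?
  const grant = req.permissions.find((p) => p.scope === req.tool);
  if (!grant || !(await verifyGrant(grant, req.agentCredential))) {
    throw new Error(`No valid grant for scope "${req.tool}"`);
  }

  // 3. Only now dispatch to the underlying tool implementation.
  return dispatchTool(req.tool, req.args);
}

// Hypothetical helpers: in practice these would check signatures, expiry,
// and revocation status against a trust registry.
declare function verifyCredential(c: AgentCredential): Promise<boolean>;
declare function verifyGrant(g: PermissionGrant, c: AgentCredential): Promise<boolean>;
declare function dispatchTool(tool: string, args: Record<string, unknown>): Promise<unknown>;
```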

Despite these breakthroughs, at IIW 38 only one session touched on AI and identity; at IIW 39 there were only three. At IIW 40, perhaps only 10-20% of the finest digital identity experts in the world had even heard of MCP, and there were fewer than 10 sessions about how to tackle the new challenges that the onset of autonomous self identity will bring (three of those were run by me!). A complete redefinition of how our personal data and identity are handled online is not a distant threat!

I don’t think our industry has gotten the memo on how crucial it will be to create a practical path forward for the oncoming generation of autonomous self identity apps. This new generation of AI-powered apps will be incredibly addictive and simple to use, sneaking into our lives much the way Facebook, Instagram, and TikTok did. And if we don’t do something about it, they’ll be extremely hungry for our personal data.

And so it worries me that the kinds of solutions and thinking I saw at IIW 40 don’t really fathom how quickly the world is changing, or that privacy-first approaches CANNOT be subpar user experiences. We cannot seriously be telling people to scan three different QR codes or to care about the minutiae of different DID methods and protocols.

It’s probably fair to say that the companies that defined the Era of Big Tech and Social Media — Facebook, Twitter et al (all founded around the time IIW began) — won the battle on what “normal” feels like for privacy and confidentiality of our personal data.

But the last time this fight happened, we didn’t have the plethora of privacy-preserving technologies that we now have at our disposal. It would take another four years after that first IIW for Bitcoin to launch, and ten years before a programmable blockchain like Ethereum was realised.

The reason I bring up those two milestones is that the technological advances which enabled them also sparked fundamental advances in cryptography, bringing a wealth of privacy-preserving technologies to life: zero-knowledge proofs, selective disclosure, fully homomorphic encryption, BBS+ signatures, zk-SNARKs (like PLONK), and zk-STARKs.
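
To make “selective disclosure” concrete, here is a sketch of a BBS+-style flow in which a holder proves a single attribute from a signed credential without revealing the rest. The function names below are hypothetical placeholders, not a real library API.

```typescript
// Illustrative shape of a selective disclosure flow over a multi-message
// signature (as BBS+ enables). All functions are hypothetical stand-ins.
interface SignedCredential {
  attributes: Record<string, string>; // e.g. { name, dateOfBirth, nationality }
  signature: Uint8Array;              // one signature covering every attribute
}

// Issuer signs all attributes at once.
declare function issueCredential(attributes: Record<string, string>): Promise<SignedCredential>;

// Holder derives a proof that reveals only the chosen attributes,
// without contacting the issuer and without exposing anything else.
declare function deriveProof(credential: SignedCredential, reveal: string[]): Promise<Uint8Array>;

// Verifier checks the proof against the issuer's public key and
// learns only the revealed attributes.
declare function verifyProof(proof: Uint8Array, revealed: Record<string, string>): Promise<boolean>;

async function example(): Promise<void> {
  const credential = await issueCredential({
    name: "Alice Example",
    dateOfBirth: "1990-01-01",
    nationality: "NL",
  });

  // Reveal nationality only; name and date of birth stay hidden.
  const proof = await deriveProof(credential, ["nationality"]);
  const accepted = await verifyProof(proof, { nationality: "NL" });
  console.log("verifier accepted:", accepted);
}
```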

What if the next generation of digital interactions is not mediated by faceless algorithms, but by AI agents that truly understand — and protect — our identities? Can we envision a future where every digital footprint is secured by cryptographic assurances, and every interaction is governed by user consent rather than corporate convenience? Will we continue to let our privacy be shaped by legacy systems that encourage data exploitation, or will we embrace a paradigm where personal control and trust are built into the very fabric of the digital economy? The choices we make now will determine the kind of digital society we build for tomorrow.

A Call to Action

We have witnessed transformative breakthroughs in cryptography and digital identity over the past two decades. With AI now accelerating at an unprecedented pace, every builder in the privacy and identity space must act immediately.

This is not about any specific company. This is not about any specific project or initiative. This is not about any specific technology or protocol. It’s about ensuring that the new era of autonomous self identity is built on robust, user-centric, and privacy-preserving foundations. The Era of Hyperpersonalisation is upon us, and we have one chance to get it right.


“My Plaid” and how DeFi identity is coming to disrupt Open Banking

Was intrigued to read in the latest Fintech 🧠 Food that @Plaid has launched a beta product called My Plaid (http://my.plaid.com) that allows users to see which companies they are sharing their financial data with 🧐

Naturally, I wanted to take it out for a spin…

For now, it doesn't seem to let you see which companies have access to your data. You can only add accounts, like any personal finance app out there, and see an aggregated view of them.

So, nothing *too* differentiated for now 🤷🏽‍♂️

Where it potentially breaks down is that this will likely only work where the origin/destination of financial data uses Plaid APIs.

The alternative – as @ACTobin from @evernym put it – is to “make the user their own API” 💡

And THAT is why I'm bullish about the application of #selfsovereignidentity in #fintech:

1. It goes beyond the scope of what data is available under Open Banking (mostly current accounts & credit cards)
2. It doesn't rely on a single, proprietary vendor like Plaid to work
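
As a rough sketch of what “making the user their own API” could look like in practice (every name below is hypothetical, not an existing SDK): instead of pulling account data through an aggregator, the fintech asks the user's wallet for a verifiable presentation of a bank-issued credential and verifies it directly.

```typescript
// Hypothetical sketch: a fintech requests account data straight from the
// user's wallet as a verifiable presentation, with no aggregator in between.
interface PresentationRequest {
  credentialType: string;        // e.g. "BankAccountStatement"
  requestedAttributes: string[]; // e.g. ["iban", "balance", "currency"]
  verifier: string;              // DID of the requesting fintech
}

interface Presentation {
  attributes: Record<string, string>;
  issuer: string;                // DID of the bank that issued the credential
  proof: Uint8Array;             // signature binding the data to the issuer
}

// The wallet decides what to share and returns a signed presentation.
declare function requestFromWallet(req: PresentationRequest): Promise<Presentation>;
declare function verifyPresentation(p: Presentation): Promise<boolean>;

async function fetchBalance(): Promise<void> {
  const presentation = await requestFromWallet({
    credentialType: "BankAccountStatement",
    requestedAttributes: ["iban", "balance", "currency"],
    verifier: "did:example:fintech-app",
  });

  if (await verifyPresentation(presentation)) {
    console.log("Balance:", presentation.attributes.balance);
  }
}
```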

In a way, I'm glad Plaid is doing this now because it demonstrates clear product-market fit and demand for digital identity services, a need that we *can* solve in a more efficient and privacy-preserving fashion @cheqd_io 👍🏽

It’s taken SEVEN years since Open Banking regulations were defined in Europe to get to any semblance of consistent access that lets users take their current/card account data elsewhere.

And this has arguably been GOOD for competition and more consumer choice.

If the financial services industry tried to solve data portability with traditional means, I can see this taking another half a decade.

Do we really want to wait that long? Or will we see bolder fintechs embracing new standards in DeFi identity eat the lunch of incumbents again?

Originally tweeted by Ankur Banerjee (@ankurb) on 22 August 2021.