This is an interview with Harry Halpin. Harry Halpin is a technologist and philosopher who moved from building Indymedia during the anti-globalisation protests to earning a Ph.D. in Informatics under the philosopher Andy Clark and to working under Tim Berners-Lee at the W3C/MIT, where he led the Web Cryptography API. As mass surveillance escalated after the Snowden revelations, Halpin co-founded Nym Technologies as its CEO, steering the development of a decentralised, blockchain-incentivised mixnet designed to withstand global passive adversaries that can monitor the entire Internet, like the NSA or Palantir. His role at Nym fuses cryptographic research, cryptoeconomic design, and an explicitly political commitment to embedding freedom-preserving rights directly into internet infrastructure.
DIFFRACTIONS: Could you share the story behind the creation of Nym? What were the individual backgrounds, interests, or hobbies that intersected and ultimately led to the development of Nym? How did the diverse perspectives shape the project’s vision and direction?
Harry Halpin: The genesis of Nym really comes from a desire to rebuild the foundations of the internet in such a way that they enshrine fundamental rights into the architecture of the internet itself. This is important because the internet is incredibly powerful – perhaps the most powerful technological development since the Gutenberg Press. Just as the release of the Gutenberg Press triggered various civil wars and even the collapse and reformulation of empires and nation-states, similarly the internet is causing massive political, philosophical, and ultimately ontological upheaval in the world.
Nym is an attempt to graft onto the internet a system that makes it impossible for someone to surveil you or control you. This is in line with the ideals of philosophy since the time of Socrates, and to be frank, the Enlightenment – the concept of the autonomy of reason, the ability to be critical, to reinvent yourself despite past actions, and the ability to take power over your own agency. Having a permanent record of your activities and those activities being constantly under surveillance for the purposes of control is incompatible with the ideals of the Enlightenment and many other cultures. We’re trying to deliver not just privacy-enhancing technologies, but technologies that preserve and even spread human freedom.
I became interested in this from quite a young age. As an undergraduate student, I was originally a systems administrator at the University of North Carolina, and hosted my first website on Sunsite, the original place on the Internet that let you download GNU/Linux. At that point in the United States and across the world, we had the anti-globalization movement, which was based on the premise that another world was possible. The primary problem we encountered was that when we had a protest, the mainstream media would silence us by simply not reporting on these protests. So many of my friends, in particular Evan Henshaw-Plath (also known as “Rabble”) and others, helped build a network of websites called Indymedia that allowed anyone to publish news, and I started the site for North Carolina Indymedia on Sunsite. Indymedia stood for “Independent Media.” It was maybe the first social media site that allowed anyone to report on live news. Eventually its core concepts were commercialized and led to Twitter thanks to Evan Henshaw-Plath and Blaine Cook, former Indymedia volunteers who were founding engineers at Odeo and Twitter. I wrote some of this story down with Evan in “From Indymedia to Tahrir Square.”
After I received my doctorate, I started working for Tim Berners-Lee on web standards for protocols, which I believed at the time was one of the best ways for us to effectively evolve the web into a superior form of knowledge sharing, which would in turn enable new forms of collective intelligence. That’s the original vision of Tim Berners-Lee, Douglas Engelbart, and J.C.R. Licklider, a vision that I found very compelling. Their seemingly apolitical vision of the Semantic Web, human augmentation, and human–machine symbiosis is ultimately political, as it would allow the decentralization of power by enabling humans to work together in a vastly more effective manner across time and space. Through my activities as an anarchist and activist, I knew humanity was facing problems that seemed unsolvable, that seemed to go beyond the grasp of our current technology and of what is even thinkable by individual humans – problems such as the climate crisis and other species-level catastrophes.
As artificial intelligence started taking off, I built an early language model during my doctorate as part of a paper with Victor Lavrenko. After my experiments with machine learning, including a short stint at Yahoo, it became self-evident how easy it was to use machine learning and AI to surveil and control people. These models were incredibly effective even almost twenty years ago, when the amount of data we had was very small. The effectiveness of AI is directly correlated to the amount of data that one can gather about someone, which started worrying me as the web was transforming into a surveillance machine.
At the same time, we saw the internet become more and more useful to people wanting to change the world. The Arab Spring was incredibly important to me, as it provided inspiration during my darkest moments, when I was being persecuted by the British police for climate change activism – a whole other story I won’t go into here – which effectively drove me into exile from Europe. I helped friends in places like Egypt get the word out about their protests – using the last working internet connection in Cairo – and I saw in action how the internet helped mobilize ordinary people, not just activists. Yet internet access by itself was not enough, for I saw how horribly wrong attempted revolutions could go with the collapse of Syria, where my friend Bassel Safadi (Bassel Khartabil) was murdered by the Assad regime, and the ideals of the Arab Spring ended up being upheld by the Kurds in their fight against ISIS and their autonomous administration of Rojava. As revolutionary movements spread and were defeated again and again, I became increasingly aware that the internet was not being used as a way to spread knowledge and empower individuals, but as a way to control, surveil, and eventually imprison and kill the very people who were needed to change the world. The online movement “Anonymous” was also very important in those days, and I started writing in 2012 about how anonymity would be important to escape what I termed the “surveillance machine” in my paper “The Philosophy of Anonymous.”
In the wake of the Egyptian Internet shutdown, I wrote a manifesto with Tim Berners-Lee, the inventor of the Web, about how access to the internet and its capabilities – which do in a very real sense extend your mind – should be a fundamental right, in an article called “Defend the Web”. Yet even the standards bodies like the W3C and MIT, where I worked, were slowly being captured by large corporate interests. For example, Google and Netflix were lobbying to make digital rights management – better thought of as digital restrictions management, as Richard Stallman put it – part of the web via a web standard called “Encrypted Media Extensions”, so people couldn’t share movies and files freely anymore. Except for Cory Doctorow and a few others, most of my colleagues thought I was crazy for opposing that standard and for vowing to leave my job if it was approved by the W3C. Then, during COVID, most of humanity probably wondered why they couldn’t download movies for free and instead had to pay Netflix or other streaming services. I still have tremendous respect for Tim Berners-Lee and everyone working on web standards, but we wanted to do more radical work to change the Internet.
So I started working on more experimental open-knowledge technologies with Bernard Stiegler, Alexandre Monnin, and Yuk Hui at Institut de Recherche et d’Innovation (IRI) in Paris. In France, under Bernard, we had a “golden age” of the philosophy of the web in Paris. We began trying to build these technologies: first with very small shoe-string budgets, then with slightly more generous budgets from the European Commission as part of joint projects. In light of the Snowden revelations, the European Commission wanted to build an anonymous communication system that was more powerful and more anonymous than even Tor. So I got together with many of my fellow researchers, including Claudia Diaz (KU Leuven), Aggelos Kiayias (University of Edinburgh), and George Danezis (University College London), as we thought: “Now’s the time. We’re going to build a real-world anonymous mixnet, because that’s the only technology that can withstand NSA surveillance”. We worked for multiple years on the PANORAMIX project to build a new mixnet architecture. It was a long and tremendously hard piece of work. In the end, we ran into the problem most academic projects have: we ran out of funds. No European government actually wanted to buy resistance to mass surveillance, as at the time they were all quite comfortable partners of the USA. Sadly, at the time, our technology was still not particularly usable by normal humans who were not technical experts.
Then out of the blue, friends of mine got interested in cryptocurrency. Amir Taaki introduced me to Bitcoin in 2014. After discussing the technical limits of Tor, Julian Assange sat down with me in the Ecuadorian embassy and said, more or less: “You’re always going to have time to work on philosophy, but the world needs this software to resist mass surveillance now.” In retrospect, one could argue that I made the mistake of taking career advice from Assange, but I think ultimately he was correct. So, I decided to stop my academic work to focus on how to get a team together to build the software. We launched a startup, which I called Nym after the old cypherpunk use of the term “nym” as a pseudonym, and worked for more than a year to raise funds for it. A lot of the original cypherpunks in cryptocurrency knew what a mixnet was and understood how powerful this anti-surveillance technology could be and were very supportive, from Adam Back to Zooko Wilcox-O’Hearn. We even got people like CZ (Changpeng Zhao) from Binance to invest, who actually wrote us our first check via Binance Labs. That’s how Nym was founded, thanks to both Assange and CZ.
With this funding, we hired the best programmers we could, including one of my co-founders, Dave Hrycyszyn – with whom I’d worked on Indymedia and anarchist activities in the early 2000s. Dave had just sold his last startup to Facebook but quit working there on the first day, as he was naturally more interested in blockchain and privacy. With many of the scientists from PANORAMIX like Claudia Diaz and Ania Piotrowska, we all came back together and started building the software. Even from our inception, everyone brought a very different skill set.
Claudia Diaz has been working on mixnets for the longest time, second perhaps only to David Chaum, the inventor of mixnets. She probably has the deepest knowledge of how to attack mixnets. Ania Piotrowska got her Ph.D. under George Danezis, who was a professor at University College London and also part of our original PANORAMIX project. Ania’s Loopix design is the fundamental mixnet design that we use at Nym. In brief, Chaumian mixnets batch messages together, which works for some cases like sending e-mail, while her continuous-time approach with Loopix works better on the open internet, where servers go up and down constantly and a user’s connection is not necessarily stable. So you have an academic wing at Nym, and then we also have coders like Dave Hrycyszyn, who, as mentioned before, jumped on board due to the mission and knew how to deliver industrial products. Dave brought in his wife, Jess Hrycyszyn – our best programmer – who led our product team.
Ultimately, some of us are political activists, and so we think not only about how cryptocurrency billionaires could use this mixnet software, but how it could benefit both normal people and people whose lives were at stake due to surveillance. Dave also brought in his friend Mark Sinclair, who succeeded him as CTO, and has been magnificent in pushing forward our technology and is currently leading our post-quantum upgrade. Our COO, Alexis Roussel, was involved in the Pirate Party of Switzerland and was one of the earliest Bitcoiners in Switzerland. The Pirate Party was very crucial, particularly in Switzerland, for the support of WikiLeaks after their payment processing came under attack and Bitcoin was used to help fund Wikileaks.
Alexis Roussel has been working on these issues of money transmission and privacy, as well as his new concept of digital integrity, for many years, and helps us at Nym navigate the Swiss ecosystem. At Nym, everyone brought something unique to the table. Since we started, other people interested in philosophy and politics have joined Nym – Jaya Klara Brekke came on board to lead our communications, and others like Hux and Casey Ford have demonstrated how their background in the social sciences and philosophy can make our software better. It’s very hard anywhere, much less in a software-driven start-up project, to get this many crypto-anarchists, philosophers, and coders together and not just have an interesting discussion, but really build something that we think will save people’s lives against the coming surveillance apocalypse we’re faced with.

DIFF: Could you elaborate on mixnets, starting with their foundational design by David Chaum and the evolution through early implementations like Mixmaster for anonymous email? Additionally, could you explain the Sphinx packet format — its role in mixnet design, how it ensures unlinkability and integrity, and why it’s considered a crucial improvement over earlier approaches?
HH: In traditional cryptographic systems, you use cryptography to hide the content of a message from unknown and possibly adversarial prying eyes. That’s what confidentiality and authentication mean, namely that the message is hidden from all but its intended recipient, and the intended recipient knows that it got the message from the sender and no one else. These properties are delivered by public key cryptography, given to people who are using email thanks to the excellent work of Phil Zimmermann and the cypherpunks with PGP.
There’s an underlying problem though. Even if I encrypt a message and send it to you, the fact that I sent you a message is still revealed. If you look at the early work of Julian Assange on mapping social networks – the text is called “State and Terrorist Conspiracies” – or the work of behavioral scientists like Sandy Pentland at MIT in his book on Social Physics, the fact of who you’re communicating with, when, and why often reveals more than the message itself. As the NSA said once when being debriefed after Snowden: “We killed people based on metadata.” That means that when you send a message, even if that message is encrypted, the fact that you send a message from a particular location to someone under surveillance can lead — and does lead today in places like Gaza and Iran — to drone strikes assassinating you. The metadata ends up being incredibly important, more important than the message.
In the early days of cryptography, cryptographers realized this. David Chaum asked himself: “Is there a way we could remove metadata from messages?” Chaum’s solution is elegant: you take a bunch of encrypted messages from different people, send them to the same server, mix them up, and then send them out. As the messages are all encrypted, they should be indistinguishable. In addition, this mixing process breaks the metadata of timing and correlation and provides a property called sender unlinkability: the sender of a message is unlinkable to the recipient of a message.
When David Chaum was designing these systems – in 1979 through 1981 – the internet was in its infancy and not widespread. Nearly a decade later, the early cypherpunks became very interested in anonymous email when they started leaking secrets such as the confidential manuals of the Scientology cult; they subsequently came under tremendous assault by the Church of Scientology’s lawyers. The cypherpunks then asked the engineering question: “Is there a way we could hide the origin and destination of these messages?” Due to this pressing need, cypherpunks such as Len Sassaman – recently accused of being Satoshi Nakamoto – built early anonymizing email systems based on Chaum’s original design, such as Mixmaster.
This Mixmaster system was, I believe, the first mixnet deployed in the real world. Mixmaster operated a chain of servers. It would take messages from different people, encrypt them, place them in a server, scramble them and – importantly – send them to another server, scramble them again, and finally release them. This cascade of servers worked pretty well for email. However, one problem was that it didn’t work reliably for general internet traffic, as a single server going down could take out the entire system. In terms of users, e-mail could reasonably be delayed, as many people didn’t really care if an e-mail took even a few minutes to arrive. This kind of delay is an inevitable by-product of mixnets, as encrypting messages takes time and intentionally delaying the messages to mix them with other messages takes even more time.
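As a rough illustration of this batch-and-shuffle idea, a cascade can be sketched as follows. This is a toy sketch with invented names, not Mixmaster itself: real remailers also pad messages to a fixed size and cryptographically re-encrypt them at every hop.

```python
import random

def batch_mix_cascade(messages, num_hops=3, seed=None):
    """Toy sketch of a Chaumian batch mix cascade: every hop waits for
    the whole batch, then independently shuffles it before forwarding,
    so the output order is unlinkable to the input order."""
    rng = random.Random(seed)
    batch = list(messages)
    for _ in range(num_hops):
        rng.shuffle(batch)  # each server reorders the batch it received
    return batch
```

The point of chaining several hops is that every server would have to collude (or be compromised) to reconstruct the original ordering.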
One of the problems we wanted to solve with Nym was how to build a mixnet that could handle general-purpose internet access – so I can send a packet at any time and have it anonymized, similar to Tor and any other VPN. In the original Chaumian mixnet, messages are sent in a batch: you send a message, I send a message, and many other people send messages; they’re combined, shuffled like a deck of cards, and sent out at once. But there’s a problem: you might not have enough messages for the mixing to provide enough anonymity. How many messages do you need to make a message anonymous? That’s an open question in the original mixnet designs like Mixmaster. Also, you don’t want to wait too long for new messages to arrive. How do you make mixing quicker and more predictable?
At Nym, we designed a mixnet based on Ania Piotrowska’s PhD thesis on continuous-time mixnets, described in the Loopix paper. This means I can send messages at any time. They will get mixed by a group of servers called “mix nodes”, and rather than waiting for a batch of messages to come in, these nodes delay each message independently by a random amount of time drawn from an exponential distribution. We can estimate when a message should be delivered on average, and if it’s not delivered, we resend it. Anonymity is maximized in Loopix as there are no “batches” of messages, and a given message may never be sent out of a mix node, in which case you just resend after a long time. But you can never predict or know exactly which messages are coming out of any given server. If not enough messages are being sent, you can send fake messages, which are indistinguishable from real messages and so can be thought of as a form of noise.
The most intractable problem facing mixnets is not technical, but social: who’s going to run these servers, and why? Cypherpunks ran them on a volunteer basis, the same as Tor. But if you want a scalable anonymous communication system, you need hundreds of mix nodes. They have to be run by different people, because if a single organization ran all the mix nodes, it could possibly monitor all your messages en route and so defeat the purpose of the mixnet – or at least mount better attacks on the metadata, even if the encryption held and only the intended user could decrypt the message.
To solve this social problem, we decided to involve a blockchain of all things. The blockchain maintains the topology – the map of all the mix nodes. If a mix node drops a message or doesn’t deliver a message, it gets punished. If a node delivers messages correctly, that mix node gets rewarded in NYM tokens, so that the NYM token is actually a form of reputation. That reputation is fungible for money if you exchange $NYM, so that people with good reputations can buy better servers to deliver more messages with a higher quality of service. People who aren’t good at delivering messages, or are possibly hostile, will eventually get kicked out of the system, as the system will not reward them in $NYM and will eventually push their servers out of the topology. Because the system is fully decentralized – anyone can join using the blockchain – and nodes are rewarded based on performance, this makes our system, I would say, the most advanced anonymous communication system currently existing. There are other interesting research papers, but none of them have been built, and none, I think, are as decentralized or resilient as Nym.
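As a toy illustration of performance-based rewards: the actual Nym reward scheme is considerably more involved (stake, delegation, epochs, and so on), and the function name, threshold, and pool size below are invented for the example.

```python
def split_rewards(delivery_ratios, reward_pool=1000.0, min_ratio=0.5):
    """Toy epoch reward split: nodes measured below a minimum delivery
    ratio earn nothing (and would eventually drop out of the topology);
    the remaining nodes share the pool pro rata by measured performance."""
    eligible = {n: r for n, r in delivery_ratios.items() if r >= min_ratio}
    total = sum(eligible.values())
    return {
        n: (reward_pool * r / total if n in eligible else 0.0)
        for n, r in delivery_ratios.items()
    }
```

The design intuition is simply that reputation (token earnings) must track measured delivery, so that unreliable or hostile operators starve out of the topology without anyone having to blacklist them by fiat.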
DIFF: Could you contrast it to something like Mullvad VPN?
HH: To contrast Nym with Mullvad or any centralized VPN: a VPN just runs one or multiple servers, all controlled by the same person. You’re not really anonymous to the VPN provider. The person that runs your VPN knows what websites you’re going to because your packets are going through their computers, including your IP address, the destination IP address, and DNS requests. That’s quite dangerous. You could fake being like Nym by adding a few extra servers pretending to be run by a separate organization, but as long as something is centralized in a social sense, you have to put all your trust in the person or persons running the VPN. I personally think the founder of Mullvad is a trustworthy guy, but many other VPNs are not trustworthy.
For example, it’s well known that ExpressVPN is controlled by Kape Technologies, which started as an Israeli malware firm. Even Mullvad – could Mullvad really withstand severe pressure? I had a good friend who ran a VPN server aimed at activists – I’m not going to name which one because I don’t want to get him in trouble – but he told me: “We give services to human rights activists. But push comes to shove, I have kids. If someone comes by and threatens to take my kids away, I’m going to give them whatever they want.” I don’t personally reproach VPN operators that would give in and install a backdoor or bug if their lives were in danger. This means the solution is not to find more virtuous people to run these servers. The solution is to make it impossible for a single person, or even an organization of many people, to give away all your data.
That’s the main goal of decentralization. Bitcoin and other technologies apply this principle to finance. We apply it to anonymity at internet scale. Also, VPNs do not mix your data. Even if the VPN operator is completely honest, someone can watch all the messages going into a VPN server and all the messages coming out of it and correlate that data to easily recover your metadata and so de-anonymize you. This kind of attack would also apply to Tor if the enemy was watching multiple Tor entry and exit nodes. So a decentralized mixnet is the way to go, as it mixes up packets to prevent these kinds of correlation attacks, although even mixnets are not perfect.
There are a lot of subtle technical tricks we deploy at Nym to make communication anonymous. For example, we use the Sphinx packet format, invented by our former colleague and current Nym advisor George Danezis. This packet format makes it so that even if you look at the bytes of a packet at each hop in the network before mixing, the packet bytes are rerandomized, which means that the ones and zeros coming into the server are completely different than the ones and zeros coming out of the server. That prevents many different kinds of attacks, such as where the enemy flips a bit on a packet to track it throughout the network.
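To see why the bytes differ at each hop, here is a toy onion-layering sketch. This is emphatically not the actual Sphinx construction – Sphinx uses per-hop Diffie-Hellman key derivation, a compact header, and integrity tags – just an illustration of layered encryption using an iterated-hash keystream, with all names invented for the example:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # toy keystream from iterated hashing (illustration only, not Sphinx)
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def onion_encrypt(payload: bytes, hop_keys) -> bytes:
    # apply one encryption layer per hop, innermost layer first,
    # so each hop can peel exactly one layer on the way through
    for key in reversed(hop_keys):
        payload = xor_bytes(payload, keystream(key, len(payload)))
    return payload

def peel_layer(packet: bytes, key: bytes) -> bytes:
    # XOR with the same keystream removes that hop's layer,
    # leaving a packet whose bytes look completely different
    return xor_bytes(packet, keystream(key, len(packet)))
```

The packet’s on-the-wire bytes are different before and after every hop, so an observer cannot match a packet entering a node with the same packet leaving it by comparing content alone.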
Ultimately, this is why our network is so powerful: a combination of decentralization to reduce reliance on centralized authorities that could subvert the network, mixing to prevent attacks on the metadata (timing, volume, who’s talking to whom), and incentives that require tokenized blockchain technology. While there have been many scams in the token space, the NYM token is a reputation token that’s fungible for money. This concept for mixnets existed long before the invention of Bitcoin – you can see it in the early days of Ross Anderson’s Eternity Service or the work of Roger Dingledine and Nick Mathewson on the Free Haven Project. We’re finally bringing these concepts to fruition. The reason the Tor network doesn’t use incentives is because when they built it, they didn’t know how, and therefore they had to be dependent on government grants. Even if we started as a research project, we at Nym don’t want to become dependent on government grants. We want to be completely autonomous, and therefore minting our own token that can be exchanged for money – which I do think is a very anarchist act in some sense – is what we decided to do to solve that issue.
DIFF: We live in the age of surveillance capitalism (Zuboff), the attention merchants (Wu), and the transparency society (Byung-Chul Han). Your work at Nym tackles the foundational architecture of the internet – packet routing, metadata protection, and network-level anonymity. But surveillance today extends far beyond the web. Companies like Palantir, Clearview AI, Oracle, Axon, and Microsoft are building infrastructure for total identification: not just facial recognition, but whole-body recognition, gait analysis, browser fingerprinting, even thermal signature tracking. Drones can identify you by how you walk or radiate heat. In a world where surveillance is physically ubiquitous and technologically inescapable, how does Nym’s philosophy — and your technical work — respond to a reality where anonymity isn’t just about hiding data, but hiding your very presence?
HH: I think it’s important to realize that this state of affairs is very dangerous and runs much deeper than people expect. If our minds and lives are fundamentally extended into the internet – if we can’t imagine surviving without it – then the internet is core to our very being. My co-founder, Alexis Roussel, argues that just as we have the right to physical and mental integrity over our bodies, we should have the right to digital integrity over our digital selves – the right to be free from surveillance and to achieve freedom and autonomy in the digital sphere.
There are issues I have with how people characterize this era – Zuboff characterizes it as a mutation within capitalism. It’s true that surveillance is a major motor behind contemporary capitalism. But it’s not a mutation; it’s the destiny of the system. I had the good fortune of studying some Marx under Fredric Jameson and Michael Hardt in my younger years. What we face now is a crisis of a declining rate of profit in the face of a generalized crisis of innovation. The internet has made huge leaps and bounds, but the rest of the economy has been frozen until the advent of AI. Rather than create new kinds of markets or productive forces, historically the internet essentially cannibalized pre-internet forms of capitalism – particularly the attention economy. So Google destroyed newspapers, Facebook destroyed other social spaces. We’ve had this continual cannibalization and reabsorption of pre-internet forms in a new guise. By itself, this isn’t fundamentally innovative.
Due to this lack of innovation and productivity gains in sectors outside the Internet – building houses, manufacturing shoes, so on and so forth, where everything has remained effectively the same for the last fifty years – there’s a generalised crisis within capitalism as globalized logistics created a “race to the bottom” in terms of the production of commodities. As a result, there is very little way to make a profit in a traditional capitalist sense across most sectors. Consequently, you have huge amounts of increased profiteering against the backdrop of secular stagnation. Increased competition over the same global audience has led to massive data collection. Interestingly enough, this data has not been easily monetizable, at least up until now. What makes AI interesting is that it has finally found a way to take all that surveillance-based data and monetize it into general-purpose products.
I don’t think surveillance is a mutation or limit within capitalism. I think it’s a continuation of capitalism in a period of crisis and monopoly. Therefore, we shouldn’t think of this state of affairs as something that government regulation can turn back. We need to produce tools such as code that fundamentally challenges this system and results in actual innovation. By innovation, I simply mean more powerful philosophies, more prosperity, a good life, and – most importantly – more freedom for ordinary people. We’re not going to get that through reformist petitions to any government. We need revolution in the form of code in order to ultimately change the direction of technology and our society, as our society is increasingly technical.
This is a revolutionary approach, but in a different sense than prior revolutions. We shouldn’t be begging governments for rights in the digital era, because governments have become entirely corrupt and are impossible to reform. But neither should we expect to try to seize control of these governments. Instead, we should understand that these governments are going to go through periods of crisis and collapse themselves, and we should build tools so we can survive and even flourish within their collapse. We will gain our rights not via a transcendental right bestowed upon us by governments, but immanently, in the here and now, by creating code based on hard cryptography.
That’s a very different vision of the future from other critics. Folks like Timothy Wu are very nice people, but their analysis is liberal and reformist – they believe that some form of regulation or enlightened technocracy can change the system. Or the pure Heideggerian negativity from many philosophers of technology like Byung-Chul Han – while they may have a critique of the situation, they don’t actually know enough about technology to put forward a positive vision of how humans and technology can survive in the future.
We shouldn’t be technological determinists. Nym by itself cannot save the world. For technology is not just tools, technology is the world in which we live – the milieu we inhabit, as Stiegler would say – and it’s fundamentally social. Thus, technology is a terrain of struggle: which technologies to build, how to build them, who gets to build them, who controls them, how they redistribute or centralize power. As the political is intertwined with the technical, technologies are necessary for social change. You will not have successful social movements, much less democratic contestation, or any future worth living for whatsoever if technology simply allows everyone to be surveilled, controlled, captured, and killed.
We’re seeing this with drone warfare, which has reached new frenzied heights in Ukraine. I saw this earlier in Syria, where one of the inspirations for creating Nym was knowing our friends were getting captured and killed due to lack of VPNs and peer-to-peer messaging technology. The next generation we’re going to hand this world to needs tools to at least withdraw from the current system and create new forms of life, and, at most, possibly challenge the dominant reigning order – the corrupt, degenerate pedophilic ruling class that runs much of our world – whose impunity leads to the kind of grotesque predation we saw in the Epstein files. Not to be a conspiracy theorist, but it can be true that our ruling class tends towards domination and exploitation due to the structure of monopoly capitalism and the nation-state, and simultaneously true that this selfsame ruling class is also demented and perverse psychopaths. The next generation can challenge this order, but not if technologists waste time on get-rich-quick schemes, surveillance-based advertising, or work on AI companies that aren’t making anyone’s lives better in a meaningful sense.
Since the Snowden era, we’ve seen a fundamental transition: the powers of mass surveillance, previously centralized in the state apparatus (the NSA and its equivalents in Russia and China, etc.), have now been privatized. I’ve talked to former NSA officials like Bill Binney, and he said one of the main problems was that the NSA couldn’t pay people enough. Anyone intelligent would go work at Google, because they could make ten times the salary they could at the NSA. Also, the government isn’t very good at running infrastructure for storing and manipulating big data. Now a company like Palantir can pay its programmers tremendous amounts and build much faster and more efficient infrastructure for AI and data analysis than the NSA.
This privatized approach to mass surveillance is more effective than the traditional state-based secret services. So we’ve seen a conglomeration of various monopolies – Palantir, Clearview AI, Oracle, Axon, Amazon, Microsoft – brands that previously sold books or word processors but are now deeply engaged in government surveillance contracts. Each of these companies is taking over part of the AI logistical chain, even competing a bit, but in the end making surveillance incredibly more efficient. Nightmare scenarios of complete, totalizing surveillance and control are becoming more possible than ever before.
Technologies built on grand ideals, such as the Semantic Web’s vision to combine and spread human knowledge, are now being used in the form of graph databases for more effective repression of individuals. It’s interesting how even the term “ontology” has been appropriated by Palantir – they recently claimed their product Ontology made the term “ontology” itself popular, completely memory-holing the rich history behind ontology in philosophy. I have a paper on this called “Semantic Enclosures” about how the Semantic Web vision succeeded, but in the form of proprietary databases that use formal ontologies to make data analysis more efficient – not for spreading human knowledge, but for maximizing profit, surveillance, and behavior control.
This is dangerous. This trajectory of AI and big data is dangerous. AI started with primarily linguistic data – ChatGPT is very good with language due to language models. Now AI is moving to video and then to entire models of the world. We’re going to see mobile phones constantly scan the environment around you and deliver 3-D world models back to data centers, so that whoever controls those data centers knows exactly what you’re doing, who you’re with, and where you are at any given moment in real time. On some level, you need a refusal – turn these tools off – and create new tools that make you more effective without putting you under personal surveillance. I’m quite supportive of alternatives such as local, decentralized, privacy-enhanced AI. Still, given the speed at which surveillance is being constructed, we will have to look at solutions for living in a more and more surveilled world. Even as a small example, I’m excited by the new fashion line from M.I.A. with clothing that prevents camera tracking and surveillance.
We shouldn’t give up. These new technologies are always an arms race – just like in evolution where different organisms are in constant cooperation and competition. If something becomes too powerful, it becomes stagnant. Surveillance technologies are becoming too powerful, but they are also becoming more stagnant. While you can always gather more surveillance data, we have a huge unexplored realm of technology to stop that gathering in the form of privacy-enhancing technologies. Of course, they are more difficult to build; that’s why we need more people working on anonymizing technologies. They present the only realistic way out of this society of control we’ve entered and could enable much more innovation and freedom than our current society allows. And it’s not just American technologists – for example, in China, companies like Qihoo 360 provide surveillance technology. Surveillance is becoming a universal problem, and it will soon spread over the most distant ends of the earth. Still, a universal resistance is possible, for it’s in the best interest of humanity as a whole to create a world of freedom.

DIFF: With Nym addressing underlying issues with Tor, do you see renewed interest in privacy-preserving overlay networks like Tor, I2P, Freenet? In your “Anti-Palantir Manifesto”, you talk about building the code. Do we need to plug back into those projects?
HH: One thing to remember is that this is fundamentally an arms race, and the places where your enemy can attack you are varied. Solutions like Nym or even Tor only defend data in transit. You also need to defend data at rest. Most of us are using operating systems built by the enemy – Microsoft or Apple – with very little support for truly secure operating systems that aren’t created and operated by surveillance-based adversaries, outside of projects like GrapheneOS. Securing data at rest is incredibly important – that’s the next place we at Nym need to focus our energies, looking at ways to prevent zero-day spyware, particularly new versions of the Pegasus family of software.
Nym only protects the internet level right now, and it assumes the hardware and the operating system are secure. When you turn on a phone, you have a proprietary SIM card communicating with cell phone networks. There are numerous attacks on cell phone networks regardless of whether you’re using a VPN or an anonymous overlay like Nym to defend your internet traffic: stingrays (IMSI catchers), silent text message attacks (SS7 attacks), and encryption downgrades are very much possible. While they won’t de-anonymize your NymVPN connection, they can be used to intercept your cellular connection. Some software to fix this exists, but it’s not easy to find and not usable by most humans.
Then you have the hardware chain – your encryption and data privacy can be undermined by proprietary hardware that could have backdoors in it. Long ago, Snowden revealed that the NSA put a backdoor in Juniper routers. Hilariously, the U.S. government recently banned Chinese routers because it believes the Chinese government is putting backdoors in Chinese routers sold in the USA. The U.S. government also recently disclosed the Salt Typhoon and Volt Typhoon intrusions, revealing that the Chinese government had penetrated much of the telco infrastructure of the United States and so could intercept any normal unencrypted text message. There’s huge exposure to attacks through your cell phone, operating system, and the rest of your hardware. We can use free and open source software to rebuild these stacks and fix them, but it’s almost unimaginable work, and we won’t get more people working on it unless there’s increased demand. That demand should come from people who purchase privacy-enhancing technologies like NymVPN or donate to free ones like Signal or Tor.
Regarding the older technologies from the first peer-to-peer wave at the end of the 1990s – Tor, Freenet, and so on – I’m not sure how useful all of them will be in the future. Some like Tor certainly will remain useful, and there’s probably a place for each for some use-case. Nym is interesting because Nym explicitly has as its threat model a global passive adversary – someone watching the entire network, gathering every possible data point on your communication. That’s very difficult to defend against completely, but we at Nym have made it more difficult.
Technologies with weaker threat models should probably not be used as newer technology matures and is battle-hardened. If you have a choice between a technology that defends you against someone who can watch the entire internet versus one that defends you only against your local ISP or a relatively weak government, I’d be using the technology that can defend against a government like the United States or Palantir that can watch all my activities. That being said, I’m American, so I’m concerned about the American government and Palantir. Someone in China or Russia might be satisfied with a less powerful threat model. Rather than divide these people, it’s better to have everyone work together using the best possible models and software. Mixnets are a good step in the right direction – and probably not the end. Other technology will be developed over time that could be even more powerful than mixnets.
At Nym, the goal is always to support the most powerful anonymizing technologies usable by ordinary humans, and where possible support the whole stack: open hardware, open telephony, free software. In the future, this could be combined with obscuring and adding noise to other kinds of data streams – glasses, microphones, etc. Mixnets in theory can be used in any technology. There’s a fundamental theorem in AI called Cybenko’s Universal Approximation Theorem: for any given signal – the underlying function – with any amount of noise, we can produce a neural network which can distinguish that signal from the noise.
It’s important to note that this theorem is an existence proof – the theorem doesn’t tell you how to do it, but that it can be done. I’d like to invert that theorem: for any given data I’m trying to hide, there should be some amount of noise that we can add to a system which makes it impossible for a neural network of arbitrary power to de-anonymize that data. Anonymization technology is fundamentally about being hidden within a crowd, not being distinguishable. Adding noise is one way to do that – it’s what Nym uses. There are ways to add noise across a wide variety of technologies. Even things that don’t appear to be adding noise or entropy to a data stream actually are – for example, mixing packets, even without adding fake packets, adds noise to the time signatures of when those packets are sent.
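The point about mixing adding timing noise can be made concrete. The following Python sketch (an illustrative toy, not Nym’s actual Loopix-based design) shows a threshold mix: packets are buffered until a batch fills, then flushed in a random order, so an observer who sees arrival order can no longer infer send order:

```python
import random

def threshold_mix(packets, threshold, rng):
    """Toy threshold mix: buffer packets until `threshold` arrive,
    then flush them in a random order. Output order no longer
    reveals input (send) order, adding entropy to the timing."""
    out, buffer = [], []
    for p in packets:
        buffer.append(p)
        if len(buffer) == threshold:
            rng.shuffle(buffer)
            out.extend(buffer)
            buffer = []
    rng.shuffle(buffer)  # flush any remaining partial batch
    out.extend(buffer)
    return out

rng = random.Random(42)
sent = list(range(12))  # packets in send order 0..11
received = threshold_mix(sent, threshold=4, rng=rng)
# Same packets are delivered, but within each batch every ordering
# is equally likely from the observer's point of view.
assert sorted(received) == sent
```

That uncertainty over orderings within a batch is exactly the entropy the mix adds, even before any fake packets are introduced.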
I’m very disappointed that so many people work in AI and so many philosophers and cultural critics talk about AI, but very few talk about privacy and anonymity. These technologies are the natural way we’re going to have to defend ourselves against the increasing power of centralized, state-driven AI.
DIFF: You have emphasized how noise and entropy underpin the architecture of Nym. I’ve been experimenting with the EFF’s browser fingerprinting test, and it got me thinking: maximizing entropy seems good for privacy in theory, but if your browser produces too many unique bits, you actually become more identifiable. So is there a Goldilocks principle for entropy in anti-fingerprinting – not so little that you blend in with a trivial configuration, but not so much that you stick out as unique?
HH: It depends on the adversary. The Great Firewall of China, if it sees high-entropy bytes coming through that appear completely random, will automatically block them, because it assumes someone is trying to sneak through the firewall with such suspicious traffic. Avoiding censors is not a simple game of maximizing entropy. You can’t just add as many fake bits as your channel can support. You want to add just enough entropy so that you’re actually indistinguishable from other people. For example, to break through the Great Firewall of China, you can make a VPN connection look like normal web browsing – which is what NymVPN does by disguising a Nym connection as an HTTP/3 connection, so it looks just like a normal website browsing session. In a recent study with researchers at EPFL, we found that to prevent someone from using machine learning to de-anonymize who is visiting websites and who is using the Nym mixnet, it was quite effective to make the traffic bursty – because when people use the internet normally, their traffic tends to be naturally bursty. The more bursty our traffic looks at Nym, the more it blends in with other people’s traffic and so does not stand out to censors.
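The role of burstiness can be illustrated with a toy traffic model (purely illustrative; the EPFL study, with real traffic and real classifiers, is the authoritative treatment). Constant-rate cover traffic has zero variation in its inter-packet gaps and stands out statistically, while a simple on/off burst schedule shows the high variation typical of real browsing:

```python
import random
import statistics

def constant_gaps(n, gap):
    """Constant-rate schedule: identical inter-packet gaps."""
    return [gap] * n

def bursty_gaps(n, rng, short=0.01, long=2.0, p_pause=0.2):
    """Toy on/off schedule: mostly short gaps inside a burst,
    with occasional long pauses between bursts, as in browsing."""
    return [long if rng.random() < p_pause else short for _ in range(n)]

def coeff_of_variation(gaps):
    """Std. deviation relative to the mean: a crude burstiness score."""
    return statistics.pstdev(gaps) / statistics.mean(gaps)

rng = random.Random(7)
cv_constant = coeff_of_variation(constant_gaps(1000, 0.1))
cv_bursty = coeff_of_variation(bursty_gaps(1000, rng))
# Constant-rate traffic has zero variation and stands out as cover
# traffic; the bursty schedule varies widely, like real browsing.
assert cv_constant == 0.0
assert cv_bursty > 1.0
```

The coefficient of variation here is just one crude stand-in for the statistical features a machine-learning classifier would actually exploit.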
You can never think of entropy or anonymity as an individual affair. Anonymity must be a collective affair. Anonymity is always about blending your behavior with other people’s behavior – not copying them per se, but blending in just enough that an outside observer cannot distinguish you from other people. That’s why anonymity is a hard problem. It’s not purely mathematical – a purely mathematical approach would do simple-minded things that are easily detectable, such as maximizing the individual’s entropy over some communication in an absolute sense, as mentioned before. A better approach is a group-based one, wherein anonymity conceals one within a group, and the entropy is relative to that group given the prior knowledge of the adversary. Anonymity is a problem of engineering but also a sociotechnical problem, because these groups are engaged in labor and other social activities. Anonymizing technologies have to enable those activities while using entropy-based techniques to make them ultimately more indistinguishable in an adversarial environment.
DIFF: How do zk-nym credentials enable anonymous authentication for digital services like NymVPN or e-cash, ensuring that users can prove access rights without revealing their identity or linking activity to personal data? Furthermore, how is the broader adoption of zero-knowledge proofs (ZKPs) transforming privacy-preserving architectures across different domains?
HH: Nym has a lot of complex technologies inside it. We use the mixnet and can measure it using entropy – our chief scientist Claudia Diaz invented that technique in 2002. We’ve built simulators, so some choices that appear arbitrary, like why we have three mix nodes rather than two, are made because we’ve empirically shown via simulation that this produces maximum entropy and thus maximum anonymity for a given user.
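That entropy measure can be sketched in a few lines (a textbook illustration of the Diaz-style metric, not Nym’s simulator code): the adversary assigns each potential sender a probability, anonymity is the Shannon entropy of that distribution, and normalizing by the maximum possible entropy gives a "degree of anonymity" between 0 and 1:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of the adversary's distribution over
    possible senders of a message; higher means more anonymity."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def degree_of_anonymity(probs):
    """Entropy normalized by the maximum log2(N) for N potential
    senders: 1.0 means the adversary has learned nothing."""
    return shannon_entropy(probs) / math.log2(len(probs))

uniform = [0.25] * 4            # adversary learns nothing: 2 bits
skewed = [0.7, 0.1, 0.1, 0.1]   # adversary strongly suspects user 0

assert shannon_entropy(uniform) == 2.0
assert degree_of_anonymity(uniform) == 1.0
assert degree_of_anonymity(skewed) < degree_of_anonymity(uniform)
```

Simulation then amounts to estimating these distributions empirically for candidate designs, such as two versus three mix layers.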
Regarding token incentives: one problem with incentive schemes is that they centralize power. Bitcoin’s proof-of-work and other proof-of-stake techniques essentially make the rich get richer. Nym’s token economics are built to maintain a minimum level of decentralization. Once you gain a certain amount of reputation, you can’t gain any more, and that reputation needs to accrue to new actors who have yet to reach their maximum capacity. We use techniques from game theory, and we have various proofs and papers on reward sharing for mixnets, which have been published [e.g., “Reward Sharing for Mixnets” by Claudia Diaz, Harry Halpin, and Aggelos Kiayias], with a new version coming soon.
To be clear: when you buy a Nym subscription, you pay for it, that payment is converted into NYM tokens, those NYM tokens are sent to a smart contract, and that smart contract rewards the nodes that handle the traffic at the end of an epoch. There’s a limit to the rewards an individual node can get, because we want to encourage decentralization, in particular more nodes in different geographical locations. It doesn’t make sense for all servers to be in the United States – that’s a jurisdictional risk – or run by a small group of entities. Nodes should be spread across different countries and run by a diversity of actors. As shown in our “Reward Sharing for Mixnets” paper, we use game-theoretic math to reach a harmonized economic equilibrium distributing the nodes. The more people use the network, the more money in the system, the more rewards – which should increase the number and performance of servers in the system.
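The decentralizing effect of capping rewards can be sketched as follows (an illustrative toy, not the actual formula from the paper; all names and numbers here are hypothetical): once a node’s stake passes a saturation point, additional stake earns nothing, so it becomes rational to delegate to newer, unsaturated nodes:

```python
def node_reward(stake, saturation, epoch_budget, total_effective):
    """Toy saturation rule (illustrative, not Nym's actual formula):
    a node's effective stake is capped at `saturation`, so rewards
    stop growing past that point and new nodes stay attractive."""
    effective = min(stake, saturation)
    return epoch_budget * effective / total_effective

# Three nodes, saturation at 100: the whale's extra stake earns nothing.
stakes = [50, 100, 300]
effective_total = sum(min(s, 100) for s in stakes)  # 50 + 100 + 100 = 250
rewards = [node_reward(s, 100, 1000.0, effective_total) for s in stakes]
assert rewards == [200.0, 400.0, 400.0]
```

The 300-stake node earns exactly as much as the 100-stake node, so the marginal return on over-concentrated stake is zero and capital flows to unsaturated operators.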
One of the harder problems any VPN – or really any payment-based system – faces is how people can pay anonymously. We use another technique from David Chaum called “anonymous credentials,” but our version is decentralized, unlike his original idea. When you pay for Nym (with credit card, Monero, shielded Zcash, Bitcoin – we don’t care), we convert that to NYM tokens, which are used to reward nodes and keep the entire system economically sustainable and solvent, as outlined before. Yet how do we keep the payment itself anonymous? The fact that you paid is returned to you in the form of a very simple anonymous credential, which we call a “zk-nym” – effectively a simple zero-knowledge proof of payment. This approach is similar to e-cash as invented by David Chaum.
The zk-nym is essentially a receipt that you paid, and it’s re-randomizable – in cryptographic terms, the anonymous credential can be blinded. You get a receipt back, and that receipt has a certain number of zeros and ones in a particular order. You can take that receipt and do a simple operation on it, currently just an exponentiation, and all the zeros and ones in that receipt change. So it’s not linkable to your previous receipt, but the receipt is still valid and can be verified by any node in the Nym network. You can then present that receipt to a server to enter the Nym network. The server knows that you paid, but doesn’t know who you are. Even an adversary that’s watching the original payment to the smart contract and watching you get the receipt, and then watching that receipt get spent for access at a server cannot distinguish that it is the same receipt, because that receipt is blinded (rerandomized) in between every usage of the Nym network. This means the payment does not reveal your identity and, importantly, the payment cannot be linked to your use of NymVPN or the mixnet.
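The rerandomization idea can be shown with a toy discrete-log example (illustrative only – the real zk-nym is a full decentralized anonymous-credential scheme, and the modulus, base, and functions here are hypothetical): treat the receipt as a pair (a, a^x) under the issuer’s secret x; raising both halves to a fresh random exponent changes every bit on the wire while preserving the relation a node can verify:

```python
import secrets

# Toy rerandomizable "receipt" (illustrative only; not Nym's scheme).
P = 2**255 - 19   # a large prime modulus
G = 5             # a fixed base element

def issue(x):
    """Issuer with secret key x signs a random element a as (a, a^x)."""
    a = pow(G, secrets.randbelow(P - 1) + 1, P)
    return (a, pow(a, x, P))

def rerandomize(receipt):
    """Raise both halves to a fresh random exponent t: the bits change
    completely, but the relation b = a^x is preserved."""
    a, b = receipt
    t = secrets.randbelow(P - 1) + 1
    return (pow(a, t, P), pow(b, t, P))

def verify(receipt, x):
    """A node holding x checks validity without learning who the
    receipt was issued to: rerandomized receipts are unlinkable."""
    a, b = receipt
    return b == pow(a, x, P)

x = secrets.randbelow(P - 1) + 1  # issuer's secret key
r1 = issue(x)
r2 = rerandomize(r1)
assert r1 != r2       # looks completely different on the wire
assert verify(r2, x)  # yet still verifies as a valid payment receipt
```

An observer comparing r1 and r2 sees two unrelated-looking pairs, which is the unlinkability property described above; the real scheme additionally prevents double-spending and hides the issuer’s signature structure.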
This is incredibly important because the main problem with VPNs – even Mullvad – is that an adversary can come in and say: “We believe there was suspicious activity on the network. If we discover the user through your VPN, we can figure out how they paid – credit card details, identity, location – and then bust down their doors and put them in jail.” We avoid that because we cannot link your payment to your usage of the network. You pay, and that’s separated from your usage. I think we’re the only project in the world right now that does this. People have wanted this since the early days of anonymous networking – the Freedom Network, a pre-Tor commercial mixnet, imagined this kind of anonymous credential payment, but they never implemented it at scale. At Nym, we’re the first to pull it off.
Nym is built so that eventually we can link arbitrary zero-knowledge proofs to our anonymous credentials. We don’t do that right now – we only use zk-nyms as proof of payment. Yet zero-knowledge proofs are very powerful, and we can imagine that they will be used for all sorts of things in the future. That being said, I am rather worried that people will pretend something is zero-knowledge when it’s not actually private, or use zero-knowledge proofs to dress up otherwise reprehensible technology like Worldcoin (the orb that collects your biometrics for “proof of humanity”), normalizing these kinds of surveillance technologies. I am firmly against zero-knowledge whitewashing. But at the same time, zero-knowledge technologies are powerful, and we should have the right to transact privately, speak privately, and form private smart contracts. This is being pioneered by Zcash for cryptocurrency transactions and by new smart contract languages such as Midnight’s. Over time, more and more of this technology could be embedded in everyday life, giving us all the benefits of modern technology while making surveillance difficult, if not impossible.
DIFF: Another project you are realizing is Outfox, which is described as “a simplified variant of Sphinx tailored for mixnets with fixed-length routes and designed for post-quantum security.” Can you demystify the nature of post-quantum attacks and how they threaten the public-key cryptography (RSA, ECC, and Diffie-Hellman) that virtually all of our modern systems rely upon?
HH: Cryptography was built on the assumption that it would hold even if the adversary has an incredibly powerful computer and can run it for thousands of years, perhaps even until the heat death of the universe. That’s why cryptography is powerful – it should work no matter how powerful your adversary is. Continuous-time mixnets are harder because they give only statistical guarantees for metadata protection, not cryptographic ones – but we’re working on making those better.
What cryptographers were not expecting was real-world quantum computing. Quantum computers, by using Shor’s algorithm and Grover’s algorithm, can indeed break normal cryptographic algorithms – RSA and elliptic curve cryptography with modern key sizes. Luckily, cryptographers – including those we work with at Nym like Karthik Bhargavan and Daniel J. Bernstein – have created new forms of post-quantum cryptography, which are very powerful. We’re now in the process of putting that technology into our network. We do not believe that these post-quantum cryptographic algorithms are easily cracked by governments.
To take one example, we’re interested in using McEliece code-based cryptography – a very old form of cryptography, older than elliptic curve cryptography. McEliece cryptography has some advantages and some disadvantages, such as a large key size, but we know that the NSA tried to keep McEliece out of the public domain for a very long time. We believe some of these new algorithms do work; in particular, right now we are deploying lattice-based cryptography such as ML-KEM. The problem is that these algorithms are less battle-tested than older ones. So we’re building a protocol, which we call the Lewes Protocol, where we use two different kinds of cryptography in combination – if one breaks, the other will still hold. We’re inserting this at every level of the system.
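The hybrid idea can be sketched generically (an illustrative combiner, not the actual Lewes Protocol; the function names and labels are hypothetical): derive the session key by hashing a classical shared secret and a post-quantum shared secret together, so the derived key remains secret as long as either underlying scheme is unbroken:

```python
import hashlib
import hmac
import secrets

def combine_shared_secrets(ss_classical, ss_postquantum, context):
    """Generic hybrid key combiner (sketch, not the Lewes Protocol):
    hash both shared secrets together, HKDF-style, so the derived
    key stays secret as long as EITHER scheme remains unbroken."""
    ikm = ss_classical + ss_postquantum
    prk = hmac.new(b"hybrid-kdf-salt", ikm, hashlib.sha256).digest()
    return hmac.new(prk, context, hashlib.sha256).digest()

# Stand-ins for the outputs of, say, an X25519 exchange and an
# ML-KEM encapsulation (random bytes here, for illustration only):
ss_x25519 = secrets.token_bytes(32)
ss_mlkem = secrets.token_bytes(32)
key = combine_shared_secrets(ss_x25519, ss_mlkem, b"session-1")

# Both parties holding the same two secrets derive the same key;
# an attacker must break BOTH exchanges to recover it.
assert key == combine_shared_secrets(ss_x25519, ss_mlkem, b"session-1")
```

This is why a "harvest now, decrypt later" adversary gains nothing from a future quantum computer against such a design: breaking the classical half alone still leaves the derived key unknown.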
First, at the level of the initial connection and the VPN connection (network layer encryption), which is the Lewes Protocol. Then we’re also taking the Sphinx packet format and using post-quantum encryption inside it, creating a new packet format called Outfox, which gives post-quantum security. We have just launched the Lewes protocol, and we’ll be launching Outfox by late this year. Thus, by the end of the year, we do expect Nym to be fully post-quantum. If you’re not using post-quantum cryptography, an adversary can capture your data now and do a “harvest now, decrypt later” attack. This means if they can’t decrypt it now, they can decrypt it later when a government – most likely the U.S. – gets a sufficiently powerful quantum computer. Of course, Nym wants to resist these attacks. Tor doesn’t use post-quantum cryptography, but Signal recently upgraded their software to use post-quantum security.
DIFF: Looking at Nym’s roadmap, are there any other projects or technologies on the horizon you’re working on or incubating?
HH: Nym is a never-ending project, in the same way human freedom is a never-ending project. We’re looking to get better at censorship resistance. Entire countries now turn off their internet, so we’re going to enable more and more censorship-resistant techniques over time. That’s going to be a major focus for the end of the year. Another way you defeat censorship is by having more servers. So we want to make it easier for people not just to use NymVPN, but to contribute their bandwidth back to the network and earn rewards themselves. My hope is that this should be launched by the end of the year. So every Nym user can also be a Nym server and gain NYM rewards while, more importantly, helping other people access the internet in a free manner without censorship. These are large, important changes that will make our network usable by more and more people throughout the world. The more people that use the network, the more anonymous everyone is. Even if what you’re doing over Nym isn’t particularly in need of anonymity, just by using it, you’re hiding those that may need the ability to be anonymous the most.
The most significant technical challenge is that we don’t have enough people working on Nym. If we had twice as many people, we could get more done – though adding more programmers can sometimes slow things down, as my undergraduate teacher Frederick Brooks noted in “The Mythical Man-Month”. We need more people working on Nym and, importantly, more people outside Nym working on all the other issues I mentioned: how to disguise your gait from a surveillance camera, how to have a mobile phone that is resistant to attacks from cell phone infrastructure and zero-days, how to use AI in a privacy-enhanced, local, and decentralized way without handing all your data to Anthropic, OpenAI or OpenClaw. We need more people in general, not just coders, working on anonymizing technologies, in areas such as usability, design, and even advertising. Our collective goal is to build something so simple that when people turn on their device – which they have with them day and night – they preserve their fundamental rights and freedoms from the moment it turns on until the day they die. We must pass that legacy of freedom on to future generations.
Ultimately, there will probably be a conflict between the forces of rising authoritarianism and the rest of humanity, likely triggered by climate change. Insofar as there will be scarce resources, this will lead to increased wars, increased migration, and a decreasing standard of living for most of humanity. The real danger is that the ruling elites decide that the only way for humanity to survive is to exterminate large sections of the population. We’re already seeing this in Kurdistan and Gaza – the return of the specter of genocide. The new waves of extermination are not just on the ethnic level, but on the level of individual humans, where someone may be eliminated not because they belong to a certain ethnicity, but just because they’re not working hard enough or they’re expressing dissident thoughts. It’s only a matter of time before new forms of control and extermination are introduced.
We have to build these technologies before that time horizon closes, and then we have to use them to change our existing social order. Humanity has done this several times in the past – particularly the transition from feudalism to industrial capitalism. The new paradigms being formed using internet-based technologies will have to supersede this current dying system. If this does not happen quickly enough, we are faced with the old motto: revolution or death. I’m on the side of decentralization and revolution, as the alternative is centralization and authoritarianism.
Very few people have made the explicit choice to guide their technical work, aesthetic work, social labor – their life energy – in this direction. The more people that take a side and see what’s at stake, doing what’s in their best interest to make a revolution for the 21st century, the more of us will survive and prosper in the end.
REFERENCES
Assange, J. (2006). State and terrorist conspiracies. IQ.org. http://cryptome.org/0002/ja-conspiracies.pdf
Chaum, D. L. (1981). Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2), 84–88. https://doi.org/10.1145/358549.358563
Diaz, C., Halpin, H., & Kiayias, A. (2021). The Nym Network. Nym.com. https://nym.com/nym-whitepaper.pdf
Diaz, C., Halpin, H., & Kiayias, A. (2022). Reward sharing for mixnets. Cryptoeconomic Systems, 2(1). https://cryptoeconomicsystems.pubpub.org/pub/diaz-reward-sharing-mixnets/release/2?readingCollection=082fed82
GrapheneOS Project. (2019–present). GrapheneOS: A privacy and security focused mobile OS [Software]. https://grapheneos.org/
Halpin, H. (2012). The Philosophy of Anonymous: Ontological Politics without Identity. Radical Philosophy, 176. https://www.radicalphilosophyarchive.com/article/the-philosophy-of-anonymous/
Halpin, H. (2023). The hidden history of the Like button: From decentralized data to semantic enclosure. Social Media + Society, 9(3). https://doi.org/10.1177/20563051231195542
[“Semantic Enclosures” – the paper that analyses how open Semantic Web standards were enclosed by corporate platforms]
Halpin, H. (2026). The Anti‑Palantir Manifesto. Nym Technologies Blog. https://nym.com/blog/anti-palantir-manifesto
[“Anti‑Palantir Manifesto” – Harry Halpin’s direct response to Alex Karp]
Han, B.-C. (2015). The transparency society (E. Butler, Trans.). Stanford University Press. (Original work published 2012).
Piotrowska, A., Hayes, J., Elahi, T., Meiser, S., & Danezis, G. (2017). The Loopix anonymity system. In 26th USENIX Security Symposium (pp. 1199–1216). https://dl.acm.org/doi/10.5555/3241189.3241283
Rial, A., Piotrowska, A., & Halpin, H. (2025). Outfox: a Postquantum Packet Format for Layered Mixnets. In Proceedings of the 24th Workshop on Privacy in the Electronic Society (pp. 42–54). https://arxiv.org/abs/2412.19937
Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads. Knopf.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
