Hive Mind brings you weekly insights into DAOs, culture, crypto, and more. Our latest Substack release compiles the week's Hive Mind essays and some of the more interesting links shared around the space. This week’s roundup features Everything Connects, The Future of Privacy in Crypto, and Artificial World Model.
Interesting Links
Music is decentralizing.
Copyright infringement hits AI audio.
More acquisition announcements.
Coinbase on international remittances.
NFT Flappy Bird game built with Claude.
The a16z tentacles are growing.
Chevron case importance.
Did MSCHF get some inspiration from Crypto the Game?
USDC transfers on Base.
Pay on Warpcast.
Everything Connects
Novel standards and new combinations of tokens always seem to energize digital assets. ERC-721s achieved mainstream attention shortly after the standard's 2017 release and trickled into the NFT mania of 2021; ERC-1155s enabled a more semi-fungible experience and encouraged “casual” minting with open editions, and so on. Just a few months ago, we explored 404s as they revealed a blend of ERC-20 and ERC-721 features. As it becomes clear that both NFTs and memecoins are the in-game currencies of the internet, experimental token implementations will play a key role in informing the products that connect these two categories.
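To make the hybrid idea concrete, here is a toy Python model of the 404 mechanism: fungible balances where every whole unit owned is backed by an NFT, so fractional transfers mint and burn NFTs automatically. This is a sketch of the mechanism only, not the actual ERC-404 contracts, which are written in Solidity and handle far more edge cases.

```python
from collections import defaultdict
from itertools import count

# Toy 404-style hybrid: a fungible (ERC-20-like) balance backed by
# NFTs (ERC-721-like), one NFT per whole unit held.
balances = defaultdict(float)   # fungible side
nfts = defaultdict(list)        # non-fungible side: token ids per owner
next_id = count(1)

def sync_nfts(owner):
    """Keep the owner's NFT count equal to their whole-unit balance."""
    target = int(balances[owner])
    while len(nfts[owner]) < target:
        nfts[owner].append(next(next_id))   # mint
    while len(nfts[owner]) > target:
        nfts[owner].pop()                   # burn

def transfer(sender, receiver, amount):
    assert balances[sender] >= amount, "insufficient balance"
    balances[sender] -= amount
    balances[receiver] += amount
    sync_nfts(sender)
    sync_nfts(receiver)

balances["alice"] = 2.0
sync_nfts("alice")                  # alice holds 2 units -> 2 NFTs
transfer("alice", "bob", 0.5)       # fractional send burns one of alice's NFTs
print(nfts["alice"], nfts["bob"])   # [1] [] -- bob's 0.5 mints nothing yet
```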
The memecoin “supercycle” has undeniably captured the zeitgeist of recent months. Platforms like pump.fun have simplified the token launch process, evidenced by staggering token creation numbers. We saw a general preference shift toward fungibility and streamlined trading processes as people played around with this socioeconomic primitive. In 2021 and 2022, the liquidity illusion of NFTs was a lesson in market dynamics; attention, not inherent value, drove much of their trade volume. Memecoins follow suit, with their volatile liquidity-to-market-cap ratios and high price impacts deterring some. Recent parallels drawn between pump.fun's effect on memecoins and Blur's impact on NFTs suggest a maturation of these markets. Hyper-financialization and fragmented liquidity always seem to signal an end to the current trend cycle.
Interestingly enough, we've also seen a general sentiment of “I'm tired of memecoins, when can we go back to NFTs”. It's been interesting to watch Sproto Gremlins, the NFT project tied to the memecoin $BITCOIN (HarryPotterObamaSonic10Inu), and its recent action in light of broader memecoin sentiment. On the other side, we also have the recent Milady $CULT presale. NFTs and memecoins are getting closer and closer without truly being tied together. As noted, the Pandora team's work with 404s hinted at a convergence, but hybrid tokens have yet to dominate the market. In recent news, Color Protocol shipped the BC-404 Token Standard, which builds upon ERC-404's foundation. It introduces a bonding curve that raises price as more tokens are minted, tying an NFT's worth to its scarcity and content. This layer could attract a broader audience to the 404 ecosystem, but more importantly it serves as another touchpoint for this intersection.
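A bonding curve of the kind BC-404 reportedly adds is easy to sketch: price is a deterministic function of current supply, so each successive mint costs more. The linear curve and parameters below are illustrative assumptions, not Color Protocol's actual formula.

```python
# Minimal linear bonding curve sketch (hypothetical parameters).
BASE_PRICE = 0.01   # price of the first token, in hypothetical units
SLOPE = 0.002       # price increase per token already in circulation

def mint_price(supply: int) -> float:
    """Spot price of the next token at the current supply."""
    return BASE_PRICE + SLOPE * supply

def mint_cost(supply: int, amount: int) -> float:
    """Total cost to mint `amount` tokens starting from `supply`."""
    return sum(mint_price(supply + i) for i in range(amount))

print(mint_price(0))      # 0.01 -- the first mint is cheap
print(mint_price(1000))   # 2.01 -- scarcity is priced in as supply grows
print(mint_cost(0, 10))   # cumulative cost of the first ten mints
```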
All that to say, token standards clearly change how we engage with digital assets, and as memecoins and NFTs reveal their limitations, it’ll be important to track whether the line between them disappears. What do other combinations look like?
The Future of Privacy in Crypto
The quest for enhanced privacy and security continues to drive some of the most technically intense innovation in crypto. It’s important to remember that crypto started as what was essentially a science fair project wedding cryptography to a financial transaction platform. The rise in popularity of technologies like FHE and ZK is a callback to the roots of blockchain.
As the L2 ecosystem on Ethereum reaches maturity and consumer applications become scalable, the focus of developers is now shifting toward security and privacy, which have always been costly luxuries for many. Two cryptographic technologies have emerged as frontrunners in this domain: Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZK). Both offer distinct advantages and cater to different use cases. Over the past few years, ZK has become the popular choice for investors and builders alike, with projects like Scroll, zkSync, and Starknet amassing billion-dollar valuations. This adoption of ZK is largely driven by Vitalik’s vision for ZK on Ethereum, but the question remains: could FHE be more significant and transformative than ZK technology in the long run?
Zero-Knowledge Proofs (ZK) have been instrumental in blockchain applications, particularly in ensuring privacy and scalability. ZK allows one party to prove that a statement is true without revealing the underlying information that makes it true. This method has been effectively utilized in rollups like Starknet and zkSync, enabling massive scaling capabilities without the challenge-period drawbacks that come with optimistic rollups, thus maintaining the integrity and security of onchain transactions. ZK technology shines in scenarios requiring proof of knowledge without data disclosure, making it ideal for applications such as identity verification and shielded transfers.
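To ground the idea, here is a toy Schnorr proof of knowledge in Python, made non-interactive with the Fiat-Shamir heuristic: the prover convinces anyone that they know the secret exponent x behind a public value y without ever revealing x. This is the sigma-protocol seed of ZK, not the SNARK/STARK machinery production rollups actually run, and the parameters are deliberately tiny and insecure.

```python
import hashlib
import secrets

# Toy Schnorr proof: prove knowledge of x with y = g^x (mod p) without
# revealing x. Fiat-Shamir hashes the commitment to derive the challenge.
# CAUTION: demo parameters only -- real systems use vetted groups.
p = 104729   # small prime modulus (insecure, illustration only)
g = 5        # public base

def challenge(commitment: int) -> int:
    data = f"{g}:{y}:{commitment}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(x: int):
    r = secrets.randbelow(p - 1)    # fresh random nonce
    commitment = pow(g, r, p)       # R = g^r
    c = challenge(commitment)       # c = H(g, y, R)
    s = (r + c * x) % (p - 1)       # response ties nonce, challenge, secret
    return commitment, s

def verify(commitment: int, s: int) -> bool:
    c = challenge(commitment)
    # Accept iff g^s == R * y^c (mod p), which holds when the prover knew x.
    return pow(g, s, p) == (commitment * pow(y, c, p)) % p

x = secrets.randbelow(p - 1)        # the secret "witness"
y = pow(g, x, p)                    # public statement: "I know x for this y"
R, s = prove(x)
print(verify(R, s))                 # True -- yet the transcript never exposes x
```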
Fully Homomorphic Encryption (FHE), on the other hand, takes a different approach. FHE allows computations to be performed directly on encrypted data, meaning that sensitive information remains encrypted even during processing. This capability is revolutionary for sectors requiring high levels of data confidentiality, such as finance, healthcare, and, notably, the blockchain industry. By enabling encrypted data processing, FHE can enhance privacy without compromising functionality, thereby offering a broader range of applications. Investors have already latched on to the FHE narrative. Zama, a protocol building open source frameworks for FHE, recently raised a $73 million series A, making it one of the most anticipated and heavily funded projects this year.
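The core trick of computing on ciphertexts can be glimpsed even in textbook RSA, which happens to be homomorphic for multiplication: multiplying two ciphertexts yields an encryption of the product of the plaintexts. FHE schemes generalize this to both addition and multiplication, i.e. arbitrary circuits, at far greater cost. A deliberately insecure toy sketch of that one-operation version:

```python
# Textbook RSA with tiny primes (61, 53) -- insecure, illustration only.
# It is multiplicatively homomorphic; FHE extends the idea to any circuit.
n, e, d = 3233, 17, 413

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
product_ct = (enc(a) * enc(b)) % n   # multiply ciphertexts...
print(dec(product_ct))               # 42 -- computed without ever decrypting a or b
```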
One of the primary arguments in favor of FHE over ZK is its versatility in data processing. While ZK excels in proving the authenticity of data without revealing it, FHE allows for more complex operations on encrypted data. This ability is particularly crucial in scenarios where multiple private inputs need to be combined and processed without exposure, such as in private governance, sealed-bid auctions, and secure multi-party computations.
Moreover, FHE could be a game-changer for dApps and services. Imagine a world where on-chain gaming, private voting, and auctions can be conducted entirely on encrypted data, ensuring user privacy and data integrity. Things previously not possible onchain, like efficient prediction markets, onchain casino games, or even something as simple as hidden items in games, now become feasible. Guy Itzhaki, CEO of Fhenix, emphasizes that FHE enables developers and users to maintain confidentiality for specific assets onchain, potentially unlocking new use cases and transforming the decentralized ecosystem.
Despite its promising capabilities, FHE is not without challenges. The computational overhead associated with FHE is significantly higher than with traditional encryption methods, posing scalability issues. Even with recent strides in computation performance, it would be impossible with the current tech stack to compute FHE transformations for complex applications. However, ongoing research and advancements in hardware acceleration are gradually addressing these concerns, making FHE more feasible for practical applications. One such advancement is the development of FPGA architectures designed for FHE computation.
In comparison, ZK technology is relatively lightweight and faster, making it suitable for applications where speed and efficiency are critical. For instance, ZK can be effectively used for transaction validation, where swift processing times are essential. Additionally, the scalability features of zk-SNARKs and zk-STARKs have been crucial in managing the growing data demands of consumer networks.
For investors, the potential of FHE lies in its ability to offer unparalleled privacy and security, which could drive its adoption in high-value sectors. The financial and healthcare industries, for example, could greatly benefit from FHE’s capabilities.
Moreover, the integration of FHE onchain could lead to the development of new dApps that prioritize privacy, potentially attracting a broader user base concerned with data security. As the demand for secure data processing grows, investments in FHE-based solutions could yield significant returns.
It’s essential to recognize, however, that FHE and ZK are not mutually exclusive technologies. In fact, their combined use can lead to even more robust privacy solutions. For instance, using FHE for encrypted data processing and ZK for verifying computations can ensure both the confidentiality and authenticity of data, creating a comprehensive security framework.
While Zero-Knowledge has firmly established a place in the blockchain ecosystem, Fully Homomorphic Encryption presents a compelling case for the future. Its ability to perform computations on encrypted data without decryption holds transformative potential for privacy-sensitive applications. However, the high computational costs and current technical limitations of FHE must be addressed to realize its full potential.
FHE represents an exciting frontier. Its broad application range, coupled with ongoing advancements, suggests that it could become a core technology for privacy in blockchain. Ultimately, the interplay between FHE and ZK will likely shape the future of blockchain privacy, with both technologies playing crucial roles in different contexts. For now, FHE’s promise of secure, encrypted data processing makes it worth watching.
Artificial World Model
Data is the lifeblood of artificial intelligence. As LLMs grow exponentially in size, with newer models like GPT-4 reportedly trained on approximately 13T tokens, the need for training data is continuously growing. We understand that the “world model” we all possess so clearly as humans may still be out of reach for current AI models, whether due to a lack of high-quality data, architectures, training strategies, or some combination of these. Even so, researchers remain fixated on the end form of AI: artificial general intelligence.
Said pursuit of data has catalyzed advancements across AI, notably in natural language processing and image recognition. LLMs stand out in this field due to their ability to decode and replicate seemingly natural human communication. These models leverage transformer architectures, which enable them to encode the context and meaning behind words into high-dimensional vectors, transform those vectors layer by layer, and decode them back into human-readable language. Models like these perform as a function of the quantity and quality of training data provided. It’s no wonder that GPT-4's staggering consumption of over 10 trillion tokens has seemingly exhausted the internet's textual resources. These data appetites underscore the models' dependency on expansive corpora to refine their linguistic mimicry.
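The mechanism doing that vector mixing is scaled dot-product self-attention, the core of the transformer. A minimal NumPy sketch, with random weights standing in for a trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise affinity between tokens
    return softmax(scores) @ V                # each output mixes context by affinity

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 "tokens", 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16): contextualized token vectors
```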
Image diffusion models, a related technology, work by introducing noise into image data, mimicking the natural process of diffusion, and then learning to reverse that process to generate coherent images. DALL-E, OpenAI’s image model, is no less voracious than its textual counterpart, using over 400 million image-caption pairs for its training.
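The forward (noising) half of that process is simple enough to write down directly: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The sketch below uses a common linear noise schedule; the large trained denoiser that runs the process in reverse is what DALL-E-class models spend their data on, and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(size=(8, 8))            # stand-in "image"
betas = np.linspace(1e-4, 0.02, 1000)    # linear noise schedule (illustrative)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def noise_to_step(x0, t):
    """Jump straight to timestep t of the forward diffusion process."""
    eps = rng.normal(size=x0.shape)      # Gaussian noise to inject
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

print(noise_to_step(x0, 999).std())      # late steps approach pure noise (std ~ 1)
```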
Now, it seems that the insatiable appetite for data within AI research has begun to stifle the development of new models. Synthetic data generation has offered a controlled yet expansive playground for AI systems to learn from, but synthetic data can only go so far, and it won’t be long before LLM-generated content is essentially all that’s left online. The architecture of current models, while impressive, has not achieved the efficiency or integration seen in the human brain. A consensus is emerging that a significant breakthrough in model design is necessary to fulfill the vision of AGI: a shift from merely amassing training data to rethinking the fundamental structures that underpin the transformer architecture.
Our vision of AGI stretches far beyond current AI capabilities, aiming for a breadth of competence that rivals human intellect. Broad competence in AGI is about adaptability, where learning and adjusting to new environments and tasks happens without explicit reprogramming. AGI must grasp and reason through complex ideas, much like humans do, making sense of abstract concepts and applying logic in diverse scenarios. Transfer learning is key, allowing an AGI to apply knowledge from one domain to an entirely new one with ease. AGI would need to autonomously learn from raw data and experiences, processing sensory inputs like sight and sound without human guidance. With self-directed goals, an AGI would set and pursue its own objectives. And, at the pinnacle of this vision lies the contentious possibility of AGI experiencing consciousness, a trait that remains speculative but captures the essence of what it means to be truly intelligent.
Navigating the path to AGI, we realize data alone won't be enough. Yann LeCun of Meta believes that real-world interaction, not language, is the bedrock of learning. Today's AI can ace tests but lacks the common sense a teenager uses to drive a car. This gap signals a need for a paradigm shift. Could novel hardware and software solutions bridge this chasm? Understanding visual input doesn't equate to comprehending the surrounding world. Language, while a dense and complex way of communicating intent, emotion, and data, is just the tip of the iceberg. AGI isn't just about more data or processing power; it's about an AI's embodied experience in the physical world.
Robotics might hold the key to the next leap in AI's evolution. By providing sensory inputs and a physical form, robots could offer AI a means to understand the world more like a human does. Consider this: all the language data we've fed into AI amounts to roughly 10¹³ bytes. A young child, on the other hand, absorbs 10¹⁵ bytes through their optical nerve. That’s two orders of magnitude more data, absorbed far more efficiently. Innovative methods, such as equipping robots with cameras or even strapping cameras to infants, could unlock a kind of data that is vital for developing the sophisticated understanding that underpins human intelligence.
With every stride toward AGI, we’re walking on thin ice between groundbreaking innovation and an uncharted ethical landscape. Robots and AI systems that meet our goals of AGI will inevitably surpass human intelligence, and this raises profound questions about the role of AI in a society where humans and machines are increasingly meshed together.

