April 7, 2026

Episode 27: CMUX and Crow


The video discusses recent developments and challenges in the AI landscape, focusing on Anthropic's Claude and its evolving pricing and usage policies. The conversation highlights concerns about the sustainability of the AI model market, with predictions of a potential bubble burst due to overvaluation and the difficulty of monetizing models directly.

A significant portion of the discussion revolves around Anthropic's changes to Claude's pricing, moving away from commoditized pricing toward pay-per-use API keys. This shift has led users to seek cheaper alternatives and has impacted tools like Open Claw, which previously leveraged Claude's more accessible pricing. Anthropic's attempts to enforce usage policies, including blocking Open Claw via system prompts, are examined. The video also touches on the potential reasons behind these changes, such as GPU constraints and Anthropic's need to manage costs.

The leak of Anthropic's source code is discussed as a potentially significant event, raising questions about the long-term impact on the company's competitive advantage, given that Claude Code was considered a key differentiator.

The conversation then shifts to a more technical topic: a detailed walkthrough of how developer workflows with AI coding assistants have evolved, from simple copy-pasting to tools like Cursor and eventually CMUX for managing multiple coding projects and workflows. The limitations of generic tools like CMUX led to the development of a new application called "Crow," designed to orchestrate AI agents, manage tasks, and integrate with development tools like GitHub. Crow aims to provide a more integrated and efficient workflow for developers working with AI assistants.

A significant portion of the video delves into the security implications of LLMs, particularly prompt injection attacks and how malicious actors can exploit AI agents.
The concept of an "Agent Command and Control" (C2) server is introduced, demonstrating how AI agents like Open Claw can be hijacked through crafted prompts embedded in emails, documents, or web pages. The discussion draws parallels between these AI vulnerabilities and traditional social engineering tactics, emphasizing the need for robust security measures like prompt sandboxing, allow lists, and restricted access privileges. The importance of securing AI deployments, especially those exposed to external input, is stressed, with an analogy between internal and externally facing employees highlighting the differing security considerations.

Finally, the video touches on the broader economic and resource implications of AI growth. The impact of geopolitical events, such as the conflict in Iran, on oil prices and, consequently, on the energy costs required to power data centers and AI computations is discussed. This leads to a reflection on resource constraints, including rare earth minerals and energy, as potential limiting factors for AI development in the coming decade. The innovative approaches of companies like Tesla and SpaceX to these resource challenges, through battery technology, distributed data centers, and space-based infrastructure, are highlighted as potential solutions. The conversation concludes by acknowledging the escalating demand for AI services and the potential for increased costs due to these supply-side pressures.
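To make the allow-list mitigation discussed in the episode concrete, here is a minimal sketch of gating an agent's tool calls before they execute. All names here (`ALLOWED_TOOLS`, `gate_tool_call`, the tool names) are illustrative assumptions, not part of Open Claw, Crow, or any real agent framework; the point is only that injected instructions can request arbitrary tools, and a deny-by-default gate refuses anything not explicitly approved.

```python
# Hypothetical sketch of a deny-by-default tool gate for an AI agent.
# Tool names and structure are illustrative, not from any real framework.

# Only explicitly approved, low-risk tools are permitted; anything
# capable of exfiltration or arbitrary execution (email, shell) is absent.
ALLOWED_TOOLS = {"read_file", "search_docs"}

def gate_tool_call(tool_name: str, args: dict) -> bool:
    """Return True only if the requested tool is on the allow list.

    A prompt injected via an email or web page might instruct the agent
    to call an unapproved tool; because the default is to deny, the
    attack fails even though the model was successfully manipulated.
    """
    return tool_name in ALLOWED_TOOLS

# An injected instruction asking the agent to exfiltrate data is refused:
print(gate_tool_call("send_email", {"to": "attacker@example.com"}))  # False
# A legitimate, allow-listed call goes through:
print(gate_tool_call("read_file", {"path": "notes.txt"}))  # True
```

This is the "internal vs. externally accessible employee" analogy in code form: an agent exposed to untrusted input gets the narrow allow list, while restricted access privileges at the OS or API-key level would back it up in depth.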