In the BBC podcast The Digital Human, Aleks Krotoski explores the digital world and how it affects humanity. The BBC recently launched a spin-off, The Artificial Human, in which Krotoski and Kevin Fong answer the questions we're all asking about AI. Their recent episode "Can We Stop Saying AI Can Think?" inspired this collection of links:
• What is thought and how does thinking manifest in the brain? - We can describe different kinds of thought and how they arise, to some extent, but the relationship between neural activity and the nature of what we are thinking isn't well understood. (Scientific American)
• Frozen human brain tissue was successfully revived for the first time - In a groundbreaking development, scientists have discovered a new technique that allows human brain tissue to be frozen and thawed while maintaining its normal function. (BGR)
• Scientists Imaged and Mapped a Tiny Piece of Human Brain. Here’s What They Found - With the help of an artificial intelligence algorithm, the researchers produced 1.4 million gigabytes of data from a cubic millimeter of brain tissue. (Smithsonian Magazine)
• Designing a Workflow For Thinking - We’re living in a golden age of tools for thought. But with so many options, it’s important to carve out time every year or two for a “creative inventory” of how you discover and organize your ideas. (Steven Johnson’s Adjacent Possible Substack)
• Neuralink’s First User Is ‘Constantly Multitasking’ With His Brain Implant - Noland Arbaugh is the first to get Elon Musk’s brain device. The 30-year-old speaks to WIRED about what it’s like to use a computer with his mind—and gain a new sense of independence. (Wired)
• Single brain implant restores bilingual communication to paralyzed man - A bilingual AI brain implant helps a stroke survivor communicate in Spanish and English. The implant uses a form of AI to turn the man's brain activity into sentences, allowing him to participate in a bilingual conversation and "switch between languages." (Ars Technica)
• Mapping the Mind of a Large Language Model - Anthropic announced a research breakthrough in understanding the black box of how AI models work: the first detailed look inside a modern, production-grade large language model. This interpretability discovery could, in future, help make AI models safer. (Anthropic)
• Google Eats Rocks, a Win for A.I. Interpretability and Safety Vibe Check - “Pass me the nontoxic glue and a couple of rocks, because it’s time to whip up a meal with Google’s new A.I. Overviews.” Josh Batson, a researcher at the A.I. startup Anthropic, joins the Hard Fork podcast to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. (Hard Fork)
• Google’s broken link to the web - With AI search results coming to the masses, the human-powered web recedes further into the background. (Platformer)
• Why ChatGPT feels more “intelligent” than Google Search - Artificial intelligence caught the public’s imagination when OpenAI released its GPT-4-powered chatbot in 2023. For many users, ChatGPT feels like a true AI compared to other tools such as Google Search. In this op-ed, Philip L, the creator of the AI Explained YouTube channel, explains why he thinks new video interfaces will give people a sense of AI as a “thing” rather than a “tool.” (Big Think)
• With spatial intelligence, AI will understand the real world - In the beginning of the universe, all was darkness — until the first organisms developed sight, which ushered in an explosion of life, learning and progress. AI pioneer Fei-Fei Li says a similar moment is about to happen for computers and robots. She shows how machines are gaining "spatial intelligence" — the ability to process visual data, make predictions and act upon those predictions — and shares how this could enable AI to interact with humans in the real world. (TED)
• AI is cracking a hard problem – giving computers a sense of smell - Research on machine olfaction faces a formidable challenge due to the complexity of the human sense of smell. Whereas human vision mainly relies on receptor cells in the retina – rods and three types of cones – smell is experienced through about 400 types of receptor cells in the nose. (The Conversation)
• The Cloud Under The Sea - The internet is carried around the world by hundreds of thousands of miles of slender cables that sit at the bottom of the ocean. These fragile wires are constantly breaking - a precarious system on which everything from banks to governments to TikTok depends. But thanks to a secretive global network of ships on standby, every broken cable is quickly fixed. This is the story of the people who repair the world's most important infrastructure. (The Verge)
• What ideas in computer science are universally considered good? - Programmers love arguing for their favorite technologies: C++ vs. Rust, Mac vs. PC. These arguments overshadow the victories of computer science — the ideas we all agree on. (Daniel Hooper)
• The Ambling Mind - Meditations on the virtues of walking. Kierkegaard “walked himself into a state of well-being”; Nietzsche felt that “all truly great thoughts are conceived by walking”. Travel writer Nick Hunt, reflecting on his walk from the Netherlands to Istanbul, noted that walking turned the world into a continuum. “One thing merges into the next: cultures are not separate things but points along a spectrum.” (The Convivial Society)