Ben Brubaker in Quanta:
Today, even seasoned researchers find understanding in short supply when they confront the central open question in theoretical computer science, known as the P versus NP problem. In essence, that question asks whether many computational problems long considered extremely difficult can actually be solved easily (via a secret shortcut we haven’t discovered yet), or whether, as most researchers suspect, they truly are hard. At stake is nothing less than the nature of what’s knowable.
Despite decades of effort by researchers in the field of computational complexity theory — the study of such questions about the intrinsic difficulty of different problems — a resolution to the P versus NP question has remained elusive. And it’s not even clear where a would-be proof should start.
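The "hard to find, easy to check" asymmetry at the heart of the P versus NP question can be illustrated with a toy instance of subset sum, a classic NP problem. This is only an illustrative sketch (the function names and numbers are mine, not from the article): finding a subset that hits the target may require trying exponentially many combinations, while checking a proposed answer takes linear time.

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Brute-force search: try every subset (exponential in len(nums))."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_subset_sum(nums, target, candidate):
    """Checking a proposed answer is fast: membership plus one sum."""
    return all(x in nums for x in candidate) and sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
solution = solve_subset_sum(nums, 9)
print(solution)                               # [4, 5]
print(verify_subset_sum(nums, 9, solution))   # True
```

P versus NP asks, in effect, whether every problem whose answers can be verified this quickly also admits a fast method for finding them, rather than the brute-force search above.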
More here.

Arthur Conan Doyle secretly hated his creation Sherlock Holmes and blamed the cerebral detective character for denying him recognition as the author of highbrow historical fiction, according to the historian Lucy Worsley.
By the time Amanda Stern was in her mid-40s, she no longer suffered from clinical depression. And her panic attacks, which had started in childhood, were mostly gone. But instead of feeling happier, she said, “I felt wallpapered in an endless, flat sadness.” Confused, she turned to her therapist, who suggested that she had dysthymia, a mild version of persistent depressive disorder, or P.D.D. Ms. Stern, an author based in New York City, often writes about mental health, but she had never heard the term. She soon realized that she had experienced dysthymia on and off for decades. “I am not suffering from it right now,” she added, “but I imagine I’ll live with it again.” She decided to write about it in her newsletter.
An autonomous system that combines robotics with artificial intelligence (AI) to create entirely new materials has released its first trove of discoveries. The system, known as the A-Lab, devises recipes for materials, including some that might find uses in batteries or solar cells. Then, it carries out the synthesis and analyses the products — all without human intervention. Meanwhile, another AI system has predicted the existence of hundreds of thousands of stable materials, giving the A-Lab plenty of candidates to strive for in future.
As we head into COP28, the annual global climate meeting now underway in Dubai, there are two dominant schools of thought, both of which are wrong. One says the future is hopeless and our grandchildren are doomed to suffer on a burning planet. The other says we’re all going to be fine because we already have everything we need to solve climate change.
At around 11:30 a.m. on the Friday before Thanksgiving, Microsoft’s chief executive, Satya Nadella, was having his weekly meeting with senior leaders when a panicked colleague told him to pick up the phone. An executive from OpenAI, an artificial-intelligence startup into which Microsoft had invested a reported thirteen billion dollars, was calling to explain that within the next twenty minutes the company’s board would announce that it had fired Sam Altman, OpenAI’s C.E.O. and co-founder. It was the start of a five-day crisis that some people at Microsoft began calling the Turkey-Shoot Clusterfuck.
Noisy. Hysterical. Brash. The textual version of junk food. The selfie of grammar. The exclamation point attracts enormous (and undue) amounts of flak for its unabashed claim to presence in the name of emotion which some unkind souls interpret as egotistical attention-seeking. We’ve grown suspicious of feelings, particularly the big ones needing the eruption of a ! to relieve ourselves. This trend started sometime around 1900 when modernity began to mean functionality and clean straight lines (witness the sensible boxes of a Bauhaus building), rather than the “extra” mood of Victorian sensitivity or frilly playful Renaissance decorations.
What doesn’t kill you might make you stronger. When it comes to nature’s toxins, they might even save your life (or at least blunt the sting of its finality). The distinction, as the evolutionary biologist Noah Whiteman explores in “Most Delicious Poison,” is all in the dosage.
Teaching algorithms to mimic humans typically requires hundreds or thousands of examples. But a new AI from Google DeepMind can pick up new skills from human demonstrators on the fly.

One of humanity’s greatest tricks is our ability to acquire knowledge rapidly and efficiently from each other. This kind of social learning, often referred to as cultural transmission, is what allows us to show a colleague how to use a new tool or teach our children nursery rhymes. It’s no surprise that researchers have tried to replicate the process in machines. Imitation learning, in which AI watches a human complete a task and then tries to mimic their behavior, has long been a popular approach for training robots. But even today’s most advanced deep learning algorithms typically need to see many examples before they can successfully copy their trainers.
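In its simplest form, imitation learning is just a policy fit to recorded demonstrations. A minimal sketch, with invented states and actions (this is a generic nearest-neighbor illustration, not DeepMind's method): the learner stores the demonstrator's (state, action) pairs and, in a new state, copies the action taken in the most similar recorded state.

```python
import math

# Toy imitation learning: copy the demonstrator's action for the most
# similar previously seen state (a 1-nearest-neighbor policy).
# The (state, action) pairs below are invented for illustration.
demonstrations = [
    ((0.0, 0.0), "forward"),
    ((0.0, 1.0), "left"),
    ((1.0, 0.0), "right"),
]

def imitate(state):
    """Return the demonstrated action for the nearest recorded state."""
    nearest_state, action = min(
        demonstrations, key=lambda d: math.dist(d[0], state)
    )
    return action

print(imitate((0.1, -0.1)))  # "forward" — nearest to (0.0, 0.0)
```

The limitation the article describes falls out of this picture: a policy like this only works near states it has already seen, which is why conventional imitation learners need many demonstrations, and why learning from a handful of examples on the fly is notable.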