Erik Hoel in The Intrinsic Perspective:
The hardest question in neuroscience [is] a very simple one, and devastating in its simplicity. If even small artificial neural networks are mathematical black boxes, such that the people who build them don’t know how to control them, nor why they work, nor what their true capabilities are… why do you think that the brain, a higher-parameter and more messy biological neural network, is not similarly a black box?
And if the answer is that it is; indeed, that we should expect even greater opacity from the brain compared to artificial neural networks, then why does every neuroscience textbook not start with that? Because it means neuroscience is not just hard, it’s closing in on impossibly hard for intelligible mathematical reasons. AI researchers struggle with these reasons every day, and that’s despite having perfect access; meanwhile neuroscientists are always three steps removed, their choice of study locked behind blood and bone, with only partial views and poor access. Why is the assumption that you can just fumble at the brain, and that, if enough people are fumbling all together, the truth will reveal itself?
More here.