8 Comments
Feb 12 · edited Feb 12 · Liked by Richard Y Chappell

I think you can look to computer science and especially machine learning research to see what "publish then filter" looks like in practice in a huge and well-funded area of science.

To give a concrete recent example: there is a new paper introducing a technique called "Mamba", an alternative to the self-attention mechanisms in transformer neural networks. It's a highly regarded piece of work that already has a lot of hype and publicity. As far as anyone can tell, it seems to have just been *rejected* from a top computer science publication venue. But the paper and peer reviews are all public, so people are free to argue that this was a bad call, and it has not stopped the technique from being influential (it has 43 citations in the two months since the preprint appeared) and gathering follow-up work before even being formally published.

https://openreview.net/forum?id=AL1fq05o7H
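For anyone curious what the technique actually does: the core of Mamba is a selective state-space recurrence, where the state-transition parameters depend on the input at each step. Here is a rough NumPy sketch of that recurrence, just a toy illustration with made-up weight names, not the authors' implementation (which is heavily optimized for GPUs):

```python
import numpy as np

def selective_ssm(x, A, W_B, W_C, W_dt):
    """Toy selective state-space recurrence over a length-T, d-channel input.

    x        : (T, d) input sequence
    A        : (n,)   fixed diagonal state matrix (negative entries for stability)
    W_B, W_C : (d, n) projections making B and C input-dependent
    W_dt     : (d, d) projection making the step size input-dependent
    (the input-dependence is the "selective" part)
    """
    T, d = x.shape
    n = A.shape[0]
    h = np.zeros((d, n))                       # one hidden state per channel
    ys = []
    for t in range(T):
        dt = np.log1p(np.exp(x[t] @ W_dt))     # softplus -> positive step sizes, (d,)
        B = x[t] @ W_B                         # (n,)
        C = x[t] @ W_C                         # (n,)
        # Zero-order-hold discretization of dh/dt = A h + B x
        A_bar = np.exp(dt[:, None] * A[None, :])          # (d, n)
        B_bar = (A_bar - 1.0) / A[None, :] * B[None, :]   # (d, n)
        h = A_bar * h + B_bar * x[t][:, None]  # h_t = A_bar * h_{t-1} + B_bar * x_t
        ys.append(h @ C)                       # y_t = C h_t, shape (d,)
    return np.stack(ys)                        # (T, d)

# Toy usage with small random weights
rng = np.random.default_rng(0)
T, d, n = 10, 4, 8
x = rng.standard_normal((T, d))
A = -np.exp(rng.standard_normal(n))            # negative real eigenvalues
y = selective_ssm(x, A,
                  rng.standard_normal((d, n)) * 0.1,
                  rng.standard_normal((d, n)) * 0.1,
                  rng.standard_normal((d, d)) * 0.1)
print(y.shape)   # (10, 4)
```

The point of the selectivity is that, unlike a fixed linear recurrence, the model can decide per-token how much to update or forget its state, which is what lets it compete with attention.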

The system generally works reasonably well. Peer review in these fields is widely regarded as low quality and unreliable, but I believe this freewheeling culture, complemented by open-source software, is a big reason why progress in machine learning has still been so rapid in the last decade.

On the other hand, it definitely does not disincentivize low-quality papers. There is a lot of dreck.

And even when you have important, high-quality research, the quality of the actual *paper*, as a written product that explains and argues, is observably much lower in these fields than in others with the traditional journal system. I think this is probably a good tradeoff for scientific fields where the paper is just a description of the real contribution. It might be bad for philosophy, where to some extent the argument itself is the contribution.

Author

Interesting! What do you think explains why the papers are worse? Insufficient incentive to revise papers in light of post-publication comments?

Feb 12 · Liked by Richard Y Chappell

I think that's basically right, although I probably shouldn't claim that the peer review culture is the complete causal explanation.

The bar for release becomes "good enough to be taken seriously at all" rather than "good enough to get through tough peer review". And once you've written and published a paper, you're rarely going to do more than make incremental improvements: a "gut renovation", so to speak, is too costly, and given the other incentives at play, writing new papers is going to be more valuable.

Author

That makes sense, and could be a downside, though it depends a bit on whether authors are right to see "writing more new papers" as "more valuable". Presumably we want incentives to track philosophical value here as closely as possible, so it's worth reflecting on how best to achieve that.

For what it's worth, I was thinking that more investment in revisions would be rational for papers that had a shot at being more widely appreciated if further improved. But many papers don't have much chance of being "game-changers" to begin with, and may not be especially worth polishing once they pass a minimal bar.

Then again, it may be that papers at that minimal bar aren't even especially worth writing, and so it isn't especially worth writing *more* papers at that level. If so, it would be important for professional incentives to reflect this. One possibility is the "slow philosophy" idea that tenure decisions should be based on just the author's four best papers:

https://dailynous.com/2015/12/31/a-modest-proposal-slow-philosophy-jennifer-whiting/

Apr 20 · Liked by Richard Y Chappell

I assume you saw this announcement, but in case not, this blog post could be a good fit, or you could write something specifically for this competition: https://forum.effectivealtruism.org/posts/XxBwnSt4BcBhpJJyX/essay-competition-on-the-automation-of-wisdom-and-philosophy

Author

Thanks for the suggestion!

Mar 15 · Liked by Richard Y Chappell

I'm a huge believer in this. I have a nearly finished website that is trying to enable exactly the same transition for mathematics.

I basically think someone just needs to go out there and add evaluative/comment functionality on top of PhilPapers to get this started. Sure, maybe it won't be that great the first time out, but nothing prompts others to action like trying to fix what they think you did wrong.
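To make that concrete, the overlay could start from a data model as simple as the following. This is a rough Python sketch: the record identifiers, field names, and in-memory store are all made up for illustration, since PhilPapers doesn't expose a review API that I know of.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Review:
    """A public evaluation attached to an existing PhilPapers record."""
    record_id: str        # hypothetical: the paper's PhilPapers identifier
    reviewer: str
    score: int            # e.g. an overall 1-10 quality rating
    text: str
    posted: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class OverlayStore:
    """In-memory stand-in for whatever database the overlay would really use."""

    def __init__(self):
        self._by_record: dict[str, list[Review]] = {}

    def add(self, review: Review) -> None:
        self._by_record.setdefault(review.record_id, []).append(review)

    def reviews_for(self, record_id: str) -> list[Review]:
        return self._by_record.get(record_id, [])

    def mean_score(self, record_id: str) -> float | None:
        rs = self.reviews_for(record_id)
        return sum(r.score for r in rs) / len(rs) if rs else None

# Toy usage (the record ID is invented)
store = OverlayStore()
store.add(Review("EXAMPLE-1", "alice", 8, "Clear argument; section 3 overreaches."))
print(store.mean_score("EXAMPLE-1"))   # 8.0
```

The hard part isn't the software, it's seeding enough reviewers that the scores mean something; but a thin layer like this over existing records is a plausible starting point.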

If you ever start putting together a group to actually get this kind of thing off the ground, I'm happy to donate labor, programming, etc.

Feb 12 · Liked by Richard Y Chappell

You might like this interview I did with Nick Hadsell on whether philosophy journals should publish AI-generated papers: https://youtu.be/VgimwsL4Wek?si=zDFT1q6Rs84Y0Gkb
