Best and Worst Ancient Philosophy Journal Experiences

Waiting for months to get a desk rejection or a couple of brief and dismissive comments is frustrating, especially for job market candidates and early career faculty who have much riding on getting published quickly. To help ancient philosophy scholars considering which journals to submit to, I thought it would be good to highlight what recent public surveys submitted to the APA Journal Surveys project indicate about the editorial experience at journals that specialize in ancient philosophy and the history of philosophy. I am not including generalist journals that also publish some articles in ancient philosophy, both because there are a large number of these and because the survey aggregates may diverge from the experience of those submitting ancient philosophy papers (there’s no way to filter experiences based on topic).

First, a few caveats: 1) many ancient philosophy journals from my journals listing are not included because they have no submitted surveys; 2) even for those that are represented, there are a limited number of data points, especially for some journals (e.g. Classical Quarterly); and 3) we do not know how representative the survey respondents are of all authors submitting papers to these journals. However, the reported response times and comment counts closely track publicly available data where it exists (for example, for the Journal of the History of Philosophy and the British Journal for the History of Philosophy). These statistics also fit with the experiences I have heard about from others in the ancient philosophy community. They will hopefully become more accurate as more people submit to the APA Journal Surveys project, and I will revisit this topic as the data warrants. Also, if you are part of the editorial staff at one of these journals and think that the data about your journal is misleading, please contact me. I would be happy to share more complete information about submission statistics with readers.

JHP and BJHP, which are among the most transparent about their editorial practices, lead the way in editor experience scores. The APA journal surveys site asks respondents to rate the overall editorial experience on a scale of 1 to 5, and these two are the only ones with ratings in the 4.5-5 range, with JHP averaging 4.71 and BJHP averaging 4.6. Unsurprisingly, they also have some of the quickest turnaround times, with BJHP averaging around 3 months and JHP averaging under 2 months. It is worth noting that JHP has the highest overall rating even though at least a third of the authors reporting were rejected without their papers being sent out for review (usually in under a month). Authors don't seem to mind desk rejections if they come quickly and allow them to move on to the next journal. The other two journals that did well on editorial experience, with scores in the 4-4.5 range, are Classical Quarterly and Phronesis, at 4.33 and 4.05 respectively. These journals are also among the quickest, with Phronesis averaging a little over 2 months and Classical Quarterly under 4 months.

At the other end, the two journals with overall editorial experience scores under 3 are Oxford Studies in Ancient Philosophy and Apeiron: A Journal for Ancient Philosophy and Science. Time under review had a big influence here too. These two were the slowest of all the journals considered, with average times to decision of almost 8 months (OSAP) and 8.5 months (Apeiron). A large number of submissions had to wait even longer: 9 of the 15 reported submissions at OSAP waited at least 8 months for a decision, and 6 of the 20 reported submissions to Apeiron took a year or more to receive one. OSAP's low score was also affected by the fact that 3 of the 4 submissions that did not receive reviewer comments still had to wait at least 6 months. Desk rejection with a long wait is the worst combination.

The table below lists the relevant journal survey statistics for all the ancient and history of philosophy journals with at least 4 submitted surveys. Where a decent number of surveys were available, I used surveys from 2015 on, since the most important thing for potential authors is the current editorial situation. Where fewer surveys were available, I went back to 2011. The full spreadsheet with all the records assembled from the publicly available APA Journal Surveys data is available here.

| Journal name | Number of surveys | Comment count | Comment quality | Editor experience | Response time (months) | Time range |
| --- | --- | --- | --- | --- | --- | --- |
| Ancient Philosophy | 22 | 1.24 | 2.73 | 3.50 | 4.52 | 2011 on |
| Apeiron | 20 | 0.84 | 2.65 | 2.63 | 8.50 | 2011 on |
| Archiv für Geschichte der Philosophie | 4 | 2.00 | 4.00 | 3.00 | 6.00 | 2018 on |
| British Journal for the History of Philosophy | 38 | 2.00 | 3.77 | 4.60 | 2.96 | 2015 on |
| Classical Quarterly | 12 | 1.09 | 3.50 | 4.33 | 3.88 | 2011 on |
| History of Philosophy Quarterly | 18 | 2.00 | 4.00 | 3.94 | 5.58 | 2015 on |
| Journal of the History of Philosophy | 21 | 0.95 | 3.17 | 4.71 | 1.60 | 2015 on |
| Oxford Studies in Ancient Philosophy | 15 | 1.20 | 3.27 | 2.40 | 7.66 | 2015 on |
| Phronesis | 20 | 0.95 | 3.26 | 4.05 | 2.17 | 2015 on |
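For anyone who wants to reproduce aggregates like these from the spreadsheet, here is a minimal sketch of the filtering described above, written in Python with pandas. The file name and column names (journal, year, editor_experience, response_months) are hypothetical stand-ins for whatever the actual spreadsheet uses, and the cutoff I use for a "decent number" of recent surveys is my own assumption:

```python
import pandas as pd

# Hypothetical export of the public APA Journal Surveys records; the real
# spreadsheet's file name and column names may differ.
surveys = pd.read_csv("apa_journal_surveys.csv")

# Prefer recent surveys (2015 on); where a journal lacks a decent number of
# recent data points (the cutoff of 4 is an assumption), fall back to 2011 on.
recent = surveys[surveys["year"] >= 2015]
recent_counts = recent.groupby("journal").size()
enough_recent = recent_counts[recent_counts >= 4].index

pooled = pd.concat([
    recent[recent["journal"].isin(enough_recent)],
    surveys[(surveys["year"] >= 2011) & ~surveys["journal"].isin(enough_recent)],
])

# Drop journals with fewer than 4 surveys overall, then average the ratings.
pooled = pooled.groupby("journal").filter(lambda g: len(g) >= 4)
summary = pooled.groupby("journal").agg(
    surveys=("journal", "size"),
    editor_experience=("editor_experience", "mean"),
    response_months=("response_months", "mean"),
).round(2)

print(summary.sort_values("editor_experience", ascending=False))
```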

Full disclosure: At the time of posting, I have personally submitted to, published in, and/or refereed for all of these journals except for Classical Quarterly. The basis for this post is not, however, my personal experiences, which do not always match the overall trends. For example, my last submission to Apeiron was looked at within two months and received excellent comments (leading to acceptance after revision and resubmission), and my last submission to OSAP received two sets of helpful comments within a reasonable timeframe (though it was rejected).

Comments

  • djr

    My first response to these reports was: wow, much of that doesn’t fit my experience at all. So I thought it might be worth sharing some of my own relevant experiences in order to illustrate that reports like these do not necessarily predict the sort of response you will receive if you choose to submit to these journals.

    Take Apeiron first. I published an article in Apeiron in 2016. I did not have to wait 8 months for a decision; it was more like 3 months from submission to acceptance. I have also reviewed for Apeiron numerous times, and though I cannot say how much time had elapsed between submission and my receipt of the articles or between my submission of reports and the communication of a decision, I have on each occasion returned a report within a month, as the editors request. I have also known several others who have published papers in Apeiron, and I do not recall their having to wait nearly so long for a decision. Similarly, I received a decision from CQ in under a month, while I once received a response from Ancient Philosophy only after waiting for many months and contacting the editor for an update. So the relative ranking of AP and Apeiron in these surveys does not fit my experience at all, even though it reflects the experience of 20+ others.

    In short, I suspect that individual experiences differ widely, and I would discourage people from taking these survey numbers as an accurate prediction of the experience they'll have if they choose to submit to one of these journals. The one exception: pretty much everybody I've ever heard from reports that OSAP takes forever.

    • Caleb Cohoe

      Yes, this information is provisional and may be only a rough representation. As I mentioned at the end of the post, my own previous experience at Apeiron took significantly less time and was quite positive. But I wanted to summarize what's in the surveys, not what my own experiences have been. It would be much better, of course, to have all journals provide the sort of data on submissions and processing that JHP offers. My hope is that highlighting the survey information that is available might encourage more journals to be transparent. I would also say that some journals are more variable than others. One reason that JHP gets high ratings is that they are very consistent: desk rejections always come in under a month, and refereed papers almost always get a decision within 3 months. Being able to reasonably predict when you'll get a decision is quite helpful.

      • djr

        I think I might have written my earlier response too hastily and inadvertently given the impression that I was critical of your post. I meant, instead, to add my own anecdotal evidence to yours, lending further weight to your point that the survey data might not accurately represent normal experience with these journals. But the survey data is, of course, useful in any case, even if only as yet more anecdota. I do wonder whether these surveys are more likely to be filled out by people who have had unusually negative or positive experiences. Online product reviews often seem that way: unusually satisfied and unusually dissatisfied people write strong positive and negative reviews, but the boringly content majority write nothing. APA surveys are not Yelp reviews, but a similar sort of bias could be a factor. However that may be, it's helpful to make the data, however limited, more publicly visible. Your point about consistency is right on, too: editors can only do so much to ensure consistency, but it would help to have a reliable idea of how long things will take.

  • Caleb Cohoe

    No worries! I take your point about the possibility of extremes being overrepresented. That would not surprise me. I do, however, know that some people get into the habit of submitting a survey whenever they hear back from a journal. If enough people develop that habit, we’ll get more representative data.

  • Clerk

    I would also be interested to know at which journals editors review (and desk-reject) non-anonymized papers. I think at least one journal on your list does this.

    • Manu

      Interesting point of view. I've been surprised several times by articles published in some of these journals that, from my point of view, didn't contribute anything new, nor were they brilliant explications of something already known. Furthermore, some articles strike me as poor research. But then you see that Mr. X is already known in the field, and this seems to serve as a kind of free pass to publish whatever he wants in order to fulfill publishing requirements. I'm not talking about a deep critique of such articles; I've found basic logical faults (e.g. an ex silentio argument that guided a whole article) that make me think the editors didn't read the paper or just didn't care about it (I have a list of published articles that contain important structural flaws). To me the system is biased in favour of the already-known, kind-of-my-friend scholars. Still, many articles are great pieces of work. But not all of them.
