Many people have been asking what happens now that the CodeMash CFP is closed. The TL;DR version is that the content committee spends every free moment between now and the end of September reviewing all of the abstracts, with an eye toward assembling the best content lineup possible for the upcoming conference. If you care about the longer version, keep reading.
This year we once again saw a record-breaking number of submissions: 1,237 in total, roughly 100 more than last year and 500 more than two years ago. While we are thrilled to have this much interest, it is a bit overwhelming and certainly not a task that one person could perform. To that end, we have an amazing content review committee made up of 14 people from a broad spectrum of technology backgrounds. Each reviewer is assigned two categories to review based on their expertise, which gives us at least two reviewers per track and per session.
The review committee is currently working through phase 1 of the reviews. During this phase, they review each abstract and rate the proposed talk on its merits: Is the abstract clear and coherent? Is it obvious what attendees will learn and how they will benefit? Does it fit the selected category? Does the speaker have the background to support their knowledge of the topic? They also make notes on the general themes present within the category and formulate a plan for which themes they want to hit, and with what force.
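To make the rubric concrete, here is a minimal sketch of how a phase-1 rating might be recorded. The four criteria come straight from the questions above, but the 1–5 scale, the unweighted average, and all names are assumptions for illustration, not the committee's actual scoring system.

```python
# Criteria from the post's phase-1 questions; the scale and scoring
# scheme below are hypothetical.
CRITERIA = ["clear_abstract", "attendee_benefit", "category_fit", "speaker_background"]

def phase1_score(ratings):
    """ratings: dict mapping each criterion to a 1-5 score.

    Returns a simple unweighted average; raises if a criterion is unrated.
    """
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

score = phase1_score({
    "clear_abstract": 5,
    "attendee_benefit": 4,
    "category_fit": 5,
    "speaker_background": 4,
})
```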
As soon as phase 1 is complete, we start (you guessed it…) phase 2 (I spent a lot of time planning that out). In phase 2, each reviewer builds a voting roster of the talks in their two categories. The idea is: if you could only get one talk from this category into the conference, which would it be? What about two? And so on. This sorting is based not only on the quality of the talks (from phase 1) but also on theme distribution. For example, if the security track is going to have 13 total talks – a full track – you wouldn't necessarily want three of those talks to be on threat modeling; you would want to diversify so that attendees get an appropriately broad set of content.
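One way to think about that quality-versus-diversity trade-off is as a greedy ranking that discounts a talk's score for each same-theme talk already picked. The penalty weight, the talk data, and the function below are all made up to illustrate the idea; this is not the committee's actual algorithm.

```python
from collections import Counter

def build_roster(talks, slots):
    """Pick `slots` talks from (title, theme, quality) tuples.

    Greedy: each round, take the candidate with the best quality score
    minus a penalty for how often its theme has already been chosen,
    so a third threat-modeling talk loses out to a fresh topic.
    The 1.5 penalty weight is an arbitrary illustrative choice.
    """
    roster = []
    theme_counts = Counter()
    remaining = list(talks)
    for _ in range(min(slots, len(remaining))):
        best = max(remaining, key=lambda t: t[2] - 1.5 * theme_counts[t[1]])
        roster.append(best)
        theme_counts[best[1]] += 1
        remaining.remove(best)
    return roster

# Hypothetical security-track submissions (title, theme, phase-1 score).
security_talks = [
    ("Threat Modeling 101", "threat-modeling", 9.0),
    ("Advanced Threat Modeling", "threat-modeling", 8.5),
    ("Threat Modeling at Scale", "threat-modeling", 8.0),
    ("Intro to OAuth", "auth", 7.5),
    ("Supply Chain Security", "supply-chain", 7.2),
]
picks = build_roster(security_talks, 3)
```

With these numbers, the roster keeps the strongest threat-modeling talk but then prefers the OAuth and supply-chain talks over the second- and third-best threat-modeling submissions, even though those scored higher in isolation.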
Once these selections have been made, we combine the results from the two or three reviewers in each category into a master list for that category. We then reconcile those lists against the master "budget" (how many talk slots are allocated to each category) and send a proposed cut list around to the committee members. The committee has a chance to fight for talks within their topics – to get more or different talks that they feel particularly passionate about – and then the list is finalized. About this time, September is over and we start sending out selection notices.
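The budget-and-cut-list step can be sketched as trimming each category's ranked roster at that category's slot allocation. The category names, rosters, and budgets here are invented for illustration:

```python
# Hypothetical per-category ranked rosters (best first) and slot budgets.
rosters = {
    "security": ["Talk A", "Talk B", "Talk C", "Talk D"],
    "web":      ["Talk E", "Talk F", "Talk G"],
    "data":     ["Talk H", "Talk I"],
}
budget = {"security": 2, "web": 2, "data": 1}

# Everything within budget is tentatively accepted; the remainder forms
# the proposed cut list that the committee can then argue over.
accepted = {cat: talks[: budget[cat]] for cat, talks in rosters.items()}
cut_list = {cat: talks[budget[cat]:] for cat, talks in rosters.items()}
```

Because each roster is already ranked, "give security one more slot" is just a budget change; the next talk promoted off the cut list falls out automatically.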
We understand that there are infinite ways to adjust this process and many ways to improve it. We are always looking at how things worked the prior year with an eye toward how we can improve the following year.
This Year’s Statistics
For those of you interested, here are some statistics from this year’s CFP:
· General Sessions: 1,069 submissions. Projected Acceptance Rate: 18%
· Pre-Compilers: 107 submissions. Projected Acceptance Rate: 37%
· Kidz Mash: 60 submissions. Projected Acceptance Rate: 25%
· Individual Prospective Speakers: 484
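For a rough sense of scale, the projected rates above imply approximately the following accepted counts. This is only back-of-the-envelope arithmetic on the published projections, not official slot counts:

```python
# Submission counts and projected acceptance rates from the list above.
submissions = {"General Sessions": 1069, "Pre-Compilers": 107, "Kidz Mash": 60}
rates = {"General Sessions": 0.18, "Pre-Compilers": 0.37, "Kidz Mash": 0.25}

# Implied accepted-talk counts, rounded to the nearest whole talk.
projected = {k: round(submissions[k] * rates[k]) for k in submissions}
# Roughly 192 general sessions, 40 pre-compilers, and 15 Kidz Mash talks.
```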
The following charts stand on their own and are presented without further comment: