PAPER REVIEW PROCESS
Register as a Paper Reviewer
Log in here to review papers. If you already have a Microsoft Conference Management Toolkit (CMT) account, you can use it. Otherwise, if you are new to CMT, please click on register.
When you log in for the first time as a reviewer for MLSP 2023, please select one primary subject area and up to five secondary subject areas. To help us ensure sufficient reviewers for submitted papers, please choose a broad selection of secondary subjects for which you are willing to review papers.
Paper Evaluation Process
The Evaluation Process in Short
- We conduct a double-blind review process. Each paper will be evaluated anonymously by three independent experts in accordance with the Code of Ethics and Policies. Reviewers will be selected automatically by matching their expert profiles with the topic of the submitted paper. A conflict of interest is declared when authors and reviewers are from the same institution, or when the authors are affiliated with an institution with which the reviewer has indicated a conflict of interest due to collaboration, family relations, or the like.
- The reviewers will indicate their familiarity with the paper’s subject, evaluate the paper along four evaluation criteria (see below), and provide comments for the authors.
- The program chairs will consider the review scores, carefully examine the reviewers’ comments, pay particular attention to all aspects of borderline papers, and also scrutinize reviewers’ comments on high-ranked papers. Finally, they produce the list of accepted papers, targeting an overall nominal acceptance rate of 45 ± 3% with an absolute cap of 50%.
Reviewer’s Familiarity with the Paper’s Subject
Familiarity is scored as follows:
- excellent: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature.
- very good: The reviewer is confident but not absolutely certain that the evaluation is correct. It is unlikely but conceivable that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature.
- good: The reviewer is fairly confident that the evaluation is correct. It is possible that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature. Mathematics and other details were not carefully checked.
- fair: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper.
- poor: The reviewer’s evaluation is an educated guess. Either the paper is not in the reviewer’s area, or it was extremely difficult to understand.
Evaluation Criteria
A quality paper is one that scores highly along the following criteria. The criteria reflect independent aspects of a paper’s quality and are therefore scored independently. Each criterion has an acceptance threshold and an extended description of the issues to be evaluated. All marks indicated in green are above the threshold. The scores are interpreted as follows:
- excellent: This paper is of outstanding quality and in the top 10% of accepted papers
- very good: The paper is of very good quality and in the top 25% of accepted papers
- good: The paper is of average quality and in the top 50% of accepted papers
- fair: The paper is of fair quality but below the acceptance threshold
- poor: The paper is of poor quality and should definitely be rejected
Criterion 1: Relevance to the Conference Call and Degree to Which the Paper Is a Timely Contribution
Score: Excellent, very good, good, fair, poor
Interpretation: Is the paper within the scope of the workshop? Are the results important and timely?
Criterion 2: Scientific/Technical Originality and Potential Impact
Score: Excellent, very good, good, fair, poor
Interpretation: Are the problems or approaches new? Where possible, reviewers should identify submissions that are very similar (or identical) to versions that have been previously published. Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Are other people (practitioners, researchers or the commercial sector) likely to use these ideas or build on them? Are the results likely to have an impact on the research community or commercial sector?
Criterion 3: Scientific/Technical Content and Advances Beyond the State-of-the-Art
Score: Excellent, very good, good, fair, poor
Interpretation: Is the paper technically sound? Is related work adequately referenced? Are claims well-supported by theoretical analysis or experimental results? Is this a complete piece of work, or merely a position paper? Are the authors careful and honest about evaluating both the strengths and weaknesses of the work? Does the paper address a difficult problem in a better way than previous research? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions on existing data, or a unique theoretical or pragmatic approach?
Criterion 4: Quality and Clarity of the Presentation
Score: Excellent, very good, good, fair, poor
Interpretation: Is the paper clearly written? Is it well-organized? Does it adequately inform the reader? A superbly written paper provides enough information for the expert reader to reproduce its results and may be assisted by cited supplementary material such as detailed explanations, derivations, code, and data.
Criterion 5: Comments for the Authors
Interpretation: Provide an overall summary and detailed comments related to each evaluation criterion. If appropriate, include suggestions to improve the work. Make sure that high marks are reflected by positive comments and low marks by negative comments. Avoid offensive comments and anything that could reveal the reviewer’s identity.