ICCV 2025 Reviewer Guidelines
Thank you for taking the time to review for ICCV 2025! To maintain a high-quality technical program, we rely heavily on the time and expertise of our reviewers. This document explains what is expected of all members of the Reviewing Committee for ICCV 2025.
Contents
- What’s New for Reviewers at ICCV 2025
- Reviewing Process
- Reviewing Timeline
- Responsible Reviewing Policy
- Reviewing Deadline Policy
- How to Write Good Reviews
- What Reviewers Should Look Out For in Papers
- Ethics for Reviewing Papers
- FAQs for Reviewing Papers
What’s New for Reviewers at ICCV 2025?
To improve the review process and uphold the conference’s high standards, ICCV 2025 will strictly enforce a Responsible Reviewing Policy and a Reviewing Deadline Policy. Any reviewer whose review is deemed to be “highly irresponsible” will face desk rejection of all papers on which they are an author, at the discretion of the PCs. In addition, any reviewer who fails to submit their assigned reviews by the deadline is subject to desk rejection of all papers on which they are an author. Please see the sections below on the Responsible Reviewing Policy and the Reviewing Deadline Policy for more details.
Reviewing Process
There are three groups of people involved in the reviewing process: Program Chairs (PCs), Area Chairs (ACs), and Reviewers. At ICCV 2025, we have 6 PCs, ~500 ACs, and ~8,000 reviewers.
ICCV reviewing is double-blind: authors do not know the names of the area chairs or reviewers for their papers, and the area chairs and reviewers are not told the names of the authors. PCs have visibility into the entire process, including the names of authors, reviewers, and ACs for each submission. Note that the PCs cannot submit papers to ICCV 2025.
There are several steps in the reviewing process:
- Papers are assigned to Area Chairs (ACs), usually no more than 30 papers per AC.
- ACs suggest multiple reviewers per paper, with the help of OpenReview Matching.
- Papers are assigned to reviewers (3 per paper) using an optimization algorithm that takes into account AC suggestions, paper load and conflict constraints.
- Reviewers submit initial reviews, typically handling 5-8 papers each.
- ACs check the quality of reviews and assign emergency reviewers as necessary.
- Papers authored by reviewers who have not submitted reviews on time, or who have submitted reviews deemed highly irresponsible, will be desk rejected at the discretion of the PCs.
- Authors receive reviews, and have the option of submitting a rebuttal to address any concerns the reviewers may have.
- Reviewers and ACs discuss each paper, based on all reviews, the rebuttal, and the paper itself.
- Reviewers update their ratings and justification.
- ACs draft meta-reviews.
- ACs discuss their meta-reviews and tentative decisions with two other ACs in an “AC Triplet”. They discuss borderline papers and check meta-reviews for their quality. In addition to accept/reject decisions, AC triplets provide nominations for spotlights, orals and awards to the PCs.
- PCs make the final determination for accepting/rejecting papers, assigning papers to the oral/spotlight/poster categories and drafting a list of nominations to be sent to the award committee.
Reviewing Timeline
| Milestone | Date |
| --- | --- |
| Paper submission deadline | Mar 7, 2025 |
| Papers assigned to reviewers | Mar 27, 2025 |
| Reviews due (strict deadline) | Apr 23, 2025 |
| Reviews released to authors | May 9, 2025 |
| Rebuttal deadline | May 16, 2025 |
| Final ratings due by reviewers after discussions | May 27, 2025 |
| Final AC decisions due | Jun 18, 2025 |
| Decisions released to authors | Jun 20, 2025 |
Responsible Reviewing Policy
At ICCV 2025, reviewers are expected to provide fair and thoughtful reviews. Examples of highly irresponsible behavior include: one-sentence reviews, reviews generated by Large Language Models (LLMs), reviews that are irrelevant to the paper, or reviews that overlook substantial portions of the paper. This does not cover cases where reviewers merely exhibit misunderstandings, miss small details of the paper, or hold opinions that differ from those of other reviewers or the AC. If a review is flagged as highly irresponsible, it will undergo an oversight process managed by the Program Chairs (PCs). Any reviewer whose review is deemed highly irresponsible will face desk rejection of all papers on which they are an author, at the discretion of the PCs.
Reviewing Deadline Policy
At ICCV 2025, reviewers are also expected to provide timely reviews. Historically, previous ICCV conferences (and CVPR, ECCV, and NeurIPS, amongst others) faced challenges with some reviewers failing to meet the review submission deadlines. It came to be accepted that there was an unofficial grace period after the reviewing deadline. In some cases, reviewers failed to respond to multiple reminders and did not submit their reviews at all. This forced Area Chairs (ACs) to follow up diligently with late reviewers and to assign emergency reviewers to ensure each paper received the minimum of three reviews. Over the years, this has added to the already large workload and stress of the conference organizers. To improve the review process and uphold the conference’s high standards, ICCV 2025 will strictly enforce the reviewing deadline. Any reviewer who fails to submit their assigned reviews by the deadline will face desk rejection of all papers on which they are an author, at the discretion of the PCs. This policy aims to ensure fairness and accountability across the reviewing process while reducing the burden on ACs and other conference organizers.
There will be multiple emails sent out to all reviewers as a reminder to submit reviews in a timely manner. Additionally, the co-authors of reviewers who have not submitted their reviews will also be notified that their submission may be desk rejected if all authors do not submit their reviews in time.
How to Write Good Reviews
Check your papers
As soon as you get your reviewing assignment, please go through all the papers to make sure that (a) you have no obvious conflict of interest (see “Avoid Conflicts of Interest” below) and (b) you feel comfortable reviewing the paper assigned. If issues with either of these points arise, please contact the Area Chair right away as instructed in the detailed emails you will receive during the process.
Know the policies
Please read the Author Guidelines carefully to familiarize yourself with all official policies the authors are expected to follow. If you come to believe that a paper may be in violation of any of these policies, please contact the Chairs. In the meantime, proceed to review the paper assuming no violation has taken place.
Be Mindful
Each paper that is accepted should be technically sound and make a contribution to the field. Look for what is good or stimulating in the paper, and what knowledge advance it has made. We recommend that you embrace novel, brave concepts, even if they have not been tested on many datasets. For example, the fact that a proposed method does not exceed the state-of-the-art accuracy on an existing benchmark dataset is not grounds for rejection by itself. Rather, it is important to weigh both the novelty and potential impact of the work alongside the reported performance. Minor flaws that can be easily corrected should not be a reason to reject a paper.
Be Detailed
Take the time to write good reviews. Ideally, you should read a paper and then think about it over the course of several days before you write your review.
While length alone does not make a review good, short reviews tend to be enigmatic and thus unhelpful to authors, other reviewers, and Area Chairs. If you have agreed to review a paper, you should take enough time to write a thoughtful and detailed review.
Your main critique of the paper should be written in terms of a list of strengths and weaknesses. You can use bullet points here, but also explain your arguments. Bullet lists with one short phrase per bullet are NOT a detailed review. Your detailed review, more than your score, will help the authors, fellow reviewers, and Area Chairs understand the basis for your recommendation, so please be thorough.
Be Specific
Be specific about novelty. Claims in a review that the submitted work “has been done before” MUST be backed up with specific references and an explanation of how closely they are related. At the same time, for a positive review, be sure to summarize what novel aspects are most interesting in the Strengths section.
Be specific when you suggest that the writing needs to be improved. If there is a particular section that is unclear, point it out and give suggestions for how it can be clarified.
In the discussion of related work and references, simply saying "this is well known" or "this has been common practice in the industry for years" is not sufficient: cite specific publications, including books or public disclosures of techniques.
Do not reject papers solely because they are missing citations or comparisons to prior work that has only been published without review (e.g., arXiv or technical reports). Refer to the FAQ below for more details on handling arXiv prior art.
Give Feedback to Improve Submissions
Please include specific feedback on ways the authors can improve their papers. Be generous about giving the authors new ideas for how they can improve their work. You might suggest a new technical tool that could help, a dataset that could be tried, an application area that might benefit from their work, or a way to generalize their idea to increase its impact.
If you think the paper is out of scope for ICCV's subject areas, clearly explain why in the review. Then suggest other publication possibilities (journals, conferences, workshops) that would be a better match for the paper. However, unless the area mismatch is extreme, you should keep an open mind, because we want a diverse set of good papers at the conference.
Be Mindful of Your Tone
The tone of your review is important. A harshly written review will be resented by the authors, regardless of whether your criticisms are true. If you take care, it is always possible to word your review constructively while staying true to your thoughts about the paper.
Avoid referring to the authors in the second person (“you”). It is best to avoid the term “the authors” as well, because you are reviewing their work and not the person. Instead, use the third person (“the paper”). Referring to the authors as “you” can be perceived as being confrontational, even though you may not mean it this way.
Finally, keep in mind that a thoughtful review not only benefits the authors, but also yourself. Your reviews are read by other reviewers and especially the Area Chairs, in addition to the authors. Unlike the authors, the Area Chairs know your identity; in addition, your identity will be made visible to the other reviewers of the paper after the paper decision. Being a helpful reviewer will generate good will towards you in the research community – and may even help you to win an Outstanding Reviewer award.
What Reviewers Should Look Out For in Papers
Check for Reproducibility
To improve reproducibility in AI research, we highly encourage authors to voluntarily submit their code as part of supplementary material, especially if they plan to release it upon acceptance. Reviewers may optionally check this code to ensure the paper's results are reproducible and trustworthy, but are not required to. All code/data should be reviewed confidentially and kept private, and deleted after the review process is complete. We expect (but do not require) that the accompanying code will be submitted with accepted papers.
Check for Data Contribution
Datasets are a significant part of Computer Vision research. If a paper is claiming a dataset release as one of its scientific contributions, it is expected that the dataset will be made publicly available no later than the camera-ready deadline, should it be accepted.
Check for Attribution of Data Assets
Authors are advised that they need to cite data assets used (e.g., datasets or code) much like papers. As a reviewer, please carefully check whether a paper has adequately cited the data assets it uses, and comment in the corresponding field in the review form.
Check for Use of Personal Data and Human Subjects
If a paper is using personal data or data from human subjects, the authors must have an ethics clearance from an institutional review board (IRB, or equivalent) or clearly describe that ethical principles have been followed. If there is no description of how ethical principles were ensured, or if there are GLARING violations of ethics (regardless of whether they are discussed), please inform the Area Chairs and the Program Chairs, who will follow up on each specific case. Reviewers should avoid dealing with such issues directly themselves.
IRB reviews (in the US) or the appropriate local ethics approvals are typically required for new datasets in most countries. It is the dataset creators' responsibility to obtain them. If the authors use an existing, published dataset, we encourage, but do not require, them to check how the data was collected and whether consent was obtained. Our goal is to raise awareness of possible issues that might be ingrained in our community, so we would like to encourage dataset creators to provide this information to the public.
In this regard, if a paper uses an existing public dataset released by other researchers or research organizations, we encourage, but do not require, the authors to include a discussion of IRB-related issues in the paper. Reviewers hence should not penalize a paper if such a discussion is NOT included.
Check for Discussion of Negative Societal Impact
The ICCV community has so far not put as much emphasis on awareness of possible negative societal impact as other AI communities, but this is an important issue. We aim to raise awareness without introducing a formal policy (yet). As a result, authors are encouraged to include a discussion of potential negative societal impact. Reviewers should weigh the inclusion of a meaningful discussion POSITIVELY. Reviewers should NOT reject a paper solely because it does not include such a discussion, as we do not have a formal policy requiring one.
Check for Discussion of Limitations
Discussing limitations used to be commonplace in our community, but seems increasingly rare; we point out its importance especially to new authors. Authors are therefore encouraged to explicitly and honestly discuss limitations. Reviewers should weigh the inclusion of an honest discussion POSITIVELY, instead of penalizing papers for including it. Note that a paper is not required to have a separate section discussing limitations, so the absence of one cannot be a sole factor for rejection.
Ethics for Reviewing Papers
Respect anonymity in the review process
Our Author Guidelines have instructed authors to make reasonable efforts to hide their identities, including omitting their names, affiliations, and acknowledgments. This information will of course be included in the final published version of the manuscript. Reviewers should not take active steps to discover the identity of the authors, and make all efforts to keep their own identity invisible to the authors.
With the increase in popularity of arXiv preprints, sometimes the authors of a paper may already be known to the reviewer. Posting to arXiv is NOT considered a violation of anonymity on the part of the authors, and in most cases, reviewers who happen to know (or suspect) the authors’ identity can still review the paper as long as they feel that they can do an impartial job. An important general principle is to make every effort to treat papers fairly whether or not you know (or suspect) who wrote them. If you do not know the identity of the authors at the start of the process, DO NOT attempt to find it out by searching the Web for preprints.
Protect Ideas
As a reviewer for ICCV, you have the responsibility to protect the confidentiality of the ideas represented in the papers you review. ICCV submissions are not published documents. The work is considered new or proprietary by the authors; otherwise they would not have submitted it. Of course, their intention is to ultimately publish to the world, but most of the submitted papers will not appear in the ICCV proceedings. Thus, it is likely that the paper you have in your hands will be refined further and submitted to some other journal or conference. Sometimes the work is still considered confidential by the authors' employers. These organizations do not consider sending a paper to ICCV for review to constitute a public disclosure. Protection of the ideas in the papers you receive means:
- You should not show the paper to anyone else, including colleagues or students, unless you have permission from the program chairs.
- You should not show any results or videos/images or any of the supplementary material to non-reviewers.
- You should not use ideas from papers you review to develop your own ideas.
- After the review process, you should destroy all copies of papers and videos and erase any implementations you have written to evaluate the ideas in the papers, as well as any results of those implementations.
Avoid Conflict of Interest
As a reviewer of an ICCV paper, it is important for you to avoid any conflict of interest. There should be absolutely no question about the impartiality of any review. Thus, if you are assigned a paper where your review would create a possible conflict of interest, you should return the paper and not submit a review. Conflicts of interest include (but are not limited to) situations in which:
- You work at the same institution as one of the authors.
- You have been directly involved in the work and will be receiving credit in some way. If you're a member of an author's thesis committee, and the paper is about their thesis work, then you were involved.
- You suspect that others might perceive a conflict of interest in your involvement.
- You have collaborated with one of the authors in the past three years (more or less). Collaboration is usually defined as having written a paper or grant proposal together, although you should use your judgment.
- You were the MS/PhD advisor or student of one of the authors, or you studied under the same advisor. Most funding agencies and publications typically consider these to represent a lifetime conflict of interest. ICCV has traditionally been more flexible than this, but you should think carefully before reviewing a paper you know to be written by a former advisor or advisee, especially a recent one.
While the organizers make every effort to avoid such conflicts in the review assignments, they may nevertheless occasionally arise. If you recognize the work or the author and feel it could present a conflict of interest, contact your Area Chair as soon as possible so they can find someone else to review it.
Large Language Model (LLM) Ethics
ICCV 2025 does not allow the use of Large Language Models or online chatbots such as ChatGPT in any part of the reviewing process. There are two main reasons: (a) Reviewers must provide comments that faithfully represent their original opinions on the papers being reviewed. It is unethical to resort to Large Language Models (e.g., even an offline system) to automatically generate reviewing comments that do not originate from the reviewer's own opinions; (b) Online chatbots such as ChatGPT collect conversation history to improve their models, so their use in any part of the reviewing process would violate the ICCV confidentiality policy.
Be Professional
Belittling or sarcastic comments have no place in the reviewing process. The most valuable comments in a review are those that help the authors understand the shortcomings of their work and how they might improve it. Write a courteous, informative, incisive, and helpful review that you would be proud to sign with your name (were it not anonymous).
FAQs for Reviewing Papers
Q. Is there a minimum number of papers I should accept or reject?
A. No. Each paper should be evaluated in its own right. If you feel that most of the papers assigned to you have value, you should accept them. It is unlikely that most papers are bad enough to justify rejecting them all. However, if that is the case, provide clear and very specific comments in each review. Do NOT assume that your stack of papers necessarily should have the same acceptance rate as the entire conference ultimately will.
Q. Can I review a paper I already saw on arXiv and hence know who the authors are?
A. In general, yes, unless you are conflicted with one of the authors. See next question below for guidelines.
Q. How should I treat papers for which I know the authors?
A. Reviewers should make every effort to treat each paper impartially, whether or not they know who wrote the paper. For example: It is not OK for a reviewer to read a paper, think “I know who wrote this; it's on arXiv; they are usually quite good” and accept the paper based on that reasoning. Conversely, it is also not OK for a reviewer to read a paper, think “I know who wrote this; it's on arXiv; they are no good” and reject the paper based on that reasoning.
Q. There are well-established conferences with a rigorous peer review process, like ICIP and ICASSP, that publish proceedings with four-page papers. Suppose that a reviewer identifies a prior ICASSP or ICIP paper that has substantial overlap with an ICCV submission. Should this ICASSP/ICIP paper not be considered a "previous publication" under the ICCV'25 Dual Submission policy?
A. If the prior paper is four pages or fewer, the ICCV submission in question is not in violation of the Dual Submission policy and will not be administratively rejected. However, the reviewer still needs to use their judgment to determine whether the submission offers enough additional value to warrant acceptance at ICCV. If it does not add enough value over prior work, it can still be rejected on substantive grounds, though not on policy grounds. Additionally, depending on the specifics of the case, it may be relevant whether the authors of the prior ICASSP/ICIP paper are the same as those of the current ICCV submission. If there is a significant possibility that the authors of the present submission are different from those of the prior paper, in which case this could be an instance of plagiarism, the reviewer should contact the AC to investigate this possibility.
Q. Should authors be expected to cite related arXiv papers or compare to their results?
A. Consistent with good academic practice, the authors should cite all sources that inspired and informed their work. This said, asking authors to thoroughly compare their work with arXiv reports that appeared shortly before the submission deadline imposes an unreasonable burden. We also do not wish to discourage the publication of similar ideas that have been developed independently and concurrently. Reviewers should keep the following guidelines in mind:
- Authors are not required to discuss and compare their work with recent arXiv reports, although they should properly acknowledge those that directly and obviously inspired them.
- Failing to cite an arXiv paper or failing to beat its performance SHOULD NOT be SOLE grounds for rejection.
- Reviewers SHOULD NOT reject a paper solely because another paper with a similar idea has already appeared on arXiv. If the reviewer suspects plagiarism or academic dishonesty, they are encouraged to bring these concerns to the attention of the Program Chairs.
- It is acceptable for a reviewer to suggest that an author should acknowledge or be aware of something on arXiv.
Q. How should I treat the supplementary material?
A. The supplementary material is intended to provide details of derivations and results that do not fit within the paper format or space limit. Ideally, the paper should indicate when to refer to the supplementary material, and you need to consult the supplementary material only if you think it is helpful in understanding the paper and its contribution. According to the Author Guidelines, the supplementary material MAY NOT include results obtained with an improved version of the method (e.g., following additional parameter tuning or training), or an updated or corrected version of the submission PDF. If you find that the supplementary material violates these guidelines, please contact the Area Chair.
Q. Can I request additional experiments in the author’s rebuttal? How should I treat additional experiments reported by authors in the rebuttal?
A. In your review, you may request clarifications, additional illustrations, or small experiments that could reasonably be run within the rebuttal phase with academic resources. Per a passed 2018 PAMI-TC motion, reviewers should not request substantial additional experiments for the rebuttal, nor penalize papers for lacking them. “Substantial” refers to what would be needed in a major revision of a paper. However, papers should also not be penalized for supplying extra results. The rebuttal may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers.
Q. If a social media post shared information on an ICCV submission without the authors being involved, does that signal a violation?
A. No, it does not. A violation occurs only when the authors themselves proactively publicize their submission.
Q. A paper is using a withdrawn dataset, such as DukeMTMC-ReID or MS-Celeb-1M. How should I handle this?
A. Reviewers are advised that the choice to use a withdrawn dataset, while not in itself grounds for rejection, should invite very close scrutiny. Reviewers should flag such cases in the review form for further consideration by ACs and PCs. Consider questions such as: Do the authors explain why they had to do this? Is this explanation compelling? Is there really no alternative dataset that could have been used? Remember, authors might simply not know the dataset had been withdrawn. If you believe the paper could be accepted without the authors’ use of a withdrawn dataset, then it is natural to advise the authors to remove the experiments associated with this dataset.
Q. If a paper did not evaluate on a withdrawn dataset, can I request authors that they do?
A. It is a violation of policy for a reviewer or Area Chair to require comparison on a dataset that has been withdrawn.
Q. A paper is claiming a dataset as one of its contributions. How should I evaluate this claim?
A. If a paper submission is claiming a dataset as one of its contributions, there should be a reasonable expectation that the dataset will be made publicly available upon publication. You should use your judgment to evaluate the dataset claim accordingly. Note that this does NOT imply that all datasets used in ICCV submissions must be public, or that papers relying on non-public datasets must be rejected. The use of private or otherwise restricted datasets (e.g. for training or experimentation) DOES NOT constitute grounds for rejection. However, private or otherwise restricted datasets cannot be claimed as contributions in their own right, and you must evaluate the papers based on their other technical merits.
Q. Can reviews request comparison to closed source?
A. Per a passed 2024 PAMI-TC motion: whenever a comparison with published research that lacks publicly available code/data/pretrained models is requested (i.e., requiring re-implementation), the request should be appropriately justified if used as a basis for a paper decision. Exceptions are when the comparison requires only a minor change to an already implemented method for which the code/data is available, or when re-implementing a method from the details provided in a publication is common practice in the sub-field. In any case, comparisons should only be requested if the publication and/or code has been available sufficiently ahead of the submission deadline.
Q. What is the LLM Policy for referees in ICCV 2025?
A. Large language models (LLMs) are NOT allowed to be used to write reviews or meta-reviews, whether it is run locally or via an API. Specifically,
- You cannot use an LLM to generate content for you. The review needs to be based on your own judgment.
- You cannot share substantial content from the paper or your review with an LLM. This means that, for example, you cannot use an LLM to translate a review.
- You can use an LLM to do background research or to check short phrases for clarity. The use of grammar checker software based on LLMs is allowed.
Enforcement: Reviews and meta-reviews will be checked for LLM policy violations. If a review is flagged as a possible violation, the review will enter the oversight process for irresponsible review violations. If it is determined that the review violates this policy, the papers submitted by the reviewer will be desk rejected at the discretion of the PCs.