The existence of social media and arXiv makes double-blind review meaningless | Reddit hot discussion

Latest update time: 2020-06-20
Bai Jiao, reporting from Aofei Temple
Quantum Bit Report | Public Account QbitAI

Is the double-blind review mechanism really double-blind?

Recently, there was a discussion on Reddit: Have social media and arXiv damaged the double-blind mechanism of top conferences?

Within 13 hours, the thread's score had climbed past 300...

It all started when a researcher shared his experience:

After the NeurIPS 2020 submission deadline, he noticed people sharing their arXiv preprints on Twitter and getting plenty of positive feedback.

The key point is that these posters tend to be high-profile Twitter accounts, most of them researchers from big-name institutions such as Google and Facebook.

As soon as such a post goes up, it draws a flood of replies, likes, and retweets.

Take this Facebook AI researcher, for example.

Some organizations even tweeted the news directly: DeepMind's new self-supervised model, BYOL, had set a new ImageNet record.

At that point, reviewing for these conferences was still under way. Promotion like this not only puts enormous pressure on reviewers, it also undermines the double-blind review mechanism of top conferences.

If a paper is pushed to arXiv and social media before it has even passed peer review, isn't "double blind" just a joke?

Which matters more, influence or acceptance?

Let’s not discuss whether the double-blind mechanism is truly double-blind.

For a research team, whether its papers get accepted, and how many, is exactly what it worries about every year when the top-conference decisions come out.

After all, acceptance signals how influential the team is in the field and helps pave the way for subsequent research.

As one member of a small research lab put it: a few years ago the lab had almost zero recognition in ML and CV, and only after getting papers accepted at these conferences did the once-obscure lab gain real influence.

From this perspective, influence and paper acceptance are not actually in conflict.

But for some teams, especially those that already carry weight in the field, there is a question of which comes first.

Being recognized, discussed and applied by more people seems to be more important than being accepted by a conference.

As this netizen mentioned:

The reality is that social media promotion is far more important to the success of a paper than whether it is accepted by a conference.

By the time the conference decisions come out, most papers are already stale and no longer of interest; acceptance just makes your resume look better.

But should an ordinary team post its work to arXiv early, let alone promote it on social media?

Some bystanders say they have picked up plenty of great ideas from arXiv: they have usually read a paper while it is still under review, and whether it eventually gets accepted is irrelevant to them.

From this perspective, arXiv can indeed provide more extensive communication opportunities.

That said, in such a big forest you will find every kind of bird: arXiv has long been a mix of good papers and bad ones.

Less influential teams, meanwhile, worry that their ideas will be scooped and that they would not attract much attention anyway, so they will not post to arXiv in advance, let alone promote on Twitter.

Peer review has also been controversial.

The conference peer-review system has long drawn criticism, above all for its opacity.

How is the reviewing conducted? How many people review each paper? Who are the reviewers?

Can the quality of a paper really be judged from the opinions of just a handful of people, often only three or four?

Ian Goodfellow, the inventor of GANs and now a machine learning director at Apple, once criticized the peer-review mechanism, arguing that it is responsible for the declining quality of papers at AI conferences: reviewer quality varies widely, so overhyped papers get selected while genuinely good ones get buried.

So how can this be solved?

Given that arXiv and social media are here to stay and peer review will continue, how do we address the problem?

Some conferences have already tried to address this: KDD and ACL required that papers not be posted to arXiv before the review results were released.

Here are some netizens’ suggestions:

  • arXiv could add an anonymous mode for papers under review, publishing them without revealing the authors' identities.

  • Modify conference rules to disqualify papers whose authors can be identified by reasonable means (e.g., an Internet search).

But what if, during the review process, you already know who wrote the paper?

Here is a suggestion:

Start a Reddit thread about the paper, or ask friends what they think of the arXiv version.

If you already know from Twitter who wrote the paper, why not go a step further and use Twitter to look for flaws in it? That can actually offset the bias that comes from knowing the author is famous.

What do you think? Do you have any better suggestions? Welcome to share with us~

Reference link:
https://www.reddit.com/r/MachineLearning/comments/hbzd5o/d_on_the_public_advertising_of_neurips/


-over-

The "Database" series of open courses is now open, come and sign up for free!

In the second live broadcast on June 23, Qiao Xin, general manager of the database product line of Inspur Information , shared "Data platform upgrade under the traditional enterprise Internet", and will talk about technical issues such as the technical principles, optimization solutions and development and deployment outline of HTAP database, so as to provide some forward-looking guidance to the wide audience.

Scan the QR code to sign up and join the live exchange group. You can also get the live replay of the series of courses and share PPT:


Quantum Bit QbitAI · Toutiao signed author


Tracking new trends in AI technology and products


If you like it, click "Watching"!


