
Altman acknowledges the mysterious gpt2! Harvard and MIT tour continues, full video of the Stanford talk released

Updated: 2024-05-03

Mengchen, Xifeng | From Aofeisi
Qubit | Official account QbitAI

The full video of Altman's blockbuster Stanford talk has been released!

And that was just the first stop; he has since turned up at Harvard and MIT.

At Harvard in particular, he indirectly confirmed that the mysterious gpt2-chatbot is indeed related to OpenAI, though it is not GPT-4.5.

The fact that we can make progress on the behavior and functionality of all models simultaneously I think is a miracle.

He also said that "every college student should learn to train a GPT-2... It's not the most important thing, but I bet in two years it's something every Harvard freshman will have to do."

Could it really be a GPT-2 1.5B Plus Pro Max Q* super annual deluxe edition???
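As an aside, for readers wondering what "training a GPT-2" involves in practice, here is a minimal, hypothetical sketch of fine-tuning the small open-source GPT-2 checkpoint with the Hugging Face transformers and datasets libraries on a tiny public corpus. The library choices, dataset, and hyperparameters are illustrative assumptions, not anything Altman or OpenAI described.

```python
# Hypothetical toy example: fine-tune the 124M-parameter GPT-2 checkpoint
# on a tiny slice of a public corpus with Hugging Face transformers + datasets.
# All settings here are illustrative assumptions, not from the talk.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")     # the smallest GPT-2 variant

# 1% of WikiText-2, with empty lines dropped, keeps the run laptop-sized
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda ex: ex["text"].strip() != "")
data = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
               batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # next-token objective
args = TrainingArguments(output_dir="gpt2-toy",
                         per_device_train_batch_size=4,
                         num_train_epochs=1,
                         logging_steps=50)
Trainer(model=model, args=args, train_dataset=data, data_collator=collator).train()
```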

Stanford sits next to Silicon Valley, while Harvard and MIT are across the country in Boston, on the East Coast.

Altman's Harvard event was also packed; students reportedly submitted more than 2,000 questions, but only a handful got the chance to ask theirs live.

It echoes Altman's world tour after the GPT-4 release last year, this time in the rhythm of an American college tour.

More detail comes from the Harvard Gazette and MIT Technology Review.

At Harvard, Altman discussed the use of AI in academics with teachers and students.

Does it matter whether the paper was written by a student or by ChatGPT?

Altman offered the example that people once worried calculators and search engines would ruin education, but that never happened; ChatGPT is like a "calculator for words."

"What must be constantly developed are norms." Altman supports that ChatGPT can be used not only for science and engineering writing, but also for humanities.

Writing papers the old-fashioned way will no longer be the point. Using ChatGPT to discover, express, and exchange ideas as well as possible, I think that's the direction things are heading.

At MIT, Altman talked about agents being the killer application of AI.

Like a super-competent colleague who knows absolutely everything about my life, every email, every conversation I've ever had.

In Altman's view, the new agent paradigm of AI can help us beyond the chat interface and take real-world tasks off our hands, and it does not necessarily require dedicated hardware.

I'm very interested in new consumer hardware for this technology, but I'm just an amateur; it's far from my area of expertise.

When asked "Do you already know when GPT-5 is expected to be released?" Altman calmly said "yes" with a smile and said nothing else.

After the full video of his speech at Stanford was released last week, several clips sparked heated discussions:

  • GPT-4 will be the stupidest model any of you will ever have to use again.

  • I don't care if we burn $50 billion a year, we're building AGI and it will be worth it.

Altman's explosive speech at Stanford

Below is the complete video with bilingual subtitles, along with a write-up of its content.

Altman's college days

Host: If you could describe your experience as a Stanford undergraduate in three words, what three words would you use?

Altman: Excited, optimistic, curious.

Host: What three words would you use to describe yourself now?

Altman: I guess the same.

Host: That's great. Although the world has changed a lot in the past 19 years, those changes may pale in comparison to what's coming in the next 19. So here's my question: if you woke up tomorrow and suddenly found yourself 19 again, with all the knowledge you have now, what would you do? Would you be happy?

Altman: I'd feel I was at a moment of great historical significance. The world is undergoing enormous change, and at the same time I'd see opportunities to take part in it and have a profound impact, for example by starting a company or doing AI research.

I think this is the best time to start a company since the Internet era, maybe in the history of technology. As AI advances, more wonders will appear every year. This is the moment when great companies are born and the most influential products are conceived. So I'd feel incredibly lucky, and I'd be determined to make the most of the opportunity: figure out where I can contribute and go do it.

Host: Do you have a preference for the field you'd contribute to? Would you stay a student? If so, what would you major in?

Altman: I don't think I'd stay a student, but only because I didn't before, and I think it's reasonable to assume people would make the same decisions again. Being a Stanford student is a great thing; it's just probably not what I'd want.

Host: What would you do?

Altman: I think I'd still pick the direction I love, which isn't surprising, since people usually do what they want. I'd devote myself to AI research.

Host: Where might you do it? Academia or private industry?

Altman: Obviously I have my biases, but wherever meaningful AI research can be done, I'd be very excited to do it. I have to say, though, a bit sadly, that realistically I would choose industry, because it really needs to be somewhere with an extraordinary amount of compute.

Start a business or work?

Host: We had Qasar Younis here last week, and he strongly advocated not becoming a founder but instead joining an existing company to learn the relevant skills. What advice would you give to students wondering whether they should start their own company at 19 or 20, or join another startup?

Altman: Since he gave the case for joining another company, let me offer the other point of view. I think there's a lot to learn from starting your own company. If that's what you want to do, Paul Graham has a line I think is very true: there's no pre-med for startups the way there is for medicine; you only learn to run a startup by actually running one. If you're convinced this is what you want to do, you should probably just jump in and do it.

Host: If someone wants to start a company in the field of AI, what do you think is the most suitable near-term challenge to tackle? To put it plainly: what problems do you think need to be solved as a priority that OpenAI won't get to in the next three years?

Altman: In a sense that's a perfectly reasonable question, but I won't answer it directly, because I don't think you should ever take that kind of advice about what startup to found from anyone.

By the time a field is so obvious that I, or anyone else standing up here, can point to it, it's probably no longer a good direction for a startup. I completely understand the impulse; I remember asking people, "What kind of company should I start?"

But I think one of the most important principles for an impactful career is that you have to chart your own course. If the thing you're thinking about is something someone else is going to do anyway, or something many people will do, you should be a little skeptical of it.

I think one of the important skills to develop is coming up with non-obvious ideas. I don't know what the most important idea is right now, but I'm sure someone in this room knows the answer. Learning to trust yourself, to come up with your own ideas, and to go do the things that aren't widely agreed upon is really important.

When we started OpenAI, for example, it wasn't something many people agreed with; now it has become extremely obvious. Today I only have fairly clear views about this direction because I happen to be inside it, but I'm sure you will see things I don't.

Host: Let me ask it another way, then, and I don't know if this is a fair question: what are you thinking about that other people aren't talking about?

Altman: How to build really, really big computers.

I suppose other people are talking about it too, but we're probably looking at it on a scale others can't imagine. The problem we're working on isn't just building intelligence at a primary-school or high-school level, but PhD-level intelligence and beyond, applied to products in the best possible way so that it has the greatest positive impact on society and people's lives. We don't know the answer yet, but I think it's an important question to figure out.

Host: Staying with how to build big computers, can you share your vision? I know there has been a lot of speculation and rumor about the semiconductor fab project you're working on. How does that vision differ from how things are done today?

Altman: Fabs are only one part of it. We increasingly believe that AI infrastructure will be one of the most important inputs of the future, a resource everyone will need, and that includes energy, data centers, chips, chip design, and new kinds of networking. We need to look at the whole ecosystem and find ways to do much more across all of it. Focusing on a single piece won't work; we have to think about it holistically.

I think that is the arc of the history of human technology: building ever bigger and more complex systems.

Doesn't care about burning money: for AGI it's all worth it

Host: On the cost of compute: I've heard that training the ChatGPT model cost $100 million and that it has 175 billion parameters, and that GPT-4 cost $400 million with ten times the parameters. The cost went up roughly 4x while the parameter count went up 10x. Correct me if I'm wrong.

Altman: I do know, but I'd rather...

Host: Fine. Even if you won't correct the actual numbers, if the direction is right, do you expect the cost of each new generation to keep growing?

Altman: Yes.

Host: Will it grow by multiples each time?

Altman: Probably, I mean, yes.

Host: Then the question becomes: how do we pay for it?

Altman: I think putting genuinely powerful tools in people's hands and letting them figure out how to use them to build the future is enormously valuable. I'm very willing to bet on the creativity of you and everyone else in the world to find an answer to that question. So there are probably people at OpenAI with more business sense than me who worry about how much we're spending, but I don't.

Host: OpenAI, ChatGPT, and all the other models are excellent, yet they burned through $520 million last year. Doesn't that worry you about the business model? Where does the profit come from?

Altman: First, thank you for saying so, but ChatGPT is far from excellent; at best it's barely adequate. GPT-4 is the dumbest model any of you will ever have to use again. But the important thing is to start early and keep shipping; we believe in iterative deployment.

If we built AGI in a basement while the world marched along blissfully unaware, I don't think that would make us good neighbors. So, given what we believe about the future, I think it's important to say what we see.

More important still is putting the product in users' hands and letting society and the technology co-evolve: letting society tell us what it, collectively and individually, wants from the technology, how it should be productized to be usable, where the model works well and where it works badly; giving our leaders and institutions time to react; giving people time to fold it into their lives and learn to use the tool.

Some people will use it to cheat, and some will use it to do remarkable things, and that expands with every generation. It means we ship imperfect products, but we keep a very tight feedback loop that lets us learn and get better.

Shipping a product that embarrasses you does feel a bit bad, but it's better than the alternatives, and in this particular case we really should be releasing iteratively to society.

We've learned that AI and surprise don't go together. People don't want to be startled; they want a gradual rollout and the ability to influence these systems. That's how we operate.

There may come a point where we decide iterative release is no longer a good strategy, but it seems like the best approach for now, and I think we've learned a great deal by doing it. Hopefully the wider world has benefited too.

Whether we spend $500 million a year, $5 billion, or $50 billion, I don't care, as long as we keep creating more value for society than that and can find a way to pay the bills. We are building AGI; it will be expensive, and it will absolutely be worth it.

Host: So do you have a vision for 2030? If it's 2030 and you've done it, what does the world look like to you?

Altman: In some very important ways, perhaps not that different.

We'll still be back here; there will be a new crop of students. We'll talk about how important startups are and how cool technology is. There will be a great new tool in the world.

If we could jump forward those six years from today and have this thing that is smarter than humans at many subjects and can carry out complex tasks for us, it would feel amazing: you know, it can write complicated programs, do this piece of research, start this business.

And yet the sun will still rise and set, people will carry on with their human dramas, and life will go on. So in one sense it will be very different, because we'll have abundant intelligence, and in another sense nothing will have changed.

Host: You mentioned AGI. In earlier interviews you defined it as software that can match what an average human can do across a wide range of tasks. When do you think we'll reach that? Can you give a rough date or range?

Altman: I think we need a much more precise definition of AGI to answer the timing question, because at this point even the definition you just gave, reasonable as it is, is just one definition.

Host: I was repeating what you said in a previous interview.

Altman: Then I'll criticize myself: that definition is too broad and too easily misread.

So I think the standard that would actually be useful, or satisfy people, is this: when people ask "what's the timeline for AGI," what they really want to know is when the world will change enormously, when the rate of change will sharply accelerate, when the way the economy works will be transformed, when their own lives will change. For many reasons, that moment may look very different from what we imagine.

I can easily imagine a world where we build PhD-level intelligence in every field, dramatically raise researchers' productivity, and even get some autonomous research. In one sense that sounds like it would hugely affect the world; yet it's also possible that after we've done all of that, global GDP growth looks unchanged for years afterward. That's a strange thing to contemplate, and it wasn't my original intuition about how this would go.

So I can't give a specific date for when we'll hit the milestone people actually care about, but next year and every year after that we will have systems far more capable than today's, and I think that's the key point. I've given up predicting an AGI timeline.

Host: Can you talk about how you see the dangers of AGI? Specifically, do you think the biggest danger will come from a catastrophic event that dominates the headlines, or from something more insidious and harmful, like the way everyone's attention is now shredded by TikTok? Or neither?

Altman: I worry more about the insidious dangers, because they're easier to overlook.

Lots of people talk about the catastrophic risks and stay vigilant about them. I don't want to downplay those; I think they're serious and real. But at least we know to watch for them, and a lot of effort will go into them. With something like the TikTok example you gave, nobody was watching for that end result. That's the genuinely hard part: the unknowns are very hard to predict. So I worry more about those, though I worry about both.

Host: Will it be the unknowns, then? Can you name any you're particularly worried about?

Altman: Well, then they would just get filed under the unknowns.

Although I think the short-term change will be smaller than we expect, as with other major technologies, over the long run I think the change will exceed our expectations. What worries me is the pace at which society can adapt to something this new, and how long it will take us to agree on a new social contract versus how long we actually have to do it.

Altman's strengths and his most dangerous weakness

Host: With things changing so fast, we're trying to make resilience one of the core elements of the curriculum, and the cornerstone of resilience is self-awareness. So I'd like to know whether, when you set out on this journey, you were clear about what drives you.

Altman: First, I believe resilience can be taught. It has always been one of the most important life skills, and in the coming decades resilience and adaptability will matter even more, so I think that's a great emphasis. As for self-awareness, I believe I have it, but everyone believes that about themselves, and whether I really do is hard to judge from the inside.

Host: May I ask you the questions we often ask in our introductory self-awareness course?

Altman: Of course.

Host: It's along the lines of Peter Drucker's framework. Sam, what do you think is your greatest strength?

Altman: I don't think I'm exceptional at any one thing, but I'm decent at a lot of things, and I think breadth is underrated in this world. Everyone over-specializes, so if you're good at many things you can find the connections among them. I think that lets you come up with ideas different from everyone else's, rather than just being the expert in one field.

Host: What is your most dangerous weakness?

Altman: Most dangerous weakness, that's an interesting one to think about. I lean toward being pro-technology, probably because I'm curious and want to see where it goes, and because I believe that on the whole technology is a good thing.

I think that worldview has, on balance, served me and other people well, so it has gotten a lot of positive reinforcement. But it isn't always right, and when it's wrong it can be very bad for a lot of people.

Host: The Harvard psychologist David McClelland proposed a framework in which all leaders are driven by one of three primal needs: the need for affiliation, that is, to be liked; the need for achievement; and the need for power. If you had to rank them, how would you?

Altman: I've felt all of them at different points in my career; I think people move through phases. Right now, what drives me is the desire to do something meaningful and interesting. I've certainly been through phases of chasing money, power, and status.

Host: What excites you most about the upcoming ChatGPT-5?

Altman: I don't know yet, and that answer sounds like a brush-off, but I think the most important thing about GPT-5, or whatever we end up calling it, is that it will be smarter.

That sounds like an evasion, but I think it's one of the most remarkable facts in human history: that we can do something, and can now say with a high degree of scientific certainty, that GPT-5 will be much smarter than GPT-4 and GPT-6 will be much smarter than GPT-5. We're not near the top of this curve, and we roughly know what to do. It won't get better in just one area; it's not that it will always do better on this eval, in this subject, or in this modality. It will simply be smarter across the board. I think the significance of that fact is still underrated.

Audience Q&A

Finally, here are some highlights from the audience question session.

Question 1: As you get closer and closer to AGI, how do you plan to deploy it responsibly, so that it doesn't suppress human innovation and instead keeps driving it?

Altman: I'm not worried about AGI suppressing human innovation. I genuinely believe people will do greater things with better tools. All of history shows that if you give people more leverage, they do more amazing things, and that's good for all of us.

But I do worry more and more about how to do all of this responsibly. As the models grow more powerful, the bar we're held to keeps rising. We already do a lot, things like red-teaming and external audits, and those are good. But I think as the models get stronger we'll need to deploy even more gradually and keep an even tighter feedback loop on how they're used and where they work well.

We used to be able to release a major model update every few years, but now we may need to find ways to increase the granularity of deployment and iteratively deploy more frequently. Exactly how to do this is less clear, but it will be key to responsible deployment.

Additionally, the way we have all stakeholders negotiate AI rules will become increasingly complex over time.

Question 2: You mentioned before that every year we will have more powerful AI systems. Many parts of the world do not have the infrastructure to build these data centers or mainframe computers. How will global innovation be affected?

Altman: Let me answer that in two parts.

First, I think equitable access to compute around the world, for both training and inference, is extremely important, regardless of where it is built. A core part of our mission is to make ChatGPT available to as many people who want to use it as possible, except where we cannot operate or choose not to for good reasons. How we make training compute more available to the world will also become increasingly important. I do think we're heading toward a world where access to a certain amount of compute is treated as a human right, and we'll have to figure out how to distribute it to people around the world.

The second point is that I think countries will increasingly realize the importance of having their own AI infrastructure. We want to find a way to help, and we're now spending a lot of time traveling around the world to support countries that want to build these facilities. I hope we can play some small part in that.

Question 3: What role do you think artificial intelligence will play in future space exploration or colonization?

Altman: Space is obviously not friendly to biological life, so it seems easier if we can send the robots first.

Question 4: How do you know a view is non-consensus? How do you check whether your idea already has consensus in the tech community?

Altman: First of all, what you really want is to be right. Being contrarian and wrong is still wrong.

If you predicted 17 of the last 2 recessions, you were presumably contrarian on the two you got right, though maybe not even then, and you were still wrong the other 15 times. So I think it's easy to get too excited about being contrarian. Again, the most important thing is to be right, and the crowd is usually right. The value is greatest when you are contrarian and correct at the same time, and that doesn't always come in an either/or form. Everyone in this room can probably agree that AI is the right field to start a company in; if one person here figures out the right company to build and then executes it well, while everyone else thinks it isn't the best thing to do, that's what matters.

As for how to do this, I think it's important to build the right peer group around yourself and to find original thinkers. But you sort of have to do it alone, or at least partly alone, or with a couple of people who will become your co-founders.

And I think once you dwell too long on how to find the right peer group, you're already in the wrong frame. Learn to trust yourself, your own intuition, and your own thought process; it gets much easier over time. Whatever people say, I don't think anyone is really very good at this when they start out, because you haven't built the muscle yet, and the social and evolutionary pressures on you work against it. You'll get better as time goes on, so don't demand too much of yourself too early.

Question 5: I'd love to know your thoughts on how energy demand will change over the next few decades and how we get to a future where renewables are 1 cent per kilowatt-hour.

Altman: That day may come, but I'm not sure... My guess is that nuclear fusion will eventually dominate electricity production on Earth. I think it will be the cheapest, most abundant, most reliable, most energy-dense source. I could be wrong about that, and it could end up being solar plus storage instead. My best guess is that it will ultimately be one of those two, and there will be cases where one beats the other, but at truly global scale those look like the two main options for getting the cost of energy down to a cent per kilowatt-hour.

Question 6: What did you learn from what happened at OpenAI last year, and what made you come back?

Altman: The best lesson I learned is that we have a really great team, more than capable of running the company without me, and they actually did run it without me for a few days. As we move toward AGI, crazy things may happen, and there may be even crazier things among us, because different parts of the world are reacting to us more and more strongly and the stakes keep rising. I used to think the team would hold up well under a lot of pressure, but you never really know until you get to run the experiment. We got to run that experiment, and I learned that the team is very resilient and, to a large degree, ready to run the company.

As for why I came back: you know, when the board called me the next morning and asked whether I'd consider returning, my first answer was no; I was angry. Then I thought about it and realized how much I love OpenAI, these people, the culture we've built, and our mission. I kind of wanted to finish this with everyone.

Question 7: Can you talk about OpenAI's structure, the Russian-matryoshka-doll setup?

Altman: That structure came together gradually. If we could do it over again, it's not what we would choose. But when we started, we didn't expect to have a product; we were just going to be an AI research lab. We had no idea about language models, APIs, or ChatGPT.

So if you're going to start a company, you should have some theory that you'll sell a product someday, and we didn't have that in mind at the time. We didn't realize we would need so much money for compute, and we didn't realize we would end up with such a good business. When OpenAI was founded, it was only meant to advance AI research.

Question 8: Does it scare you to create something smarter than humans?

Altman: Of course it scares me. But humans have been getting smarter and more capable over time. You can do far more than your grandparents could, not because individuals eat better or get more health care, but because society's infrastructure, things like the Internet and the iPhone, puts a wealth of knowledge at your fingertips.

Society itself is a kind of AGI that no single brain controls. It is built brick by brick by everyone, so that the people who come after you can achieve more. Your children will have tools you never had.

It's always a little scary, but I think the good far outweighs the bad. People in the future will use these new tools to solve many more problems.

One More Thing

It's not yet known which other colleges Altman will visit in the coming days, but the whole trip may wrap up before May 9.

Leaked materials show that OpenAI has set up the subdomain search.chatgpt.com.

The AI search feature is expected to launch on May 9.

Front-end code and settings screens for the related pages have already leaked.

Features may include image search and widgets (weather, calculator, sports, stocks, time-zone conversion).

Several models appear to be selectable, including GPT-4 Lite, GPT-4, and GPT-3.5.

Reference links:
[1] https://www.youtube.com/watch?v=GLKoDkbS1Cg
[2] https://news.harvard.edu/gazette/story/2024/05/did-student-or-chatgpt-write-that-paper-does-it-matter/
[3] https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function
[4] https://twitter.com/RishabJainK/status/1785807873626579183
[5] https://twitter.com/moreisdifferent/status/1785759129056743632

- End -
