Claude completely surpasses ChatGPT in MLIR code analysis and performs amazingly

Published by 哈哈哈33 · Last updated 2023-04-24 · Source: GiantPandaCV · Author: Lemontree

Conclusion up front: for the task in this article, Claude > ChatGPT >> NewBing.

0x0. Introduction

Here we take a codegen task from the OneFlow IR component as an example (the goal is to support the OneFlow stream in MLIR codegen, replacing the stream that the pass creates on its own with the OneFlow stream; the PR link is https://github.com/Oneflow-Inc/oneflow/pull/10149), and compare how well NewBing (ChatGPT) and Claude understand MLIR. Claude is a ChatGPT-like chatbot launched by Anthropic, one of OpenAI's biggest competitors, whose founders are former OpenAI employees. To use Claude, I followed this Zhihu answer (https://www.zhihu.com/question/594115372/answer/2988759047) and added it directly to Slack for conversation.

0x1. PR Introduction

The PR link is: https://github.com/Oneflow-Inc/oneflow/pull/10149

This PR implements three passes (defined in OneFlowPasses.td), namely:

def EliminateAllocOpsPass : Pass<"eliminate-alloc-ops", "ModuleOp"> {
  let summary = "";
  let constructor = "mlir::createEliminateAllocOpsPass()";
  let dependentDialects = ["pdl_interp::PDLInterpDialect", "pdl::PDLDialect"];
}

def AppendOneFlowStreamPass : Pass<"append-ofstream", "ModuleOp"> {
  let summary = "append oneflow stream to function arguments";
  let constructor = "mlir::createAppendOneFlowStreamPass()";
}

def MgpuToOneFlowStreamPass : Pass<"mgpu-to-ofstream", "ModuleOp"> {
  let summary = "convert mlir abi about mgpu to oneflow stream, this pass should be invoked after append-ofstream pass";
  let constructor = "mlir::createMgpuToOneFlowStreamPass()";
}

EliminateAllocOpsPass removes useless memref.alloc instructions from the IR. AppendOneFlowStreamPass appends the stream parameter needed to launch GPU kernels to GPU-related functions. MgpuToOneFlowStreamPass runs after AppendOneFlowStreamPass (which creates the stream parameter) and replaces the mgpu stream ABI with the OneFlow stream ABI.
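To make the ordering concrete, here is a minimal sketch of how the three passes could be chained in a pipeline. The constructor names come from OneFlowPasses.td above; the pipeline wiring itself (and the helper function name) is only an illustration of mine, not code from the PR.

#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/PassManager.h"

// Illustrative only: run the three passes in the order required by the td file
// (append-ofstream must run before mgpu-to-ofstream).
mlir::LogicalResult runOneFlowStreamPipeline(mlir::ModuleOp module) {
  mlir::PassManager pm(module->getContext());
  pm.addPass(mlir::createEliminateAllocOpsPass());    // drop useless memref.alloc ops
  pm.addPass(mlir::createAppendOneFlowStreamPass());  // append the stream argument
  pm.addPass(mlir::createMgpuToOneFlowStreamPass());  // rewrite mgpu calls to the OneFlow stream ABI
  return pm.run(module);
}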

We ask NewBing and Claude to analyze the intent of the passes defined by these lines of OneFlowPasses.td:

NewBing:

[Screenshot: NewBing's answer about the td file]

NewBing simply couldn't understand it. Honestly, I assumed Claude probably wouldn't understand it either, so I asked with a skeptical attitude.

It's crazy: Claude not only understood the code in the td file, it also listed the MLIR concepts involved in this code for us. It makes you wonder whether the training data covers MLIR-related material. Next, let's compare how the two handle the implemented Pass code.

0x2. Comparison of the specific implementations

The PR link is: https://github.com/Oneflow-Inc/oneflow/pull/10149

0x2.1 EliminateAllocOpsPass

EliminateAllocOpsPass uses MLIR's PDL language for pattern matching and rewriting; the implementation lives in oneflow/ir/lib/OneFlow/PDLL/AllocEliminationPatterns.pdll:

#include"OneFlow/OneFlowOps.td"

ConstraintIsFuncArguments(value:Value)[{
returnsuccess(llvm::dyn_cast(value));
}];

Pattern{
letalloc=op();
letcopy=op(alloc.0,arg:IsFuncArguments);

rewriteallocwith{
erasecopy;
replaceallocwitharg;
};
}
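For readers unfamiliar with PDLL: mlir-pdll compiles this file into C++ pattern-registration code, and a pass then applies the generated pattern. Below is a rough sketch of what the driving code could look like; populateGeneratedPDLLPatterns is mlir-pdll's default generated entry point, and the include name is my guess rather than the PR's actual layout.

#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

// Hypothetical include for the C++ that mlir-pdll generates from AllocEliminationPatterns.pdll.
#include "AllocEliminationPatterns.h.inc"

// Sketch: apply the PDLL pattern greedily over a module. The real pass also declares
// pdl::PDLDialect and pdl_interp::PDLInterpDialect as dependent dialects (see OneFlowPasses.td).
void runAllocElimination(mlir::ModuleOp module) {
  mlir::RewritePatternSet patterns(module->getContext());
  populateGeneratedPDLLPatterns(patterns);  // default entry point emitted by mlir-pdll
  if (mlir::failed(mlir::applyPatternsAndFoldGreedily(module, std::move(patterns))))
    module->emitError("alloc elimination patterns did not converge");
}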

Next, let's compare how NewBing and Claude analyze it.

[Screenshot: NewBing's answer about the PDLL pattern]

NewBing can't recognize this code as MLIR's PDL language, and naturally can't understand its content. Let's try Claude again.

[Screenshots: Claude's answer about the PDLL pattern]

I personally feel that this explanation is very powerful and accurate, and Claude's answer is amazing.

0x2.2 AppendOneFlowStreamPass

Next, let's look at the implementation of AppendOneFlowStreamPass, which lives in oneflow/ir/lib/OneFlow/Transform/OneFlowStream.cpp. The code is as follows:

struct AppendOneFlowStreamPattern final : public OpRewritePattern<func::FuncOp> {
 public:
  explicit AppendOneFlowStreamPattern(mlir::MLIRContext* context)
      : OpRewritePattern<func::FuncOp>(context, /*benefit=*/0) {}
  mlir::LogicalResult matchAndRewrite(func::FuncOp op,
                                      mlir::PatternRewriter& rewriter) const override {
    auto ptr_type = LLVM::LLVMPointerType::get(IntegerType::get(rewriter.getContext(), 8));
    if (llvm::dyn_cast<LLVM::LLVMPointerType>(op.getFunctionType().getInputs().back()))
      return success();

    llvm::SmallVector<Type> new_operand_type;
    for (auto type : op.getFunctionType().getInputs()) { new_operand_type.push_back(type); }
    new_operand_type.push_back(ptr_type);
    auto function_type =
        rewriter.getFunctionType(new_operand_type, op.getFunctionType().getResults());

    auto func = rewriter.create<func::FuncOp>(op.getLoc(), op.getName(), function_type);
    for (auto pair : op->getDialectAttrs()) { func->setAttr(pair.getName(), pair.getValue()); }
    op.getBody().addArgument(ptr_type, func->getLoc());
    IRMapping bvm;
    op.getRegion().cloneInto(&func.getRegion(), bvm);
    rewriter.eraseOp(op);
    return success();
  }
};
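One thing worth noting: this struct is only a rewrite Pattern; AppendOneFlowStreamPass still has to collect it into a RewritePatternSet and drive it over the module. Here is a minimal sketch of such a wrapper, assuming the standard greedy driver is used (the PR's actual pass may well differ in detail):

#include "mlir/IR/PatternMatch.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

namespace {
// Sketch of a pass that owns AppendOneFlowStreamPattern (defined above).
struct AppendOneFlowStreamPass
    : public mlir::PassWrapper<AppendOneFlowStreamPass, mlir::OperationPass<mlir::ModuleOp>> {
  void runOnOperation() override {
    mlir::RewritePatternSet patterns(&getContext());
    patterns.add<AppendOneFlowStreamPattern>(&getContext());
    if (mlir::failed(mlir::applyPatternsAndFoldGreedily(getOperation(), std::move(patterns))))
      signalPassFailure();
  }
};
}  // namespace

The early return in matchAndRewrite when the last input is already an LLVM pointer type is what keeps the greedy driver from appending the stream argument more than once.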

NewBing (ChatGPT) should be able to understand C++ code, so let's have it analyze this:

[Screenshot: NewBing's answer about AppendOneFlowStreamPattern]

I asked ChatGPT directly, but it still didn't understand this code. I manually prompted it that this code defines an MLIR Pattern; it first repeated my words back and then gave an answer that was mostly nonsense. A poor showing on this example. Next, let's grill Claude:

Let's continue by asking about some details of the C++ code:

Very impressive: the explanations are mostly accurate, and Claude really seems to fully understand the logic of this code. Note that this code was only written by a colleague today, so the model generalizes remarkably well.

0x2.3 MgpuToOneFlowStreamPass

Let's finally analyze the implementation of MgpuToOneFlowStreamPass.

struct MgpuToOneFlowStreamPattern final : public OpRewritePattern<LLVM::CallOp> {
 public:
  explicit MgpuToOneFlowStreamPattern(mlir::MLIRContext* context)
      : OpRewritePattern<LLVM::CallOp>(context, /*benefit=*/0) {}
  mlir::LogicalResult matchAndRewrite(LLVM::CallOp op,
                                      mlir::PatternRewriter& rewriter) const override {
    auto ptr_type = LLVM::LLVMPointerType::get(IntegerType::get(rewriter.getContext(), 8));
    auto func = op->getParentOfType<LLVM::LLVMFuncOp>();
    auto callee = op.getCallee();
    if (!func || !callee) return failure();
    Value stream = func.getArguments().back();
    if (stream.getType() != ptr_type) {
      LOG(ERROR) << "failed to find stream in llvm.func block arguments";
      return failure();
    }

    DenseMap<StringRef,
             std::pair<std::function<bool(LLVM::CallOp&, Value&)>,
                       std::function<void(mlir::PatternRewriter&, LLVM::CallOp&, Value&)>>>
        oneflow_abi = {
            {"mgpuStreamCreate",
             {[](LLVM::CallOp& op, Value& stream) { return true; },
              [](mlir::PatternRewriter& rewriter, LLVM::CallOp& op, Value& stream) {
                rewriter.replaceOp(op, {stream});
              }}},
            {"mgpuLaunchKernel",
             {[](LLVM::CallOp& op, Value& stream) {
                unsigned idx = op->getNumOperands();
                return op.getOperand(idx - 3) != stream;
              },
              [](mlir::PatternRewriter& rewriter, LLVM::CallOp& op, Value& stream) {
                unsigned idx = op->getNumOperands();
                auto target = op.getOperand(idx - 3).getDefiningOp();
                rewriter.replaceOp(target, {stream});
              }}},
            {"mgpuStreamSynchronize",
             {[](LLVM::CallOp& op, Value& stream) { return true; },
              [](mlir::PatternRewriter& rewriter, LLVM::CallOp& op, Value& stream) {
                rewriter.eraseOp(op);
              }}},
            {"mgpuStreamDestroy",
             {[](LLVM::CallOp& op, Value& stream) { return true; },
              [](mlir::PatternRewriter& rewriter, LLVM::CallOp& op, Value& stream) {
                rewriter.eraseOp(op);
              }}},
        };
    auto out = oneflow_abi.find(callee.value().str());
    if (out != oneflow_abi.end() && out->getSecond().first(op, stream)) {
      out->getSecond().second(rewriter, op, stream);
    }
    return success();
  }
};
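A quick note on op.getOperand(idx - 3) in the mgpuLaunchKernel entry: in MLIR's GPU-to-LLVM lowering, mgpuLaunchKernel is a runtime wrapper whose stream argument sits third from the end, which is exactly what the predicate and the rewrite above rely on. Its declaration was roughly as follows at the time (quoted from memory of the upstream CUDA runtime wrappers, so treat the exact signature as an assumption):

// Approximate declaration of MLIR's CUDA runtime wrapper circa 2023; the stream is the
// third-from-last parameter, hence op.getOperand(op->getNumOperands() - 3) above.
extern "C" void mgpuLaunchKernel(CUfunction function,
                                 intptr_t gridX, intptr_t gridY, intptr_t gridZ,
                                 intptr_t blockX, intptr_t blockY, intptr_t blockZ,
                                 int32_t smem, CUstream stream,
                                 void **params, void **extra);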

Let ChatGPT analyze it first:

[Screenshot: ChatGPT's answer about MgpuToOneFlowStreamPattern]

The answer is still quite vague; what is certain is that ChatGPT did not understand this code at all.

Next, use Claude to test it:

[Screenshot: Claude's answer about MgpuToOneFlowStreamPattern]

What shocked me here is that Claude not only understood this code, but also knew that it is just a Pattern rule in MLIR: to actually apply the rule, you still need to build a Pass around it (as sketched above). Finally, let's have Claude give us some review comments:

[Screenshot: Claude's review comments]

The fourth suggestion left me a bit confused, so I asked my colleague about it and had them add some comments.

[Screenshot: the code with the added comments]

Overall, Claude is already quite good at reading MLIR code and is ahead of NewBing (ChatGPT) in every respect. I feel I can use Claude to help review IR-related code in my day-to-day work.

0x3. Summary

I compared ChatGPT and Claude on an MLIR task and felt the power of Claude. Although I haven't evaluated other tasks yet, I am already impressed by Claude's code-analysis capabilities. We may even be able to use Claude as an entry-level learning tool for AI compilers.

--------------------------------Dividing line-------------------------------------

Some readers in the comments pointed out that NewBing is restricted in some ways and is not equivalent to ChatGPT 3.5, so I borrowed an official ChatGPT account and retested. Here are the results:

[Screenshot: ChatGPT's answer about the Pass definitions in OneFlowPasses.td]



In this example, ChatGPT's explanation is not as detailed as Claude's; Claude's result is indeed better. But ChatGPT does at least know this is an MLIR Pass, and it is not as limited as NewBing.

EliminateAllocOpsPass

Next, let’s ask about the implementation of EliminateAllocOpsPass:



Comparing with Claude's results above, ChatGPT's description and understanding of this problem feels less natural than Claude's. From this answer we can't tell that ChatGPT understands how the implementation works, whereas Claude clearly does.

[Screenshot]

AppendOneFlowStreamPattern

Compare this to Claude:

[Screenshot: Claude's answer]

It can be seen that Claude's analysis is much better than ChatGPT's. Claude clearly knows that the line if (llvm::dyn_cast<LLVM::LLVMPointerType>(op.getFunctionType().getInputs().back())) checks whether the current function already has a stream parameter, while ChatGPT's answer does not realize that this pointer-typed parameter represents a stream.

Next comes the detailed analysis.

Compare with Claude

Claude’s explanation beats ChatGPT again

Compare with Claude

We can see that Claude's result is obviously better: it not only explains all the details for us but also lists the MLIR-related properties and interfaces used.

MgpuToOneFlowStreamPass

Let's finally analyze the implementation of MgpuToOneFlowStreamPass.

Compare to Claude

Claude's result is also significantly better than ChatGPT's, and we can see that ChatGPT's answer misses the mgpuStreamSynchronize ABI. Finally, we asked ChatGPT whether it could give some suggestions for modification.

Its suggestions feel similar to Claude's.

Conclusion 2

Overall, on this MLIR code-review task: Claude > ChatGPT >> NewBing.
