#AI Challenge Camp Second Stop# Model conversion and verification based on RKNN toolkit (continued, including code)

This post was last edited by hollyedward on 2024-5-30 03:37

Continuing from the previous article: perhaps the CPU of the server I was using does not support certain operations, and since RKNN does not provide an open-source SDK I had no way to debug the failure, so I simply rented another server.

First, install the environment. The steps are self-explanatory.

Exporting the model

First, here is the export script:

# filename: onnx2rknn.py


import numpy as np
from rknn.api import RKNN


if __name__ == '__main__':

    # Target deployment platform
    platform = 'rv1106'
    # Input image size used during training / simulation
    Width = 28
    Height = 28
    # Change this to the path of your own model
    MODEL_PATH = '/home/ljl/mnist/mnist_cnn_model.onnx'
    # Path where the exported model will be written
    RKNN_MODEL_PATH = '/home/ljl/mnist/mnist_cnn_model.rknn'
    # Create the RKNN object and print verbose log information to the screen
    rknn = RKNN(verbose=True)
    # Model configuration
    # mean_values: mean of the input image pixels
    # std_values: standard deviation of the input image pixels
    # target_platform: target deployment platform
    # This model was trained with single-channel input images
    rknn.config(mean_values=[0], std_values=[255], target_platform=platform)
    # Load the model
    print('--> Loading model')
    ret = rknn.load_onnx(MODEL_PATH)
    if ret != 0:
        print('load model failed!')
        exit(ret)
    print('done')
    # Build the RKNN model
    print('--> Building model')
    # do_quantization: whether to quantize the model; defaults to True
    ret = rknn.build(do_quantization=True, dataset="./data.txt")
    if ret != 0:
        print('build model failed.')
        exit(ret)
    print('done')

    # Export the model
    ret = rknn.export_rknn(RKNN_MODEL_PATH)
    if ret != 0:
        print('export rknn model failed.')
        exit(ret)
    # Release the RKNN object
    rknn.release()
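
For reference, mean_values=[0] and std_values=[255] mean the converted model normalizes each input pixel as (pixel - 0) / 255, so 0-255 grayscale values are mapped to 0-1, which matches the ToTensor() preprocessing used when the model was trained. A quick illustrative sketch of the equivalent computation (this normalization is actually performed inside the RKNN runtime; the snippet only shows the formula):

# filename: normalize_demo.py (illustrative only, not part of the conversion flow)
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.float32)
normalized = (pixels - 0) / 255.0   # (pixel - mean_values) / std_values
print(normalized)                   # -> [0.  0.50196  1.]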

Then put the test images in a folder and list their paths in a data.txt file, which rknn.build() uses as the quantization dataset.

First, we need some MNIST images for testing.

The script that exports them is provided here:

# filename: generate_data.py


import torch
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
import cv2
import os
import numpy as np

# Test set
test_set = datasets.MNIST('dataset/', train=False, transform=transforms.ToTensor(), download=True)
test_loader = DataLoader(dataset=test_set, batch_size=1, shuffle=True)


def mnist_save_png():
    for data, i in test_loader:
        with torch.no_grad():
            image = data.squeeze().numpy()  # Remove unnecessary transpose

            # Optional: If you need to move channel dimension to the last position
            # image = np.transpose(image, (1, 2, 0))

            image = cv2.GaussianBlur(image, (9, 9), 0)
            # image *= 255  # Scale image to 0-255 range

            index = i.numpy()[0]

            if not os.path.exists('./mnist_image/'):
                os.mkdir('./mnist_image/')
                
            # Save each image only once (the file name is the digit label)
            if not os.path.exists('./mnist_image/' + str(index) + '.png'):
                cv2.imwrite('./mnist_image/' + str(index) + '.png', image)


if __name__ == '__main__':
    mnist_save_png()

The exported images have a resolution of 28 * 28, matching the model input described in the previous article.

Only 10 images are saved here (one per digit, since each file is named after its label). If you want to test more images, you need to modify the code. The data.txt file simply lists one image path per line; a small helper for generating it is sketched below.
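
You can write data.txt by hand or generate it. A minimal sketch (assuming the images were saved to ./mnist_image/ by generate_data.py above; this helper is not part of the original scripts):

# filename: make_dataset_list.py (hypothetical helper)
import glob

# Collect the PNGs produced by generate_data.py and write one path per line,
# which is the format rknn.build() expects for its quantization dataset file.
with open('data.txt', 'w') as f:
    for path in sorted(glob.glob('./mnist_image/*.png')):
        f.write(path + '\n')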

Once everything is ready, run the conversion script (onnx2rknn.py).

The log printed during model conversion is shown below.

Although RKNN is not open source either, I feel it is at least better than Allwinner's (Quanzhi) toolchain, which is closed source and does not allow personal use.

I rknn-toolkit2 version: 2.0.0b0+9bab5682
--> Loading model
I It is recommended onnx opset 19, but your onnx model opset is 17!
I Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
I Loading : 100%|██████████████████████████████████████████████████| 5/5 [00:00<00:00, 24966.10it/s]
done
--> Building model
D base_optimize ...
D base_optimize done.
D 
D fold_constant ...
D fold_constant done.
D 
D correct_ops ...
D correct_ops done.
D 
D fuse_ops ...
W build: Can not find 'idx' to insert, default insert to 0!
D fuse_ops results:
D     replace_reshape_gemm_by_conv: remove node = ['/Reshape', '/fc1/Gemm'], add node = ['/fc1/Gemm_2conv', '/fc1/Gemm_2conv_reshape']
D     swap_reshape_relu: remove node = ['/fc1/Gemm_2conv_reshape', '/Relu'], add node = ['/Relu', '/fc1/Gemm_2conv_reshape']
D     convert_gemm_by_conv: remove node = ['/fc2/Gemm'], add node = ['/fc2/Gemm_2conv_reshape1', '/fc2/Gemm_2conv', '/fc2/Gemm_2conv_reshape2']
D     fuse_two_reshape: remove node = ['/fc1/Gemm_2conv_reshape']
D     remove_invalid_reshape: remove node = ['/fc2/Gemm_2conv_reshape1']
D     fold_constant ...
D     fold_constant done.
D fuse_ops done.
D 
D sparse_weight ...
D sparse_weight done.
D 
I GraphPreparing : 100%|████████████████████████████████████████████| 4/4 [00:00<00:00, 5403.29it/s]
I Quantizating : 100%|███████████████████████████████████████████████| 4/4 [00:00<00:00, 283.34it/s]
D 
D quant_optimizer ...
D quant_optimizer results:
D     adjust_relu: ['/Relu']
D quant_optimizer done.
D 
W build: The default input dtype of 'onnx::Reshape_0' is changed from 'float32' to 'int8' in rknn model for performance!
                       Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of '15' is changed from 'float32' to 'int8' in rknn model for performance!
                      Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [00:09:32.440] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0
I RKNN: librknnc version: 2.0.0b0 (35a6907d79@2024-03-24T02:34:11)
D RKNN: [00:09:32.440] RKNN is invoked
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [00:09:32.443] >>>>>> start: OpEmit
D RKNN: [00:09:32.443] <<<<<<<< end: OpEmit
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNLayoutMatchPass
I RKNN: [00:09:32.443] AppointLayout: t->setNativeLayout(64), tname:[/fc1/Gemm_output_0_new]
I RKNN: [00:09:32.443] AppointLayout: t->setNativeLayout(64), tname:[15_conv]
I RKNN: [00:09:32.443] AppointLayout: t->setNativeLayout(0), tname:[15]
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [00:09:32.443] >>>>>> start: OpEmit
D RKNN: [00:09:32.443] finish initComputeZoneMap
D RKNN: [00:09:32.443] <<<<<<<< end: OpEmit
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [00:09:32.443] node: Reshape:/fc2/Gemm_2conv_reshape2, Target: NPU
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [00:09:32.444] Warning: Tensor /fc2/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
W RKNN: [00:09:32.444] Warning: Tensor /fc2/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
D RKNN: [00:09:32.444] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [00:09:32.444] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [00:09:32.444] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [00:09:32.444] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [00:09:32.446] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [00:09:32.446] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446]                                                         Network Layer Information Table                                                     
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] ID   OpType           DataType Target InputShape                               OutputShape            Cycles(DDR/NPU/Total)    RW(KB)       FullName        
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] 0    InputOperator    INT8     CPU    \                                        (1,1,28,28)            0/0/0                    0            InputOperator:onnx::Reshape_0
D RKNN: [00:09:32.446] 1    ConvRelu         INT8     NPU    (1,1,28,28),(50,1,28,28),(50)            (1,50,1,1)             6585/12544/12544         39           Conv:/fc1/Gemm_2conv
D RKNN: [00:09:32.446] 2    Conv             INT8     NPU    (1,50,1,1),(10,50,1,1),(10)              (1,10,1,1)             138/64/138               0            Conv:/fc2/Gemm_2conv
D RKNN: [00:09:32.446] 3    Reshape          INT8     NPU    (1,10,1,1),(2)                           (1,10)                 7/0/7                    0            Reshape:/fc2/Gemm_2conv_reshape2
D RKNN: [00:09:32.446] 4    OutputOperator   INT8     CPU    (1,10)                                   \                      0/0/0                    0            OutputOperator:15
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [00:09:32.446] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [00:09:32.446] Export Mini RKNN model to /tmp/tmpkbgrb68z/check.rknn
D RKNN: [00:09:32.446] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [00:09:32.446] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446]                                           Feature Tensor Information Table                               
D RKNN: [00:09:32.446] --------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] ID  User           Tensor                 DataType  DataFormat   OrigShape    NativeShape   |     [Start       End)       Size
D RKNN: [00:09:32.446] --------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] 1   ConvRelu       onnx::Reshape_0        INT8      NC1HWC2      (1,1,28,28)  (1,1,28,28,1) | 0x00027500 0x00027880 0x00000380
D RKNN: [00:09:32.446] 2   Conv           /fc1/Gemm_output_0_new INT8      NC1HWC2      (1,50,1,1)   (1,4,1,1,16)  | 0x00027880 0x000278c0 0x00000040
D RKNN: [00:09:32.446] 3   Reshape        15_conv                INT8      NC1HWC2      (1,10,1,1)   (1,1,1,1,16)  | 0x00027500 0x00027510 0x00000010
D RKNN: [00:09:32.446] 4   OutputOperator 15                     INT8      UNDEFINED    (1,10)       (1,10)        | 0x00027580 0x000275c0 0x00000040
D RKNN: [00:09:32.446] --------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] -----------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446]                                  Const Tensor Information Table                    
D RKNN: [00:09:32.446] -------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] ID  User     Tensor                         DataType  OrigShape    |     [Start       End)       Size
D RKNN: [00:09:32.446] -------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] 1   ConvRelu fc1.weight                     INT8      (50,1,28,28) | 0x00000000 0x00026480 0x00026480
D RKNN: [00:09:32.446] 1   ConvRelu fc1.bias                       INT32     (50)         | 0x00026480 0x00026680 0x00000200
D RKNN: [00:09:32.446] 2   Conv     fc2.weight                     INT8      (10,50,1,1)  | 0x00026680 0x00026900 0x00000280
D RKNN: [00:09:32.446] 2   Conv     fc2.bias                       INT32     (10)         | 0x00026900 0x00026980 0x00000080
D RKNN: [00:09:32.446] 3   Reshape  /fc2/Gemm_2conv_reshape2_shape INT64     (2)          | 0x00026980*0x000269c0 0x00000040
D RKNN: [00:09:32.446] -------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] ----------------------------------------
D RKNN: [00:09:32.446] Total Internal Memory Size: 0.9375KB
D RKNN: [00:09:32.446] Total Weight Memory Size: 154.438KB
D RKNN: [00:09:32.446] ----------------------------------------
D RKNN: [00:09:32.446] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done

Model Validation

We need to check whether the output of the model is correct.

It seems that the simulator does not support loading and verifying the exported .rknn model directly (the error log screenshot is omitted here).

The official demo tutorial also uses load_onnx: https://wiki.luckfox.com/zh/Luckfox-Pico/Luckfox-Pico-RKNN-Test

So in fact, the same script that exports the .rknn model can also be used to verify it on the simulator.

The script is as follows

# filename: rknn_mnist_test.py
import numpy as np
import cv2
from rknn.api import RKNN

# Model conversion parameters
MODEL_PATH = '/root/test/my_model.onnx'  # Path to the ONNX model
RKNN_MODEL_PATH = '/root/test/my_model.rknn'  # Path to save the RKNN model

# Model inference parameters
input_size = (28, 28)  # Define the input size (same as your model's input)
data_file = 'data.txt'  # Path to the data file (containing image paths and labels)

rknn = RKNN(verbose=True)  # Create RKNN object with verbose logging
rknn.config(mean_values=[0], std_values=[255], target_platform='rv1106')  # Set configuration parameters
ret = rknn.load_onnx(MODEL_PATH)
if ret != 0:
    print('Load ONNX model failed!')
    exit(ret)
print('done')

print('--> Building RKNN model')

ret = rknn.build(do_quantization=True, dataset="./data.txt")
if ret != 0:
    print('Build model failed.')
    exit(ret)
print('done')

# Model export (optional): export the RKNN model
ret = rknn.export_rknn(RKNN_MODEL_PATH)


# Model inference
print('--> Performing inference on data')
rknn.init_runtime()  # Initialize RKNN runtime
with open(data_file, 'r') as f:
    lines = f.readlines()

    for line in lines:
        # Get the image path from the line
        image_path = line.strip()
        
        # Read the image
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

        # Preprocess the image
        image = image.astype(np.float32)
        
        # At this point the image has shape (28, 28); add batch and channel dimensions
        image = np.expand_dims(image, axis=[0, 1])
        
        # Run inference; the input shape (1, 1, 28, 28) corresponds to NCHW: batch, channel, height, width
        outputs = rknn.inference([image], data_format='nchw')
        print(f"Inference Output: {outputs}")
        # Check inference results
        if outputs is not None:
            predicted_label = np.argmax(outputs)
            print(f"Image: {image_path}")
            print(f"Predicted label: {predicted_label}")
        else:
            print(f"Inference failed for image: {image_path}")

# Release RKNN resources
rknn.release()

Printing results

As can be seen from the printed output, the inference results are correct: the predicted label matches the digit encoded in each image's filename.
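
If you would rather check this automatically than by eye, here is a small sketch (assuming the images were saved as '<label>.png' by generate_data.py, e.g. ./mnist_image/7.png; this helper is not from the original post) that can be dropped into the inference loop:

# Hypothetical correctness check for the loop in rknn_mnist_test.py
import os

def expected_label(image_path):
    # The ground-truth digit is encoded in the file name written by generate_data.py
    return int(os.path.splitext(os.path.basename(image_path))[0])

# ... inside the loop, after predicted_label is computed:
# if predicted_label == expected_label(image_path):
#     print('prediction correct')
# else:
#     print('prediction wrong')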

Attachments:

images.rar (5.79 KB) - MNIST image files used for testing
my_model.onnx (155.95 KB)
my_model.rknn (162.7 KB)


Replies

RKNN is at least better than Allwinner's (Quanzhi) toolchain, which is not open source and does not allow personal use. Although RKNN is not open source either, I feel it is OK.


You can install a virtual machine on your computer. You don't have to use a Linux server.
