

【飛騰派4G版免費(fèi)試用】第五章:使用C++部署tflite模型到飛騰派

Red Linux · Source: Red Linux · Author: Red Linux · 2023-12-28 09:08

Deploying the tflite model to the Phytium Pi with C++

The previous chapters covered training and testing the Peppa Pig detection model, converting it to tflite format, and running simple tflite inference tests with Python and C++ on both the PC and the Phytium Pi. This chapter records the process of running the Peppa detection model's inference in C++, in two parts: first loading and testing the tflite model with C++ on the PC, then cross-compiling for the Phytium Pi.

References:

  • [Real-Time Pose Detection in C++ using Machine Learning with TensorFlow Lite]
  • [Tensorflow 1 vs Tensorflow 2 C-API]

Workflow

代碼的開發(fā)主要是在 minimal 工程的基礎(chǔ)上進(jìn)行。整個(gè)代碼的工作流程主要是:

  1. Load the model
  2. Resize the input tensor's shape
  3. Fill in the input data
  4. Run inference
  5. Extract the output data

基礎(chǔ)概念

Inference: feeding new data into the model and letting it produce a prediction
Tensor: a data structure representing a multi-dimensional array inside the model; in tflite it is the struct TfLiteTensor
Shape: the dimensions of a Tensor, stored in the TfLiteIntArray* dims member of TfLiteTensor; the TfLiteIntArray holds the number of dimensions and the extent of each one

關(guān)鍵步驟

我在實(shí)際開發(fā)的過程中,主要的步驟有三個(gè):

  1. Resize the model's input tensor dimensions:
    Why resize the input? Because the original input tensor shape is [1,-1,-1,3]; the test code below prints it. The -1, -1 entries mean the image's width and height are unknown, and 3 means the image has three channels, i.e. RGB.
auto a_input = interpreter->inputs()[0];
auto a_input_batch_size = interpreter->tensor(a_input)->dims_signature->data[0];
auto a_input_height = interpreter->tensor(a_input)->dims_signature->data[1];
auto a_input_width = interpreter->tensor(a_input)->dims_signature->data[2];
auto a_input_channels = interpreter->tensor(a_input)->dims_signature->data[3];
std::cout << "The input tensor has the following dimensions: ["
          << a_input_batch_size << ","
          << a_input_height << ","
          << a_input_width << ","
          << a_input_channels << "]" << std::endl;

To pin down the input image size, which is 200×200 here, the following code forces the input tensor's shape to {1,200,200,3}.

// 強(qiáng)制修改 tensor 的 shape
std::vector< int > peppa_jpg = {1,200,200,3};
interpreter- >ResizeInputTensor(0, peppa_jpg);

With the input dimensions pinned down, filling the tensor with test data later becomes straightforward. Note that ResizeInputTensor only records the new shape; AllocateTensors() must run before the buffer is usable, as the quick check below shows.
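
A minimal sanity check, sketched under the assumption that interpreter was built as in the listings below:

// Resize, reallocate, then confirm the new shape took effect.
interpreter->ResizeInputTensor(0, {1, 200, 200, 3});
TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
TfLiteTensor* in = interpreter->tensor(interpreter->inputs()[0]);
printf("resized to [%d,%d,%d,%d]\n",
       in->dims->data[0], in->dims->data[1],
       in->dims->data[2], in->dims->data[3]);  // expect [1,200,200,3]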
2. 明確了輸入數(shù)據(jù)后,還有一個(gè)關(guān)鍵的步驟是提取輸出,提取哪個(gè)輸出呢?這里首先使用 python 檢測(cè)模型的輸出參數(shù),這里實(shí)際執(zhí)行如下指令以及對(duì)應(yīng)的打印如下:

$ saved_model_cli show --dir exported_models/efficientdet_d0/saved_model/ --tag_set serve --signature_def serving_default
2023-12-27 11:17:23.958429: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-12-27 11:17:23.959999: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-27 11:17:23.990118: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-27 11:17:23.990510: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-27 11:17:24.489577: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-12-27 11:17:25.022727: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2023-12-27 11:17:25.022762: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:168] retrieving CUDA diagnostic information for host: fedora
2023-12-27 11:17:25.022765: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:175] hostname: fedora
2023-12-27 11:17:25.022836: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:199] libcuda reported version is: 535.146.2
2023-12-27 11:17:25.022845: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:203] kernel reported version is: 535.146.2
2023-12-27 11:17:25.022847: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:309] kernel version seems to match DSO: 535.146.2
The given SavedModel SignatureDef contains the following input(s):
  inputs['input_tensor'] tensor_info:
      dtype: DT_UINT8
      shape: (1, -1, -1, 3)
      name: serving_default_input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['detection_anchor_indices'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100)
      name: StatefulPartitionedCall:0
  outputs['detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100, 4)
      name: StatefulPartitionedCall:1
  outputs['detection_classes'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100)
      name: StatefulPartitionedCall:2
  outputs['detection_multiclass_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100, 1)
      name: StatefulPartitionedCall:3
  outputs['detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100)
      name: StatefulPartitionedCall:4
  outputs['num_detections'] tensor_info:
      dtype: DT_FLOAT
      shape: (1)
      name: StatefulPartitionedCall:5
  outputs['raw_detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 49104, 4)
      name: StatefulPartitionedCall:6
  outputs['raw_detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 49104, 1)
      name: StatefulPartitionedCall:7
Method name is: tensorflow/serving/predict

The saved_model_cli command gives a much clearer view of the model's inputs and outputs. Since the Peppa detection model has a single class and we mainly want position information, i.e. where the target Peppa sits in the field of view, we focus on two tensors:

outputs['detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100)
      name: StatefulPartitionedCall:4
  outputs['detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (1, 100, 4)
      name: StatefulPartitionedCall:1

In tflite, then, we fetch the detection scores through the tensor named StatefulPartitionedCall:4; its shape shows that it carries 100 detection results. Correspondingly, the tensor named StatefulPartitionedCall:1 carries the bounding box for each of those scores. The output tensors of interest can also be fetched more directly by signature, as in the following code.

// Look up the output tensor pointers directly by signature
auto detection_scores_tensor = interpreter->output_tensor_by_signature("detection_scores", "serving_default");
auto detection_boxes_tensor = interpreter->output_tensor_by_signature("detection_boxes", "serving_default");
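
For models exported without signature defs, the same tensors can be located by scanning output names. A hypothetical helper (find_output is my name for it, not part of the project code) might look like:

#include <cstring>

// Scan the interpreter's outputs for a tensor with the given name,
// e.g. "StatefulPartitionedCall:4".
const TfLiteTensor* find_output(tflite::Interpreter& interp, const char* wanted) {
  for (int idx : interp.outputs()) {
    const TfLiteTensor* t = interp.tensor(idx);
    if (t->name && strcmp(t->name, wanted) == 0) return t;
  }
  return nullptr;
}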
  1. 提取圖片數(shù)據(jù)填充到輸入
    這里因?yàn)槭菧y(cè)試,我首先通過 Python 將圖片的 RGB 數(shù)據(jù)提取出來,然后存儲(chǔ)到一個(gè)數(shù)組中。然后在 minimal 的工程中,直接調(diào)用這個(gè)數(shù)組的數(shù)據(jù)填充模型的輸入。
    提取圖片的 RGB 數(shù)據(jù)并存儲(chǔ)文件的 Python 腳本如下:
#!/bin/python

import cv2 as cv
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys
import numpy as np

g_color_bits = 32
g_file_name = "gg"
g_file_extend = "xx"
g_file_full_name = g_file_name + '_' + g_file_extend + ".cpp"
g_pic_200200_name = '.' + g_file_name + '_.' + g_file_extend

def scale_img(img_name):
    # read the source image
    img = cv.imread(img_name)
    # print its size
    print(img.shape)
    #  cv.imshow('OriginalPicture', img)
    img_200x200 = cv.resize(img, (200, 200))
    #  cv.imshow('img200200', img_200x200)
    print(g_pic_200200_name)
    cv.imwrite(g_pic_200200_name, img_200x200)

def load(img_name):
    global g_color_bits
    global g_file_name
    global g_file_extend
    global g_file_full_name
    global g_pic_200200_name
    g_file_extend = img_name.split('.')[-1]
    g_file_name = img_name.split('/')[-1]
    print(g_file_name.split('.'))
    g_file_name = g_file_name.split('.')[-2]
    g_file_full_name = g_file_name + '_' + g_file_extend + ".cpp"
    g_pic_200200_name = '.' + g_file_name + "_200200_." + g_file_extend
    print(img_name + " load success, will change to " + g_file_full_name)
    print(img_name + " load success, will scale to " + g_pic_200200_name)
    scale_img(img_name)
    img = mpimg.imread(g_pic_200200_name)
    # matplotlib returns floats in [0, 1] for some formats; rescale to 0..255
    if isinstance(img[0, 0, 0], np.float32):
        img *= 255
    else:
        print(type(img[0, 0, 0]))
    # type conversion
    img = np.uint32(img)
    # pack into 32-bit words whether the source has 3 or 4 channels
    g_color_bits = 32
    print("img shape:", img.shape, g_color_bits)
    return img

def dump_info(img):
    print(img.shape)
    print(type(img.shape[0]))
    print(type(img.shape[1]))
    print(type(img.shape[2]))
    print(type(img[0, 0, 0]))

def show_img(img):
    plt.imshow(img)
    plt.show()

def write_data2file(img):
    global g_file_name
    global g_file_extend
    global g_color_bits
    global g_file_full_name
    ans = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint32)
    output_str = 'extern "C" {' + '\n'
    output_str += "unsigned int raw_data[] = {" + '\n'
    # pack each pixel as 0xRRGGBB
    for i in range(img.shape[1]):      # columns
        for j in range(img.shape[0]):  # rows
            if g_color_bits == 32:
                ans[j, i] = img[j, i, 0] << 16
                ans[j, i] |= img[j, i, 1] << 8
                ans[j, i] |= img[j, i, 2]
    for j in range(img.shape[0]):
        for i in range(img.shape[1]):
            output_str += hex(ans[j, i]) + ", "
            if (j * img.shape[1] + i) % 16 == 0:
                output_str = output_str[:-1]
                output_str += '\n'
    output_str = output_str[:-2]
    output_str += "};\n"  # close the array
    output_str += "}\n"   # close extern "C"
    output_file = open(g_file_full_name, "w")
    output_file.write(output_str)
    output_file.close()

#  scale_img(sys.argv[1])
image4convert = load(sys.argv[1])
dump_info(image4convert)

write_data2file(image4convert)
#  show_img(image4convert)

使用圖片進(jìn)行測(cè)試:

p01.jpg
執(zhí)行完該腳本后,會(huì)得到一個(gè) .cpp 文件,該文件中包含了 RGB 信息的數(shù)組。
如:

Screenshot from 2023-12-27 13-24-10.png
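The generated file has roughly this shape (the hex values below are made up for illustration):

extern "C" {
unsigned int raw_data[] = {
0xd8e6f2, 0xd8e6f2, 0xd9e7f3, 0xdae8f4,
/* ... 200*200 words in total, one 0xRRGGBB value per pixel ... */
0x1a2b3c};
}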
接下來就是將數(shù)組填充到模型的輸入 tensor,這部分關(guān)鍵的代碼如下:

// Unpack each 0xRRGGBB word into three consecutive RGB bytes.
int insert_raw_data(uint8_t *dst, unsigned int *data)
{
  for (int i = 0; i < 200; i++)
    for (int j = 0; j < 200; j++)
    {
      *dst++ = *data >> 16 & 0xFF;  // R
      *dst++ = *data >> 8 & 0xFF;   // G
      *dst++ = *data & 0xFF;        // B
      data++;
    }
  return 0;
}
...
uint8_t *input_tensor = interpreter->typed_input_tensor<uint8_t>(0);
insert_raw_data(input_tensor, raw_data);

Code walkthrough

工程現(xiàn)在完整的 minimal.cc 文件是:

/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <sys/time.h>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

// This is an example that is minimal to read a model
// from disk and perform inference. There is no data being loaded
// that is up to you to add as a user.
//
// NOTE: Do not add any dependencies to this that cannot be built with
// the minimal makefile. This example must remain trivial to build with
// the minimal build tool.
//
// Usage: minimal <tflite model>

#define TFLITE_MINIMAL_CHECK(x)                              \
  if (!(x)) {                                                \
    fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
    exit(1);                                                 \
  }

// Unpack each 0xRRGGBB word into three consecutive RGB bytes of the
// 200x200x3 uint8 input tensor.
int insert_raw_data(uint8_t *dst, unsigned int *data)
{
  for (int i = 0; i < 200; i++)
    for (int j = 0; j < 200; j++)
    {
      *dst++ = *data >> 16 & 0xFF;  // R
      *dst++ = *data >> 8 & 0xFF;   // G
      *dst++ = *data & 0xFF;        // B
      data++;
    }
  return 0;
}

// Print a tensor's name, shape, and type.
int dump_tflite_tensor(TfLiteTensor *tensor)
{
  std::cout << "Name:" << tensor->name << std::endl;
  if (tensor->dims)
  {
      std::cout << "Shape: [";
      for (int i = 0; i < tensor->dims->size; i++)
          std::cout << tensor->dims->data[i] << ",";
      std::cout << "]" << std::endl;
  }
  std::cout << "Type:" << tensor->type << std::endl;

  return 0;
}

extern unsigned int raw_data[];
int main(int argc, char* argv[]) {
  if (argc != 2) {
    fprintf(stderr, "minimal <tflite model>\n");
    return 1;
  }
  const char* filename = argv[1];

  // Load the model
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile(filename);
  TFLITE_MINIMAL_CHECK(model != nullptr);

  // Build the interpreter with the InterpreterBuilder.
  // Note: all Interpreters should be built with the InterpreterBuilder,
  // which allocates memory for the Interpreter and does various set up
  // tasks so that the Interpreter can read the provided model.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder builder(*model, resolver);
  // builder.SetNumThreads(12);
  // Initialize the interpreter
  std::unique_ptr<tflite::Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

  auto a_input = interpreter->inputs()[0];
  auto a_input_batch_size = interpreter->tensor(a_input)->dims_signature->data[0];
  auto a_input_height = interpreter->tensor(a_input)->dims_signature->data[1];
  auto a_input_width = interpreter->tensor(a_input)->dims_signature->data[2];
  auto a_input_channels = interpreter->tensor(a_input)->dims_signature->data[3];

  std::cout << "The input tensor has the following dimensions: ["
            << a_input_batch_size << ","
            << a_input_height << ","
            << a_input_width << ","
            << a_input_channels << "]" << std::endl;

  // Force the input tensor shape
  std::vector<int> peppa_jpg = {1,200,200,3};
  interpreter->ResizeInputTensor(0, peppa_jpg);
  // Allocate tensor buffers (memory needed for inference).
  TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
  printf("=== Pre-invoke Interpreter State ===\n");
  // tflite::PrintInterpreterState(interpreter.get());

  // auto keys = interpreter->signature_keys();
  // for (auto k: keys)
  // {
  //   std::cout << *k << std::endl;
  // }
  // std::cout << "---------------------------" << std::endl;

  // Look up the output tensor pointers directly by signature
  auto detection_scores_tensor = interpreter->output_tensor_by_signature("detection_scores", "serving_default");
  auto detection_boxes_tensor = interpreter->output_tensor_by_signature("detection_boxes", "serving_default");

  // auto abc = interpreter->signature_outputs("serving_default");
  // std::cout << abc.size() << std::endl;
  // for (auto a:abc)
  //   std::cout << a.first << "and" << a.second << std::endl;

  // Fill input buffers
  // Note: The buffer of the input tensor with index `i` of type T can
  // be accessed with `T* input = interpreter->typed_input_tensor<T>(i);`
  uint8_t *input_tensor = interpreter->typed_input_tensor<uint8_t>(0);
  insert_raw_data(input_tensor, raw_data);
  // Run inference, timestamping before and after
  struct timeval tv;
  if (0 == gettimeofday(&tv, NULL))
  {
    std::cout << tv.tv_sec * 1000000 + tv.tv_usec << std::endl;
  }
  TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
  if (0 == gettimeofday(&tv, NULL))
  {
    std::cout << tv.tv_sec * 1000000 + tv.tv_usec << std::endl;
  }
  printf("\n=== Post-invoke Interpreter State ===\n");
  // tflite::PrintInterpreterState(interpreter.get());

  // Print the two highest-scoring detections and their boxes
  for (int i = 0; i < 2; i++)
  {
      std::cout << detection_scores_tensor->data.f[i] << '[';
      std::cout << detection_boxes_tensor->data.f[i*4] << ',';
      std::cout << detection_boxes_tensor->data.f[i*4 + 1] << ',';
      std::cout << detection_boxes_tensor->data.f[i*4 + 2] << ',';
      std::cout << detection_boxes_tensor->data.f[i*4 + 3] << ']' << std::endl;
  }
  // Read output buffers
  // Note: The buffer of the output tensor with index `i` of type T can
  // be accessed with `T* output = interpreter->typed_output_tensor<T>(i);`

  return 0;
}

測(cè)試結(jié)果為:

$ ./minimal model.tflite
2023-12-27 14:19:30.468885: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
INFO: Created TensorFlow Lite delegate for select TF ops.
2023-12-27 14:19:30.491445: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO: TfLiteFlexDelegate delegate: 4 nodes delegated out of 21284 nodes with 2 partitions.

The input tensor has the following dimensions: [1,-1,-1,3]
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
WARNING: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors (tensor#394 is a dynamic-sized tensor).
=== Pre-invoke Interpreter State ===
1703657970521273
1703657971811800

=== Post-invoke Interpreter State ===
0.831981[0.23847,0.269423,0.909584,0.87969]
0.679475[0.114574,0.145309,0.785652,0.755186]

As shown in the code, timestamps are printed by wrapping TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk); with the following:

if (0 == gettimeofday(&tv, NULL))
{
  std::cout << tv.tv_sec * 1000000 + tv.tv_usec << std::endl;
}

發(fā)現(xiàn)默認(rèn)的推理耗時(shí):

單位us
1703657970521273
1703657971811800
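
For what it's worth, the same measurement reads more cleanly with std::chrono; this is a sketch on my part, not code from the project:

#include <chrono>

auto t0 = std::chrono::steady_clock::now();
TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
auto t1 = std::chrono::steady_clock::now();
// Same semantics as the gettimeofday pair above, without manual arithmetic.
std::cout << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
          << " us" << std::endl;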

The first inference on the PC takes about 1.3 s (I don't yet know why; if anyone does, please let me know), and subsequent inferences settle around 400 ms. I plotted close to 50 inference times, shown below:

peppa_times.png

這個(gè)和用 Python 的 2s 左右比較,速度提高了接近5倍。
因?yàn)椋以?C++ 代碼這里只是打印出了前2個(gè)概率較高的推理結(jié)果,這里截取 Python 端的前 2 個(gè)推理的結(jié)果對(duì)比如下:

'detection_scores': <tf.Tensor: shape=(1, 100), dtype=float32, numpy=
array([[0.8284813 , 0.67629, ....
{'detection_boxes': <tf.Tensor: shape=(1, 100, 4), dtype=float32, numpy=
array([[[0.23848376, 0.26942557, 0.9095545 , 0.8796709 ],
[0.1146237 , 0.14536926, 0.7857162 , 0.7552357 ],
...

For reference, the image annotated by the Python pipeline:
marked_p00.jpeg
從數(shù)據(jù)中可以看到結(jié)果是完全匹配的,至此就完成了使用 C++ 在 PC 端對(duì) tensorflow Lite 的調(diào)用。

到現(xiàn)在為止,完成了 C++ 在 PC 的推理測(cè)試,因?yàn)槲业捻?xiàng)目是要跟蹤目標(biāo)的,核心是對(duì)采集的圖像進(jìn)行識(shí)別,根據(jù)識(shí)別的目標(biāo)位置變化驅(qū)動(dòng)轉(zhuǎn)臺(tái)反向運(yùn)動(dòng),將目標(biāo)鎖定在視場(chǎng)中心,本次試用我重點(diǎn)將工作放在目標(biāo)識(shí)別,檢測(cè)以及動(dòng)作預(yù)測(cè)上,這里我選擇了佩奇作為識(shí)別的目標(biāo),繪制了四張圖片,佩奇分別在上,下,左,右位置。我將它們放在一張圖上。
peppa_pos.png
Next comes changing the logic in minimal.cc. I stored the RGB data of the four Peppa images in four arrays, then defined a small lookup table to index them.

static jpg_info_t gs_test_peppa_maps[4] = {
    {up_raw_data, "up"},
    {down_raw_data, "down"},
    {left_raw_data, "left"},
    {right_raw_data, "right"},
  };

In main's while(1) loop, a random number picks one of the images to simulate target movement. The model then runs inference, the offset of the target's center relative to the previous frame is computed, and the corresponding corrective control action is printed, re-centering the target in the field of view. The control logic is:

int do_with_move_action(int &last_x, int &last_y, int x, int y)
{
  if (x > last_x)
    std::cout << "move right ";
  else if (x < last_x)
    std::cout << "move left ";

  if (y > last_y)
    std::cout << "move down";
  else if (y < last_y)
    std::cout << "move up";

  std::cout << std::endl;

  last_x = x;
  last_y = y;

  return 0;
}

整個(gè) minimal.cc 文件 修改為如下所示:

/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <sys/time.h>
#include <unistd.h>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

// Usage: minimal <tflite model>

#define TFLITE_MINIMAL_CHECK(x)                              \
  if (!(x)) {                                                \
    fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
    exit(1);                                                 \
  }

// Unpack each 0xRRGGBB word into three consecutive RGB bytes of the
// 200x200x3 uint8 input tensor.
int insert_raw_data(uint8_t *dst, unsigned int *data)
{
  for (int i = 0; i < 200; i++)
    for (int j = 0; j < 200; j++)
    {
      *dst++ = *data >> 16 & 0xFF;  // R
      *dst++ = *data >> 8 & 0xFF;   // G
      *dst++ = *data & 0xFF;        // B
      data++;
    }
  return 0;
}

// Print a tensor's name, shape, and type.
int dump_tflite_tensor(TfLiteTensor *tensor)
{
  std::cout << "Name:" << tensor->name << std::endl;
  if (tensor->dims)
  {
      std::cout << "Shape: [";
      for (int i = 0; i < tensor->dims->size; i++)
          std::cout << tensor->dims->data[i] << ",";
      std::cout << "]" << std::endl;
  }
  std::cout << "Type:" << tensor->type << std::endl;

  return 0;
}

extern unsigned int raw_data[];
extern unsigned int up_raw_data[];
extern unsigned int down_raw_data[];
extern unsigned int left_raw_data[];
extern unsigned int right_raw_data[];

typedef struct jpg_info {
  unsigned int *data;
  char name[8];
} jpg_info_t;

static jpg_info_t gs_test_peppa_maps[4] = {
  {up_raw_data, "up"},
  {down_raw_data, "down"},
  {left_raw_data, "left"},
  {right_raw_data, "right"},
};

// Compare the new target center against the previous one, print the
// corrective move, then remember the new position.
int do_with_move_action(int &last_x, int &last_y, int x, int y)
{
  if (x > last_x)
    std::cout << "move right ";
  else if (x < last_x)
    std::cout << "move left ";

  if (y > last_y)
    std::cout << "move down";
  else if (y < last_y)
    std::cout << "move up";

  std::cout << std::endl;

  last_x = x;
  last_y = y;

  return 0;
}


int main(int argc, char* argv[]) {
  if (argc != 2) {
    fprintf(stderr, "minimal <tflite model>\n");
    return 1;
  }
  const char* filename = argv[1];

  // Load the model
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile(filename);
  TFLITE_MINIMAL_CHECK(model != nullptr);

  // Build the interpreter with the InterpreterBuilder.
  // Note: all Interpreters should be built with the InterpreterBuilder,
  // which allocates memory for the Interpreter and does various set up
  // tasks so that the Interpreter can read the provided model.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder builder(*model, resolver);
  builder.SetNumThreads(4);
  // Initialize the interpreter
  std::unique_ptr<tflite::Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

  auto a_input = interpreter->inputs()[0];
  auto a_input_batch_size = interpreter->tensor(a_input)->dims_signature->data[0];
  auto a_input_height = interpreter->tensor(a_input)->dims_signature->data[1];
  auto a_input_width = interpreter->tensor(a_input)->dims_signature->data[2];
  auto a_input_channels = interpreter->tensor(a_input)->dims_signature->data[3];

  std::cout << "The input tensor has the following dimensions: ["
            << a_input_batch_size << ","
            << a_input_height << ","
            << a_input_width << ","
            << a_input_channels << "]" << std::endl;

  // Force the input tensor shape
  std::vector<int> peppa_jpg = {1,200,200,3};
  interpreter->ResizeInputTensor(0, peppa_jpg);

  uint8_t *input_tensor;
  int map_index;
  int pos_x, pos_y;
  int last_pos_x = 100, last_pos_y = 100;

  while (1)
  {
    // Allocate the memory needed for inference
    TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
    input_tensor = interpreter->typed_input_tensor<uint8_t>(0);

    // Look up the output tensor pointers directly by signature
    auto detection_scores_tensor = interpreter->output_tensor_by_signature("detection_scores", "serving_default");
    auto detection_boxes_tensor = interpreter->output_tensor_by_signature("detection_boxes", "serving_default");

    // Pick a random image to simulate target movement
    map_index = random() % 4;
    std::cout << "This raw " << gs_test_peppa_maps[map_index].name << '@' << map_index << std::endl;
    insert_raw_data(input_tensor, gs_test_peppa_maps[map_index].data);

    // Run inference, timestamping before and after
    struct timeval tv;
    if (0 == gettimeofday(&tv, NULL))
    {
      std::cout << tv.tv_sec * 1000000 + tv.tv_usec << '~';
    }
    TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
    if (0 == gettimeofday(&tv, NULL))
    {
      std::cout << tv.tv_sec * 1000000 + tv.tv_usec << std::endl;
    }
    std::cout << detection_boxes_tensor->data.f[0] << detection_boxes_tensor->data.f[1] <<
        detection_boxes_tensor->data.f[2] << detection_boxes_tensor->data.f[3] << std::endl;

    // Note: the detection boxes come back as normalized (y1, x1) and (y2, x2)
    pos_y = 100 * (detection_boxes_tensor->data.f[0] + detection_boxes_tensor->data.f[2]);  // 200 * center_y
    pos_x = 100 * (detection_boxes_tensor->data.f[1] + detection_boxes_tensor->data.f[3]);  // 200 * center_x
    std::cout << detection_scores_tensor->data.f[0] << '[';
    std::cout << pos_x << ',';
    std::cout << pos_y << ']' << std::endl;

    do_with_move_action(last_pos_x, last_pos_y, pos_x, pos_y);
    usleep(1000);
  }

  return 0;
}

截取測(cè)試部分的截圖如下所示:

Screenshot from 2023-12-27 16-59-05.png

Next comes cross-compiling the minimal project again and testing on the Phytium Pi. The process differs little from the PC: first copy the binary over with scp, then check its dynamic dependencies:

red@phytiumpi:/tmp$ ldd minimal
        linux-vdso.so.1 (0x0000ffff9cb15000)
        libtensorflowlite_flex.so => /lib/libtensorflowlite_flex.so (0x0000ffff805e0000)
        librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff805c0000)
        libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff805a0000)
        libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff80580000)
        libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff804e0000)
        libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffff802b0000)
        libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffff80280000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff800d0000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffff9cadc000)

接著運(yùn)行 minimal 進(jìn)行模型推理。這里只是展示下測(cè)試的結(jié)果,以及預(yù)測(cè)的耗時(shí):

Screenshot from 2023-12-27 20-12-42.png
As the screenshot shows, the inference results match those obtained on the PC.

Plotting the inference times on the Phytium Pi from the printed timestamps gives:

ft_times.png

The first inference on the Phytium Pi takes nearly 8 minutes (cause unknown); after that it stabilizes at about 1.2 s per inference, not a dramatic gap from the PC's 400 ms. Checking with btop, 22 threads were running on the Phytium Pi:
Screenshot from 2023-12-27 20-25-54.png

For comparison, btop on the PC shows:

Screenshot from 2023-12-27 20-27-08.png

開了30個(gè)線程。現(xiàn)在看來飛騰派使用 C++ CPU 推理的速度大概是 1/3 的 PC 性能。

This wraps up the trial work on the Phytium Pi for now. Over this series of posts I recorded how, step by step, I deployed a target-recognition algorithm on the Phytium Pi and used the model's output to drive control actions.
A roundup of the articles from this past month:

As this trial comes to an end, I would like to thank 电子发烧友 (ElecFans) and Phytium Information Technology Co., Ltd. once again for the opportunity.

I hope this series is of some help to anyone who wants to try machine learning with TensorFlow Lite on embedded devices.

Reviewed and edited by: Huang Yu

聲明:本文內(nèi)容及配圖由入駐作者撰寫或者入駐合作網(wǎng)站授權(quán)轉(zhuǎn)載。文章觀點(diǎn)僅代表作者本人,不代表電子發(fā)燒友網(wǎng)立場(chǎng)。文章及其配圖僅供工程師學(xué)習(xí)之用,如有內(nèi)容侵權(quán)或者其他違規(guī)問題,請(qǐng)聯(lián)系本站處理。 舉報(bào)投訴
  • C++
    C++
    +關(guān)注

    關(guān)注

    22

    文章

    2108

    瀏覽量

    73641
  • tensorflow
    +關(guān)注

    關(guān)注

    13

    文章

    329

    瀏覽量

    60535
  • 飛騰派
    +關(guān)注

    關(guān)注

    2

    文章

    9

    瀏覽量

    215
收藏 人收藏

    評(píng)論

    相關(guān)推薦

    飛騰4G免費(fèi)試用】第四部署模型飛騰的嘗試

    ) red@phytiumpi:~$ 可以看到檢測(cè)的結(jié)果和PC端的一致。 至此已經(jīng)完成了佩奇檢測(cè)模型部署飛騰的前期準(zhǔn)備工作(環(huán)境搭建
    發(fā)表于 12-20 21:10

    飛騰4G免費(fèi)試用第五章:使用C++部署tflite模型飛騰

    免費(fèi)試用】第三:抓取圖像,手動(dòng)標(biāo)注并完成自定義目標(biāo)檢測(cè)模型訓(xùn)練和測(cè)試 【飛騰
    發(fā)表于 12-27 21:17

    飛騰4G免費(fèi)試用】2飛騰openwrt固件燒錄

    接上文【飛騰4G免費(fèi)試用】環(huán)境搭建 9-工具包 Win32DiskImager2.0.1.8寫鏡像文件。 選擇:
    發(fā)表于 12-27 21:37

    飛騰4G免費(fèi)試用】初步認(rèn)識(shí)飛騰4G版開發(fā)板

    這幾天收到飛騰 4G 基礎(chǔ)套件,給大家做個(gè)介紹,讓大家可以了解一下這塊開發(fā)板, 飛騰 4G
    發(fā)表于 01-02 22:23

    飛騰4G免費(fèi)試用】大家來了解飛騰4G版開發(fā)板

    今天把收到的飛騰4G版開發(fā)板做各視頻,讓大家直觀的了解一下做工精細(xì),布線合理,做工扎實(shí)的飛騰4G
    發(fā)表于 01-02 22:43

    飛騰4G免費(fèi)試用】2飛騰 openkylin 固件燒錄

    接上文【飛騰4G免費(fèi)試用】環(huán)境搭建 9-工具包 Win32DiskImager2.0.1.8寫鏡像文件。 選擇:
    發(fā)表于 01-06 22:09

    飛騰4G免費(fèi)試用飛騰開發(fā)板運(yùn)行Ubuntu系統(tǒng)

    飛騰4G版開發(fā)板是一款做工精細(xì),布線合理的開發(fā)板,今天給大家介紹一下如何運(yùn)行Ubuntu系統(tǒng),下面是網(wǎng)上的資料,幫助大家快速認(rèn)識(shí)飛騰
    發(fā)表于 01-08 22:40

    飛騰4G免費(fèi)試用】紅綠燈項(xiàng)目-2飛騰 openkylin 進(jìn)行IO控制2

    | 接上文【飛騰4G免費(fèi)試用】紅綠燈項(xiàng)目-2飛騰
    發(fā)表于 01-17 19:46

    飛騰4G免費(fèi)試用】來更多的了解飛騰4G版開發(fā)板!

    以及優(yōu)刻谷邊緣物聯(lián)網(wǎng)關(guān)等產(chǎn)品。 值得一提的是,飛騰還公布了飛騰“種子計(jì)劃”,該計(jì)劃將在飛騰派發(fā)布一年內(nèi),以創(chuàng)新大賽、現(xiàn)場(chǎng)交流會(huì)、產(chǎn)品賦能培訓(xùn)會(huì)等形式,培育不少于10000名
    發(fā)表于 01-22 00:34

    飛騰4G免費(fèi)試用飛騰4G版開發(fā)板套裝測(cè)試及環(huán)境搭建

    先簡(jiǎn)單介紹一下這款飛騰4G版開發(fā)板套裝; 飛騰是由中電港螢火工場(chǎng)研發(fā)的一款面向行業(yè)工程師、學(xué)生和愛好者的開源硬件。主板處理器采用
    發(fā)表于 01-22 00:47

    飛騰4g試用

    4G飛騰
    夢(mèng)の旅驛站
    發(fā)布于 :2024年01月07日 14:13:20

    【新品體驗(yàn)】飛騰4G版基礎(chǔ)套裝免費(fèi)試用

    飛騰是由飛騰攜手中電港螢火工場(chǎng)研發(fā)的一款面向行業(yè)工程師、學(xué)生和愛好者的開源硬件,采用飛騰嵌入式四核處理器,兼容ARM V8架構(gòu),板載64位 DDR
    發(fā)表于 10-25 11:44

    飛騰4G免費(fèi)試用】1.開箱與鏡像燒錄

    飛騰4G免費(fèi)試用】1.開箱 & 鏡像燒錄 首先非常感謝 飛騰
    發(fā)表于 12-08 12:47

    飛騰4G免費(fèi)試用】開發(fā)環(huán)境搭建

    ,非常有競(jìng)爭(zhēng)力的開源產(chǎn)品。 欣賞完飛騰的外觀和做工,下面進(jìn)入正題。將這么好的開源硬件耍起來。 1、燒錄系統(tǒng)鏡像 飛騰派系統(tǒng)可以選擇從TF卡啟動(dòng)。 1)準(zhǔn)備一張32G及以上的TF卡。
    發(fā)表于 12-09 17:53

    飛騰4G免費(fèi)試用】第四部署模型飛騰的嘗試

    本章記錄這幾天嘗試將訓(xùn)練的佩奇檢測(cè)模型部署飛騰的階段總結(jié)。
    的頭像 發(fā)表于 12-20 20:54 ?2598次閱讀
    【<b class='flag-5'>飛騰</b><b class='flag-5'>派</b><b class='flag-5'>4G</b>版<b class='flag-5'>免費(fèi)</b><b class='flag-5'>試用</b>】第四<b class='flag-5'>章</b>:<b class='flag-5'>部署</b><b class='flag-5'>模型</b><b class='flag-5'>到</b><b class='flag-5'>飛騰</b><b class='flag-5'>派</b>的嘗試
    主站蜘蛛池模板: 国产理论在线| 欧美交片| 91操碰| 视频h在线| 嗯好舒服好爽好快好大| 一色屋成人免费精品网站| 日本免费一区二区视频| 永久精品免费影院在线观看网站| 欧美成人天天综合在线视色| 国产免费久久精品| 午夜精品久久久久久毛片| 国产一级毛片午夜| 欧美在线不卡视频| 欧美色性视频| 99久久网站| 好吊色37pao在线观看| 亚洲卡一卡2卡三卡4卡国色| 亚洲综合色婷婷| 欧美成人午夜| 欧美aaaav免费大片| xx性欧美| 五月婷婷深深爱| 九月丁香婷婷| 天天干天天色综合网| 日本三级11k影院在线| 四虎最新网址在线观看| 黄色性生活毛片| 日本高清视频网站www| 中文字幕区| 色站视频| 成年网站在线| 黄色免费看网站| 婷婷中文网| 国产九色在线| 欧美极品第1页专区| 日本理论在线| 午夜精品一区二区三区在线观看| 糖心vlog麻豆精东影业传媒| 日韩一级精品视频在线观看| 又黄又爽的成人免费网站 | 国产拍拍拍精品视频|