### core: provides the basic components and functionality of the system.
### operator: defines the schema of operators such as convolution, relu, pooling, etc. Here is the current list of supported operators.
### serializer: loads a saved model. The serializer framework is extensible to support different formats, including customized ones. Caffe/ONNX/TensorFlow/MXNet and Tengine models can be loaded directly by Tengine.
### executor: implements the code that runs graphs and operators. The current version provides a highly optimized implementation for multiple A72 cores.
### driver: is the adapter for the real hardware and provides services to the device executor through the HAL API. A single driver can create multiple devices.
### wrapper: provides API wrappers for different frameworks. Both the Caffe API wrapper and the TensorFlow API wrapper work now.
## Front end: importing models in other formats
The serializer module loads a whole model file stored on disk (for example, a TensorFlow model) and creates the Tengine in-memory IR, the StaticGraph.
The serializer module can also store a StaticGraph to disk in a specific format. However, the current version of this document describes only the loading process, which is more important than the storing process.
### Load Interface
```cpp
unsigned int GetFileNum(void);
// Returns the number of files that make up the model.

bool LoadModel(const std::vector<std::string>& file_list, StaticGraph* static_graph);
// Converts the model files into a StaticGraph.
```
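For illustration, here is a minimal sketch of a serializer for a hypothetical single-file format. The `GetFileNum`/`LoadModel` signatures and the `StaticGraph` type come from the interface above; the `Serializer` base class and `DummySerializer` are assumptions made for the example, not Tengine's actual class hierarchy.

```cpp
#include <string>
#include <vector>

struct StaticGraph;  // Tengine's in-memory IR (opaque in this sketch)

// Hypothetical base class mirroring the Load Interface above.
class Serializer {
 public:
  virtual unsigned int GetFileNum(void) = 0;
  virtual bool LoadModel(const std::vector<std::string>& file_list,
                         StaticGraph* static_graph) = 0;
  virtual ~Serializer() {}
};

// Illustrative serializer for a single-file model format.
class DummySerializer : public Serializer {
 public:
  // This model format consists of exactly one file.
  unsigned int GetFileNum(void) override { return 1; }

  // Parse the file(s) and populate the StaticGraph.
  bool LoadModel(const std::vector<std::string>& file_list,
                 StaticGraph* static_graph) override {
    if (file_list.size() != GetFileNum() || static_graph == nullptr)
      return false;
    // A real serializer would parse file_list[0] here and build the
    // nodes and tensors of *static_graph.
    return true;
  }
};
```

Supporting a new, customized format then amounts to providing such a class; the rest of the loading pipeline only needs these two calls.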
### gatherV2 ops
Gather extracts the elements at a given set of indices from a tensor (see the C++ sketch below). TensorFlow's `embedding_lookup`, a generalization of `tf.gather`, is reproduced after the sketch for reference.
Reference: https://baijiahao.baidu.com/s?id=1602069319915188130&wfr=spider&for=pc
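To make the semantics concrete, here is a minimal self-contained C++ sketch of gather along axis 0 for a row-major 2-D tensor. `GatherRows` and everything else in it are invented for illustration and are not Tengine code.

```cpp
#include <cstdio>
#include <vector>

// Gather along axis 0: for each index in `ids`, copy that row of
// `params` (rows x cols, row-major) into the output. The output shape
// is ids.size() x cols, i.e. shape(ids) + shape(params)[1:].
std::vector<float> GatherRows(const std::vector<float>& params,
                              int rows, int cols,
                              const std::vector<int>& ids) {
  std::vector<float> out;
  out.reserve(ids.size() * cols);
  for (int id : ids) {
    if (id < 0 || id >= rows) continue;  // skip out-of-range ids here
    out.insert(out.end(), params.begin() + id * cols,
               params.begin() + (id + 1) * cols);
  }
  return out;
}

int main() {
  // params is a 4x2 tensor; gather rows 2, 0 and 3.
  std::vector<float> params = {0, 1, 10, 11, 20, 21, 30, 31};
  std::vector<float> out = GatherRows(params, 4, 2, {2, 0, 3});
  for (float v : out) std::printf("%g ", v);  // prints: 20 21 0 1 30 31
  std::printf("\n");
  return 0;
}
```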
```python
def embedding_lookup(
    params,
    ids,
    partition_strategy="mod",
    name=None,
    validate_indices=True,  # pylint: disable=unused-argument
    max_norm=None):
  """Looks up `ids` in a list of embedding tensors.

  This function is used to perform parallel lookups on the list of
  tensors in `params`. It is a generalization of
  @{tf.gather}, where `params` is
  interpreted as a partitioning of a large embedding tensor. `params` may be
  a `PartitionedVariable` as returned by using `tf.get_variable()` with a
  partitioner.

  If `len(params) > 1`, each element `id` of `ids` is partitioned between
  the elements of `params` according to the `partition_strategy`.
  In all strategies, if the id space does not evenly divide the number of
  partitions, each of the first `(max_id + 1) % len(params)` partitions will
  be assigned one more id.

  If `partition_strategy` is `"mod"`, we assign each id to partition
  `p = id % len(params)`. For instance,
  13 ids are split across 5 partitions as:
  `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

  If `partition_strategy` is `"div"`, we assign ids to partitions in a
  contiguous manner. In this case, 13 ids are split across 5 partitions as:
  `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

  The results of the lookup are concatenated into a dense
  tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.

  Args:
    params: A single tensor representing the complete embedding tensor,
      or a list of P tensors all of same shape except for the first dimension,
      representing sharded embedding tensors. Alternatively, a
      `PartitionedVariable`, created by partitioning along dimension 0. Each
      element must be appropriately sized for the given `partition_strategy`.
    ids: A `Tensor` with type `int32` or `int64` containing the ids to be looked
      up in `params`.
    partition_strategy: A string specifying the partitioning strategy, relevant
      if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
      is `"mod"`.
    name: A name for the operation (optional).
    validate_indices: DEPRECATED. If this operation is assigned to CPU, values
      in `indices` are always validated to be within range. If assigned to GPU,
      out-of-bound indices result in safe but unspecified behavior, which may
      include raising an error.
    max_norm: If provided, embedding values are l2-normalized to the value of
      max_norm.

  Returns:
    A `Tensor` with the same type as the tensors in `params`.

  Raises:
    ValueError: If `params` is empty.
  """
  return _embedding_lookup_and_transform(
      params=params,
      ids=ids,
      partition_strategy=partition_strategy,
      name=name,
      max_norm=max_norm,
      transform_fn=None)
```
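The `"mod"`/`"div"` split described in the docstring is easy to verify by hand. The following self-contained C++ sketch (all names invented for the example) reproduces the 13-ids-across-5-partitions case from the docstring:

```cpp
#include <cstdio>
#include <vector>

int main() {
  const int num_ids = 13, num_parts = 5;
  std::vector<std::vector<int>> mod_parts(num_parts), div_parts(num_parts);

  // "mod": id goes to partition id % num_parts.
  for (int id = 0; id < num_ids; ++id)
    mod_parts[id % num_parts].push_back(id);

  // "div": ids are assigned contiguously; the first
  // (num_ids % num_parts) partitions each get one extra id.
  int base = num_ids / num_parts, extra = num_ids % num_parts;
  for (int id = 0, p = 0; p < num_parts; ++p) {
    int size = base + (p < extra ? 1 : 0);
    for (int k = 0; k < size; ++k)
      div_parts[p].push_back(id++);
  }

  // Prints:
  // mod: [0 5 10] [1 6 11] [2 7 12] [3 8] [4 9]
  // div: [0 1 2] [3 4 5] [6 7 8] [9 10] [11 12]
  for (auto* parts : {&mod_parts, &div_parts}) {
    std::printf(parts == &mod_parts ? "mod:" : "div:");
    for (const auto& part : *parts) {
      std::printf(" [");
      for (size_t k = 0; k < part.size(); ++k)
        std::printf(k ? " %d" : "%d", part[k]);
      std::printf("]");
    }
    std::printf("\n");
  }
  return 0;
}
```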