Key point: the last sentence. This is not quite the same as a C++ struct: in C++ a struct may have several constructors, but the struct itself keeps one fixed layout and its members can be default-initialized. Here, an ADT is best understood as a tagged union (a C++ sketch of the analogy follows the example below). For example:
Numbers is an ADT with three constructors:
# Defines an ADT named "Numbers"
data Numbers {
  Empty : () -> Numbers
  Single : (Tensor[(), int32]) -> Numbers
  Pair : (Tensor[(), int32], Tensor[(), int32]) -> Numbers
}

# A Numbers value can be produced using an Empty, Single, or Pair
# constructor, each with a signature given above
def @sum(%n : Numbers[]) -> Tensor[(), int32] {
  # The match expression branches on the constructor that was
  # used to produce %n. The variables in each case are bound
  # if the constructor matches that used for %n
  match(%n) {
    case Empty() { 0 }
    case Single(x) { x }
    case Pair(x, y) { x + y }
  }
}

@sum(Empty())     # evaluates to 0
@sum(Single(3))   # evaluates to 3
@sum(Pair(5, 6))  # evaluates to 11
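To make the tagged-union reading concrete, here is a rough C++17 counterpart of Numbers built on std::variant. This is only a sketch of the analogy, not TVM code; all the names are illustrative.

#include <cstdint>
#include <variant>

// One struct per constructor; the variant's active alternative is the tag.
struct Empty {};
struct Single { int32_t value; };
struct Pair { int32_t first, second; };

using Numbers = std::variant<Empty, Single, Pair>;

// The Relay match expression corresponds to dispatching on the active tag.
int32_t Sum(const Numbers& n) {
  if (auto* s = std::get_if<Single>(&n)) return s->value;
  if (auto* p = std::get_if<Pair>(&n)) return p->first + p->second;
  return 0;  // Empty
}

Sum(Numbers{Pair{5, 6}}) evaluates to 11, mirroring @sum(Pair(5, 6)) above.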
Because ADTs are identified by name (nominal typing), two ADTs with structurally identical constructors are still distinct types as far as the type system is concerned.
For example, in the code below, calling @sum(Empty2()) results in a type error:
# structurally identical constructors to Numbers
data Numbers2 {
  Empty2 : () -> Numbers2
  Single2 : (Tensor[(), int32]) -> Numbers2
  Pair2 : (Tensor[(), int32], Tensor[(), int32]) -> Numbers2
}
# the below results in a type error because Numbers2
# is a distinct type from Numbers
# fn() { @sum(Empty2()) }
Relay does not define any new Type classes of its own; it reuses the definitions from tvm/ir directly through the aliases below, so they are not described in detail here. A small construction sketch follows the alias list.
// namespace update for backward compact
// will be removed later.
using AnyNode = tvm::tir::AnyNode;
using Any = tvm::tir::Any;
using Kind = TypeKind;
using Type = tvm::Type;
using TypeNode = tvm::TypeNode;
using TypeVar = tvm::TypeVar;
using TypeVarNode = tvm::TypeVarNode;
using GlobalTypeVar = tvm::GlobalTypeVar;
using GlobalTypeVarNode = tvm::GlobalTypeVarNode;
using TupleType = tvm::TupleType;
using TupleTypeNode = tvm::TupleTypeNode;
using TypeConstraint = tvm::TypeConstraint;
using TypeConstraintNode = tvm::TypeConstraintNode;
using FuncType = tvm::FuncType;
using FuncTypeNode = tvm::FuncTypeNode;
using IncompleteType = tvm::IncompleteType;
using IncompleteTypeNode = tvm::IncompleteTypeNode;
using RelayRefType = tvm::RelayRefType;
using RelayRefTypeNode = tvm::RelayRefTypeNode;
using TensorType = tvm::TensorType;
using TensorTypeNode = tvm::TensorTypeNode;
using TypeCall = tvm::TypeCall;
using TypeCallNode = tvm::TypeCallNode;
using TypeRelation = tvm::TypeRelation;
using TypeRelationNode = tvm::TypeRelationNode;
using TypeRelationFn = tvm::TypeRelationFn;
using TypeReporter = tvm::TypeReporter;
using TypeReporterNode = tvm::TypeReporterNode;
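As a quick illustration that these really are the tvm/ir classes, here is a minimal sketch that builds the Relay type Tensor[(2, 3), float32] directly. MakeTensorType is a made-up helper name; the classes and factory methods are real TVM API:

#include <tvm/ir/expr.h>
#include <tvm/relay/type.h>

tvm::relay::TensorType MakeTensorType() {
  using namespace tvm;
  // Shape dimensions are PrimExprs, so static extents are IntImm nodes.
  Array<PrimExpr> shape = {IntImm(DataType::Int(32), 2),
                           IntImm(DataType::Int(32), 3)};
  return relay::TensorType(shape, DataType::Float(32));
}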
2. Expr
Every Expr in Relay inherits from the RelayExpr class defined in tvm/ir:
namespace tvm {
namespace relay {
using Expr = tvm::RelayExpr;
The Exprs defined by Relay are listed below:
Constant: a constant (tensor) value
Tuple: a tuple made up of N Exprs
Var: a local variable, one of the most common nodes in a compute graph
Call: an operator call, one of the most common nodes in a compute graph
Let: a let binding
If: a conditional expression
TupleGetItem: extracts one field from a Tuple
RefCreate: creates a mutable reference
RefRead: reads the value held by a reference
RefWrite: writes a new value into a reference
TempExpr: a temporary expression node used internally by rewriting passes
Take Tuple as an example:
class Tuple : public Expr {
 public:
  /*!
   * \brief The constructor
   * \param fields The fields of a tuple.
   * \param span The source span of the expression.
   */
  TVM_DLL explicit Tuple(tvm::Array<relay::Expr> fields, Span span = Span());

  TVM_DEFINE_OBJECT_REF_METHODS(Tuple, RelayExpr, TupleNode);
  TVM_DEFINE_OBJECT_REF_COW_METHOD(TupleNode);
};
Take VarNode, the container object behind Var, as another example:
class VarNode : public ExprNode {
 public:
  /*!
   * \brief The unique identifier of the Var.
   *
   * vid will be preserved for the same Var during type inference
   * and other rewritings, while the VarNode might be recreated
   * to attach additional information.
   * This property can be used to keep track of parameter Var
   * information across passes.
   */
  Id vid;
  /*!
   * \brief type annotation of the variable.
   * This field records user provided type annotation of the Var.
   * This field is optional and can be None.
   */
  Type type_annotation;
  // ...
};
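Putting these classes together, here is a minimal sketch of hand-assembling a small Relay function, roughly fn (%x, %y) { (add(%x, %y), %x) }. MakeExample is a made-up name; the constructors follow the signatures shown above:

#include <tvm/ir/op.h>
#include <tvm/relay/expr.h>
#include <tvm/relay/function.h>
#include <tvm/relay/type.h>

tvm::relay::Function MakeExample() {
  using namespace tvm;
  using namespace tvm::relay;
  auto scalar_i32 = TensorType({}, DataType::Int(32));
  // Each Var allocates a fresh Id internally (see the note on Id below).
  auto x = Var("x", scalar_i32);
  auto y = Var("y", scalar_i32);
  auto sum = Call(Op::Get("add"), {x, y});  // Call: an operator invocation
  auto body = Tuple({sum, x});              // Tuple: pack several Exprs
  // Null return type, to be filled in by type inference.
  return Function({x, y}, body, Type(), {});
}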
Id can be understood simply as a string serving as a globally unique identifier. You can patch the code with a bit of debug printing to see the Ids that get generated; a sketch of that kind of printing follows.
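This is illustrative only; DumpVarId is a made-up helper, but vid and name_hint are the real fields:

#include <tvm/relay/expr.h>
#include <iostream>

// Print the name_hint string stored behind a Var's Id.
void DumpVarId(const tvm::relay::Var& var) {
  // vid is an Id; its IdNode carries a String name_hint.
  std::cout << "vid: " << var->vid->name_hint.c_str() << std::endl;
}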
A related class worth noting is DataType, which (as its own doc comment says) is a thin wrapper over DLPack's DLDataType:
/*!
 * \brief Runtime primitive data type.
 *
 * This class is a thin wrapper of DLDataType.
 * We also make use of DataType in compiler to store quick hint
 */
class DataType {
 public:
  /*!
   * \brief Type code for the DataType.
   *
   * DLPack consistency:
   * 1) kInt is consistent with kDLInt
   * 2) kUInt is consistent with kDLUInt
   * 3) kFloat is consistent with kDLFloat
   */
  enum TypeCode {
    kInt = kDLInt,
    kUInt = kDLUInt,
    kFloat = kDLFloat,
    kHandle = TVMArgTypeCode::kTVMOpaqueHandle,
    kBFloat = kDLBfloat,
    kCustomBegin = 129
  };
  /*! \brief default constructor */
  DataType() { data_ = DataType::Void(); }
  /*!
   * \brief Constructor
   * \param dtype The DLDataType
   */
  explicit DataType(DLDataType dtype) : data_(dtype) {}
  // ...
};
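A tiny usage sketch: the Float/Int factory methods and the code()/bits()/lanes() accessors are real TVM API, while the program around them is illustrative.

#include <tvm/runtime/data_type.h>
#include <iostream>

int main() {
  using tvm::runtime::DataType;
  DataType f32 = DataType::Float(32);   // float32, 1 lane
  DataType i8x4 = DataType::Int(8, 4);  // int8 vectorized to 4 lanes
  std::cout << "f32: code=" << f32.code() << " bits=" << f32.bits()
            << " lanes=" << f32.lanes() << std::endl;
  std::cout << "i8x4 lanes=" << i8x4.lanes() << std::endl;
  // The thin-wrapper part: DataType converts to DLDataType and back.
  DLDataType dl = f32;
  DataType round_trip(dl);
  std::cout << "round trip equal: " << (round_trip == f32) << std::endl;
  return 0;
}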