# @wetron/executorch

A synchronous parser that reads ExecuTorch `.pte` flatbuffer programs into a `ModelGraph` for model-viewer use.
```typescript
import { parseExecutorch } from "@wetron/executorch";

// parseExecutorch(bytes: Uint8Array): ModelGraph
const graph = parseExecutorch(bytes);
```

Parsing is synchronous. The format is detected by the `ET12` flatbuffer file identifier at bytes 4-7.
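The detection step amounts to a plain magic check. A minimal sketch (the helper name is hypothetical, not part of the package):

```typescript
// A .pte file is a flatbuffer whose file identifier "ET12" sits at
// bytes 4-7; bytes 0-3 hold the flatbuffer root table offset.
function looksLikeExecutorch(bytes: Uint8Array): boolean {
  if (bytes.length < 8) return false;
  return String.fromCharCode(bytes[4], bytes[5], bytes[6], bytes[7]) === "ET12";
}
```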
How the graph is assembled:

- Program table -> first `ExecutionPlan`
- Operator entries -> op type string (e.g. `aten.conv2d.default`)
- `EValue` list scanned for Tensor entries -> shape and dtype for `tensorShapes`
- Chain instructions -> `KernelCall` entries -> operator index resolved to op type
- `ExecutionPlan.inputs` and `.outputs` -> graph inputs and outputs

Tensor dtypes are mapped from the ExecuTorch scalar type enum:

| ExecuTorch enum | dtype |
|---|---|
| 0 | uint8 |
| 1 | int8 |
| 2 | int16 |
| 3 | int32 |
| 4 | int64 |
| 5 | float16 |
| 6 | float32 |
| 7 | float64 |
| 11 | bool |
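The table above reduces to a simple lookup. A sketch, with illustrative names rather than the package's exports:

```typescript
// ExecuTorch scalar type enum value -> dtype string, per the table above.
const SCALAR_TYPE_TO_DTYPE: Record<number, string> = {
  0: "uint8", 1: "int8", 2: "int16", 3: "int32", 4: "int64",
  5: "float16", 6: "float32", 7: "float64", 11: "bool",
};

function dtypeOf(scalarType: number): string {
  // Unlisted enum values fall through to a tagged "unknown" string.
  return SCALAR_TYPE_TO_DTYPE[scalarType] ?? `unknown(${scalarType})`;
}
```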
`KernelCall` arguments are categorised by whether their `EValue` index was already produced by a prior instruction: an index that was already produced (or is a plan input) is treated as an input of the call, and an index seen for the first time is treated as an output.
This is a heuristic. It matches the common SSA-style ExecuTorch programs, where each value index is written exactly once, and it tracks plan inputs as "pre-produced". It is approximate for programs that use out-variant kernels (e.g. `aten::add.out(a, b, *, out)`): `out` is passed in the args list as a pre-allocated buffer and may be classified as an output here even though the kernel mutates it in place.

For most viewer purposes the result is correct enough to walk the operator chain, but the produced IR should not be relied on for graph rewriting.
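The produced-index heuristic can be sketched as follows; the types and function name are illustrative, not the library's internals:

```typescript
interface KernelCall { args: number[] }

// Seed the produced set with the plan inputs, then walk calls in order:
// an arg index already in the set is an input; a first-seen index is
// recorded as an output and added to the set.
function classifyArgs(planInputs: number[], calls: KernelCall[]) {
  const produced = new Set(planInputs);
  return calls.map((call) => {
    const inputs: number[] = [];
    const outputs: number[] = [];
    for (const idx of call.args) {
      if (produced.has(idx)) {
        inputs.push(idx);
      } else {
        outputs.push(idx);
        produced.add(idx);
      }
    }
    return { inputs, outputs };
  });
}
```

Note how an out-variant kernel's pre-allocated `out` buffer, never having been produced before, lands in `outputs` — which is exactly the approximation described above.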
Limitations (surfaced as parser warnings):

- `ModelGraph.initializers` is empty and `ModelGraph.weights` is absent: constant buffer payloads are not surfaced.
- Flatbuffer decoding uses the `flatbuffers` npm package with raw vtable offset decoding rather than schema-generated accessors.
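Raw vtable decoding follows the standard flatbuffers table layout. A minimal sketch of reading one field's position (the function name is hypothetical; the vtable layout itself is part of the flatbuffers wire format):

```typescript
// A flatbuffer table starts with a signed 32-bit offset back to its
// vtable. The vtable holds: u16 vtable size, u16 table size, then one
// u16 per field giving that field's offset from the table start
// (0 = field absent, use the schema default).
function readFieldOffset(buf: DataView, tablePos: number, fieldId: number): number {
  const vtablePos = tablePos - buf.getInt32(tablePos, true);
  const vtableSize = buf.getUint16(vtablePos, true);
  const slot = 4 + fieldId * 2; // skip the two u16 header fields
  if (slot >= vtableSize) return 0; // field not present in this vtable
  const fieldOffset = buf.getUint16(vtablePos + slot, true);
  return fieldOffset === 0 ? 0 : tablePos + fieldOffset;
}
```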