Graph & TensorFlow

Graph building, batched operations, message passing, and the TensorFlow FFI backend for @casys/shgat. Published on JSR.

GraphBuilder manages the hypergraph structure. It handles unified node registration, legacy tool/capability nodes, incidence matrix construction, and index management.

import { GraphBuilder } from "@casys/shgat";
const builder = new GraphBuilder();
// Register unified nodes
builder.registerNode({ id: "tool-a", embedding: embA, children: [], level: 0 });
builder.registerNode({ id: "tool-b", embedding: embB, children: [], level: 0 });
builder.registerNode({ id: "cap-1", embedding: embC, children: ["tool-a", "tool-b"], level: 0 });
// Finalize: recompute levels and rebuild indices
builder.finalizeNodes();
| Method | Signature | Description |
| --- | --- | --- |
| registerNode() | (node: Node) => void | Register a unified node. Also populates legacy maps for backward compatibility. |
| finalizeNodes() | () => void | Recompute levels via DFS and rebuild indices. Call once after all registrations. |
| getNode() | (id: string) => Node \| undefined | Get a unified node by ID. |
| getNodes() | () => Map<string, Node> | Get all unified nodes. |
| getNodesByLevel() | (level: number) => Node[] | Get nodes at a specific hierarchy level. |
| getMaxLevel() | () => number | Maximum hierarchy level from unified nodes. |
| getNodeIdsByLevel() | () => Map<number, Set<string>> | Node IDs grouped by level. |
| getDescendants() | (id: string) => string[] | All transitive descendant IDs of a node. |
| hasNode() | (id: string) => boolean | Check if a unified node exists. |
| getNodeCount() | () => number | Number of unified nodes. |
| getNodeIds() | () => string[] | All unified node IDs. |
| clearNodes() | () => void | Clear all unified nodes. |
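
For example, after the registrations above are finalized, the accessors can be queried like this (IDs reused from the sketch above; result order may vary):

console.log(builder.getNode("cap-1")?.level); // 1: computed by finalizeNodes()
console.log(builder.getNodesByLevel(0).map((n) => n.id)); // ["tool-a", "tool-b"]
console.log(builder.getDescendants("cap-1")); // ["tool-a", "tool-b"]
console.log(builder.getMaxLevel()); // 1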

buildMultiLevelIncidence() builds incidence structures for all levels of an n-SuperHyperGraph.

import { buildMultiLevelIncidence, type MultiLevelIncidence } from "@casys/shgat";
const incidence: MultiLevelIncidence = buildMultiLevelIncidence(graphBuilder);

Returns a MultiLevelIncidence containing incidence matrices for each level transition, tool-to-capability mappings, and capability-to-capability parent/child relationships.

computeHierarchyLevels() computes hierarchy levels for capabilities using a topological sort. It throws HierarchyCycleError when the hierarchy contains a cycle.

import { computeHierarchyLevels, HierarchyCycleError, type HierarchyResult } from "@casys/shgat";
try {
  const result: HierarchyResult = computeHierarchyLevels(graphBuilder);
  console.log(`Max level: ${result.maxHierarchyLevel}`);
  console.log("Levels:", [...result.hierarchyLevels.entries()]);
} catch (e) {
  if (e instanceof HierarchyCycleError) {
    console.error("Cycle detected in capability hierarchy");
  }
}

generateDefaultToolEmbedding() generates a deterministic default embedding for a tool ID; it is used when no embedding is provided.

import { generateDefaultToolEmbedding } from "@casys/shgat";
const embedding = generateDefaultToolEmbedding("psql_query", 1024);

Node is the recommended node type; hierarchy is implicit in the structure.

interface Node {
  id: string; // Unique identifier
  embedding: number[]; // Embedding vector (e.g., BGE-M3 1024-dim)
  children: string[]; // Child node IDs. Empty = leaf (level 0)
  level: number; // Hierarchy level. Computed at graph construction.
}
  • children.length === 0 — leaf node (level 0)
  • children.length > 0 — composite node (level = 1 + max child level)
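
For example, the level computation plays out as follows for the nodes registered earlier:

// tool-a: children []                   -> leaf, level 0
// tool-b: children []                   -> leaf, level 0
// cap-1:  children ["tool-a", "tool-b"] -> 1 + max(0, 0) = level 1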
The legacy node types remain available for backward compatibility:

/** @deprecated Use Node with children: [] */
interface ToolNode {
  id: string;
  embedding: number[];
  toolFeatures?: ToolGraphFeatures;
}

/** @deprecated Use Node with children: [...] */
interface CapabilityNode {
  id: string;
  embedding: number[];
  members: Member[];
  hierarchyLevel: number;
  successRate: number;
  toolsUsed?: string[];
  children?: string[];
  parents?: string[];
  hypergraphFeatures?: HypergraphFeatures;
}

type Member =
  | { type: "tool"; id: string }
  | { type: "capability"; id: string };

Helper functions for working with Node maps directly (without GraphBuilder).

| Function | Signature | Description |
| --- | --- | --- |
| buildGraph() | (nodes: Node[]) => Map<string, Node> | Build a graph map from an array, computing levels via DFS. |
| computeAllLevels() | (nodes: Map<string, Node>) => void | Compute levels for all nodes in-place using DFS with memoization. |
| groupNodesByLevel() | (nodes: Map<string, Node>) => Map<number, Node[]> | Group nodes by their hierarchy level. |
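
A minimal sketch of these helpers, assuming they and the Node type are exported from the package root like the rest of the API:

import { buildGraph, groupNodesByLevel, type Node } from "@casys/shgat";

// Tiny 4-dim embeddings for illustration only (real vectors are e.g. 1024-dim)
const emb = (): number[] => [0.1, 0.2, 0.3, 0.4];
const graph: Map<string, Node> = buildGraph([
  { id: "tool-a", embedding: emb(), children: [], level: 0 },
  { id: "cap-1", embedding: emb(), children: ["tool-a"], level: 0 }, // level recomputed via DFS
]);
console.log(graph.get("cap-1")?.level); // 1
const byLevel = groupNodesByLevel(graph);
console.log(byLevel.get(1)?.map((n) => n.id)); // ["cap-1"]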
Incidence matrix helpers:

| Function | Signature | Description |
| --- | --- | --- |
| buildIncidenceMatrix() | (nodes, childLevel, parentLevel) => { matrix, childIndex, parentIndex } | Build incidence matrix between two levels. A[child][parent] = 1 if child is in parent. |
| buildAllIncidenceMatrices() | (nodes) => Map<number, { matrix, childIndex, parentIndex }> | Build matrices for all level transitions (0-to-1, 1-to-2, etc.). |
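
A sketch of the level-0-to-level-1 matrix for the small graph above; treating childIndex and parentIndex as ID-to-row/column maps is an assumption (mirroring indexMap in BatchedEmbeddings below):

import { buildIncidenceMatrix } from "@casys/shgat";

const { matrix, childIndex, parentIndex } = buildIncidenceMatrix(graph, 0, 1);
// A[child][parent] = 1 when the child belongs to the parent
console.log(matrix[childIndex.get("tool-a")!][parentIndex.get("cap-1")!]); // 1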

BLAS-optimized operations for the unified Node type. Pre-compute the graph structure once and reuse it for all forward passes.

import { precomputeGraphStructure, type BatchedGraphStructure } from "@casys/shgat";
const structure: BatchedGraphStructure = precomputeGraphStructure(nodeMap);
interface BatchedGraphStructure {
  nodesByLevel: Map<number, Node[]>;
  embeddingsByLevel: Map<number, BatchedEmbeddings>;
  incidenceMatrices: Map<number, { matrix, childIndex, parentIndex }>;
  maxLevel: number;
  embDim: number;
}
Batched getters:

| Function | Signature | Description |
| --- | --- | --- |
| batchGetEmbeddings() | (nodes, ids?) => BatchedEmbeddings | Get embeddings as [N x dim] matrix for batched BLAS ops. |
| batchGetEmbeddingsByLevel() | (nodes, level) => BatchedEmbeddings | Get embeddings for nodes at a specific level only. |
| batchGetNodes() | (nodes, ids) => Node[] | Get multiple nodes by ID in a single pass. |
interface BatchedEmbeddings {
  matrix: number[][]; // [N x dim] embedding matrix
  ids: string[]; // Node IDs in row order
  indexMap: Map<string, number>; // ID to row index
}
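
For instance, to pull all embeddings into a single matrix (nodeMap is the same Map<string, Node> passed to precomputeGraphStructure() above):

import { batchGetEmbeddings } from "@casys/shgat";

const batch = batchGetEmbeddings(nodeMap);
console.log(batch.matrix.length); // N: one row per node
console.log(batch.indexMap.get(batch.ids[0])); // 0: IDs are in row order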
Batched message passing:

| Function | Signature | Description |
| --- | --- | --- |
| batchedForward() | (structure, W_up, W_down, ...) => BatchedForwardResult | Multi-level message passing using matrix operations. |
| batchedUpwardPass() | (structure, ...) => Map<number, number[][]> | Upward aggregation: children to parents via incidence matrices. |
| batchedDownwardPass() | (structure, ...) => Map<number, number[][]> | Downward propagation: parents to children. |
interface BatchedForwardResult {
  E: Map<number, number[][]>; // Final embeddings per level
  attentionUp: Map<number, number[][]>; // Upward attention weights
  attentionDown: Map<number, number[][]>; // Downward attention weights
}
Batched scoring:

| Function | Signature | Description |
| --- | --- | --- |
| batchScoreAllNodes() | (structure, intent, headParams, config) => NodeScore[] | Score all nodes in a single batched operation. |
| precomputeAllK() | (structure, headParams, config) => BatchedScoringCache | Pre-compute K vectors for all heads across all levels. |
| batchedKHeadScoring() | (cache, intent, config) => NodeScore[] | Score using pre-computed K vectors. |
| batchedBackwardKHead() | (cache, ...) => gradients | Compute gradients for K-head parameters. |
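
A hedged sketch of the two-step scoring path; structure comes from precomputeGraphStructure() above, while headParams, config, and intentEmbedding are placeholders for values produced elsewhere (see the parameter initialization utilities at the end of this page):

import { precomputeAllK, batchedKHeadScoring } from "@casys/shgat";

// Pre-compute K vectors once per parameter update...
const cache = precomputeAllK(structure, headParams, config);
// ...then score any number of intents against the cached K vectors.
const scores = batchedKHeadScoring(cache, intentEmbedding, config);
console.log(scores.length); // one NodeScore per node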

MultiLevelOrchestrator coordinates multi-level message passing for n-SuperHyperGraphs. It supports three phases.

Aggregate tool embeddings into capability embeddings using an attention-weighted sum over the incidence matrix.

Level 0 (tools) ──attention──> Level 1 (capabilities)
Level 1 ──attention──> Level 2 (meta-capabilities)
...
import { MultiLevelOrchestrator, type CooccurrenceEntry, type V2VParams } from "@casys/shgat";
const orchestrator = new MultiLevelOrchestrator(trainingMode, { residualWeight: 0.3 });
orchestrator.setCooccurrenceData(cooccurrenceEntries);

For large graphs, sparse message passing avoids dense matrix overhead.

import {
  buildSparseConnectivity,
  sparseMPForward,
  sparseMPBackward,
  applySparseMPGradients,
} from "@casys/shgat";
const connectivity = buildSparseConnectivity(graphBuilder);
const { enrichedEmbeddings, cache } = sparseMPForward(embeddings, connectivity, params);

@casys/shgat uses libtensorflow via Deno FFI for native C performance. No WASM, no npm TensorFlow packages.

initTensorFlow() initializes the TensorFlow FFI backend. It must be called once before any tensor operations.

import { initTensorFlow, isInitialized, getBackend } from "@casys/shgat";
const backend = await initTensorFlow(); // "ffi"
console.log(isInitialized()); // true
console.log(getBackend()); // "ffi"
Tensor creation:

| Function | Signature | Description |
| --- | --- | --- |
| tensor() | (data, shape?) => TFTensor | Create a tensor from nested arrays or flat data + shape. |
| zeros() | (shape) => TFTensor | Tensor filled with zeros. |
| ones() | (shape) => TFTensor | Tensor filled with ones. |
| variable() | (tensor, name?) => Variable | Wrap a tensor as a trainable variable. |
Core operations:

| Function | Description |
| --- | --- |
| matMul(a, b) | Matrix multiplication. |
| softmax(t, axis?) | Softmax activation. |
| gather(t, indices, axis?) | Gather slices by index. |
| unsortedSegmentSum(t, segmentIds, numSegments) | Segment sum (used for scatter/gather gradients). |
| add(a, b) / sub(a, b) / mul(a, b) / div(a, b) | Element-wise arithmetic. |
| transpose(t) | Matrix transpose. |
| reshape(t, shape) | Reshape tensor. |
| concat(tensors, axis) | Concatenate tensors along axis. |
| slice(t, begin, size) | Slice a tensor. |
| expandDims(t, axis) | Add a dimension. |
| squeeze(t, axis?) | Remove dimensions of size 1. |
| clipByValue(t, min, max) | Clip values to range. |
Activations:

| Function | Description |
| --- | --- |
| relu(t) | Rectified linear unit. |
| leakyRelu(t, alpha?) | Leaky ReLU with configurable slope. |
| elu(t) | Exponential linear unit. |
| sigmoid(t) | Sigmoid activation. |
| tanh(t) | Hyperbolic tangent. |
Reductions and element-wise math:

| Function | Description |
| --- | --- |
| sum(t, axis?) | Sum reduction. |
| mean(t, axis?) | Mean reduction. |
| max(t, axis?) | Max reduction. |
| square(t) | Element-wise square. |
| sqrt(t) | Element-wise square root. |
| exp(t) | Element-wise exponential. |
| log(t) | Element-wise natural log. |
| neg(t) | Element-wise negation. |
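
A small composition sketch, assuming these ops are importable from the package root like tensor() and variable() (shapes chosen for illustration):

import { initTensorFlow, tensor, matMul, relu, softmax, sum, dispose } from "@casys/shgat";

await initTensorFlow();
const a = tensor([[1, 2], [3, 4]]);
const w = tensor([[0.5, 0], [0, 0.5]]);
const h = relu(matMul(a, w)); // [2 x 2]
const p = softmax(h); // row-wise softmax assumed for the default axis
const total = sum(p); // 2: each of the two rows sums to 1
dispose([a, w, h, p, total]); // FFI tensors are freed manually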
Memory management:

| Function | Signature | Description |
| --- | --- | --- |
| tidy() | <T>(fn: () => T) => T | Execute function scope. FFI tensors must still be manually disposed. |
| dispose() | (t: TFTensor \| TFTensor[]) => void | Dispose tensor(s) to free memory. |
| memory() | () => { numTensors, numBytes } | Memory usage info. |
| logMemory() | () => void | Log memory usage to console. |
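
And a minimal disposal-tracking sketch (byte counts vary by backend):

import { tensor, dispose, memory } from "@casys/shgat";

const before = memory().numTensors;
const t = tensor([1, 2, 3, 4], [2, 2]); // flat data + explicit shape
console.log(memory().numTensors - before); // 1
dispose(t);
console.log(memory().numTensors - before); // 0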

Variable wraps a tensor as a trainable variable, with assign() for in-place updates.

import { tensor, variable, Variable } from "@casys/shgat";
const W = variable(tensor([[0.1, 0.2], [0.3, 0.4]]), "W_projection");
console.log(W.shape); // [2, 2]
const current = W.read(); // Get underlying tensor
W.assign(newTensor); // Replace with new values (disposes old)
W.dispose(); // Free memory

The full low-level FFI bindings are available via the tff namespace.

import { tff } from "@casys/shgat";
console.log(tff.version()); // libtensorflow version
console.log(tff.isAvailable()); // true if libtensorflow loaded
tff.close(); // Close FFI handle

The math namespace provides pure TypeScript math functions (no TensorFlow dependency). They are used internally and are available for custom computations.

import { math } from "@casys/shgat";
const dot = math.dotProduct(vecA, vecB);
const cos = math.cosineSimilarity(vecA, vecB);
const result = math.matVecMul(matrix, vector);
const product = math.matmul(matA, matB);
const sm = math.softmax(logits);
const lrelu = math.leakyRelu(values, 0.2);

The math namespace includes:

  • Vector operations: dotProduct, cosineSimilarity, vectorAdd, vectorScale, vectorNorm, normalize
  • Matrix operations: matVecMul, matmul, matmulTranspose, transpose
  • Activations: softmax, leakyRelu, sigmoid, relu
  • Statistics: mean, variance, standardDeviation
  • BLAS acceleration: automatically initialized on module load for optimized linear algebra

registerUnsortedSegmentSumKernel() registers a custom WASM kernel for UnsortedSegmentSum. This is required for gather gradient support on the WASM backend (it is not needed with FFI/libtensorflow).

import {
  registerUnsortedSegmentSumKernel,
  isUnsortedSegmentSumRegistered,
} from "@casys/shgat";

if (!isUnsortedSegmentSumRegistered()) {
  registerUnsortedSegmentSumKernel();
}
Parameter initialization:

| Function | Signature | Description |
| --- | --- | --- |
| initializeParameters() | (config: SHGATConfig) => SHGATParams | Initialize all SHGAT parameters (head params, fusion weights, intent projection). |
| initializeLevelParameters() | (config, embDim, level) => LevelParams | Initialize parameters for a single hierarchy level. |
| countParameters() | (params: SHGATParams) => number | Count total trainable parameters. |
| getAdaptiveHeadsByGraphSize() | (leaves, composites, maxLevel, preserveDim, embDim) => { numHeads, hiddenDim, headDim } | Compute adaptive head count based on graph size. |
| seedRng() | (seed: number) => void | Seed the RNG for reproducible parameter initialization. |
import { initializeParameters, countParameters, seedRng, type SHGATParams } from "@casys/shgat";
seedRng(42); // Reproducible initialization
const params: SHGATParams = initializeParameters(DEFAULT_SHGAT_CONFIG);
console.log(`Total parameters: ${countParameters(params)}`);