ChatGPT output when asked about IBM and GPUs:
IBM
- CPUs: POWER remains CPU-first
- AI: relies on external accelerators (GPUs, AI cards)
- Client AI: not a focus
- Reality: classic high-end CPU-plus-attach strategy
Function Models Are Not GPU-Dependent
A recurring assumption in AI architecture is that high-performance inference and learning require GPUs.
That assumption is tightly coupled to parameter-centric models, not to intelligence itself.
Function Models break that coupling. In traditional neural architectures:
- Learning is implemented as repeated parameter updates
- Inference depends on dense linear algebra
- GPUs are essential because they accelerate large-scale tensor operations
In Function Models:
- Learning is expressed as function transformation, not parameter churn
- Parameters are externalized, sparse, or eliminated
- Inference evaluates a stable function, often with structured control flow
- Updates are applied as Δf (function deltas), not gradient sweeps
Once parameter churn is removed, the primary computational bottleneck disappears. Dense SIMD parallelism is no longer the dominant requirement,
which makes GPUs optional rather than mandatory.
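A minimal sketch of that idea in plain Python. Everything here is illustrative (the predicate/override shape of a Δf is my assumption, not taken from the papers); the point is that the base function uses ordinary control flow, and learning composes new behavior without any tensor math.

```python
# Sketch: learning as function transformation (Δf), not parameter updates.
# The (predicate, override) shape of a delta is an assumed representation.

def f0(x):
    """Stable base function: plain control flow, no tensors, CPU-friendly."""
    return "escalate" if x.get("risk", 0) > 0.7 else "proceed"

deltas = []  # append-only list of function deltas

def learn(predicate, override):
    """One learning event appends one Δf; no gradient sweep, no retraining."""
    deltas.append((predicate, override))

def f(x):
    """Current function = f0 patched by any matching Δf, latest wins."""
    for predicate, override in reversed(deltas):
        if predicate(x):
            return override
    return f0(x)

print(f({"risk": 0.9}))                    # base behavior: escalate
learn(lambda x: x.get("site") == "lab-3", "proceed")
print(f({"risk": 0.9, "site": "lab-3"}))   # patched behavior: proceed
```

Note that inference is a lookup plus a branch, which is why this maps to CPUs and NPUs rather than requiring SIMD throughput.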
This maps naturally onto:
- CPUs (branching, control flow, symbolic execution)
- NPUs (bounded, low-latency inference)
- Edge and offline systems where energy and determinism matter
The energy reduction follows directly from the model, not from optimization: once gradient sweeps and dense tensor updates are removed, there is simply far less arithmetic to perform.
GPUs remain valuable where dense numeric workloads are intrinsic. But for architectures based on function change rather than parameter change,
GPU dependence is a modeling artifact, not a necessity.
Curious how others are thinking about this shift, especially in the context of edge inference, energy efficiency, and heterogeneous compute.
------------------------------
John Harby
CEO
Autonomic AI, LLC
Temecula CA
9513835000
------------------------------
Original Message:
Sent: Mon December 29, 2025 08:29 AM
From: Shelton Reese
Subject: Function Models - a new approach
Two direct mappings (healthcare, agent systems) using the Function Model's two-layer structure (base function f₀ + patch layer g), so we can see exactly how to apply it in each domain.
Healthcare: "Guideline engine + patient-specific overrides"
What f₀ is
A stable clinical logic layer:
- evidence-based guidelines (e.g., HTN/DM algorithms)
- drug–drug interaction rules
- risk calculators (ASCVD, CHA₂DS₂-VASc)
- your organization's standing orders / pathways
What g (patches) are
Deterministic overrides learned from real cases, expert review, or patient-specific facts:
- "For this patient, ACE inhibitor caused angioedema → never suggest again."
- "This patient's baseline creatinine trend is stable despite high value → don't auto-flag as AKI unless delta criteria met."
- "This clinic uses pathway v3.2; prior rule v3.1 no longer applies."
Input signature (key idea)
Instead of raw "x," use a context signature:
- (patient_id, problem_id, setting, guideline_version, meds_hash, labs_hash, timeframe)
This keeps patches precise and auditable.
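A small sketch of how that signature could be built and used as an exact patch key. The field names follow the tuple above; the hashing details and the `decide` helper are my assumptions, not the paper's API.

```python
# Sketch: a context signature as an exact, auditable patch key.
# Field names follow the post; hashing/lookup details are assumptions.
import hashlib

def _digest(items):
    """Stable short hash of a set of strings (meds_hash / labs_hash)."""
    return hashlib.sha256("|".join(sorted(items)).encode()).hexdigest()[:12]

def signature(patient_id, problem_id, setting, guideline_version,
              meds, labs, timeframe):
    return (patient_id, problem_id, setting, guideline_version,
            _digest(meds), _digest(labs), timeframe)

patches = {}  # signature -> deterministic override (the g layer)

sig = signature("pt-001", "htn", "clinic", "v3.2",
                meds=["lisinopril"], labs=["cr:1.1"], timeframe="2025Q4")
patches[sig] = "never suggest ACE inhibitor (angioedema)"

def decide(sig, guideline_output):
    """Patch wins on an exact signature match; otherwise f0's output stands."""
    return patches.get(sig, guideline_output)
```

Because the key is an exact tuple, a patch can only fire for the precise context it was reviewed in, which is what makes the layer auditable.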
How learning happens (streaming)
Each reviewed decision becomes an event:
- (time, signature, output, clinician_id, reason_code, evidence_link)
Stored as patch. No retraining. Immediate effect.
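The streaming path can be sketched as follows. The event fields match the tuple above; the in-memory log is a stand-in for a real event store (e.g., a Kafka topic), and the function name is illustrative.

```python
# Sketch: each reviewed decision becomes an event that is stored as a
# patch with immediate effect. The in-memory structures stand in for a
# durable event log (e.g., Kafka); names are assumptions.
import time

patch_log = []    # append-only event log: full audit trail
patch_index = {}  # signature -> latest override, consulted at inference

def on_reviewed_decision(signature, output, clinician_id,
                         reason_code, evidence_link):
    event = (time.time(), signature, output, clinician_id,
             reason_code, evidence_link)
    patch_log.append(event)          # stored as patch; nothing is retrained
    patch_index[signature] = output  # immediate effect on the next decision

on_reviewed_decision(("pt-001", "aki-flag"), "suppress-flag",
                     "dr-smith", "stable-baseline", "note-1234")
```

One event, one append, one index update: no retraining step exists to wait for.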
Why it's valuable
- Eliminates "drift" from silent model updates
- Supports "never forget this contraindication" memory
- Creates an auditable trail for QA, medico-legal review, and safety governance
Agent systems: "Policy/skills + exact state-action corrections"
What f₀ is
The agent's general policy stack:
- planner (task decomposition)
- tool-use policy (when to call a tool)
- general heuristics (prioritization, safety checks)
- base model prompting / templates
What g (patches) are
Hard corrections to stop repeated failure modes:
- "When user asks X, do NOT do Y; do Z."
- "If tool call fails with error E, use fallback F."
- "If this customer environment has limitation L, skip action A."
Input signature
Use a compact, hashed state representation:
- (agent_goal, tool_context, user_intent, environment_flags, failure_code)
Optionally add a "customer_id / deployment_id" to prevent cross-tenant leakage.
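A sketch of that hashed state representation, with the optional deployment scoping folded in. The field names follow the tuple above; the JSON-then-hash encoding and the `corrections` store are my assumptions.

```python
# Sketch: compact hashed state signature for agent corrections, with an
# optional deployment_id so patches cannot leak across tenants.
# Encoding choice (canonical JSON -> SHA-256) is an assumption.
import hashlib
import json

def state_signature(agent_goal, tool_context, user_intent,
                    environment_flags, failure_code, deployment_id=None):
    state = {
        "goal": agent_goal,
        "tool": tool_context,
        "intent": user_intent,
        "env": sorted(environment_flags),  # order-independent flags
        "fail": failure_code,
        "tenant": deployment_id,           # scopes the patch to one deployment
    }
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

corrections = {}  # signature -> hard correction (the g layer)

sig = state_signature("fetch-report", "crm-api", "export",
                      ["no-s3"], "E42", deployment_id="cust-17")
corrections[sig] = "use fallback F: local cache export"

# The same state under another tenant hashes to a different key.
other = state_signature("fetch-report", "crm-api", "export",
                        ["no-s3"], "E42", deployment_id="cust-18")
```

Including the tenant inside the hashed state means cross-tenant isolation is structural rather than enforced by a separate filter.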
Learning events
Original Message:
Sent: 12/25/2025 3:27:00 PM
From: John Harby
Subject: Function Models - a new approach
The function model family includes models of almost any type: neural nets, DNNs, transformers, etc. These models learn by modification of the function(s) in the model rather than by parametric updates.
The result is considerable savings in energy usage; one-shot streaming learning is a natural capability; cost is very low for these models; and transparency/audit capabilities increase. Rollbacks are created and logged for any learning, so learning can be reversed if needed. For many of these models, Kafka is a great option for carrying the learning events.
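A minimal sketch of the rollback-and-log behavior described above. The structure is an assumption on my part, not taken from the linked documents: the key idea is that every learning step records its inverse, so any step can be reversed.

```python
# Sketch: every learning step logs the prior value, so learning can be
# reversed. Data structures are illustrative assumptions.
patches = {}        # signature -> current override
rollback_log = []   # (signature, previous_value) pairs, newest last

def learn(signature, override):
    rollback_log.append((signature, patches.get(signature)))  # log inverse
    patches[signature] = override

def rollback():
    """Reverse the most recent learning step."""
    signature, previous = rollback_log.pop()
    if previous is None:
        patches.pop(signature, None)   # patch was new: remove it entirely
    else:
        patches[signature] = previous  # restore the prior override

learn("sig-A", "route-1")
learn("sig-A", "route-2")
rollback()   # undoes the second learning step; "route-1" is restored
```

Because the log is append-only until a rollback pops it, it doubles as the audit trail of what was learned and when.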
See:
https://zenodo.org/uploads/18056053
Two documents: one is an overview; the other is detailed, with mathematical justification.
------------------------------
John Harby
CEO
Autonomic AI, LLC
Temecula CA
9513835000
------------------------------