
Contrary to popular belief, not all generative models require deep learning, neural networks, or the large language models (LLMs) many consider synonymous with Generative AI, if not AI itself. Traditional non-statistical AI approaches, which consist of rules-based systems typified by symbolic reasoning, are just as viable, if not more so, for certain generative use cases.
In fact, there are generative models that involve both non-statistical rules and probabilistic, or statistical, measures. According to Franz CEO Jans Aasman, some of these AI models utilize a “rules-based, statistical approach. If you have rules, rules can still say, ‘if the probability is higher than this, let’s do this. If it’s less than this, let’s do that’.”
Properly implemented State Transition Models (STMs) exemplify probabilistic, rules-based AI approaches that excel at applications of synthetic data and simulation. For these use cases, STM outputs are more explainable, traceable, and reproducible than those of purely statistical language models—even with Retrieval Augmented Generation (RAG).
While users must mitigate language models’ tendencies to hallucinate when answering questions (like how financial markets will respond to certain scenarios), approaches for generating synthetic data — like STMs and other techniques — can simulate those scenarios for more trusted insight.
The MITRE Corporation’s Synthea, a synthetic data generator largely used to generate patient data in healthcare, relies on an STM. Synthea’s output data is based on data from real patients, is statistically similar to that data, yet is artificially generated without machine learning. As such, it doesn’t contain any PII or sensitive data.
Granted, there are several approaches to generating synthetic data, some of which utilize deep neural networks. Nonetheless, Synthea’s incorporation of an STM illustrates that generative models don’t need neural networks, while hinting at the broader possibilities of synthetic data generation techniques.
“Think about cases like clinical trials, where there’s new drugs they want to test,” commented Brett Wujek, principal data scientist with the Artificial Intelligence and Machine Learning division in SAS R&D. “Clinical trials are very expensive. Acquisition of medical data is very restricted; there are privacy issues around it. Synthetic data generation techniques can accelerate those development efforts by feeding more representative and appropriate data into those efforts, without the cost, or violating any privacy issues.”
Deconstructing STMs
STM systems scrutinize the various states — and the relationships between them — that an entity, like a patient, goes through. The objective is to determine the likelihood of the entity starting at one state and eventually reaching another in a manner that’s consistently predictable. For example, “You have many states you can be in as a person,” Aasman said. “You can have diabetes, hypertension. You can have a blood value higher than a certain value.”
By assessing how often people transition through these various states, it’s possible to establish rules about which states they’re likely to go through. Ultimately, those rules become the basis for generating new, or synthetic, data that’s statistically similar to an existing dataset, yet lacks PII or other sensitive data. The same paradigm applies to any entity, making it useful for devising other types of synthetic data.
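To make this concrete, here is a minimal Python sketch of an STM in this spirit: a set of states plus transition probabilities that can be sampled to produce synthetic state sequences. The states and probabilities are invented for illustration and are not drawn from Synthea or any clinical source.

```python
# A toy state transition model: hypothetical patient states and made-up
# transition probabilities, purely for illustration (not clinical guidance).
import random

transitions = {
    "healthy":      {"healthy": 0.90, "hypertension": 0.08, "diabetes": 0.02},
    "hypertension": {"hypertension": 0.85, "stroke": 0.05, "healthy": 0.10},
    "diabetes":     {"diabetes": 0.92, "hypertension": 0.06, "stroke": 0.02},
    "stroke":       {"stroke": 1.00},  # absorbing state in this toy model
}

def generate_history(start, steps, seed=None):
    """Walk the model, sampling each next state from the current state's distribution."""
    rng = random.Random(seed)
    state, history = start, [start]
    for _ in range(steps):
        options = transitions[state]
        state = rng.choices(list(options), weights=list(options.values()))[0]
        history.append(state)
    return history

# A synthetic "patient" trajectory: sampled from the model, not copied from a record.
print(generate_history("healthy", steps=10, seed=42))
```

Because every trajectory is sampled from the model rather than copied from a real record, the output mimics the statistics of actual data without containing any of it.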
Many synthetic data techniques, including rules-based ones like STMs, are primed for simulation use cases. They can render what Wujek termed “synthetic customers” to test new strategies for marketing efforts, patient treatment scenarios, and more. These synthetic data manifestations effectively model customers and how they behave in such situations.
Alternatively, organizations could feed customer data to a language model, ground its responses via RAG or some other form of prompt augmentation, and ask the model to predict how customers will interact with treatment scenarios or marketing strategies. With synthetic data, organizations can instead simulate each aspect of a dataset, model it appropriately, and see how it responds to “a new product or process you need to gain insight on before you put it further out into the wild,” Wujek said.
Synthea
Synthea invokes the STM approach in conjunction with clinical care mapping (of the states patients traverse) to create fully anonymous synthetic datasets. Healthcare organizations can utilize this synthetic data to perform analytics to improve diagnosis and care—without compromising regulatory compliance, data privacy, and data security.
With this rules-based framework, one “counts how often you go from one state to the other,” Aasman explained. “How many times people with hypertension get a stroke. You put every state you can find in healthcare, a state being I’m taking aspirin. I’ve got a stroke. I’ve got hypertension. That’s a state. They go through the lifespan of a patient, and go from state to state that you have in your model, and count how often that happens.”
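As a rough sketch of what that counting looks like in practice, the snippet below tallies state-to-state transitions across a handful of invented patient histories and normalizes the counts into probabilities. The histories and states are hypothetical assumptions for illustration, not Synthea’s actual model or data.

```python
# Estimate transition probabilities by counting observed state-to-state hops.
from collections import Counter, defaultdict

# Invented patient state histories; real clinical models are far richer.
observed_histories = [
    ["healthy", "hypertension", "hypertension", "stroke"],
    ["healthy", "healthy", "hypertension", "hypertension"],
    ["healthy", "hypertension", "stroke"],
]

counts = defaultdict(Counter)
for history in observed_histories:
    for current, nxt in zip(history, history[1:]):
        counts[current][nxt] += 1  # tally each observed transition

# Normalize the counts into probabilities, e.g. P(stroke | hypertension).
probabilities = {
    state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for state, nexts in counts.items()
}
print(probabilities["hypertension"])  # {'hypertension': 0.5, 'stroke': 0.5}
```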
With this and other synthetic data generation approaches, there are few limits to what can be modeled. Digital twin applications in manufacturing, in which organizations create replicas of real-world systems and determine how to optimize them before putting them into production, are enabled by techniques for rendering synthetic data. With these applications, users can see how to optimize procedures, something they can’t do by asking language models, no matter how heavily prompts are augmented.
Rules and Probability
Once the various states of an entity are defined within the STM paradigm, users delineate — and model — the specifics about how the entity transitions through those states. For example, a transition might involve the weather going from sunny to cloudy. But more granularly, there are particulars about how and when transitions occur, which are expressed via rules. “Maybe you can’t go through too much salt intake to hypertension, but first you have to go through something else,” Aasman said.
The statistical aspect of STMs stems from the notion of probability, or the chance of an entity transitioning between different states. For instance, on specific types of days (such as when wind speeds exceed a certain threshold), there may be a 40 percent chance of going from a cloudy day to a sunny one.
“You can directly translate this into diseases,” Aasman said. “You can imagine you have hypertension, and you take a certain medication, and you have a 90 percent chance you’re at an upper limit systolic state.” Other considerations modeled in STMs include changes over time, which might occur in cycles or discrete steps.
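Here is a small Python sketch of how rules and probabilities can be layered in a single transition step, loosely following Aasman’s examples. The conditions, state names, and numbers are assumptions made up for illustration, not clinical rules.

```python
# Layering deterministic rules on top of probabilistic transitions.
import random

def next_state(state, context, rng):
    # Rule: high salt intake never jumps straight to hypertension in this toy
    # model; an intermediate "elevated blood pressure" state must come first.
    if state == "high salt intake":
        return "elevated blood pressure"

    # Probabilistic rule: on medication there is a 90 percent chance of reaching
    # an upper-limit (controlled) systolic state; otherwise the odds are 50/50.
    if state == "hypertension":
        p_controlled = 0.9 if context.get("on_medication") else 0.5
        return "controlled systolic" if rng.random() < p_controlled else "hypertension"

    return state  # other states are left unchanged in this sketch

rng = random.Random(7)
print(next_state("high salt intake", {}, rng))
print(next_state("hypertension", {"on_medication": True}, rng))
```

The design point is that the rules decide which transitions are even possible, while the probabilities decide how often the permitted ones occur.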
A Bit of Both
STM systems are noteworthy because they are yet another demonstration of the endurance of non-statistical AI in the age of foundation models. Moreover, they confirm that AI isn’t limited to just two forms, statistical and non-statistical, and that rules-based techniques can support statistical AI applications. However, it’s important to understand that the viability of these forms of AI, as well as hybrid techniques like STMs and Bayesian models, is entirely predicated on the use case. No one of these methodologies is intrinsically better than the others.
Language models are optimal for some enterprise AI applications. There are also several purely statistical methods for generating synthetic data. Combining these types of AI may well be a harbinger of how the technology comes to be used in enterprise deployments. Wujek summarized this possible reality: “What we typically see and, if you think about it, what makes the most sense is, any given problem typically requires a combination of those approaches.”
Hybrid non-statistical and statistical AI implementations of STMs are adept at generating real-world simulations and synthetic data.