AI has become capable of writing and refining its own code and models. Bayesian program synthesis is a new technique that makes this possible by building algorithms that can learn from only a few examples.
Program synthesis combines two families of techniques. The first is logical and symbolic reasoning, which rose to prominence in the first wave of AI and underpins many static program analysis tools. The second is machine learning (ML), which drives the current revolution in AI. We believe that far greater technical disruption will come from combining these two styles: logical reasoning on one hand and statistical techniques, in the form of ML, on the other.
At BayesLearn, we have been building BayesianRhapsody, our framework for developing these program synthesizers. We believe that frameworks like this will shift work in new domains away from writing new libraries and toward writing new synthesizers for those domains.
As models, our probabilistic programs are related to deep neural networks with categorical variables, but they offer advantages over the latest generation of deep learning.
We employ Bayesian program synthesis, which is based on a mathematical framework named after the 18th-century mathematician Thomas Bayes. Bayesian probability is used to refine predictions about the world using experience. It takes the form of probabilistic programming: code that contains probabilities and some determinism rather than only specific values. Such a program requires fewer examples to make a determination, for instance, that the sky is blue with patches of white clouds, and it refines its knowledge as further examples are provided.
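The refinement step above is just Bayes' theorem: multiply a prior belief by the likelihood of the observation and renormalize. The sketch below illustrates it with a toy "sky" example; the hypotheses and all probabilities are assumptions made up for illustration, not output from our system.

```python
# Minimal Bayes update: refine a belief about the sky from one observation.

def bayes_update(prior, likelihood):
    """Return posterior P(H | D) for each hypothesis H, given prior P(H)
    and likelihood P(D | H)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Assumed prior over hypotheses about the sky.
prior = {"clear_blue": 0.5, "partly_cloudy": 0.3, "overcast": 0.2}

# Assumed likelihood of observing "patches of white clouds" under each one.
likelihood = {"clear_blue": 0.1, "partly_cloudy": 0.8, "overcast": 0.3}

posterior = bayes_update(prior, likelihood)
# A single observation already shifts most of the mass to "partly_cloudy".
```

Feeding each new observation's posterior back in as the next prior is exactly the "refines its knowledge as further examples are provided" loop.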
A probabilistic program can determine, for instance, that it is highly probable that cats have ears, whiskers, and tails. As further examples are provided, the code behind the model can be edited and rewritten, and the probabilities tweaked. At a certain point, the AI program takes over, and models are created on their own. In other words, it is learning how to teach itself instead of needing us to teach it. Indeed, probabilistic programming is a powerful abstraction layer for Bayesian inference: it separates model learning, which suits probabilistic programming and is based on maximum marginal likelihood, from the inference part of the problem.
ML also offers tremendous promise for businesses, and technologies powered by ML solutions are rapidly being adopted in the marketplace. Customers' expectations around access and service keep rising, and meeting that demand will require ML solutions. However, many existing ML solutions fall short of their promise because of how the underlying technology works. At BayesLearn, we believe that by taking the best of these different approaches and unifying them mathematically, we can develop ML that offers breakthrough improvements in accuracy and efficiency, and that creates a system human intelligence can easily guide.
Allowing randomness and simulating systems via probabilistic programs and Bayesian synthesis.
The idea is to build Bayesian probabilistic programs that model the data and act as simulators: they simulate the generative story of your data, that is, how your data came to be. Each program explores as many settings of the random variables in the model as it can, trying to match your sparse data, and the result is a posterior distribution. Determinism is also part of the model wherever we already have a good idea of the story that created the data.
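This simulate-and-match loop can be sketched with the simplest possible inference scheme, rejection sampling. The toy generative story below (a coin of unknown bias, with a fixed number of flips as the deterministic part) is an assumption for illustration only; a real synthesizer would use far more efficient inference.

```python
import random

random.seed(0)

N_FLIPS = 10        # deterministic part of the generative story
OBSERVED_HEADS = 8  # the sparse data we want to explain

def simulate(bias):
    """Run the generative story forward: flip N_FLIPS coins at this bias."""
    return sum(random.random() < bias for _ in range(N_FLIPS))

# Propose values of the random variable from its prior and keep only those
# whose simulated data matches what we observed.
posterior_samples = []
for _ in range(20000):
    bias = random.random()                # uniform prior over the bias
    if simulate(bias) == OBSERVED_HEADS:  # accept only exact matches
        posterior_samples.append(bias)

estimate = sum(posterior_samples) / len(posterior_samples)
# The accepted values approximate the posterior over the coin's bias.
```

The accepted samples are the posterior distribution the paragraph describes: the set of hidden stories consistent with the data, weighted by how often the simulator reproduces it.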