We employ Bayesian program synthesis, which builds on the mathematical framework named after the 18th-century mathematician Thomas Bayes. Bayesian probability refines predictions about the world in light of experience. This form of probabilistic programming, in which code manipulates probability distributions rather than fixed variables, requires fewer examples to reach a determination, for example, that the sky is blue with patches of white clouds. The program also refines its knowledge as further examples are provided, and its code can be rewritten to tweak the probabilities.
A probabilistic program can determine, for instance, that it is highly probable that cats have ears, whiskers, and tails. As further examples are provided, the code behind the model is rewritten and the probabilities are tweaked. At a certain point, the program takes over and creates models on its own. In other words, it learns to teach itself rather than needing to be taught.
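To make the idea of "tweaking probabilities as examples arrive" concrete, here is a minimal sketch of Bayesian updating using a Beta-Bernoulli model. This is an illustrative toy, not BayesLearn's actual system: it tracks the probability that a cat has whiskers and sharpens that belief with each labelled example.

```python
# Minimal sketch of Bayesian updating (Beta-Bernoulli conjugate pair).
# Illustrative only: estimate P(a cat has whiskers) from a handful of examples.

def update(alpha, beta, has_feature):
    """Conjugate update: one observation tweaks the two pseudo-counts."""
    return (alpha + 1, beta) if has_feature else (alpha, beta + 1)

# Uniform prior Beta(1, 1): before seeing any cats, the feature is
# equally likely to be present or absent.
alpha, beta = 1, 1
observations = [True, True, True, False, True]  # 4 of 5 cats have whiskers

for obs in observations:
    alpha, beta = update(alpha, beta, obs)

# Posterior mean = alpha / (alpha + beta) = 5 / 7
posterior_mean = alpha / (alpha + beta)
print(f"P(whiskers) ≈ {posterior_mean:.3f}")
```

Note how no code path is ever certain: the belief moves toward 1 with each positive example but never reaches it, which is exactly what lets a later counter-example revise the model.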
As models, our probabilistic programs are related to deep neural networks with categorical variables, but they offer advantages over the latest generation of deep learning.
Machine learning (ML) offers tremendous promise for businesses, and technologies powered by ML are rapidly being adopted in the marketplace. Customers' expectations for access and service are constantly rising, and meeting that demand will require ML solutions. However, many existing ML solutions fall short of their promise because of how the underlying technology works. At BayesLearn, we believe that by taking the best of the different approaches and unifying them mathematically, we can build machine learning that delivers breakthrough improvements in accuracy & efficiency, and creates a system that human intelligence can easily guide.
Cost reductions & productivity gains in the industry require AIs that can teach themselves.
While AI is a key part of scalable new technologies, there is a human bottleneck: only hundreds of thousands of AI engineers exist on the planet, and to continue at the current pace, millions will be needed. In other words, genius is in short supply. If we want to see cost reductions & productivity gains in the industry any time soon, AI that can teach itself is a necessary force for the creation of the AI-driven economy.
Artificial intelligence (AI) is becoming capable of replicating and creating itself, in both its code and its models. Bayesian program synthesis is a new technique that makes this possible by building algorithms capable of learning from fewer examples.
For instance, to train a deep-learning algorithm to recognize a cat, you must first feed it hundreds of thousands of images of felines, capturing enormous variation in size, shape, texture, lighting, and orientation. It would be far more efficient if an algorithm could form a concept of what makes a cat a cat from just a few examples, as humans do.
Allowing randomness and simulating systems via probabilistic programs & Bayesian synthesis.
The idea is to build Bayesian probabilistic programs that model the data and act as simulators: they simulate the generative story of your sparse data, that is, how the data came to be. Each program explores as many possibilities as it can through the random variables in the model, trying to match the observed data, and the result is a posterior distribution over those variables. Determinism is also part of the model wherever we have a good idea of the story that created the data.
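The simulate-and-match loop above can be sketched with the simplest possible inference scheme, rejection sampling. The toy generative story below (an unknown rate producing sparse binary data) is an assumption for illustration, not one of BayesLearn's models: we run the program forward with random draws, keep only the runs that reproduce the observed data, and the surviving draws form the posterior.

```python
# A minimal sketch of inference by simulation (rejection sampling) over a
# toy generative story: an unknown rate p generates sparse binary data.
import random

random.seed(0)

observed = [1, 1, 0, 1]  # the sparse data whose story we want to recover

def generative_program(rng):
    """Simulate how the data came to be: draw a rate, then the outcomes."""
    p = rng.random()  # random variable: the unknown rate
    data = [1 if rng.random() < p else 0 for _ in observed]
    return p, data

# Try many possibilities; keep the runs whose simulated data match reality.
posterior = [p for p, data in (generative_program(random) for _ in range(100_000))
             if data == observed]

posterior_mean = sum(posterior) / len(posterior)
print(f"kept {len(posterior)} runs, posterior mean ≈ {posterior_mean:.2f}")
```

Where part of the story is known exactly, say a fixed measurement threshold, it is written as ordinary deterministic code inside the same program; only the genuinely uncertain parts get random variables.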