Ariadve.eu / Methodology

Data & Code with Elixir


Elixir is a dynamic, functional language for building scalable & maintainable applications.

Elixir runs on the Erlang VM, known for creating low-latency, distributed, and fault-tolerant systems. These capabilities and Elixir tooling allow developers to be productive in several domains, such as web development, embedded software, machine learning, data pipelines, and multimedia processing, across a wide range of industries.
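A minimal sketch of what idiomatic Elixir looks like (the module and example below are illustrative, not taken from the official documentation): immutable data flowing through a pipeline of standard library functions.

```elixir
# Count word frequencies in a piece of text using Elixir's pipe operator.
defmodule WordStats do
  def frequencies(text) do
    text
    |> String.downcase()
    |> String.split(~r/\W+/u, trim: true)
    |> Enum.frequencies()
  end
end

WordStats.frequencies("Elixir runs on the Erlang VM. Elixir is functional.")
# => %{"elixir" => 2, "erlang" => 1, "functional" => 1, "is" => 1,
#      "on" => 1, "runs" => 1, "the" => 1, "vm" => 1}
```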

Model & Train with Numerical Elixir

Empower Elixir with numerical extensions for data, machine learning and AI.

Numerical Elixir is an effort started in 2021 to bring the power of numerical computing to Elixir (and vice-versa). This organization hosts several projects that empower Elixir in the areas of data, machine learning, AI, and more. Our beloved mascot is the Numbat.
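A minimal Nx sketch (the dependency version and the values are assumptions): tensors are created and transformed with vectorised operations, the basic building block behind the data and machine-learning projects above.

```elixir
# Assumes the :nx dependency is available, e.g. via Mix.install/1 in a script or notebook.
Mix.install([{:nx, "~> 0.7"}])

x = Nx.tensor([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 batch of inputs
w = Nx.tensor([[0.5], [0.25]])            # a 2x1 weight column

x
|> Nx.dot(w)        # matrix multiplication
|> Nx.sigmoid()     # element-wise activation
```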

Learn & Share with Livebook

Automate code & data workflows with interactive notebooks.

Get rid of scripts, manual steps, and outdated docs. Use Elixir and Livebook to share knowledge, deploy apps, visualize data, run machine learning models, debug systems, and more!
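A sketch of a single Livebook code cell (the :kino dependency, its version, and the data are illustrative): the same Elixir code that runs in production can render interactive output directly in the notebook.

```elixir
# Setup cell of the notebook: install Kino for rich, interactive output.
Mix.install([{:kino, "~> 0.12"}])

# A code cell: render pipeline status as an interactive table.
data = [
  %{step: "extract", status: :ok},
  %{step: "transform", status: :ok},
  %{step: "load", status: :pending}
]

Kino.DataTable.new(data)
```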

Apply & Deploy with Phoenix


Peace of mind from prototype to production.

Build rich, interactive web applications quickly, with less code and fewer moving parts. Join our growing community of developers using Phoenix to craft APIs, HTML5 apps and more, for fun or at scale.
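A minimal sketch of a Phoenix JSON endpoint (module names are illustrative and the snippet assumes Phoenix 1.7+): a route wired to a controller action, the same pattern behind both APIs and HTML applications.

```elixir
defmodule MyAppWeb.Router do
  use Phoenix.Router

  scope "/api", MyAppWeb do
    get "/health", HealthController, :show
  end
end

defmodule MyAppWeb.HealthController do
  use Phoenix.Controller, formats: [:json]

  # GET /api/health responds with a small JSON document.
  def show(conn, _params) do
    json(conn, %{status: "ok"})
  end
end
```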

Target with Data Science


Guide machine learning in retrospect and treat AI as probable but non-deterministic.

Until recently, algorithms were predominantly constructed to be deterministic and transactional in their data input and output. With the advent of Machine Learning and AI, that is changing.

Neural networks are the engines of Machine Learning. They are built up from input nodes, transitional (hidden) nodes, and output nodes, together with their computational interconnections and the weights and biases attached to them. Between two fully connected layers the number of interconnections grows with the product of the node counts, and the total number grows further with the depth of the network, from input through the transitional layers to the final output. So does the number of associated weights and biases. During training these weights and biases are iteratively adjusted to find the configuration that best reflects the training data, i.e. the one that minimizes the loss.
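A small sketch of how the number of weights and biases in a fully connected network follows from its layer sizes (the layer sizes are illustrative):

```elixir
# Count weights and biases for a fully connected network given its layer sizes.
defmodule ParamCount do
  def count(layer_sizes) do
    layer_sizes
    |> Enum.chunk_every(2, 1, :discard)                  # adjacent layer pairs
    |> Enum.map(fn [from, to] -> from * to + to end)     # weights + biases per layer
    |> Enum.sum()
  end
end

ParamCount.count([8, 16, 16, 1])
# => (8*16 + 16) + (16*16 + 16) + (16*1 + 1) = 433
```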

Until recently, prior to AI, programming defects were either generic and algorithmic or specific to the data. Data defects mostly related to individual data occurrences: bad data in one case would not affect the correct algorithmic transformation of correct data in another case. In a sense it was a bipolar world, with programmers fixing algorithmic defects on one side and data administrators correcting data defects on the other.

With Machine Learning and AI, algorithms are no longer deterministic but stochastic. AI tells you that it is either (highly) likely or unlikely that some phenomenon is the case; it is not a guarantee. To face the real world and make good decisions in it, an AI algorithm requires proper, inclusive training and testing data. This demands a discipline of keeping training and testing data real-world accurate and inclusive, making the business participants code and data aware, and at all times preventing AI-derived stochastic data from intermingling with deterministic business data. Facts remain facts and should never be contaminated with likelihoods, or unlikelihoods for that matter.
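As a sketch of that separation (all names and fields are illustrative), deterministic facts and stochastic model output can be kept as distinct data structures that are never merged:

```elixir
# Recorded business facts: deterministic, never carry a likelihood.
defmodule Fact do
  @enforce_keys [:customer_id, :invoice_paid]
  defstruct [:customer_id, :invoice_paid]
end

# Model output: always carries its probability and provenance, stored apart from facts.
defmodule Prediction do
  @enforce_keys [:customer_id, :label, :probability, :model_version]
  defstruct [:customer_id, :label, :probability, :model_version]
end

fact = %Fact{customer_id: 42, invoice_paid: true}

prediction = %Prediction{
  customer_id: 42,
  label: :likely_to_churn,
  probability: 0.83,
  model_version: "2024-05"
}
```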

This discipline of proper data representation and proper interpretation of stochastic results is well known from experiments, hypothesis testing and diagnosis in the scientific world.

See Data science for further clarification.