# Ontology development and deployment

## Platform and mechanisms
Based on the requirements, I am thinking about the ontology development more in terms of a platform / mechanism for extensible data structures, plus first versions to bootstrap it, rather than developing a more or less complete ontology covering the open world of AI tasks and data types (which is impossible anyway).
It seems that Idris will allow us to build all of this, mostly thanks to Type Providers and possibly Foreign Function Interfaces. While Idris can type-check all functions in a program (including external ones via type providers, if they are written in Idris), most SNET agents will be written in other languages and will therefore be opaque to Idris. If we think of SNET services as "foreign functions", then an orchestrator written in AI-DSL will do a job equivalent to 'type-checking' and 'compiling' a collection of these functions into one program / workflow (both in quotes because there will be no way to formally check the correspondence between the Idris type declarations and the actual functions inside SNET services).

The functionality of SNET agents is currently partially described internally (in terms of inputs and outputs) by protobuf files. We will have to translate these (initially manually) into Idris type declarations and make them available to the network. (Incidentally, there is a way to translate protobuf specifications into Haskell types (via JSON), so perhaps it can be adapted for automatic back-and-forth protobuf <-> Idris translation...)
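For illustration, here is a minimal sketch of such a manual translation, assuming a hypothetical image-classification service (all message, field, and function names below are invented for the example):

```protobuf
// Hypothetical protobuf description of an SNET service
syntax = "proto3";

message ClassifyRequest  { bytes  image = 1; }
message ClassifyResponse { string label = 1; float score = 2; }

service Classifier {
  rpc Classify (ClassifyRequest) returns (ClassifyResponse);
}
```

The hand-written Idris counterparts might look like this:

```idris
-- Idris types mirroring the protobuf messages above (illustrative only)
record ClassifyRequest where
  constructor MkClassifyRequest
  image : List Bits8

record ClassifyResponse where
  constructor MkClassifyResponse
  label : String
  score : Double

-- Only this type signature is visible to the orchestrator; the body is a
-- stub standing in for the opaque remote service call.
classify : ClassifyRequest -> IO ClassifyResponse
classify _ = pure (MkClassifyResponse "unknown" 0.0)
```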
We may also need to add further information to these task descriptions, e.g. the reputation score of a task or resource utilization constraints. For AI services which are not written in Idris -- and on which no formal verification will therefore ever be possible -- resource utilization will most probably be estimated from average historic usage (the sort of accumulated telemetry data we are dealing with in NuNet).
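As a sketch of how such annotations could be attached alongside a service's functional type -- the field names and units below are assumptions, not a settled design:

```idris
-- Illustrative non-functional annotations for a service; the values would
-- be estimated from accumulated telemetry rather than formally verified.
record ResourceEstimate where
  constructor MkResourceEstimate
  cpuMHz   : Nat  -- average CPU utilization, from historic usage
  memoryMB : Nat  -- average memory footprint

record ServiceAnnotation where
  constructor MkServiceAnnotation
  reputation : Double           -- reputation score of the task
  resources  : ResourceEstimate -- resource utilization constraints
```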
Furthermore, resource utilization constraints may need to be exposed to all kinds of external engines. E.g. for NuNet we are using Nomad job descriptions, which specify the resource utilization of a service in a (kind of) JSON file; the machines on which containers run are described in the same way, so that the resources required by a service can be matched against those available on a machine.
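For concreteness, a minimal Nomad-style job fragment constraining resources might look as follows (names and values are illustrative, not taken from the actual NuNet descriptions):

```hcl
job "snet-service" {
  group "adapter" {
    task "classifier" {
      driver = "docker"
      config {
        image = "example/classifier:latest"  # illustrative image name
      }
      # Required resources, matched against what a machine offers
      resources {
        cpu    = 500  # MHz
        memory = 256  # MB
      }
    }
  }
}
```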
## Data structures and infrastructure
The goal would be to build data structures and infrastructure that we could use right away, both for developing the ontology and for deploying it / making it accessible for platform-wide experiments -- implementing and following the dogfooding principle.
## Interfacing SingularityNET agents [and NuNet adapters] to AI-DSL
Here is the preliminary design of an experiment that can be started pretty much immediately and form the basis for a test suite that would be iteratively augmented as we go further:
![](images/first_direction.png)
We will include this experiment in the NuNet platform alpha development workflow. Here is the related issue for tracking progress on the NuNet platform alpha milestone: https://gitlab.com/nunet/nunet-adapter/-/issues/16.