Every Cluster App Needs

Agni OS provides a uniform feature set conducive to building distributed business applications, namely:

  • Application Container - provides a uniform way of running and controlling application instances, standardizing features like:
    • Application launch/shutdown cycle
    • Configuration (variable evaluation, structured overrides, macros, navigation)
    • Dependency Injection - driven by code or configuration - satisfies class dependencies via injection
    • Logging into various sinks with sink graphs, SLAs, and failover
    • Cluster wide precise time source
    • Event scheduler (programmatically or through configuration)
  • CLI app management - ability to manage applications from a command line interface (and the GUI web console AWM (Agni Web Manager)). Ability to execute commands specific to each application's purpose, as well as general ones, e.g. "gc" - forces full garbage collection (see the app command line reference)
  • Manage individual application components - ability to manage individual components on the application component tree and set properties via remote commands. This is needed for real-time management (e.g. set log severity at runtime)
  • Distributed configuration of applications on 1000s of nodes - handled by metabase structured config override
  • Data Access - data store partitioning, various models, CQRS/queues
  • Big Memory - cache business domain objects to remove hot-spots from data access. Perform high-load social graph traversal in RAM
  • Glue - connect app components together as-if they were on one machine - location transparency, contract-based programming
  • Instrumentation/Telemetry - system and business-specific data, gather data by hosts, zones, and higher-level zones. Visualize the instrumentation as charts and tables. Trigger alert conditions
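As an illustration of the configuration features above, a hypothetical logging section of an application config could look like the following; the sink types, names, and variable syntax are illustrative assumptions, not verbatim Agni configuration:

    app
    {
      log
      {
        // Two sinks under one log service
        sink { name="disk"  type="CSVFileSink"  path=$(~LOG_PATH) }  // env-variable evaluation
        sink { name="email" type="SmtpSink"     min-level="Error" }  // failover/alert target
      }
    }

A higher-level override (e.g. per-NOC or per-host in the metabase) would re-declare only the nodes it needs to change, relying on the structured override mechanism to merge the rest.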

Host's Process Topology

In the section above we described the topology of the Agni OS system as a whole - as defined in the metabase regional catalog.

Just like the cluster system as a whole, every host has its own process tree, while every application instance has a set of addressable components (described further down). The process tree starts at the Agni Host Governor (AHGOV) process, which runs first and then invokes all of the necessary processes under it. How does the AHGOV know what software to run? The following outlines the process:

  • AHGOV process starts
  • AHGOV mounts the metabase (via an injectable FS like any process)
  • AHGOV determines which host it is on and gets its ROLE
  • The ROLE (defined in the Application catalog) lists the applications that the role consists of
  • An application is physically represented by a number of binary packages that come from the binary catalog
  • The metabase system matches the most appropriate packages for the platform and operating system version running on the given machine
  • If the required packages are not present locally, then they get installed (downloaded)
  • The AHGOV runs the applications that need to be auto-started in the defined sequence
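The boot sequence above can be sketched in C#. The type names follow this document (Metabank, ROLE, the catalogs), but the exact members and helpers are assumptions, not the actual AHGOV implementation:

    // Illustrative sketch of AHGOV startup - member names are assumed
    var metabase = Metabank.Mount(fileSystem);              // mount metabase via injectable FS
    var host     = metabase.GetHost(AgniSystem.HostName);   // which host are we running on?
    var role     = host.Role;                               // ROLE from the Application catalog
    foreach(var app in role.Applications)                   // apps the role consists of
    {
      foreach(var package in metabase.MatchPackages(app))   // best fit for platform/OS version
        if (!package.InstalledLocally) package.Install();   // download if not present locally
    }
    foreach(var app in role.AutoStartApplications)          // run in the defined sequence
      Launch(app);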

Cluster Application Container

Agni OS Applications execute in the IAgniApplication scope. The Agni app augments the typical NFX application container with the services available in the cluster environment.

    // Denotes system application/process types
    public enum SystemApplicationType
    {
        Unspecified = 0, HostGovernor, ZoneGovernor, WebServer, GDIDAuthority,
        ServiceHost, ProcessHost, SecurityAuthority, TestRig, Tool
    }

    // Defines a contract for Agni OS applications
    public interface IAgniApplication : IApplication
    {
        // Returns the name that uniquely identifies this application in the metabase.
        // Every process/executable must provide its unique application name in metabase
        string MetabaseApplicationName { get; }

        // References system-related functionality
        IAgniSystem TheSystem { get; }

        // References application configuration root used to boot this application instance
        IConfigSectionNode BootConfigRoot { get; }

        // Denotes system application/process type that this app container has,
        //  i.e.:  HostGovernor, WebServer, etc.
        SystemApplicationType SystemApplicationType { get; }

        // References distributed lock manager
        Locking.ILockManager LockManager { get; }

        // References distributed GDID provider
        IGDIDProvider GDIDProvider { get; }

        // References distributed process manager
        Workers.IProcessManager ProcessManager { get; }

        // References dynamic host manager
        Dynamic.IHostManager DynamicHostManager { get; }
    }

The IAgniApplication services are accessible via a static singleton shortcut AgniSystem:

// Provides a shortcut access to app-global Agni context
public static class AgniSystem
{
    // Returns BuildInformation object for the core agni assembly
    public static BuildInformation CoreBuildInfo { get; }

    // Every agni application MUST ASSIGN THIS property at its entry point ONCE.
    //  Example:
    //  void Main(string[] args){ AgniSystem.MetabaseApplicationName = "MyApp1"; ...
    public static string MetabaseApplicationName { get; set; }

    // Returns instance of agni application container that this AgniSystem services
    public static IAgniApplication Application { get; }

    // Denotes system application/process type that this app container has,
    // i.e.:  HostGovernor, WebServer, etc.
    public static SystemApplicationType SystemApplicationType { get; }

    // Returns current instance
    public static IAgniSystem Instance { get; }

    // Returns true when AgniSystem is an active non-NOP instance
    public static bool Available { get; }

    // References application configuration root used to boot this application instance
    public static IConfigSectionNode BootConfigRoot { get; }

    // Host name of this machine as determined at boot.
    // This is a shortcut to   Agni.AppModel.BootConfLoader.HostName
    public static string HostName { get; }

    // True if this host is dynamic
    public static bool DynamicHost { get; }

    // Returns parent zone governor host name or null
    // if this is the top-level host in the Agni hierarchy
    public static string ParentZoneGovernorPrimaryHostName { get; }

    // NOC name for this host as determined at boot
    public static string NOCName { get; }

    // True when the metabase is mounted (not null)
    public static bool IsMetabase { get; }

    // Returns metabank instance that interfaces the metabase as
    // determined at application boot.
    // If the metabase is null then an exception is thrown.
    // Use IsMetabase to test for null instead
    public static Metabank Metabase { get; }

    // Returns Metabank.SectionHost (metabase's information about this host)
    public static Metabank.SectionHost HostMetabaseSection { get; }

    // Returns Metabank.SectionNOC
    // (metabase's information about the NOC this host is in)
    public static Metabank.SectionNOC NOCMetabaseSection { get; }

    // Returns Agni distributed lock manager
    public static Locking.ILockManager LockManager { get; }

    // References distributed GDID provider
    public static IGDIDProvider GDIDProvider { get; }

    // Returns Agni distributed process manager
    public static Workers.IProcessManager ProcessManager { get; }

    // Returns Agni distributed dynamic host manager
    public static Dynamic.IHostManager DynamicHostManager { get; }
}

Any process running on Agni OS executes in the following sequence:

  • At the app entry point (usually Program.cs) the AgniSystem.MetabaseApplicationName property gets assigned; this tells the app container what app to load, as the host has a role which may have more than one application
  • If the hosting container process is a general-purpose one (e.g. aws), then the app name must be specified in the launch command line args: aws -agni app-name=<your app name>
  • The application container is set up as normally in NFX - in 99.9% of cases it is an instance of the AgniServiceApplication class:
    using(var app = new AgniServiceApplication(SystemApplicationType.WebServer, args, null)) {}
  • The Agni cluster app internally uses a BootConfLoader which executes in the boot app container scope. The boot app container gets configured from the config file co-located with the executable module (this is a standard NFX behavior).
  • The BootConfLoader uses the "host { name='host path' }" from the boot config, or AGNI_HOST_NAME environment variable
  • The BootConfLoader then performs the following steps:
    • Mounts the file system to access the metabase. The file system configuration (type and connection parameters) is taken from the boot config file, or if not present there, from the environment variables (see reference)
    • Mounts the metabase via the file system
    • Calculates the configuration file for the particular application name on this host
    • Injects the configuration into AgniClusterApp
  • The Application container is now ready for use, just like in any other NFX app
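Put together, a minimal entry point following the sequence above could look like this. AgniServiceApplication and SystemApplicationType are taken from this document; WaitForStop is an assumed placeholder for whatever blocking/run logic the app performs:

    class Program
    {
      static void Main(string[] args)
      {
        // 1. Tell the container which metabase application this process is
        AgniSystem.MetabaseApplicationName = "MyApp1";

        // 2. Construct the container; internally BootConfLoader mounts the
        //    file system and metabase, then injects this host's app config
        using (var app = new AgniServiceApplication(SystemApplicationType.WebServer, args, null))
        {
          // 3. Container is ready - do work / block until shutdown
          WaitForStop(); // assumed placeholder, not an Agni API
        }
      }
    }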

CLI Management/Terminals

Agni application container provides remote/external administration features built into any application.

The above illustrates the AHGOV process started manually from a command prompt. The AWM (Agni Web Manager) is a built-in admin portal that gets served from any app container (it may be disabled if not needed). This functionality is similar to home routers and other devices that have a built-in management web portal. The application does not have to be a true "web server", as the capability is very lightweight and does not require any special resources.

Similarly, the CLI tool ASCON (Application Server Console) can be used to connect to an application instance.

App-specific commands:
HGov@wmed0001> gc;
GC took 39 ms. and freed 34235454 bytes


Instrumentation is a built-in native function of NFX. Agni is built on NFX and extends the instrumentation/telemetry concept into the distributed domain. The hierarchical topology of the system is a natural fit for processing instrumentation data in the Map:Reduce fashion. The whole cluster is already mapped out using NOCs, zones, and subzones. Zone governor processes receive instrumentation data from the subordinate nodes and reduce it by zone, feeding the data further to the higher level of the hierarchy. Therefore, the top-level zone has real-time telemetry acquired (and reduced) from the whole cluster at any given time.
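The reduction step performed by a zone governor can be sketched as follows. This is a simplification - real reduction depends on the datum type - and all identifiers here are illustrative, not the NFX API:

    // Zone governor: reduce telemetry received from subordinate nodes by zone,
    // then feed the reduced data to the next level of the hierarchy
    var reduced = new Dictionary<string, double>();       // datum name -> aggregated value
    foreach (var node in subordinateNodes)
      foreach (var datum in node.Telemetry)
        reduced[datum.Name] = reduced.TryGetValue(datum.Name, out var sum)
                                ? sum + datum.Value       // reduce (sum, for this sketch)
                                : datum.Value;
    parentZoneGovernor.SendTelemetry(reduced);            // pass further up the tree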

The following diagram illustrates the process:

The AWM GUI provides visual charting of instrumentation data. The data is saved on the server in a cyclical buffer, so upon new GUI connect the history is shown:

Logs are aggregated and teleported to the higher levels using similar mechanisms:

Component Model

Agni Application inherits component model from NFX. A component is any class that derives from the NFX.ApplicationModel.ApplicationComponent class. Components have directors/owners forming a tree structure. For example, a log service has sinks that have log service as director. Some sinks may have sub-sinks where they send filtered log messages.

The following screenshot depicts the CLI interface executing the "cman" (component manager) command:

Every component instance is addressable by a process-unique ID "SID", which can be looked up by running cman. Components can also be addressed by their common name - depicted in purple above.

Individual components can be managed and their properties can be set using the CLI or the AWM tool. In order to be externally manageable, a property has to be decorated with the ExternalParameter attribute:

    [ExternalParameter]
    public bool InstrumentationEnabled { get; set; }

Once decorated, the component can show the property graphically in the AWM tool, or can be changed from ASCON CLI:

AWM showing component tree with sub-components accessible via [+]

CLI command example:
cman{ name=log param=Reliable value=true};
Sets App.Log.Reliable = true, so the logger waits on stop until all buffered messages get reliably written to the destination sinks
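A fuller sketch of an externally manageable component follows. ApplicationComponent and ExternalParameter are named in this document; the Config attribute and the class itself are illustrative assumptions:

    // Hypothetical component: a log service manageable at runtime
    public sealed class MyLogService : ApplicationComponent
    {
      // Settable at runtime via ASCON "cman" or the AWM tool
      [Config]
      [ExternalParameter]
      public bool Reliable { get; set; }

      [Config]
      [ExternalParameter]
      public bool InstrumentationEnabled { get; set; }
    }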