AI Testing Platform Distributional Raises $19 Million in Series A Funding

Distributional, maker of an enterprise AI testing platform, has announced that it has raised $19 million in Series A funding led by Two Sigma Ventures, with participation from Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, Alumni Ventures and angel investors. The new round brings Distributional’s total capital raised to $30 million less than one year after incorporation. The milestone coincides with the first enterprise deployments of the platform, which gives AI engineering and product teams confidence in the reliability of their AI applications and reduces operational AI risk in the process.

Distributional is built to test the consistency of any AI/ML application, especially generative AI, which is particularly unreliable because it is prone to non-determinism: the same input can produce different outputs. Generative AI is also more likely to be non-stationary, with many shifting components outside developers’ control. As AI leaders come under increasing pressure to ship generative AI, Distributional helps automate AI testing by suggesting ways to augment application data, recommending tests, and providing a feedback loop that adaptively calibrates those tests for each AI application.
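To make the idea of distributional consistency testing concrete, here is a minimal, hypothetical sketch, not Distributional’s actual API. It samples a simulated non-deterministic model repeatedly for the same input, summarizes each output with a simple numeric feature, and uses a two-sample Kolmogorov-Smirnov test to check whether the current output distribution is still consistent with a stored baseline. The model stub, feature choice and alert threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def generate(prompt: str, rng: np.random.Generator) -> str:
    """Stand-in for a non-deterministic generative model:
    the same prompt yields a different output on each call."""
    n_words = int(rng.normal(loc=40, scale=8))
    return " ".join(["token"] * max(1, n_words))

def output_feature(text: str) -> float:
    """Summarize an output as one scalar feature (here, length);
    a real system would track many features, e.g. embedding stats."""
    return float(len(text.split()))

def sample_distribution(prompt: str, n: int, seed: int) -> np.ndarray:
    """Collect the feature distribution over n repeated generations."""
    rng = np.random.default_rng(seed)
    return np.array([output_feature(generate(prompt, rng)) for _ in range(n)])

# Baseline captured when the application's behavior was known-good.
baseline = sample_distribution("Summarize this document.", n=200, seed=0)
# Fresh samples from the current (possibly drifted) application.
current = sample_distribution("Summarize this document.", n=200, seed=1)

# Two-sample KS test: are both sets of outputs plausibly drawn
# from the same distribution?
stat, p_value = ks_2samp(baseline, current)
if p_value < 0.01:
    print(f"ALERT: output distribution shifted (KS={stat:.3f}, p={p_value:.4f})")
else:
    print(f"OK: consistent with baseline (KS={stat:.3f}, p={p_value:.4f})")
```

The point of testing distributions rather than single outputs is that a non-deterministic application can pass any individual spot check while its overall behavior quietly drifts.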

Distributional’s platform allows AI product teams to proactively and continuously identify, understand and address AI risk before it affects customers. Prominent features include:

  • Extensible Test Framework: Distributional’s extensible test framework lets AI application teams collect and augment data, run tests on that data, alert on test results, triage those results, and resolve alerts through either adaptive calibration or analysis-driven debugging (a simplified sketch of this alert, triage and calibration loop follows the list). The framework can be deployed as a self-managed solution in a customer VPC and is fully integrated with existing datastores, workflow systems and alerting platforms.
  • Configurable Test Dashboard: Teams use Distributional’s configurable test dashboards to collaborate on test repositories, analyze test results, triage failed tests, calibrate tests, capture test session audit trails and report test outcomes for governance processes. This lets multiple teams share a single AI testing workflow throughout the lifecycle of the underlying application and standardize it across AI platform, product, application and governance teams.
  • Intelligent Test Automation: Distributional makes it easy for teams to start and scale AI testing by automating data augmentation and test selection, and by calibrating these steps through an adaptive preference-learning process. This intelligence is the flywheel that fine-tunes a test suite to a given AI application throughout its production lifecycle and scales testing across all properties of all components of all AI applications.
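
The alert, triage and calibration loop referenced above can be illustrated with a small, hedged sketch. Everything here is hypothetical: names like `DistributionTest`, `record_triage` and `calibrate` are illustrative inventions, not Distributional’s API. The idea is that a test fires alerts, a reviewer marks each alert as a real issue or a false alarm during triage, and calibration then moves the threshold to better separate the two, so the test suite adapts to the specific application.

```python
from dataclasses import dataclass, field

@dataclass
class DistributionTest:
    """Hypothetical test: flag a drift metric that exceeds a threshold."""
    name: str
    threshold: float
    feedback: list = field(default_factory=list)  # (value, was_real_issue)

    def run(self, metric_value: float) -> bool:
        """Return True when the test passes."""
        return metric_value <= self.threshold

    def record_triage(self, metric_value: float, was_real_issue: bool) -> None:
        """Capture reviewer feedback on an alert during triage."""
        self.feedback.append((metric_value, was_real_issue))

    def calibrate(self) -> None:
        """Adaptive calibration: move the threshold so it separates
        values reviewers dismissed from values they confirmed."""
        dismissed = [v for v, real in self.feedback if not real]
        confirmed = [v for v, real in self.feedback if real]
        if dismissed and confirmed:
            # Midway between the largest false alarm and the
            # smallest confirmed issue.
            self.threshold = (max(dismissed) + min(confirmed)) / 2
        elif dismissed:
            # Only false alarms so far: loosen the threshold.
            self.threshold = max(self.threshold, max(dismissed))

test = DistributionTest(name="answer_length_drift", threshold=0.10)
# Simulated production runs: (observed drift, reviewer's verdict).
for drift, real in [(0.12, False), (0.15, False), (0.30, True)]:
    if not test.run(drift):
        print(f"alert: {test.name} drift={drift:.2f}")
        test.record_triage(drift, was_real_issue=real)
test.calibrate()
print(f"calibrated threshold: {test.threshold:.3f}")  # -> 0.225
```

After calibration, the two dismissed alerts would no longer fire while the confirmed issue still would, which is the feedback-driven tuning the company describes, reduced here to its simplest possible form.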