Global Data Science Forum


AI/ML testing

  • 1.  AI/ML testing

    Posted Mon January 07, 2019 05:38 AM
    Hi, how do I test AI and ML models? I have recently seen that there are AI and ML testing tools; if anyone has experience with these tools, please reply to this post.

    rajasekhar kothapalli

  • 2.  RE: AI/ML testing

    Posted Tue January 08, 2019 03:53 AM
    On IBM Cloud Private, you can test your AI/ML models, including adversarial-attack tests to check whether your models are secure. You could refer to videos from previous IBM conferences; these could help you dive in:
    "A real-time ML application on IBM Cloud Private for Data", "Building a Secure and Transparent ML Pipeline Using Open Source Technologies", etc.
    And it is also possible to run your end-to-end tests with
    Good luck!
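
    The adversarial-attack testing mentioned above can be sketched in a few lines. This is a minimal illustration, not IBM tooling: it crafts FGSM (fast gradient sign method) perturbations against a toy logistic-regression model with hypothetical, hand-picked weights, then compares clean and adversarial accuracy.

    ```python
    import numpy as np

    # Minimal adversarial-robustness test sketch (FGSM) against a toy
    # logistic regression. The weights and the fgsm_attack helper are
    # hypothetical examples, not part of any IBM product or library.

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(w, b, X):
        return (sigmoid(X @ w + b) >= 0.5).astype(int)

    def fgsm_attack(w, b, X, y, eps):
        # Gradient of binary cross-entropy w.r.t. the input x is (p - y) * w,
        # so FGSM steps eps in the sign of that gradient.
        p = sigmoid(X @ w + b)
        grad = (p - y)[:, None] * w[None, :]
        return X + eps * np.sign(grad)

    rng = np.random.default_rng(0)
    w = np.array([2.0, -1.0])
    b = 0.0
    X = rng.normal(size=(200, 2))
    y = predict(w, b, X)              # labels the model itself assigns

    X_adv = fgsm_attack(w, b, X, y, eps=0.5)
    clean_acc = (predict(w, b, X) == y).mean()
    adv_acc = (predict(w, b, X_adv) == y).mean()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
    ```

    A robustness test would then assert that adversarial accuracy stays above some acceptable floor; a large drop signals the model is easy to fool.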

    Practice Makes Perfect !

  • 3.  RE: AI/ML testing

    Posted Tue January 08, 2019 07:23 AM
    Not sure what you are asking. Can you clarify your question?

    Vladimir Lialine

  • 4.  RE: AI/ML testing

    Posted Tue February 05, 2019 06:05 AM
    Here are a few examples of where AI and ML can be used in testing:

    1. While developing test cases around a code block, a function, or an end-to-end scenario: if you observe carefully, most objects are static; the only variables are the flow or sequence of actions and the response to each action. A tool that learns the sequence, actions, and responses over time should be able to come up with valid test cases for that code block, function, or scenario.
    2. In test automation, specifically Selenium-style web automation, XPaths, ids, etc. are used to identify objects. A tool that knows how to build an XPath for an object once the DOM is loaded in the browser, can guess the "valid actions" possible on that DOM object, and can call the respective Selenium action method would save a lot of automation coding effort.
    3. Self-healing automation frameworks: as in #2, if something changes in the DOM, UI, or API, the AI/ML tool should understand that change and correct itself.
    4. During test runs, the tool should clearly segregate product failures from test-code failures, re-run only the test-code-failure cases, and log bugs for the product-failure cases.
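
    The first expectation above, learning recorded action sequences and their responses to propose new test cases, could start as something as simple as a first-order transition model. The `SequenceLearner` class and the `recorded_runs` data below are hypothetical illustrations, not an existing tool:

    ```python
    from collections import defaultdict
    import random

    # Sketch: learn which test action tends to follow which from recorded
    # runs, then propose a candidate sequence by walking the learned
    # transitions. Purely illustrative; real tools would model far more.

    class SequenceLearner:
        def __init__(self):
            self.transitions = defaultdict(list)

        def observe(self, run):
            # Record every observed (action -> next action) pair.
            for current, following in zip(run, run[1:]):
                self.transitions[current].append(following)

        def propose(self, start, max_len=6, seed=0):
            rng = random.Random(seed)
            sequence = [start]
            while len(sequence) < max_len and self.transitions[sequence[-1]]:
                sequence.append(rng.choice(self.transitions[sequence[-1]]))
            return sequence

    recorded_runs = [
        ["open_login", "enter_credentials", "submit", "view_dashboard"],
        ["open_login", "enter_credentials", "submit", "logout"],
    ]
    learner = SequenceLearner()
    for run in recorded_runs:
        learner.observe(run)
    proposal = learner.propose("open_login")
    print(proposal)
    ```

    Because transitions are sampled, the proposal can branch where the recorded runs diverged, which is exactly where new test coverage might come from.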
    There must be something built around these expectations already, but I believe bringing AI/ML into software testing would give an edge, improve accuracy, and reduce the latency that the testing cycle adds to the overall delivery cycle.
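
    The XPath-building idea above (deriving a locator for an object once the DOM is loaded) can be sketched with the standard library. Here `xml.etree.ElementTree` stands in for a real browser DOM, and `build_xpath` is a hypothetical helper, not a Selenium API:

    ```python
    import xml.etree.ElementTree as ET

    # Sketch: compute an absolute, indexed XPath for an element in a
    # DOM-like tree by walking from the element up to the root.

    def build_xpath(root, target):
        # ElementTree has no parent pointers, so build a child->parent map.
        parent_of = {child: parent for parent in root.iter() for child in parent}
        parts = []
        node = target
        while node is not None:
            parent = parent_of.get(node)
            if parent is None:
                parts.append(f"/{node.tag}")
            else:
                # Position among same-tag siblings (XPath indexes from 1).
                siblings = [c for c in parent if c.tag == node.tag]
                parts.append(f"/{node.tag}[{siblings.index(node) + 1}]")
            node = parent
        return "".join(reversed(parts))

    html = "<html><body><div><a>first</a><a>second</a></div></body></html>"
    root = ET.fromstring(html)
    second_link = root.find("body/div")[1]
    print(build_xpath(root, second_link))  # -> /html/body[1]/div[1]/a[2]
    ```

    A tool of the kind described above would pair such generated locators with guessed valid actions (click, type, etc.) for each element type.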

    Please share if you have ideas, have implemented something in these areas, or have any how-to steps you have thought through. Thanks.
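
    The self-healing idea could likewise be sketched as a fallback match: when a stored locator no longer resolves (say, an id changed in the DOM), pick the most similar current element by attribute similarity. The element dictionaries and the `heal_locator` helper below are hypothetical, not part of Selenium or any existing framework:

    ```python
    from difflib import SequenceMatcher

    # Sketch: "heal" a stale locator by scoring current DOM elements
    # against the stale element's known attributes.

    def similarity(a, b):
        return SequenceMatcher(None, a, b).ratio()

    def heal_locator(stale, candidates, threshold=0.6):
        # Average the similarity of id and visible text; below the
        # threshold we refuse to heal rather than guess badly.
        def score(element):
            return (similarity(stale["id"], element["id"])
                    + similarity(stale["text"], element["text"])) / 2
        best = max(candidates, key=score)
        return best if score(best) >= threshold else None

    stale = {"id": "btn-submit", "text": "Submit order"}
    current_dom = [
        {"id": "btn-cancel", "text": "Cancel"},
        {"id": "btn-submit-order", "text": "Submit order"},
    ]
    healed = heal_locator(stale, current_dom)
    print(healed["id"])  # -> btn-submit-order
    ```

    The threshold is the interesting design choice: too low and the framework silently tests the wrong element (masking real product failures, the concern in the last expectation above); too high and it never heals.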

    rajesh kumar