Embeddable AI


VGG16 inference latency on IBM digital AI core

  • 1.  VGG16 inference latency on IBM digital AI core

    Posted Sat June 26, 2021 02:08 PM

    Hi,

    I would like to ask about published performance data for VGG16 inference (per-layer latency for VGG16, measured during inference with any dataset) on the IBM digital AI core chip.

    All VGG16 layers, both convolutional and fully connected, are of interest. I've seen a brief description of the digital AI core at:

    https://www.ibm.com/mysupport/s/forumshome?language=en_US

    Is there more detailed VGG16 per-layer inference data published in IEEE, ACM, or other journals and conference proceedings?
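
    To make the request concrete, below is the kind of per-layer timing I have in mind. It is only a minimal sketch using PyTorch forward hooks on a generic CPU/GPU (an illustration of the measurement, not an IBM digital AI core API):

        # Hypothetical sketch: per-layer inference timing for VGG16 with PyTorch
        # forward hooks on a generic device. Illustrative only; it does not use
        # any IBM digital AI core interface.
        import time
        import torch
        import torchvision.models as models

        model = models.vgg16().eval()      # untrained weights are fine for timing
        x = torch.randn(1, 3, 224, 224)    # one 224x224 RGB input image

        timings = {}

        def make_hooks(name):
            def pre_hook(module, inputs):
                timings[name] = time.perf_counter()
            def post_hook(module, inputs, output):
                timings[name] = time.perf_counter() - timings[name]
            return pre_hook, post_hook

        # Time every convolutional and fully connected layer (the layers of interest).
        for name, module in model.named_modules():
            if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
                pre, post = make_hooks(name)
                module.register_forward_pre_hook(pre)
                module.register_forward_hook(post)

        with torch.no_grad():
            model(x)

        for name, seconds in timings.items():
            print(f"{name}: {seconds * 1e3:.3f} ms")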

    Which IBM forum(s) and/or user group(s) should I post my question to?

    Thank you,

    Nick Iliev, Ph.D.

    Research Associate

    AEON lab ECE

    Univ. Illinois Chicago





    #BuildwithWatsonApps
    #EmbeddableAI
    #Support
    #SupportMigration
    #WatsonApps


  • 2.  RE: VGG16 inference latency on IBM digital AI core

    Posted Fri July 23, 2021 01:26 AM

    Hello, should I post my question to a different IBM forum?





    #BuildwithWatsonApps
    #EmbeddableAI
    #Support
    #SupportMigration
    #WatsonApps