Video s3
    Details
    Presenter(s)
    Rupesh Raj Karn, Khalifa University
    Author(s)
    Rupesh Raj Karn, Khalifa University
    Matthew Ziegler, IBM Research
    Jinwook Jung, IBM T. J. Watson Research Center
    Abe Elfadel, Khalifa University
    Abstract

    The long-term deployment of data-driven AI technology based on artificial neural networks (ANNs) should be scalable and re-deployable as new data become available. For such efficient adaptation, learning should be cumulative, so that new data are processed within the modified network without compromising inference performance on past data. Such incremental accumulation of learning experience is known as progressive learning. In this paper, we address the open problem of tuning the hyper-parameters of neural networks during task-based progressive learning. A hyper-parameter optimization framework is proposed that selects the best hyper-parameter values on a task-by-task basis. The neural network model adapts to each progressive-learning task by adjusting the hyper-parameters under which the neural architecture is incrementally grown. Several hyper-parameter search mechanisms are explored and compared in support of progressive learning. The proposed methodology has been successfully demonstrated on a set of cyber-security datasets.
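
    The following is a minimal sketch of such a per-task tuning loop, not the authors' implementation. It assumes a synthetic task stream (make_task_stream, GROWTH_CHOICES, and LR_CHOICES are illustrative names), uses random search as a stand-in for the several search mechanisms compared in the paper, and for simplicity retrains a scikit-learn MLPClassifier from scratch at the grown width rather than growing a live network. The selection criterion, averaging accuracy over the new task and all past tasks, mirrors the requirement that newly chosen hyper-parameters not compromise performance on earlier tasks.

    # Minimal sketch: per-task hyper-parameter search with incremental
    # capacity growth. Illustrative only; names and search space are
    # assumptions, not taken from the paper.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def make_task_stream(n_tasks=3):
        # Synthetic stand-in for a sequence of cyber-security tasks.
        for seed in range(n_tasks):
            X, y = make_classification(n_samples=600, n_features=20,
                                       n_informative=10, random_state=seed)
            yield train_test_split(X, y, test_size=0.3, random_state=seed)

    # Search space: hidden units added for the new task, and the
    # learning rate used to train the grown network.
    GROWTH_CHOICES = [8, 16, 32]
    LR_CHOICES = [1e-3, 3e-3, 1e-2]

    hidden_units = 16            # initial capacity before any task arrives
    seen_tests = []              # held-out sets of past tasks

    for task_id, (X_tr, X_te, y_tr, y_te) in enumerate(make_task_stream()):
        best = None
        for _ in range(5):       # random search: one trial = one config
            growth = int(rng.choice(GROWTH_CHOICES))
            lr = float(rng.choice(LR_CHOICES))
            model = MLPClassifier(hidden_layer_sizes=(hidden_units + growth,),
                                  learning_rate_init=lr, max_iter=300,
                                  random_state=0)
            model.fit(X_tr, y_tr)
            # Score on the new task AND all past tasks, so the chosen
            # hyper-parameters do not sacrifice earlier performance.
            scores = [model.score(X_te, y_te)]
            scores += [model.score(Xp, yp) for Xp, yp in seen_tests]
            avg = float(np.mean(scores))
            if best is None or avg > best[0]:
                best = (avg, growth, lr)
        avg, growth, lr = best
        hidden_units += growth   # commit the growth chosen for this task
        seen_tests.append((X_te, y_te))
        print(f"task {task_id}: +{growth} units, lr={lr}, avg acc={avg:.3f}")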

    Slides
    • Hyper-Parameter Tuning for Progressive Learning and its Application to Network Cyber Security (application/pdf)