    Details
    Presenter
    Zhengfeng Wu, Drexel University
    Authors
    Zhengfeng Wu, Drexel University
    Ioannis Savidis, Drexel University
    Abstract

    A transfer learning technique is proposed that utilizes models trained on data from one technology node to predict the performance of a circuit based on the sizing of transistors in a new node. During transfer training, the front layers of the prior models are frozen while the remaining layers are retrained with less data from the target technology node. The technique is applied to the prediction of 7 performance metrics of an operational amplifier from 7 design variables. Models trained on a dataset from a 180 nm process are transferred to predict the performance metrics of the op-amp using 100 simulated design points from a 65 nm process. Applying transfer learning reduces the normalized mean absolute error (MAE) on the test set in all cases, by up to 50%, compared with training standalone models. Results indicate that the transferred gain predictor trained with only 100 data points provides a lower test error than a model trained standalone on 1000 data points without transfer learning. Transfer learning therefore greatly improves the sample efficiency of training neural networks to predict the performance parameters of a circuit.
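    The freeze-and-retrain scheme described in the abstract can be sketched with a small numpy model. Everything below is illustrative: the two-layer network, the learning rate, and the synthetic "source node" / "target node" data are assumptions standing in for the paper's actual op-amp dataset and architecture, not a reproduction of them. The sketch pretrains on plentiful source data, then fine-tunes only the back layer on 100 target points while the front layer stays frozen.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the paper's setup: 7 design variables -> 7 metrics.
    n_in, n_hidden, n_out = 7, 16, 7

    def init_params():
        return {
            "W1": rng.normal(0, 0.3, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.3, (n_hidden, n_out)), "b2": np.zeros(n_out),
        }

    def forward(p, X):
        h = np.tanh(X @ p["W1"] + p["b1"])  # front (feature) layer
        return h, h @ p["W2"] + p["b2"]     # back (readout) layer

    def train(p, X, Y, lr=0.05, epochs=500, freeze_front=False):
        """Gradient descent on mean squared error; optionally freeze W1/b1."""
        for _ in range(epochs):
            h, pred = forward(p, X)
            err = (pred - Y) / len(X)                   # dL/dpred
            dh = (err @ p["W2"].T) * (1 - h ** 2)       # backprop through tanh
            p["W2"] -= lr * h.T @ err
            p["b2"] -= lr * err.sum(0)
            if not freeze_front:                        # skipped during transfer
                p["W1"] -= lr * X.T @ dh
                p["b1"] -= lr * dh.sum(0)
        return p

    # Synthetic "source node": 1000 points from some ground-truth mapping.
    W_true = rng.normal(0, 1.0, (n_in, n_out))
    Xs = rng.normal(0, 1, (1000, n_in)); Ys = np.tanh(Xs @ W_true)
    # Synthetic "target node": a mildly shifted mapping, only 100 points.
    Xt = rng.normal(0, 1, (100, n_in)); Yt = np.tanh(Xt @ (1.1 * W_true)) + 0.1

    src = train(init_params(), Xs, Ys)          # pretrain on source node
    pred_before = forward(src, Xt)[1].copy()    # target error before transfer
    W1_frozen = src["W1"].copy()
    transferred = train(src, Xt, Yt, freeze_front=True)  # fine-tune back layer
    ```

    Freezing the front layer keeps the features learned from the data-rich node and leaves only the small readout layer to adapt, which is why far fewer target-node simulations suffice.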

    Slides
    • Transfer Learning for Reuse of Analog Circuit Sizing Models Across Technology Nodes (PDF)