Yahoo Web Search

Search results

  1. View Ziyan Huang's profile on LinkedIn, a professional community of 1 billion members. Experience: Microsoft · Education: University of California, Los Angeles · Location: Seattle · 500 ...

  2. If you find our STU-Net helpful in your project, please kindly cite: @misc{huang2023stunet, title={STU-Net: Scalable and Transferable Medical Image Segmentation Models Empowered by Large-Scale Supervised Pre-training},

    • Overview
    • Environments and Requirements:
    • 1. Training Big nnUNet for Pseudo Labeling
    • 2. Filter Low-quality Pseudo Labels
    • 3. Train Small nnUNet
    • 4. Do Efficient Inference with Small nnUNet
    • Citations

    Revisiting nnU-Net for Iterative Pseudo Labeling and Efficient Sliding Window Inference

    Ziyan Huang, Haoyu Wang, Jin Ye, Jingqi Niu, Can Tu, Yuncheng Yang, Shiyi Du, Zhongying Deng, Lixu Gu, and Junjun He

    Built upon MIC-DKFZ/nnUNet, this repository provides the solution of team blackbean for MICCAI FLARE22 Challenge. The details of our method are described in our paper.

    Our trained model is available at RESULTS_FOLDER

    You can get our Docker image by:

    You can reproduce our method step by step as follows:

    Install nnU-Net [1] as below. You only need to meet the requirements of nnUNet; our method does not need any additional requirements. For more details, please refer to https://github.com/MIC-DKFZ/nnUNet
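nnU-Net v1 locates raw data, preprocessed data, and trained models (including the RESULTS_FOLDER mentioned above) through environment variables. A minimal sketch of setting them from Python before preprocessing or training; the paths are placeholders, adjust them to your setup:

```python
import os

# nnU-Net v1 reads these three environment variables; without them,
# preprocessing and training commands cannot find their inputs/outputs.
# The paths below are placeholders for illustration.
os.environ["nnUNet_raw_data_base"] = "/data/nnUNet_raw"
os.environ["nnUNet_preprocessed"] = "/data/nnUNet_preprocessed"
os.environ["RESULTS_FOLDER"] = "/data/nnUNet_results"

print(os.environ["RESULTS_FOLDER"])
```

Alternatively, export the same variables in your shell profile so every nnU-Net command picks them up.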

    1.1. Copy the following files from this repo to your nnUNet environment.

    1.2. Prepare 50 Labeled Data of FLARE

    Following nnUNet, give a TaskID (e.g. Task022) to the 50 labeled data and organize them following the requirements of nnUNet.
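As a sketch, the expected layout can be created like this. The folder names follow the nnU-Net v1 convention; the task ID/name and the dataset.json contents here are illustrative stubs, to be filled with the real FLARE cases:

```python
import json
import os
import tempfile

def make_task_layout(root, task_id="Task022", task_name="FLARE22"):
    """Create the folder layout nnU-Net v1 expects for a new task."""
    task_dir = os.path.join(root, f"{task_id}_{task_name}")
    for sub in ("imagesTr", "labelsTr", "imagesTs"):
        os.makedirs(os.path.join(task_dir, sub), exist_ok=True)
    # Minimal dataset.json stub; fill in the real modalities,
    # label map, and training/test case lists for FLARE.
    meta = {
        "name": task_name,
        "modality": {"0": "CT"},
        "labels": {"0": "background"},
        "numTraining": 50,
        "training": [],
        "test": [],
    }
    with open(os.path.join(task_dir, "dataset.json"), "w") as f:
        json.dump(meta, f, indent=2)
    return task_dir

task_dir = make_task_layout(tempfile.mkdtemp())
print(sorted(os.listdir(task_dir)))  # ['dataset.json', 'imagesTr', 'imagesTs', 'labelsTr']
```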

    1.3. Conduct automatic preprocessing using nnUNet.

    Here we do not use the default setting.

    1.6. Iteratively Train Models and Generate Pseudo Labels

    • Give a new TaskID (e.g. Task023) and organize the 50 Labeled Data and 2000 Pseudo Labeled Data as above.
    • Conduct automatic preprocessing using nnUNet as above.
    • Train a new big nnUNet on all training data instead of 5-fold cross-validation.
    • Generate new pseudo labels for the 2000 unlabeled data.
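The steps above form a self-training loop. A toy sketch of its control flow in plain Python; train and predict below are trivial stand-ins for the actual nnU-Net training and inference commands, not part of this repo:

```python
# Toy self-training loop: train a "model" on the labeled pool plus the
# current pseudo labels, then re-label the unlabeled pool, and repeat.
# train/predict are illustrative stand-ins, not nnU-Net calls.

def train(examples):
    # "Model" = average target value over the training pool.
    return sum(y for _, y in examples) / len(examples)

def predict(model, x):
    # "Segmentation" = thresholding the input against the model value.
    return 1 if x >= model else 0

def self_training(labeled, unlabeled, rounds=3):
    pseudo = {}
    for _ in range(rounds):
        pool = labeled + list(pseudo.items())
        model = train(pool)                                 # all data, no 5-fold
        pseudo = {x: predict(model, x) for x in unlabeled}  # new pseudo labels
    return pseudo

labeled = [(0.0, 0), (1.0, 1)]
pseudo = self_training(labeled, unlabeled=[0.2, 0.9])
print(pseudo)  # {0.2: 0, 0.9: 1}
```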

    We compare the pseudo labels from different rounds and filter out those with high variance.
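A minimal sketch of such a filter, assuming the pseudo labels of each case from two rounds are compared with the Dice overlap and low-agreement cases are dropped. The case names, masks, and threshold here are synthetic, not from the repo:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as 0/1 lists."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    denom = sum(a) + sum(b)
    return 2.0 * inter / denom if denom else 1.0

def stable_cases(round1, round2, thresh=0.9):
    """Keep cases whose pseudo labels agree across rounds (Dice >= thresh)."""
    return [c for c in round1 if dice(round1[c], round2[c]) >= thresh]

# Two synthetic rounds of pseudo labels for three cases.
r1 = {"case_a": [1, 1, 0, 0], "case_b": [1, 0, 0, 0], "case_c": [1, 1, 1, 0]}
r2 = {"case_a": [1, 1, 0, 0],  # identical across rounds -> Dice 1.0, kept
      "case_b": [0, 0, 0, 1],  # disagrees entirely -> Dice 0.0, filtered out
      "case_c": [1, 1, 1, 0]}

print(stable_cases(r1, r2))  # ['case_a', 'case_c']
```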

    3.1. Copy the following files from this repo to your nnUNet environment.

    3.2. Prepare 50 Labeled Data and 1924 Selected Pseudo Labeled Data of FLARE

    Give a new TaskID (e.g. Task026) and organize the 50 Labeled Data and 1924 Pseudo Labeled Data as above.

    3.3. Conduct automatic preprocessing using nnUNet

    Here we use the plan designed for small nnUNet.

    We modify many parts of the nnUNet source code for efficiency. Please make sure your code is backed up first, and then copy the whole repo into your nnUNet environment.

    If you find this repository useful, please consider citing our paper:

  3. Apr 13, 2023 · In this work, we design a series of Scalable and Transferable U-Net (STU-Net) models, with parameter sizes ranging from 14 million to 1.4 billion. Notably, the 1.4B STU-Net is the largest medical image segmentation model to date.

    • arXiv:2304.06716 [cs.CV]
  4. Ziyan Huang. PhD student of Biomedical Engineering, Shanghai Jiao Tong University. Verified email at sjtu.edu.cn. Research interests: AutoML, Medical Image Analysis.

  5. Champion Solution of MICCAI FLARE22 Challenge based on nnU-Net. [MIDL22] Adaptive depth and width U-Net. Search task-specific optimal depth and width of U-Net by DNAS method. A nnU-Netv2 based acceleration solution for Abdominal organs and tumor segmentation.

  6. Jul 3, 2023 · Ventromedial prefrontal neurons represent self-states shaped by vicarious fear in male mice. Ziyan Huang, Myung Chung, Kentaro Tao, Akiyuki Watarai, Mu-Yun Wang, Hiroh Ito & Teruhiro Okuyama ...
