Zhexiao Xiong

Senior Undergraduate at Tianjin University

Personal Information


  • Zhexiao Xiong
  • TOEFL: 100
    MyBest: 111 (Listening: 29, Reading: 30, Writing: 28, Speaking: 24)
  • GRE: 335 (V: 166 + Q: 169)
  • GPA: 86.2/100 (3.64/4.0, WES)

EDUCATION


  • Tianjin University - B.S. in Communication Engineering

    Scholarship: People’s Scholarship of Tianjin University

PUBLICATION


  • Zhexiao Xiong, Xin Wen, Xu Zhao*, Haiyun Guo, Chaoyang Zhao, Jinqiao Wang. Two-level Iteration Method for Multi-task Learning with Task-isolated Labels, International Conference on Computer Vision and Pattern Analysis, 2021.

  • Nanfei Jiang, Xu Zhao, Chaoyang Zhao, Pengkun Liu, Zhexiao Xiong, Yongqi An, Ming Tang, Jinqiao Wang. Pruning-aware Sparse Regularization for Network Pruning, AAAI Conference on Artificial Intelligence, 2022. (Under review).

  • Nanfei Jiang, Zhexiao Xiong, Hui Tian, Xu Zhao, Xiaojie Du, Chaoyang Zhao*, Jinqiao Wang. PruneFaceDet: Pruning Lightweight Face Detection Network by Sparsity Training, Cognitive Computation and Systems, 2021.

RESEARCH EXPERIENCE


  • Research Intern, Intelligent Perception and Interaction Research Department, OPPO Research, Beijing

    • Researched image matting combined with human pose estimation.
  • Visual Transformer Pruning for Multi-task Face Attribute Recognition
    Graduation Thesis
    Supervisor: Prof. Xu Zhao & Prof. Weizhi Nie

    • Built a multi-task transformer for the face attribute recognition task.
    • Proposed a structured pruning approach based on L0 and L2 sparsity regularization, achieving a good balance between accuracy and speed.
    • Applied the Visual Transformer pruning method to multi-task face attribute learning.
  • Two-level Iteration Method for Multi-task Learning with Task-isolated Labels
    National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
    Supervisor: Prof. Jinqiao Wang

    • Proposed a two-level iteration method based on multi-task learning, including the task-level inner iteration and regular outer iteration, which achieves training with task-isolated labels.
    • Achieved training multi-task face attribute recognition networks without the need for full annotations of all images.
    • Achieved higher accuracy and lower computation costs than single-task learning on CelebA, MORPH II, and self-collected datasets.
  • Face Anti-Spoofing
    National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
    Supervisor: Prof. Jinqiao Wang

    • Built a face anti-spoofing model on the CelebA-Spoof, CASIA-SURF-CeFA, and LCC-FASD datasets.
    • Used detection methods based on RetinaFace and designed network architectures based on MobileNet and CDCN++, including multi-task learning.
    • Used model compression and knowledge distillation to reduce the floating-point operations and run-time memory of the model.
    • Plan to submit a paper based on the work this fall.
  • Pruning-aware Sparse Regularization for Network Pruning
    National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
    Supervisor: Prof. Jinqiao Wang

    • Proposed a novel pruning method, MaskSparsity, with pruning-aware sparse regularization.
    • Only applied the sparse regularization on the unimportant channels to be pruned and minimized the negative impact of the sparse regularization on important channels.
    • Achieved a 63.03% FLOPs reduction on ResNet-110 by removing 60.34% of the parameters with no top-1 accuracy loss on CIFAR-10, exceeding the previous state-of-the-art performance.
    • Reduced FLOPs on ResNet-50 by more than 51.07% with only a 0.76% loss in top-1 accuracy, surpassing the previous state-of-the-art methods.
  • Pruning Lightweight Face Detection Network by Sparsity Training
    National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
    Supervisor: Prof. Jinqiao Wang

    • Used network slimming algorithms, including structured sparsity and optimal thresholding, to reduce the parameters, floating-point operations, and run-time memory of the model.
    • Performed the network training with sparsity regularization on channel scaling factors of each layer, and then removed the connections and the corresponding weights with the near-zero scaling factors after the sparsity training.
    • Applied the proposed pruning pipeline on a state-of-the-art face detection method, EagleEye, and got a shrunken model which has a reduced number of computing operations and parameters.
    • Achieved a 56.3% reduction in parameter size with almost no accuracy loss on the WiderFace dataset.
  • Cross-domain Object Detection Using Domain Adaptation
    Multimedia Processing Lab, TJU
    Supervisor: Associate Prof. Yuenan Li

    • Used PyTorch to implement object detection and image dehazing tasks under different background conditions.
    • Combined detection methods such as Cascade R-CNN with ART, RPN, PSA, and GAN modules to build the object detection network.
  • Colorization of Images and Cartoon Pictures Based on Generative Adversarial Network
    Innovation and Entrepreneurship Training Program, TJU
    Team Leader; Supervisor: Prof. Zhong Ji

    • Built a Generative Adversarial Network (GAN) in PyTorch and improved on ChromaGAN by tuning parameters and modifying the generator architecture and the loss function.
    • Proposed the research plan, surveyed background algorithms, and implemented them.
    • Wrote the project’s final report and successfully passed the final defense.
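
Several of the pruning projects above share one recipe: train with a sparsity penalty that pushes unimportant per-channel scaling factors toward zero, then remove the channels whose factors end up near zero. A minimal NumPy sketch of that idea (the factor values, learning rate, and threshold here are illustrative, not taken from any of the papers):

```python
import numpy as np

def l1_sparsity_step(gammas, lr=0.1, lam=0.5):
    """One proximal (soft-threshold) update for an L1 penalty: it shrinks
    every channel scaling factor toward zero, zeroing the small ones first."""
    return np.sign(gammas) * np.maximum(np.abs(gammas) - lr * lam, 0.0)

def prune_mask(gammas, threshold=1e-3):
    """Keep only the channels whose factor survives sparsity training."""
    return np.abs(gammas) > threshold

# Toy scaling factors for five channels of one layer.
gammas = np.array([0.9, 0.02, 0.6, 0.01, 0.3])
for _ in range(3):               # a few "epochs" of sparsity training
    gammas = l1_sparsity_step(gammas)

mask = prune_mask(gammas)        # channels to keep after pruning
print(mask, mask.mean())         # important channels survive; tiny ones are cut
```

In the actual methods the factors are the batch-norm scales learned jointly with the task loss, and MaskSparsity additionally applies the penalty only to channels already flagged as unimportant, sparing the important ones.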

PROJECT EXPERIENCE


  • Mobile AI 2021 Real-Time Camera Scene Detection Challenge

    • CodaLab Competition: Mobile AI Workshop @ CVPR 2021
    • Achieved fast camera scene detection via lightweight network design and model pruning.
      Used a two-stage fine-tuning method to improve accuracy and model pruning to improve efficiency.
      Converted the pretrained PyTorch model to TensorFlow and used float32-to-int8 quantization and model pruning to optimize the model.
      Submitted the final TFLite model, which can be deployed on mobile platforms, and achieved a top-10 score in the evaluation.
  • Brain Tumor Detection and COVID-19 Diagnosis Based on Convolutional Neural Networks

    • Data Science Summer School, Imperial College London
    • Used TensorFlow for medical image classification and segmentation problems and completed the project “CNN-based Brain Tumor Detection and COVID-19 Diagnosis”.
      Implemented robot visual orientation and SLAM in MATLAB.
      Received a Distinction in the program.
  • Analysis of Customers’ Reviews and Star Ratings

    • The Mathematical Contest in Modeling (MCM)
    • Used machine learning and natural language processing methods on the given star ratings and customer feedback data.

SKILLS


Programming: Python, C++, MATLAB, Java
Deep Learning: PyTorch, TensorFlow
Embedded Systems Development: C51, Arduino, FPGA
Instrument: Flute, Guitar
“Stay hungry. Stay foolish” is my motto. It spurs me to keep exploring the unknown with humility.