
Yang Yu






Falcon is one of the three post-quantum signature schemes selected for standardization by NIST. Due to its low bandwidth and high efficiency, Falcon is seen as an attractive option for quantum-safe embedded systems. In this work, we study Falcon's side-channel resistance by analysing its Gaussian samplers. The first result is an improved key recovery exploiting the leakage within the base sampler investigated by Guerreau et al. Instead of resorting to the fourth moment as in former parallelepiped-learning attacks, we work with second-order statistics, the covariance, and use its spectral decomposition to recover the secret information.
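
The covariance idea can be illustrated with a short, self-contained sketch. This is not the attack described above; it only shows, under simplifying assumptions, how a secret-correlated direction can be read off the spectral decomposition of an empirical covariance matrix. The function name and the assumption that side-channel filtering of signatures (`filtered_sigs`) skews their covariance along a secret direction are illustrative.

```python
import numpy as np

def recover_secret_direction(filtered_sigs):
    """Estimate a secret-correlated direction from side-channel-filtered
    signature vectors via second-order statistics.

    filtered_sigs : (N, d) array of signature vectors kept only when the
    side channel flags a particular base-sampler event (assumption: this
    filtering biases the signature distribution along the secret).
    """
    X = np.array(filtered_sigs, dtype=np.float64)
    X -= X.mean(axis=0)                      # center the samples
    cov = (X.T @ X) / len(X)                 # empirical covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # spectral decomposition
    # The eigenvalue deviating most from the bulk is assumed to carry the
    # secret-correlated direction (up to sign and scaling).
    bulk = np.median(eigvals)
    idx = np.argmax(np.abs(eigvals - bulk))
    return eigvecs[:, idx]
```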

  • Tutorial of Artificial Intelligence (for undergraduate students of the AI School).
  • Running time analysis of evolutionary optimization (with Chao Qian and Zhi-Hua Zhou). We develop tools for analyzing the complexity of evolutionary algorithms, one of the most fundamental issues of evolutionary algorithms.
  • Approximation analysis & Pareto optimization (with Chao Qian, Xin Yao, Zhi-Hua Zhou, etc.). Our studies analyze the quality of the solutions found by evolutionary algorithms, and design Pareto optimization, which has been shown to be a powerful approximation tool for various subset selection problems (a minimal sketch is given below).
  • Model-based derivative-free optimization (with Hong Qian, Yi-Qi Hu, etc.). Derivative-free optimization aims at tackling optimization problems with complex structures, such as non-convex, non-differentiable, and non-continuous problems with many local optima. For complex optimization in real domains, our studies address issues including theoretical foundations, high dimensionality, and noisy evaluation, working toward theoretically grounded, efficient derivative-free optimization methods for better solving machine learning problems.
  • Evolutionary Learning: Advances in Theories and Algorithms.
  • Reinforcement learning on StarCraft (with Zhen-Jia Pang, Ruo-Zhe Liu, etc.). Our studies try to learn a good playing policy as efficiently as possible for this extremely large-scale, partially observable, real-time-strategy game.
  • Experience reuse in reinforcement learning (with Qing Da, Chao Zhang, Zhi-Hua Zhou, etc.). Our studies design ways to accelerate reinforcement learning by reusing experience, particularly experience accumulated in simulators.
  • Environment virtualization for reinforcement learning (with Alibaba and Didi Inc.). To apply reinforcement learning in real-world industrial applications, our studies show that it is feasible to build virtual environments with good generalizability solely from historical data; these environments enable zero-cost trial-and-error training for industrial applications (see the sketch below).

    Currently, I am mainly focusing on reinforcement learning. Reinforcement learning aims at learning models for optimal sequential decisions autonomously: it searches for a policy of near-optimal decisions by learning from environment interactions. Despite the fantastic future, reinforcement learning is still in its early infancy, and its potential has not been fully released in many situations. Our team is trying in various aspects to improve reinforcement learning, including theoretical foundation, optimization, model structure, experience reuse, abstraction, model building, etc., heading toward sample-efficient methods for large-scale physical-world applications. A quick-learned policy beats the level-3 bot in StarCraft II. We will have the 4th Asian Workshop on Reinforcement Learning.
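
    The Pareto-optimization item above can be made concrete with a minimal sketch. It is an illustrative simplification of the bi-objective idea for subset selection, not the published algorithm: represent a subset as a bit mask, treat (maximize the objective, minimize the subset size) as two goals, keep an archive of non-dominated subsets, and evolve it by bit-flip mutation. Here `f` is a hypothetical set function to maximize and `k` is the size budget.

```python
import random

def pareto_subset_selection(f, n, k, iters=5000):
    """Bi-objective Pareto-optimization sketch for subset selection:
    maximize f(mask) while minimizing subset size, keeping an archive of
    non-dominated (mask, value, size) entries."""
    empty = tuple([0] * n)
    archive = [(empty, f(empty), 0)]
    for _ in range(iters):
        mask, _, _ = random.choice(archive)        # pick a parent uniformly
        child = list(mask)
        for i in range(n):                         # flip each bit with prob 1/n
            if random.random() < 1.0 / n:
                child[i] = 1 - child[i]
        child = tuple(child)
        cand = (child, f(child), sum(child))
        # discard the child if some archived subset is at least as good
        # in both objectives (weak domination)
        if any(a[1] >= cand[1] and a[2] <= cand[2] for a in archive):
            continue
        # otherwise remove subsets the child weakly dominates, then add it
        archive = [a for a in archive
                   if not (cand[1] >= a[1] and cand[2] <= a[2])] + [cand]
    feasible = [a for a in archive if a[2] <= k]   # respect the size budget
    return max(feasible, key=lambda a: a[1])[0]

# Toy usage: pick at most k=3 of n=10 items under an additive utility.
weights = [random.random() for _ in range(10)]
best_mask = pareto_subset_selection(
    lambda s: sum(w for w, b in zip(weights, s) if b), n=10, k=3)
```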
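
    For the environment-virtualization item, the following toy sketch shows only the generic idea assumed here: build an empirical "virtual environment" from logged transitions and then run ordinary trial-and-error learning (tabular Q-learning) entirely inside it. The class and function names are hypothetical, and real systems such as Virtual Taobao learn far richer simulators than this tabular empirical model.

```python
import numpy as np

class LoggedEnvModel:
    """Toy virtual environment fitted purely from logged transitions.

    logs : iterable of (state, action, reward, next_state) tuples with
    discrete states and actions."""
    def __init__(self, logs, n_states, n_actions, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.next = [[[] for _ in range(n_actions)] for _ in range(n_states)]
        self.rew = [[[] for _ in range(n_actions)] for _ in range(n_states)]
        for s, a, r, s2 in logs:
            self.next[s][a].append(s2)
            self.rew[s][a].append(r)

    def step(self, s, a):
        """Sample a transition the way the logs saw it (empirical model)."""
        if not self.next[s][a]:              # unseen pair: stay put, no reward
            return s, 0.0
        i = self.rng.integers(len(self.next[s][a]))
        return self.next[s][a][i], self.rew[s][a][i]

def q_learning_in_model(model, n_states, n_actions, episodes=2000,
                        horizon=50, alpha=0.1, gamma=0.95, eps=0.1):
    """Zero-cost trial-and-error: learn a greedy policy inside the model."""
    rng = np.random.default_rng(1)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(horizon):
            a = int(rng.integers(n_actions)) if rng.random() < eps \
                else int(Q[s].argmax())
            s2, r = model.step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q.argmax(axis=1)                  # greedy policy per state
```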


    I gave an Early Career Spotlight talk on Toward Sample Efficient Reinforcement Learning at IJCAI 2018. A Python package for derivative-free optimization has been released. Our NeurIPS'19 paper connects neural perception and logic reasoning through abductive learning.


    Yang Yu (can be pronounced as "young you"), Ph.D., Professor
    LAMDA Group, School of Artificial Intelligence, National Key Laboratory for Novel Software Technology, Nanjing University
    Office: 311, Computer Science Building, Xianlin Campus
    email:

    I received my Ph.D. degree in Computer Science from Nanjing University in 2011 (supervisor Prof. Zhi-Hua Zhou), and then joined the LAMDA Group (LAMDA Publications) in the Department of Computer Science and Technology of Nanjing University, as an Assistant Researcher from 2011 and as an Associate Professor from 2014. I joined the School of Artificial Intelligence of Nanjing University as a Professor in 2019. My research interest is in machine learning, a sub-field of artificial intelligence. Currently, I am working on reinforcement learning in various aspects, including optimization, representation, transfer, etc. A Virtual Taobao environment has been released for research on recommendation systems and reinforcement learning. We published the first paper on reinforcement learning for the full-length game of StarCraft II.






