Profile
Professor, Department of Computer Science, Faculty of Engineering
Nagoya Institute of Technology
Research Interests: Coding Theory, Signal processing for wireless communications, Deep learning
Biography
Tadashi Wadayama was born in Kyoto, Japan, on May 9, 1968. He received the B.E., M.E., and D.E. degrees from Kyoto Institute of Technology in 1991, 1993, and 1997, respectively. In 1995, he joined the Faculty of Computer Science and System Engineering, Okayama Prefectural University, as a research associate. From April 1999 to March 2000, he was a visiting researcher at the Institute of Experimental Mathematics, Essen University, Germany. In 2004, he moved to Nagoya Institute of Technology as an associate professor, and since 2010 he has been a full professor there. His research interests include coding theory, information theory, and signal processing for wireless communications. He is a member of IEICE and IEEE.
Contact
email: wadayama at nitech.ac.jp
Recent Highlights: Deep Unfolding for Signal Processing
Deep unfolding is a technique for improving iterative algorithms with standard deep learning tools such as back propagation and stochastic gradient descent. Trainable parameters embedded in the unfolded signal-flow graph can be tuned from data. In most cases, this yields faster convergence without sacrificing the performance of the original algorithm.
Related documents (manuscripts and slides)
- Slides of Data-Driven Tuning of Proximal Gradient Descent Algorithms for Signal Recovery Problems, Asia-Europe Workshop on Information Theory 11, 2019.
- Fundamentals on Deep Learning for Wireless Communications (GitHub, IEICE MIKA2019): sample code for deep unfolding (PyTorch). Main text: MIKA2019.pdf (in Japanese)
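As a minimal illustration of the deep unfolding idea above (not the authors' TISTA code; all names and problem sizes here are illustrative), the following PyTorch sketch unfolds T iterations of gradient descent for a least-squares problem and learns one step size per iteration by back propagation:

```python
# Deep unfolding sketch: unfolded gradient descent for y = A x with one
# trainable step size gamma[t] per iteration, tuned by backprop and Adam.
import torch

torch.manual_seed(0)
n, m, T = 8, 16, 10            # signal dim, measurement dim, unfolded iterations
A = torch.randn(m, n)          # fixed (known) measurement matrix

class UnfoldedGD(torch.nn.Module):
    def __init__(self, T):
        super().__init__()
        # one trainable step size per unfolded iteration
        self.gamma = torch.nn.Parameter(0.01 * torch.ones(T))

    def forward(self, y):
        x = torch.zeros(y.shape[0], n)
        for t in range(self.gamma.shape[0]):
            grad = (x @ A.T - y) @ A        # gradient of 0.5 * ||A x - y||^2
            x = x - self.gamma[t] * grad
        return x

model = UnfoldedGD(T)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for step in range(500):
    x_true = torch.randn(100, n)           # random training signals
    y = x_true @ A.T
    loss = torch.mean((model(y) - x_true) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the learned step-size schedule typically recovers the signal in far fewer iterations than the untrained constant-step baseline, which is the "faster convergence without sacrificing performance" effect described above.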
Our contributions
- S. Takabe, TW, Y. C. Eldar, ``Complex Trainable ISTA for Linear and Nonlinear Inverse Problems,'' to appear in ICASSP 2020 (arXiv version), 2019.
- TW and S. Takabe, ``Chebyshev Inertial Landweber Algorithm for Linear Inverse Problems,'' on arXiv, 2020.
- TW and S. Takabe, ``Chebyshev Inertial Iteration for Accelerating Fixed-Point Iterations,'' on arXiv, 2020.
- S. Takabe and TW, ``Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent,'' on arXiv, 2020.
- D. Ito, S. Takabe, TW, ``Trainable ISTA for Sparse Signal Recovery,'' IEEE Trans. on Signal Processing, 2019.
- Implementation of TISTA in PyTorch (on Github)
- S. Takabe, M. Imanishi, TW, R. Hayakawa, K. Hayashi, ``Trainable Projected Gradient Detector for Massive Overloaded MIMO Channels: Data-Driven Tuning Approach,'' IEEE Access, 2019.
- Implementation of TPG detector for massive overloaded MIMO in PyTorch (on Github)
- Implementation of C-TISTA in PyTorch (on Github)
- TW and S. Takabe, ``Deep Learning-Aided Trainable Projected Gradient Decoding for LDPC Codes,'' on arXiv, 2019.