What I do / focus on / read : generally everything related to NMF

  • Matrix/Tensor Factorizations

    • low-complexity (low-rank) models : make the problem simple

    • complexity estimation : how to find the factorization rank?

    • regularisation and constraints : make the problem less underdetermined/ill-posed

  • Iterative non-linear programming methods in continuous optimisation : they are cool !

    • First-order methods and accelerated first-order methods, with convergence rates

  • Global optimisation of non-convex problems

    • When is a non-convex factorisation problem not so scary ?

    • Convergence to a global solution of a non-convex optimisation problem

  • Robustness analysis of algorithms : like the work my boss did

    • How much noise can be added to a noiseless structure such that the problem is still solvable?

    • Finding the phase transition boundary of the factorisation model

  • Other things that are interesting

    • Analytic continuation : for parameter tuning in regularized problems

    • Random matrix theory : so you're still using the elbow of the scree plot of PCA?

    • Lifting techniques : problem hard in its original form? Lift it to a bigger space where it is easier (but more expensive) to solve ! (a classic example is sketched after this list)

    • Discrete optimisation, extended formulation, polytope geometry

    • Computational complexity theory

  • What I don't read : “deep” things, randomised-column-subset-selection-things, randomised-algorithm-without-variance-or-convergence-proof
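The classic lifting illustration (textbook material, not tied to any paper here) : a quadratic problem over \(x \in \{-1,1\}^n\) is linear in the lifted variable \(X = xx^\top\), and dropping the rank constraint leaves a convex SDP in a bigger space :

\[
\min_{x \in \{-1,1\}^n} x^\top Q x \;=\; \min_{\substack{X \succeq 0,\; X_{ii}=1 \\ \operatorname{rank}(X)=1}} \langle Q, X \rangle \;\ge\; \min_{\substack{X \succeq 0 \\ X_{ii}=1}} \langle Q, X \rangle .
\]

The relaxation has \(n^2\) variables instead of \(n\) : easier (convex) but more expensive, exactly the trade-off above.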

Not yet published work

Valentin Leplat, Andersen M.S. Ang, Nicolas Gillis, Minimum-Volume Rank-deficient Non-negative Matrix Factorizations
What : volume-regularised NMF also works in the rank-deficient case
[conference preprint]
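A one-line intuition (my paraphrase; see the preprint for the actual formulation) : if \(\operatorname{rank}(W) < r\) then \(\det(W^\top W) = 0\) and \(\operatorname{logdet}(W^\top W) = -\infty\), so the plain logdet volume penalty breaks down, whereas a shifted regulariser such as

\[
\operatorname{logdet}\big(W^\top W + \delta I\big), \qquad \delta > 0,
\]

is bounded below and still measures the volume spanned by the columns of \(W\), even for rank-deficient \(W\).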

Journal Papers

J2. Andersen Man Shun Ang, Nicolas Gillis, Accelerating Nonnegative Matrix Factorization Algorithms using Extrapolation, to appear in Neural Computation
What : solving NMF using extrapolation – update \(W^* = W_n + \beta(W_n - W)\), where \(W_n\) and \(W\) are the previous two iterates and \(\beta \in [0,1]\) is the extrapolation parameter (similarly for \(H\)); a toy sketch is given after the talk list below.
😁 : simple, deterministic approach, in “line search” style, very fast – faster than the block proximal linearised method of Xu & Yin (2013)
☹ : no theoretical convergence guarantee yet – hard to prove, work in progress
[arXiv], [MATLAB], [Slide], [Old slide 1], [Old slide 2, for IMSP2018]
Presented in

  • OR2018, Brussels, Belgium, 2018.09.14

  • ISMP 2018, Bordeaux, France, 2018.07.05
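
A minimal sketch of the idea in Python (my toy illustration only, not the paper's exact algorithm : the real method manages \(\beta\) adaptively and keeps both the extrapolated and non-extrapolated sequences) :

```python
import numpy as np

def nmf_extrapolated(X, r, beta=0.5, iters=200, eps=1e-12, seed=0):
    """Toy extrapolated NMF (illustration only, not the paper's algorithm)."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, r)); H = rng.random((r, n))  # extrapolated points
    Wold, Hold = W.copy(), H.copy()                 # previous iterates
    for _ in range(iters):
        # One multiplicative update (Lee & Seung), taken at the extrapolated points
        Wn = W * (X @ H.T) / (W @ H @ H.T + eps)
        Hn = H * (Wn.T @ X) / (Wn.T @ Wn @ H + eps)
        # Extrapolate from the two most recent iterates, project back to >= 0
        W = np.maximum(Wn + beta * (Wn - Wold), 0)
        H = np.maximum(Hn + beta * (Hn - Hold), 0)
        Wold, Hold = Wn, Hn
    return Wn, Hn
```

With \(\beta = 0\) this reduces to plain multiplicative updates; the point of the paper is that the extrapolated sequence typically reaches the same residual in far fewer iterations.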


Conference Papers

C3. Andersen Man Shun Ang, Nicolas Gillis, Volume regularized Non-negative Matrix Factorisations, IEEE WHISPERS 2018, Amsterdam, Netherlands, 2018.09.25
What : an iteratively reweighted least squares (IRLS) formulation for minimising log-determinant-regularised NMF
Finding : in fact there is no need for a complicated determinant regularisation nor a Taylor-series approximation of logdet; a simple column \(l_2\) regularisation is enough (a sketch of the underlying majorisation is given below)
😁 : the proposed method is fast for this special problem
☹ : no theoretical convergence guarantee yet – hard to prove, work in progress
[Short slide], [Full slide (last updated 2018-May-18)], [conference poster], [conference preprint], [Full paper (later, still can be improved)]
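
The standard majorisation behind such IRLS schemes (my sketch of textbook material; see the paper for the exact algorithm) : \(\operatorname{logdet}\) is concave over positive definite matrices, so linearising at the current iterate \(W_k\) gives

\[
\operatorname{logdet}\big(W^\top W + \delta I\big) \;\le\; \operatorname{logdet}\big(A_k^{-1}\big) + \operatorname{tr}\!\big(A_k (W^\top W + \delta I)\big) - r, \qquad A_k = \big(W_k^\top W_k + \delta I\big)^{-1},
\]

so each subproblem only adds the weighted quadratic \(\operatorname{tr}(W A_k W^\top)\) to the least-squares objective. When \(A_k\) is (close to) diagonal, \(\operatorname{tr}(W A_k W^\top) = \sum_i (A_k)_{ii}\, \|w_i\|_2^2\), which is one way to see why a simple column \(l_2\) regularisation can be enough.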

  • Presented in

    • Chinese University of Hong Kong, Hong Kong, 2017.12.27

    • University of Hong Kong, Hong Kong, 2017.11.30

    • XMaths Workshop, University of Bari Aldo Moro, Bari, Italy, 2017.12.20

    • ORBEL32, University of Liège, Liège, Belgium, 2018.02.01

    • SIAM ALA18, Hong Kong Baptist University, Hong Kong, 2018.05.04

    • inforTech'Day 2018, Mons, Belgium, 2018.05.16.

    • IEEE WHISPERS 2018, Amsterdam, Netherlands, 2018.09.25

[Figure : given the black dots, find the red dots.]

Other stuff : presentations / posters / old work

  • Work before 2017.02 (J1 and C1-2)
    See “Old things”.
    In short : applied research focused on biomedical signal processing systems; no theory/proofs.