Innovation in numerical analysis

Innovation does not stop in the field of numerical analysis, which lies at the heart of industrial applications of numerical simulation and modeling.

In this regard, the recent paper “Multifrontal Factorization of Sparse SPD Matrices on GPUs” (ACM) by T. George, V. Saxena, A. Gupta, A. Singh and A. R. Choudhury, in IPDPS ’11: Proceedings of the 2011 IEEE International Parallel & Distributed Processing Symposium, is very interesting. The authors report a seven-fold speed-up for their coupled single-core CPU + GPU hybrid implementation with respect to a state-of-the-art single-core CPU implementation. Speed-ups were even higher (10 to 25 with respect to a single-core CPU) with a 2-core CPU + 2 GPUs. The proposed methods apply only to symmetric positive definite (SPD) sparse matrices, such as those arising from finite-element models and the like; applying these techniques to the large-scale systems of non-linear equations and differential-algebraic equations that occur in steady-state and dynamic process simulation would require adapting them to non-symmetric multifrontal solvers.
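
To make the kind of linear algebra involved a bit more concrete, here is a minimal plain-C sketch of a dense Cholesky factorization (A = L·Lᵀ) of a tiny SPD matrix with the banded structure typical of 1-D finite-difference/finite-element discretizations. It is only an illustrative toy with made-up data, not the multifrontal GPU algorithm of the paper; real multifrontal solvers apply dense kernels like this one to the frontal matrices of a much larger sparse problem.

```c
/* Toy dense Cholesky factorization sketch (A = L * L^T) for a small SPD
 * matrix. Illustrative only: NOT the multifrontal GPU method of the paper.
 * The matrix and its size N are made up for the example. */
#include <math.h>
#include <stdio.h>

#define N 4

/* Compute the lower-triangular Cholesky factor L in the lower triangle
 * of a, in place. Returns -1 if a is not positive definite. */
static int cholesky(double a[N][N])
{
    for (int j = 0; j < N; ++j) {
        double d = a[j][j];
        for (int k = 0; k < j; ++k)
            d -= a[j][k] * a[j][k];
        if (d <= 0.0)
            return -1;
        a[j][j] = sqrt(d);
        for (int i = j + 1; i < N; ++i) {
            double s = a[i][j];
            for (int k = 0; k < j; ++k)
                s -= a[i][k] * a[j][k];
            a[i][j] = s / a[j][j];
        }
    }
    return 0;
}

int main(void)
{
    /* Diagonally dominant tridiagonal matrix (discrete Laplacian + I),
     * hence symmetric positive definite. */
    double a[N][N] = {
        { 3, -1,  0,  0},
        {-1,  3, -1,  0},
        { 0, -1,  3, -1},
        { 0,  0, -1,  3},
    };

    if (cholesky(a) != 0) {
        fprintf(stderr, "matrix is not SPD\n");
        return 1;
    }
    for (int i = 0; i < N; ++i) {        /* print the factor L */
        for (int j = 0; j <= i; ++j)
            printf("%8.4f ", a[i][j]);
        printf("\n");
    }
    return 0;
}
```

(Compile with e.g. gcc -O2 cholesky.c -lm.)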

Unfortunately, successfully pursuing innovation in these areas requires a challenging mix of competences in numerical analysis, algorithmic complexity theory, computer science and processor architecture. The innovations are likely to be applied first for the benefit of users served by vertical suppliers, which integrate hardware + software + numerics + domain-specific competence. Other businesses such as the process industries, which are served by a more complex network of vendors (process automation suppliers, simulator vendors, research centers specialized in numerics …), will see these innovations only after a longer delay.

Take the “parallel-computing-for-the-masses” revolution we are living through right now: gaming consoles, netbooks, tablets and even smartphones have multi-core CPUs, and consumer software is quickly adapting to this switch: games, office software, compression utilities, multimedia applications…

But if we search for the keyword “multicore” on the websites of four major vendors in the arena of computing applied to the process industries, we won’t find any match, not even vague roadmaps, vaporware announcements or bold marketing presentations:

BTW, right now I can’t remember any academic contribution in this area, except one which modesty prevents me from citing here.

Even though the first non-embedded dual-core processor was released in 2001 and the first OpenMP Fortran API was released in 1997, this is the progress of the transition from single-core to multi-core CPUs in process simulation: 0% as of today.
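
For perspective, exploiting a multi-core CPU with OpenMP can be as cheap as annotating a loop of independent computations. The sketch below is a hypothetical stand-in (compute_unit() is a made-up placeholder workload, not an API from any simulator or from OpenMP), meant only to show how low the entry barrier has been since the late 1990s; compile with e.g. gcc -fopenmp.

```c
/* Minimal OpenMP sketch: distribute independent per-"unit" computations
 * across CPU cores. compute_unit() is a hypothetical placeholder workload,
 * not taken from any real simulator. */
#include <math.h>
#include <stdio.h>
#include <omp.h>

#define NUNITS 1000000

/* Placeholder for an independent per-unit model evaluation. */
static double compute_unit(int i)
{
    return sin((double)i) * cos((double)i);
}

int main(void)
{
    double total = 0.0;

    /* Iterations are independent, so OpenMP spreads them over the
     * available cores; the reduction clause sums the partial results. */
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < NUNITS; ++i)
        total += compute_unit(i);

    printf("total = %f, max threads = %d\n", total, omp_get_max_threads());
    return 0;
}
```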

NVIDIA CUDA, the de-facto industry-standard parallel computing architecture for general-purpose computing on graphics processing units, gained double-precision floating-point support with version 2.0 around 2008. Guess when exploiting multi-core CPUs and GPUs at the same time will become a must in control rooms and on process engineers’ desks?

Amazing: there is a lot of work to do!