Impressions from the sps ipc drives Italia 2016 fair

The state of confusion that currently prevails when the Internet comes to manufacturing was confirmed at the sps ipc drives Italia fair that took place this week in Parma, Italy.


The confusion starts with the terminology. If you view the encounter of the Internet and industry as dominated by the former, you will use the label IIoT (Industrial Internet of Things); this seems typical of American companies, especially those with an IT (information technology) background.

If you think that the encounter should be dominated by the industrial culture, you’ll use the Industrie 4.0 label, as most German companies and even the German government do. Digital manufacturing looks like a neutral term, but it is biased towards discrete manufacturing and not very popular in the process industry, which is already quite digital … albeit not connected! There are also the CPS (cyber-physical systems) and cloud labels, or you can sprinkle some smart- prefixes here and there.

And finally, as a consequence of these technological transitions, a reconfiguration should ensue, driving everybody happily towards servitization, i.e. renting out their machines under a pay-per-use, machines-as-a-service business model.

As anybody who has been enthusiastic about SOA (Service Oriented Architecture) or the network computer (or any of the dozens of buzzwords which have plagued the industry in the last decades) knows well, not everything that comes out of the marketing gurus’ heads turns into reality. Or it might become real sometime, but who knows when?

For this Internet + manufacturing thing, all stakeholders have many reasons to be quite frightened of the consequences, which you can extrapolate from what has happened since we as consumers embraced the smart-phone revolution:

  1. I am actually dumber, as the phone tells me where to go, what to do, how much to exercise, etc.
  2. all my data are sucked out and sold multiple times by third parties
  3. rather than buying phones, I subscribe long-term service-access contracts bundled with some hardware
  4. the major European smart-phone producer Nokia has vanished because hardware is now a commodity
  5. the (American) platform owners Apple and Google win everything.

In the industry, secretive end users are scared of losing control of their data and know-how. Those who handle dangerous substances and processes fear the risk of hackers wreaking havoc. OEMs may sense the danger of being driven to compete on totally flat, global and frictionless digital marketplaces, where their service is totally replaceable by their competitors’, and the only winner is the single biggest player or the owner of the platform itself. And while small end users may benefit from the cloud and machines-as-a-service, because that lowers the cash-flow barriers for them, by buying smart machines they may actually become dumber, i.e. lose track of how much value those machines add to their business.

Anyway, whatever buzzword they choose, it is a fact that the marketing departments of the big automation and industrial-IT providers are pushing hard on these concepts, and the largest among their customers may soon decide to sail into these troubled waters: a large corporation may be confident that its sheer size will allow it to weather the storm.
But enthusiasm is markedly limited among European SMEs, which stick to the generally accepted wisdom that what is good for the big fish is not good for the small fish; and Italian SMEs are cooler still, being conservative and followers by attitude.

There are exceptions though, and in certain niche applications the impression is that SMEs may actually be much quicker than anyone else in making the jump; if they overcome their fears, the flexibility of the SME wins.
Given the astonishingly quick rate of adoption among consumers, it would seem natural that end users close to the consumer sector would have lower barriers against the cloud. Those may be for example OEMs who supply artisans, small food & beverage producers etc. – although I cannot name examples or quote figures on market penetration. What I do have are signals that some SMEs are already working with other SMEs on architectures and business models that you could label Internet + manufacturing, but they do so below the radar, and you won’t find their success stories in even the most exhaustive analyst reports.

In conclusion, if you are an SME and have a business case in mind, please drop us a line at and we’ll find out together how to turn your something into a smart-something, along a down-to-earth evolution path.

Posted in Uncategorized

Debugging LIBPF applications with gdb

The GNU debugger (gdb) is the standard command-line debugger on many Unix-like systems for troubleshooting C++ programs.

To prepare for debugging your application, compile it with debugging symbols enabled; for example assuming you want to debug Qpepper and use bjam to build:

cd ~/LIBPF/pepper
bjam debug Qpepper

or if you use qmake/make to build:

cd ~/LIBPF/pepper
make debug

A typical debugging session starts by launching gdb with the relative path to the executable as a parameter:

cd ~/LIBPF/bin
gdb ./pepper/gcc-4.9.2/debug/Qpepper

Next we typically want to set a breakpoint at the Error::Error function, which is where control will pass if an exception is thrown; to do that, use the b (breakpoint) command:

b Error::Error

Then launch your application with the required command-line parameters, using the r (run) command:

r new jjj

When the exception is thrown, the debugger will stop at the breakpoint:

Breakpoint 1, Error::Error (this=0xed2080, 
    cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/
56      Error::Error(const char *cf) : msg_("Error was thrown by function: ") {

From here you can:

  1. examine the call stack with the where command, which will return something like:
    #0  Error::Error (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/
    #1  0x00000000006097b2 in ErrorObjectFactory::ErrorObjectFactory (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)", ty=0xed09e8 "type jjj not found")
        at ../utility/src/
    #2  0x00000000007d30c1 in NodeFactory::create (this=0x7fffffffd7ef, type="jjj", defaults=..., id=0, 
        persistency=0x0, parent=0x0, root=0x0) at src/
    #3  0x00000000004263ec in createCase_ (type="jjj", defaults=..., error=@0x7fffffffdffc: 32767, svgs=true)
        at src/
    #4  0x0000000000427901 in Libpf::User::createCase (type="jjj", tag="jjj", description="", jcd="", 
        error=@0x7fffffffdffc: 32767) at src/
    #5  0x000000000040e64d in main (argc=3, argv=0x7fffffffe158) at ../user/src/

    notice the first column, which is the frame number, and the error message details found as the ty parameter of the function call in frame #1: type jjj not found

  2. jump to a frame in your own code rather than in the library, such as frame #5, using the f (frame) command:
    f 5
  3. list the source code around the current execution point with the l (list) command, which will return something like:
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    (gdb) l
    184         std::string options("");
    185         if (argc > 5) {
    186           options = argv[5];
    187         } // if options are passed
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    190         if (error < 0)
    191           quitNow(error);
    192         else
    193           quitNow(;

Issuing the same commands repeatedly at the gdb command prompt is common, so it’s handy to enable gdb command history:

cat >> ~/.gdbinit << EOF
set history save
set history filename ~/.gdb_history
EOF

For more debugging tips, check the excellent RMS gdb tutorial or the gdb manual.

Posted in C++, Howtos

Summary of the A&T fair, 2016 edition

Here is the Affidabilità e Tecnologie (A&T) fair, 2016 edition (held in Torino, April 20-21 2016), summarized by three audiovisual documents:

  1. Robot drives train:
  2. Robot plays golf:
  3. Robot brews coffee:
Posted in Philosophy

Bash on Windows 10

This week at Build 2016, the yearly developer-oriented conference, Microsoft announced that Windows 10 will be able to run Linux’s Bash shell, by executing the native Ubuntu binary as-is.

Don’t stop at the news headline though: this is not just about Bash, the Linux command shell and scripting language.
Potentially all Ubuntu user-space commands can work, including the apt package manager, with which you can tap into the 60,000+ software packages available in the Ubuntu repos.

More technical details can be found in two blog posts by Dustin Kirkland, an Ubuntu employee who worked with Microsoft on the magic behind it.

This is not virtualization or container technology; it is closer to API emulation: Linux system calls are translated in real time into Win32 API calls, with no need to recompile the binaries.


It’s an approach that resembles the POSIX subsystem that was part of Windows NT, whose latest (2004) denomination was “Subsystem for UNIX-based Applications” (SUA), deprecated with Windows 8 and Windows Server 2012 and completely removed in Windows 8.1 and Windows Server 2012 R2. I guess it is just a resurrection of this approach.

Even though this technology is aimed at developers, it has certain strategic implications if you think about it.

On the operating-system competition landscape, this levels the field with Apple OS X, which already had Bash and several package managers (but not apt, and the binaries had to be recompiled!). It is a testament to the outstanding technical excellence of the Debian Linux distribution, which lies at the foundation of Ubuntu. It lowers the attractiveness of Linux on the desktop, as developers can run all their preferred tools from within Windows. It lowers the barriers against migrating to Windows for services and solutions developed on Linux technologies and stacks (MAMP, LAMP …): not that this wasn’t possible before, but you had to depend on many more bits and pieces of uncertain trustworthiness. Now it looks like a simpler and well-supported path.

It obsoletes certain technologies designed for similar purposes, such as Cygwin and MinGW. It also obsoletes the plethora of ad-hoc installers and Windows-specific binaries for tools such as ActivePerl, git, PostgreSQL, nginx, Ruby, Node.js, et cetera.

Finally, on the Open Source / commercial software divide, it demonstrates once more (should there be any need for it) that business can benefit from Open Source: effective immediately, thousands of Open Source enthusiasts are working for the good of Microsoft, with no compensation.

At the moment many questions are still open: when will this technology land on Windows Server (currently it requires installing an app from the Windows Store, which is not always possible)? Will it be available on previous versions of Windows, like Windows 7 and 8.1? Will it be integrated with system-administration tasks such as installing and uninstalling a service?

Posted in Philosophy

Modeling a pipe with a large pressure drop

The Pipe model is a lumped-parameter model for pipes. The correlations it uses are applicable only for small pressure drops, i.e. less than 10% of the absolute inlet pressure; if the calculated pressure drop is larger than that, you’ll get a warning.

But what should you do if you have a long pipe, or a pipe with a large pressure drop?

Pipes by Nigel Howe

The solution is to use a MultiStage unit to assemble a number of Pipe units in series, thereby effectively discretizing the unit.

Assume this is your flowsheet definition (in the constructor):

addUnit("Pipe", defaults.relay("DN300", "connection piping"));
// ...

addStream("StreamLiquid", defaults.relay("S03", "inlet flow"), "RX", "out", "DN300", "in");
addStream("StreamLiquid", defaults.relay("S04", "outlet flow"), "DN300", "out", "BD", "in");

and this is the pipe unit data (in the setup method):

Q("DN300.de").set(300.0, "mm");
Q("DN300.s").set(5.0, "mm");
Q("DN300.L").set(2000.0, "m");
Q("DN300.h").set(0.0, "m");
Q("DN300.eps").set(0.0457, "mm");
Q("DN300.vhConcentrated").set(3.0 * 0.75);

(beware, this is C++ code! Check a tutorial if you have no clue how process modeling in C++ is possible!)

So to discretize the Pipe unit you’d merely change the addUnit call to create a MultiStage unit (no need to change the addStream statements):

addUnit("MultiStage", defaults.relay("DN300", "connection piping")
  ("nStreams", 1)
  ("typeT", "Pipe")
  ("typeU", "StreamLiquid")
  ("nStage", 30));

The meaning of the options passed to the addUnit command and ultimately to the constructor of the MultiStage unit is:

  • nStreams: useful for more complex multi-stream arrangements; in this case each Pipe unit has just one inlet and one outlet, so we set it to 1
  • typeT: the type of the unit operation model used for each “stage”
  • typeU: the type of the stream model which connects the “stages”
  • nStage: the number of stages, i.e. of discretization steps

The model setup becomes:

for (int j=0; j< I("DN300.nStage"); ++j) {
  std::string stage("DN300:S[" + std::to_string(j) + "]");
  at(stage).Q("de").set(300.0, "mm");
  at(stage).Q("s").set(5.0, "mm");
  at(stage).Q("L").set(2000.0 / static_cast<double>(I("DN300.nStage")), "m");
  at(stage).Q("h").set(0.0, "m");
  at(stage).Q("eps").set(0.0457, "mm");
  if (j == 0)
    at(stage).Q("vhConcentrated").set(3.0 * 0.75);
}

Here we iterate over all the “stages” and set de (external diameter), s (thickness), h (elevation) and eps (roughness) to the same values as before on every discretization “stage”; L (length) is divided by the number of “stages”; finally vhConcentrated (the velocity heads associated with the concentrated pressure drops) is applied only once, in the first “stage”.

Done!

Posted in C++, Chemeng

Where is the SQL database plugin for ODBC (qsqlodbc.dll) in Qt 5.6 ?

It looks like the SQL database plugin for ODBC (qsqlodbc.dll) is missing from the standard Qt 5.6 installer for Windows you download from

You’d expect to find it in C:\Qt\Qt5.6.0\5.6\msvc2015_64\plugins\sqldrivers\ where the sqlite, mysql and postgresql ones are found, but the ODBC plugin is not there.

The issue is known (see QTBUG-49420 and QTBUG-51390), but these bug reports are closed and it seems there will be no fix in the short term.

So here is how you build qsqlodbc.dll from source for Windows 64-bit with Visual Studio 2015, based on the instructions “How to Build the ODBC Plugin on Windows”.

First get the source package from and unzip it in any location, for example C:\qt-everywhere-opensource-src-5.6.0.

Now open a “VS2015 x64 native tools command prompt”, CD to the location of the qt-everywhere-opensource-src-5.6.0 directory then:

cd qtbase\src\plugins\sqldrivers\odbc
qmake
nmake

At the end you’ll find qsqlodbc.dll in C:\qt-everywhere-opensource-src-5.6.0\qtbase\plugins\sqldrivers, alongside qsqlodbcd.dll and qsqlodbcd.pdb.

Just copy these three files to the location where the Qt 5.6 installer for Windows should have put them
(C:\Qt\Qt5.6.0\5.6\msvc2015_64\plugins\sqldrivers\) and you’ll be done!

Posted in C++, Howtos, Rants

“api-ms-win-crt-runtime-l1-1-0.dll is missing” error with LIBPFonOPC on Windows Server 2012 R2 64 bit

You may receive the error “The program can’t start because api-ms-win-crt-runtime-l1-1-0.dll is missing” when launching the LIBPFonOPC configurator 1.0.2264 on a fresh install of Windows Server 2012 R2 64-bit.

Related symptom: this error silently appears in the Setup Log of the Event Viewer during the installation of LIBPFonOPC:
Windows update could not be installed because of error 2149842967 "" (Command line: ""C:\Windows\SysNative\wusa.exe" "C:\ProgramData\Package Cache\3ACBF3890FC9C8A6F3D2155ECF106028E5F55164\packages\Patch\x64\Windows8.1-KB2999226-x64.msu" /quiet /norestart")


Here is a hint to the solution: “Excel can’t start because api-ms-win-crt-runtime-l1-1-0.dll is missing”.

It turns out that the Visual C++ Redistributable for Visual Studio 2015 silently fails to install on a fresh install of Windows Server 2012 R2 64-bit. The fix: update the OS, then reinstall the Visual C++ Redistributable for Visual Studio 2015 Update 1 from here:

Posted in Howtos, Uncategorized

HOWTO make verbosity level accessible for users

LIBPF’s 4 diagnostic levels allow the model developer to fine-tune the verbosity at the global, compilation, unit, function and class-instance level. Of these, two (verbosityGlobal and verbosityInstance) are also available at run-time, but normally not to the model user.

If you want to make them accessible to the user, there are several options. Here I’ll show you how to do it using a couple of real-valued Quantities that can be directly manipulated in the user interface or via the LIBPF™ Model User API.

NOTE: we use Quantities rather than Integers because currently there is no way for the user to change integers at runtime; the user will have to enter an integer value such as 1, 2 or 100 for really high verbosity.

So here we go:

  1. declare two custom variables in your model class:
    Quantity global; ///< global verbosity
    Quantity instance; ///< instance verbosity
  2. initialize them in the model class constructor initializer list:
    DEFINE(global, "global verbosity", 0.0, ""),
    DEFINE(instance, "instance verbosity", 0.0, ""),
  3. register them in the model class constructor body:
  4. make them user-modifiable in the setup method of the model class:
  5. use them at model calculation time to increase/decrease the verbosityGlobal and verbosityInstance variables, by implementing the FlowSheet::pre and FlowSheet::post overrides of your model class:
    void MyModel::pre(SolutionMode solutionMode, int level) {
      verbosityGlobal += global.toDouble();
      verbosityInstance += instance.toDouble();
    }
    void MyModel::post(SolutionMode solutionMode, int level) {
      verbosityGlobal -= global.toDouble();
      verbosityInstance -= instance.toDouble();
    }
Posted in C++, Chemeng, Howtos

Qt Data Visualization preview

In the Qt Roadmap for 2016 they say “In addition, Qt 5.7 includes a lot of modules that have been previously available only with the commercially licensed Qt: … Qt Data Visualization – Versatile set of chart types for 3D visualization of data“.

This is interesting news: the Qt Data Visualization module provides a way to visualize data in 3D. What if we want to try it out now? For example on Debian 8, which ships Qt 5.3.2?


1. get and build QDV:

cd /tmp
git clone git://
cd qtdatavis3d

(the build of the library succeeds, while some examples fail; we’ll skip over that).

2. build the Surface Example:

cd examples/datavisualization/surface

3. To run it, we need to make it find the right dynamic libraries:

find ../../.. | grep

this is easy with the dynamic loader (on 32-bit systems, use the one in /lib/ rather than /lib64/):

/lib64/ --library-path ../../../lib/. ./surface

Mandatory screen-shot:

Note: to rotate, press the RMB (right mouse button) and drag.

Posted in C++, Howtos

RSS feed for EASME news

If you are looking for the RSS feed of the EASME news (the European Commission’s Executive Agency for Small and Medium-sized Enterprises), here it is:


For details on how this works, see the related post RSS feed for AIChE ChEnected, where we scraped and republished the RSS feed of AIChE ChEnected (the online community of young professional chemical engineers hosted by the American Institute of Chemical Engineers).

Disclaimer: we do not alter in any way the contents of EASME news; we do not claim any rights on their content; we have no responsibility for their content; the service may break in the future; etc.

Posted in Chemeng