How to make the digital transformation of manufacturing happen ?

[Originally appeared as guest post on the Industrial IoT/Industrie 4.0 Viewpoints blog]

The Digital Transformation is here: devices, technologies and suppliers are ready to bring manufacturing enterprises to a new level, with increased productivity and more added value.

But if you go for a walk on the production floor now, you’ll probably not see smart sensors, edge-analytics-packed machinery, forklift drivers with smart glasses or foremen yelling orders to their assistive bots.

The macro-level reasons why it is not happening, or is happening only slowly, are covered in many posts on the Industrial IoT/Industrie 4.0 Viewpoints blog: there are cultural issues, a standards mess, and more.

From a strategic point of view it’s really a war between the incumbent-technology ecosystem and the new ecosystem, rather than between the technologies themselves (see Ron Adner and Rahul Kapoor “Right Tech, Wrong Time” Harvard Business Review November 2016).

Besides those reasons, there are also micro-level reasons that come up during the business decision process when a specific digital transformation project is proposed within a manufacturing company.

In the end it’s just like any other project: you spend a given amount of money upfront and hope to get a return on investment (ROI).

The trouble is that an innovative Industrial IoT project will be complex and high-risk, and there are several ways it could fail:

  • it proves impossible to build (technology or organizational issues);
  • it does not deliver the promised ROI (lower returns or higher costs than expected);
  • it does not get traction (the target users will not use it / buy it);
  • it turns out to be unmaintainable in the long term (workforce turnover, product cycles).

At least a third of all Information Technology (IT) projects fail (see Lessons From a Decade of IT Failures from IEEE Spectrum), and the share is probably even higher for high-risk projects. These failure rates are unacceptably high for manufacturing, where the average project failure rate is probably less than 10%.

So if the proposed project has a good ROI on paper, how to keep these risks under control and make it a success ?

One approach we know from IT is outsourcing most of the trouble to a reliable supplier and getting the solution you need through a software-as-a-service (SaaS) arrangement. Off-the-shelf SaaS is currently booming in IT: see what Adobe, Autodesk, Microsoft, Salesforce and Trello are doing.

With SaaS, there is no upfront cost, and the pay-per-use fee scales linearly with the number of users; typical figures are 5-10 € per user per month for consumer-oriented services and 10-200 € per user per month for business-oriented services.
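
To put some purely hypothetical numbers on it: a 50-user deployment of a business-oriented service billed at 40 € per user per month costs 50 × 40 = 2000 €/month, i.e. 24000 €/year, and nothing before the first user is activated.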

So SaaS is a perfect fit for a small enterprise without an IT infrastructure, or for a larger organization that prefers to keep the internal infrastructure slim. It is also suitable for an innovative project, where user acceptance progresses slowly and the numbers may be initially low.

But all of the SaaS vendors above offer off-the-shelf, standard tools – nobody is offering your-own-IIoT-as-a-service yet ! For that you need tailor-made SaaS.

Giovanni Battista Moroni “Il Tagliapanni”, circa 1570

With tailor-made SaaS, a solution provider will build the IIoT solution based on the requirements of the manufacturing company, with the agreement that:

  1. the provider will not apply their full margin on the upfront costs;
  2. the provider will keep the intellectual property of the solution;
  3. the manufacturing company will perpetually pay for the use of the solution in a SaaS fashion.

With the pay-per-use model the gain for the solution provider comes later, if the project is successful, while the OPEX for the manufacturing company grows gradually as the solution is deployed and gains traction. This setup can cut the upfront CAPEX (the initial effort of adapting, integrating and customizing the chosen platforms and technologies) by a factor of 2, while creating a strong commitment from the provider to make the project a success.
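
As a purely illustrative example: an adaptation and integration effort that would normally be quoted 200 k€ upfront could be offered at 100 k€, with the provider recovering the difference (and its margin) through the pay-per-use fees as the deployment ramps up.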

Of course it’s a generic framework that can be adapted with any option and variant your legal and financial advisors can imagine. For example the pay-per-use business models we know from IT SaaS can be creatively adapted to the OT environment by stipulating per-installation/per-hour fees.
The bottom line is that if you find an agreement with a trusted supplier, with tailor-made SaaS you can share the road towards digital transformation with them, and make it happen for real.

Posted in Chemeng, Philosophy, Uncategorized | Leave a comment

HOWTO migrate tasks from kanboard to phabricator

Kanboard is a Kanban project management tool written in PHP: an excellent lightweight tool to quickly set up a project and organize tasks (think of it as a down-to-earth Trello).

Phabricator, on the other hand, is a complete suite of software development collaboration tools, which among other things includes a Kanban-like view of the tasks tagged with each project.

If you happen to have to migrate tasks from Kanboard to Phabricator, this guide is for you. But beware: in the spirit of Phabricator’s creators, we have no well-tested tool to offer, just a semi-manual procedure based on Phabricator’s Conduit API.

Log into the server where you have installed Kanboard, and navigate to the data directory inside that:

cd /var/www/kanboard/data

open the sqlite database:

sqlite3 db.sqlite

explore the database schema:

.schema projects
.schema tasks

find the project (board) you’re interested in:

select * from projects;

in our case it was project 3; list tasks and columns from that project:

select id,column_id,title from tasks where project_id=3 order by column_id;

recognize how the column_id field matches the board columns …
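
If you want that mapping spelled out, the columns table (assuming the standard Kanboard schema) lists the column titles of the board:

select id,title from columns where project_id=3;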

Now extract the list of tasks in Python dictionary format, one column at a time (you’ll have to do some manual escaping here ! or see the programmatic alternative sketched after this list)

  • backlog (that was column_id 9 for us):
    select '{"title": "'||title||'", "description": """'||description||'"""},' from tasks where project_id=3 and column_id = 9;
  • ready (column_id 10):
    select '{"title": "'||title||'", "description": """'||description||'"""},' from tasks where project_id=3 and column_id = 10;
  • work in progress (column_id 11):
    select '{"title": "'||title||'", "description": """'||description||'"""},' from tasks where project_id=3 and column_id = 11;

now copy-paste those into the Python script skeleton, replacing the ellipsis dots (if you used the programmatic extraction above, simply reuse the three lists it builds):

import requests

phabricator_instance = ''
api_token = 'api-aaaaaaaaaaaaaaaaaaaaaaaaaaaa'
projectPHID = "PHID-PROJ-aaaaaaaaaaaaaaaaaaaa"
tasks_backlog = [ ... ]
tasks_ready = [ ... ]
tasks_wip = [ ... ]

def create_task(s, title, description):
    data = {'api.token': api_token,
            'title': title,
            'description': description,
            'projectPHIDs[]': [projectPHID]}
    url = 'https://' + phabricator_instance + '/api/maniphest.createtask'
    req = requests.Request('POST', url, data=data)
    prepped = s.prepare_request(req)
    resp = s.send(prepped)
    results = resp.json()
    error_info = results['error_info']
    if error_info:
        print('internal: error while creating phabricator task: %s' % error_info)
        return {}
    uri = results['result']['uri']
    task_id = results['result']['id']
    return {"uri": uri, "task_id": task_id}

In this script you also have to modify the phabricator_instance (it should be the FQDN of the Phabricator instance where you want to file the tasks), the api_token (obtained as follows: as a Phabricator admin, create a bot account, then “Edit Settings”, go to “Conduit API Tokens” and click “Generate API token”) and the projectPHID (the Phabricator ID of the project you want to file your tasks against).

Now you’re all set to manually execute the imports one by one, starting from the rightmost column:

s = requests.Session()
for t in tasks_wip:
    title = t['title']
    description = t['description']
    create_task(s, title, description)

Each time, go to the project workboard in Phabricator and move the newly created tasks into the right column.
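
The remaining two columns follow the same pattern (a sketch, reusing the same session and the lists defined earlier):

for t in tasks_ready:
    create_task(s, t['title'], t['description'])

and, once those have been moved to their column on the workboard:

for t in tasks_backlog:
    create_task(s, t['title'], t['description'])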

This was the starting situation in Kanboard:

And this is the final situation in Phabricator:

Quite a lot of work still to do ! But at least we’ve got titles, descriptions and columns right !

Posted in Howtos | Leave a comment

Extend the system partition in a Windows virtual machine running within kvm with a file-based virtual disk

The post Extend the system partition in a Windows virtual machine running within kvm/lvm is applicable if the virtual disk is on an LVM volume.

If the virtual disk is file-based, these are the required steps:

  1. Find out which file the virtual machine disk is attached to; assuming disk images are in /var/lib/libvirt/images:
    sudo grep 'var.lib.libvirt.images' /etc/libvirt/qemu/name_of_virtual_machine.xml

    You might see something like:

    <source file='/var/lib/libvirt/images/w7_64_cdev15.dd'/>
  2. Shut down the virtual machine
  3. Resize the disk image file:
    sudo qemu-img resize /var/lib/libvirt/images/w7_64_cdev15.dd +10G
  4. Restart the VM
  5. Extend the disk using the guest operating system’s own tool; with Windows 7 and later, use the Extend Volume action in Disk Management (should require no reboot); a command-line alternative is sketched below.
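
If you prefer the command line over the Disk Management GUI, a diskpart session along these lines should also work (the volume number below is just an example; pick the number of your system partition from the listing):

diskpart
list volume
select volume 1
extend
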
Posted in Howtos, Uncategorized | Leave a comment

Effect of initial estimates for KLLs on the convergence of liquid-liquid equilibrium calculations

The initial estimates for the KLLs (equilibrium factors) have a big influence on the convergence behavior of liquid-liquid equilibrium calculations.

To highlight this effect, let’s try out something with LIBPF.

We choose the system H2O / 2-ethyl-1-hexanol.

Experimental measurements for the liquid-liquid equilibrium of this system can be found in the publication: Frank Gremer, Gerhard Herres, Dieter Gorenflo, “Vapour – liquid and liquid – liquid equilibria of mixtures of water and alcohols: measurements and correlations“, High Temperatures – High Pressures, 2002, volume 34, pages 355 – 362. We are also grateful to the authors for providing additional data.

Let’s pick from that source the data point at 120.384 – 129.785°C; here are the water molar fractions:

  • aqueous phase 0.9998 mol/mol
  • organic phase 0.334 mol/mol

Fitting this data-point we get these NRTL binary parameters:

  • alfa = 0.2
  • B12 = 3060.4764877456
  • B21 = -156.0296483827

These parameters can be used to reproduce the chosen experimental data point, see the HOWTO calculate a liquid-liquid separation tutorial.

The program yields these results:

Phase name      Phase fraction   Water x          ETEX x             Water ndot   ETEX ndot
                [mol/mol]        [mol/mol]        [mol/mol]          [kmol/s]     [kmol/s]
STREAM:Vphase   0.278635989043   0.999484893116   0.000515106883773  27.8492      0.0143527
STREAM:Lphase   0.721364010957   0.30706763201    0.69293236799      22.1508      49.9856
STREAM:Tphase   1                0.5              0.5                50           50
Water Kll: 3.25493405663
ETEX Kll:  0.000743372524604

Here Vphase is the first liquid phase, and Lphase is the second liquid phase.
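
As a consistency check (assuming the equilibrium factor Kll is defined as the ratio between the mole fraction in the first liquid phase and the one in the second liquid phase): for water 0.999484893116 / 0.30706763201 ≈ 3.2549 and for ETEX 0.000515106883773 / 0.69293236799 ≈ 7.43E-4, matching the Kll values printed above.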

We have used KLL[0] = 1E5 and KLL[1] = 1E-5 as initial estimates, giving the first component (the water) a greater affinity for the first liquid phase. Consequently the solver converges to a solution (let’s call it solution B) where the organic phase is the second liquid phase.

If we set each KLL to its reciprocal (1E-5 and 1E5 respectively) we get the same results, but with the phases inverted:

Phase name      Phase fraction   Water x          ETEX x             Water ndot   ETEX ndot
                [mol/mol]        [mol/mol]        [mol/mol]          [kmol/s]     [kmol/s]
STREAM:Vphase   0.721364010957   0.30706763201    0.69293236799      22.1508      49.9856
STREAM:Lphase   0.278635989043   0.999484893116   0.000515106883773  27.8492      0.0143527
STREAM:Tphase   1                0.5              0.5                50           50
Water Kll: 0.307225886179
ETEX Kll:  1345.22055484

Here the organic phase is the first phase (let’s call it solution A) and the final KLLs are the reciprocals of those in solution B !

Now let’s try a scan of the range of possible initial estimates for the KLLs, sweeping each from 1E-5 to 1E5:

std::cout << "kll0\tkll1\terrors\titerations\tVphase.x[0]\tLphase.x[0]" << std::endl;
double factor(10.0);
for (double kll0=1E-5; kll0<1E5; kll0*=factor) {
  for (double kll1=1E-5; kll1setPristineRecursive();
    // TODO automate valid initial point table
    std::cout << kll0 << "\t" << kll1 << "\t" <errors.size() << "\t" <NITER_NLEFlash.value() << "\t" <Q("Vphase.x[0]") << "\t" <Q("Lphase.x[0]") << std::endl;
  } // loop over kll1
} // loop over kll0

This prints a 10×10 grid which looks like this:

kll0    kll1    errors  iterations  Vphase.x[0]     Lphase.x[0]
1e-05   1e-05   0       1           0.5             0.5
1e-05   0.0001  2       500         0.454166789146  0.5
1e-05   10000   0       4           0.143450341144  0.999719891117
10000   1e-05   0       4           0.999719891047  0.143450278799

There are four possible situations:

  1. the liquid-liquid split is found, with the 1st liquid being the organic phase: solution A
  2. the same liquid-liquid split is found, with the 2nd liquid being the organic phase: solution B
  3. we have errors: the initial estimate was off and did not enable the nonlinear algebraic solver to find the solution
  4. it converges, but the degenerate solution (with both phases having the same composition) is found

We can generate several of these grids, with the water content in the feed spanning the entire range of compositions.

This animated GIF shows a slideshow of the results:

Here the blue color stands for solution A, green for solution B, red for errors, yellow for single-phase and gray for degenerate solution.

We note the following:

  1. when the initial estimates for the KLLs are close to one another, the degenerate solution is always found
  2. intermediate initial estimates typically cause convergence errors or a spurious single-phase solution
  3. if the alcohol fraction is greater than the water fraction:
    • solution A is found when the difference between the initial estimate of the equilibrium factor for the alcohol (KLL[1]) and the one for the water (the 45°-sloped boundary of the blue area) is above a certain threshold, and the initial estimate for the alcohol (KLL[1]) is itself above another threshold
    • this sloped boundary advances to the right (i.e. the maximum difference threshold decreases) as the water content in the feed increases
    • solution B is found when that difference is below a certain threshold and the initial estimate for the alcohol (KLL[1]) is below another threshold
    • this sloped boundary advances to the left (i.e. the maximum difference threshold decreases) as the water content in the feed increases
  4. if the water fraction is greater than the alcohol fraction:
    • the shapes of the blue / green areas flip
    • the 45°-sloped boundaries recede towards the bottom / top as the alcohol content in the feed decreases

Conclusions: the initial estimates of the equilibrium factors can be used to steer the solver towards making either the first or the second phase the organic phase; for example, to make the second phase the organic phase (solution B), a practical initialization strategy is to set the KLLs for the key organic components (those present in large amounts) to a small value such as 1E-5, and the KLL for water to a large value such as 1E5. The KLLs for the trace components can be left at their default value of 1.

Posted in C++, Chemeng, Uncategorized | Leave a comment

Running your own kernel from the LIBPF user interface on OS X

During model development, you rapidly produce new versions of the calculation kernel (the command-line executable version of your models).

The easiest thing to do to try them out is to run them from the LIBPF user interface.

Here is a step-by-step howto for running your own kernel from the LIBPF user interface on OS X.

I assume you have received a pre-packaged OS X disk image (dmg) file:


If you mount it (by double-clicking) you’ll see that it contains the UIPF application package (the acronym stands for User Interface for Process Flowsheeting; it’s really just the LIBPF user interface).

Rather than dragging and dropping that from the mounted volume to the Applications folder as described in the LIBPF™ OS X Installation manual, drag and drop it inside your development folder (I assume it’s LIBPF_SDK_osx_1.0.2346 on the Desktop):


We now have to issue some command-line magic so open a Terminal and cd to the location of your development folder (you may need to adapt this command if your development folder is somewhere else):
cd Desktop/LIBPF_SDK_osx_1.0.2346

Now check the kernel currently configured with the UIPF application:
ls -l

this should return something similar to:
-rwxr-xr-x 1 paolog staff 6423572 24 Mar 23:19

What we want to do is replace that with the kernel produced by Qt Creator, for example for debug mode:
ls -l bin/mcfcccs/debug/mcfcccs
-rwxr-xr-x 1 paolog staff 23013068 15 Giu 12:01 bin/mcfcccs/debug/mcfcccs

So now delete the currently configured kernel:

and replace it with a symbolic link to the kernel produced by Qt Creator:
ln -s ../../../bin/mcfcccs/debug/mcfcccs

If you check now what kernel is currently configured with the UIPF application:
ls -l

it should return:
lrwxr-xr-x 1 paolog staff 34 15 Giu 12:04 -> ../../../bin/mcfcccs/debug/mcfcccs

So now it should be all set: when you open the LIBPF user interface by double-clicking on the UIPF application package located in your development folder, it will run your latest debug-mode kernel !

Posted in C++, Chemeng, Howtos | Leave a comment

Impressions from the sps ipc drives Italia 2016 fair

The state of confusion that currently prevails when the Internet comes to manufacturing was confirmed at the sps ipc drives Italia fair that took place this week in Parma, Italy.


The confusion starts with the terminology. If you view the encounter of the Internet and industry as dominated by the former, you will use the label IIoT (industrial Internet of things); this seems typical of American companies, especially those with an IT (information technology) background.

If you think that the encounter should be dominated by the industrial culture you’ll use the Industrie 4.0 label, as most German companies and even the German government do. Digital manufacturing looks like a neutral term, but it is biased towards discrete manufacturing and not very popular in the process industry, which is already quite digital … albeit not connected ! There are also the CPS (cyber-physical systems) and cloud labels, or you can sprinkle some smart- prefixes here and there.

And finally, as a consequence of these technological transitions, a reconfiguration should ensue, driving everybody happily towards servitization, i.e. renting out their machines with a pay-per-use, machines-as-a-service business model.

As anybody who has been enthusiastic about SOA (Service Oriented Architecture) or the network computer (or any of the dozens of buzzwords which have plagued the industry in the last decades) knows well, not everything that comes out of the marketing gurus’ heads turns into reality. Or it might become real sometime, but who knows when ?

For this Internet + manufacturing thing there are many reasons for all stakeholders to be quite frightened of the consequences, which you can extrapolate from what happened since we as consumers have embraced the smart-phone revolution:

  1. I am actually dumber, as the phone tells me where to go, what to do, how much to exercise etc.
  2. all my data are sucked out and sold multiple times by third parties
  3. rather than buying phones, I subscribe long-term service-access contracts bundled with some hardware
  4. the major European smart-phone producer Nokia has vanished because hardware is now a commodity
  5. the (American) platform owners Apple and Google win everything.

In industry, secretive end users are scared of losing control of their data and know-how. Those who handle dangerous substances and processes fear the risk of hackers wreaking havoc. OEMs may sense the danger of being driven to compete on totally flat, global and frictionless digital marketplaces, where their service is totally replaceable by their competitors’, and the only winner is the single biggest player or the owner of the platform itself. And while small end users may benefit from the cloud and machines-as-a-service, because these lower the cash-flow barriers for them, by buying smart machines they may actually become dumber, i.e. lose control of how much value those machines add to their business.

Anyway, whatever buzzword they choose to use, it is a fact that the marketing departments of the big automation and industrial IT providers are pushing these hard, and the largest among their customers may soon decide to sail into these troubled waters: a large corporation may be confident that its sheer size will allow it to weather the storm.
But the enthusiasm is markedly limited among European SMEs, which stick to the generally accepted wisdom that what is good for the big fish is not good for the small fish; and Italian SMEs play it even cooler, as they are conservative and followers by attitude.

There are exceptions though, and in certain niche applications the impression is that SMEs may actually be much quicker than anyone else in making the jump; if they overcome their fears, the flexibility of the SME wins.
Given the astonishingly quick rate of adoption among consumers, it would seem natural that end users close to the consumer sector would have lower barriers against the cloud. These may be, for example, OEMs who supply artisans, small food & beverage producers etc. – although I am not able to name examples or lay down quantified figures on the market penetration. What I do have are signals that some SMEs are already working with other SMEs on architectures and business models that you could label Internet + manufacturing, but they do so below the radar, and you won’t find their success stories in even the most exhaustive analyst reports.

In conclusion, if you are an SME and have a business case in mind, please drop us a line and we’ll find out together how we can turn your something into a smart-something, along a down-to-earth evolution path.

Posted in Uncategorized | Leave a comment

Debugging LIBPF applications with gdb

The GNU debugger (gdb) is the standard command-line debugger on many Unix-like systems for troubleshooting C++ programs.

To prepare for debugging your application, compile it with debugging symbols enabled; for example assuming you want to debug Qpepper and use bjam to build:

cd ~/LIBPF/pepper
bjam debug Qpepper

or if you use qmake/make to build:

cd ~/LIBPF/pepper
make debug

A typical debugging session starts by launching gdb with the relative path to the executable as a parameter:

cd ~/LIBPF/bin
gdb ./pepper/gcc-4.9.2/debug/Qpepper

Next we typically want to set up a breakpoint at the Error::Error function, which is where the control flow will pass if an exception is thrown; to do that, use the b (breakpoint) command:

b Error::Error

Then you launch your application with the required command-line parameters with the r (run) command:

r new jjj

When the exception is thrown, the debugger will stop at the breakpoint:

Breakpoint 1, Error::Error (this=0xed2080, 
    cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/
56      Error::Error(const char *cf) : msg_("Error was thrown by function: ") {

From here you can:

  1. examine the call stack with the where command, which will return something like:
    #0  Error::Error (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/
    #1  0x00000000006097b2 in ErrorObjectFactory::ErrorObjectFactory (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)", ty=0xed09e8 "type jjj not found")
        at ../utility/src/
    #2  0x00000000007d30c1 in NodeFactory::create (this=0x7fffffffd7ef, type="jjj", defaults=..., id=0, 
        persistency=0x0, parent=0x0, root=0x0) at src/
    #3  0x00000000004263ec in createCase_ (type="jjj", defaults=..., error=@0x7fffffffdffc: 32767, svgs=true)
        at src/
    #4  0x0000000000427901 in Libpf::User::createCase (type="jjj", tag="jjj", description="", jcd="", 
        error=@0x7fffffffdffc: 32767) at src/
    #5  0x000000000040e64d in main (argc=3, argv=0x7fffffffe158) at ../user/src/

    notice the first column, which is the frame number, and the error message details found in the ty parameter of the function call in frame #1: type jjj not found

  2. jump to the frame that occurred in your own code and not in the library, such as frame #5, using the f (frame) command:
    f 5
  3. list the source code around the current execution point with the l (list) command, which will return something like:
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    (gdb) l
    184         std::string options("");
    185         if (argc > 5) {
    186           options = argv[5];
    187         } // if options are passed
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    190         if (error < 0)
    191           quitNow(error);
    192         else
    193           quitNow(;

Issuing the same commands repeatedly at the gdb command prompt is common, therefore it’s handy to enable gdb command history (type the two lines below after the cat command, then press Ctrl-D to end the input):

cat >> ~/.gdbinit
set history save
set history filename ~/.gdb_history

For more debugging tips, check the excellent RMS gdb tutorial or the gdb manual.

Posted in C++, Howtos | 2 Comments

Summary of the A&T fair, 2016 edition

Here is the Affidabilità e Tecnologie (A&T) fair, 2016 edition (held in Torino, April 20-21 2016) summarized by three audiovisual documents:

  1. Robot drives train:
  2. Robot plays golf:
  3. Robot brews coffee:
Posted in Philosophy | Leave a comment

Bash on Windows 10

This week at Build 2016, the yearly developer-oriented conference, Microsoft announced that Windows 10 will be able to run Linux’s Bash shell, by executing the native Ubuntu binaries as-is.

Don’t stop at the news headline though: this is not just about Bash, the Linux command shell and scripting language.
All Ubuntu user-space commands can potentially work, including the apt package manager, with which you can tap into the 60,000+ software packages available in the Ubuntu repos.

More technical details can be found in two blog posts by Dustin Kirkland, an Ubuntu developer who worked with Microsoft on the magic behind it.

This is no virtualization / container technology. It is more about API emulation: Linux system calls get translated in real time into Win32 API calls. No need to recompile the binaries.


It’s an approach that resembles the POSIX subsystem that was part of Windows NT, whose latest (2004) denomination was “Subsystem for UNIX-based Applications” (SUA), deprecated with Windows 8 and Windows Server 2012 and completely removed in Windows 8.1 and Windows Server 2012 R2. I guess it is just a resurrection of this approach.

Even if this technology is aimed at developers, if you think about it, it has certain strategic implications.

On the operating-system competition landscape, this levels the field with Apple OS X, which already had Bash and several package managers (but not apt ! and the binaries had to be recompiled !). It is a tribute to the outstanding technical excellence of the Debian Linux distribution, which lies at the foundation of Ubuntu. It lowers the attractiveness of Linux on the desktop, as developers can run all their preferred tools from within Windows. It lowers the barriers against migrating to Windows services and solutions developed on Linux technologies and stacks (MAMP, LAMP …): not that this wasn’t possible before, but you had to depend on many more bits and pieces of uncertain trustworthiness. Now it looks like a simpler and well-supported path.

It obsoletes certain technologies designed for similar purposes such as Cygwin and MinGW. It also obsoletes the plethora of ad-hoc installers and Windows-specific binaries for tools such as ActivePerl, git, PostgreSQL, nginx, Ruby, Node.js et cetera.

Finally, on the Open Source / commercial software divide, it demonstrates once more (should there be any need for it) that business can benefit from Open Source: effective immediately, thousands of Open Source enthusiasts are working for the good of Microsoft, with no compensation.

At the moment many questions are still open: when will this technology land on Windows Server (currently it requires installing an app from the Windows Store, which is not always possible) ? Will this be available on previous versions of Windows like Windows 7 and 8.1 ? Will this be integrated with system administration tasks such as installing / uninstalling a service ?

Posted in Philosophy | Leave a comment

Modeling a pipe with a large pressure drop

The Pipe model is a lumped-parameter model for pipes. The correlations it uses are applicable only for small pressure drops, i.e. less than 10% of the absolute inlet pressure. If the calculated pressure drop is larger than that, you’ll get a warning.

But what to do if you have a long pipe or a pipe with a large pressure drop ?

Pipes by Nigel Howe

The solution is to use a MultiStage unit to put together a number of Pipe units in series, thereby effectively discretizing the unit.

Assume this is your flowsheet definition (in the constructor):

addUnit("Pipe", defaults.relay("DN300", "connection piping"));
// ...

addStream("StreamLiquid", defaults.relay("S03", "inlet flow"), "RX", "out", "DN300", "in");
addStream("StreamLiquid", defaults.relay("S04", "outlet flow"), "DN300", "out", "BD", "in");

and this is the pipe unit data (in the setup method):

Q("").set(300.0, "mm");
Q("DN300.s").set(5.0, "mm");
Q("DN300.L").set(2000.0, "m");
Q("DN300.h").set(0.0, "m");
Q("DN300.eps").set(0.0457, "mm");
Q("DN300.vhConcentrated").set(3.0 * 0.75);

(beware this is C++ code ! check a tutorial if you have no clue how process modeling in C++ is possible !)

So to discretize the Pipe unit you’d merely change the addUnit command to create a MultiStage unit instead (no need to change the addStream statements):

addUnit("MultiStage", defaults.relay("DN300", "connection piping")
  ("nStreams", 1)
  ("typeT", "Pipe")
  ("typeU", "StreamLiquid")
  ("nStage", 30));

The meaning of the options passed to the addUnit command and ultimately to the constructor of the MultiStage unit is:

  • nStreams: this is useful for more complex multi-stream arrangements, in this case each Pipe unit has just one inlet and one outlet so we set it to 1
  • typeT: the type of the unit operation model used for each “stage”
  • typeU: the type of the stream model which connects the “stages”.
  • nStage: the number of stages i.e. of discretization steps

The model setup becomes:

for (int j=0; j< I("DN300.nStage"); ++j) {
  std::string stage("DN300:S[" + std::to_string(j) + "]");
  at(stage).Q("de").set(300.0, "mm");
  at(stage).Q("s").set(5.0, "mm");
  at(stage).Q("L").set(2000.0 / static_cast<double>(I("DN300.nStage")), "m");
  at(stage).Q("h").set(0.0, "m");
  at(stage).Q("eps").set(0.0457, "mm");
  if (j == 0)
    at(stage).Q("vhConcentrated").set(3.0 * 0.75);

Here we iterate over all the “stages” and set de (external diameter), s (thickness), h (elevation) and eps (roughness) to the same values as before on every discretization “stage”; the L (length) is divided by the number of “stages”; finally, the vhConcentrated (the velocity heads associated with the concentrated pressure drops) is placed only once, in the 1st “stage”.

Done !

Posted in C++, Chemeng | Leave a comment