Effect of initial estimates for KLLs on the convergence of liquid-liquid equilibrium calculations

The initial estimates for the KLLs (liquid-liquid equilibrium factors) strongly influence the convergence behavior of liquid-liquid equilibrium calculations.

To highlight this effect, let’s run a small experiment with LIBPF.

We choose the system H2O / 2-ethyl-1-hexanol.

Experimental measurements for the liquid-liquid equilibrium of this system can be found in: Frank Gremer, Gerhard Herres, Dieter Gorenflo, “Vapour–liquid and liquid–liquid equilibria of mixtures of water and alcohols: measurements and correlations”, High Temperatures – High Pressures, 2002, volume 34, pages 355–362. We are also grateful to the authors for providing additional data.

Let’s pick from that source the data point at 120.384 – 129.785 °C; here are the water molar fractions:

  • aqueous phase 0.9998 mol/mol
  • organic phase 0.334 mol/mol

Fitting this data point, we get these NRTL binary parameters (their role in the model is recalled right after the list):

  • alfa = 0.2
  • B12 = 3060.4764877456
  • B21 = -156.0296483827
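
As a reminder, for a binary system these parameters enter the NRTL activity-coefficient model in its usual form; note that the $\tau_{ij} = B_{ij}/T$ convention for the temperature dependence is an assumption about how the B parameters are used here:

$$
\ln\gamma_1 = x_2^2 \left[ \tau_{21} \left( \frac{G_{21}}{x_1 + x_2 G_{21}} \right)^2 + \frac{\tau_{12} G_{12}}{(x_2 + x_1 G_{12})^2} \right],
\qquad G_{ij} = \exp(-\alpha \tau_{ij})
$$

with the symmetric expression (indices 1 and 2 swapped) for $\ln\gamma_2$.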

These parameters can be used to reproduce the chosen experimental data point; see the HOWTO calculate a liquid-liquid separation tutorial.

The program yields these results:

------------------------------------------------------------------------------------
Phase           fraction        Water           ETEX               Water          ETEX
name            mol frac        mol frac        mol frac           ndot, kmol/s   ndot, kmol/s
STREAM:Vphase   0.278635989043  0.999484893116  0.000515106883773  27.8492        0.0143527
STREAM:Lphase   0.721364010957  0.30706763201   0.69293236799      22.1508        49.9856
STREAM:Tphase   1               0.5             0.5                50             50
------------------------------------------------------------------------------------
Water Kll: 3.25493405663 
ETEX Kll:  0.000743372524604

Here Vphase is the first liquid phase, and Lphase is the second liquid phase.
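
Consistently with the printed results, the KLL of each component is the ratio of its mole fractions across the two liquid phases:

$$
K_{LL,i} = \frac{x_i^{\mathrm{Vphase}}}{x_i^{\mathrm{Lphase}}}
$$

e.g. for water 0.999484893116 / 0.30706763201 ≈ 3.25493, matching the value printed above.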

We have used KLL[0] = 1E5 and KLL[1] = 1E-5 as initial estimates, giving the first component (the water) a stronger affinity for the first liquid phase. Consequently the solver converges to a solution (let’s call it solution B) where the organic phase is the second liquid phase.

If we swap the two initial estimates (KLL[0] = 1E-5 and KLL[1] = 1E5) we get the same split, but with the phases inverted:

------------------------------------------------------------------------------------
Phase           fraction        Water           ETEX               Water          ETEX
name            mol frac        mol frac        mol frac           ndot, kmol/s   ndot, kmol/s
STREAM:Vphase   0.721364010957  0.30706763201   0.69293236799      22.1508        49.9856
STREAM:Lphase   0.278635989043  0.999484893116  0.000515106883773  27.8492        0.0143527
STREAM:Tphase   1               0.5             0.5                50             50
------------------------------------------------------------------------------------
Water Kll: 0.307225886179 
ETEX Kll:  1345.22055484

Here the organic phase is the first phase (let’s call it solution A) and the final KLLs are the reciprocals of those in solution B!

Now let’s scan the range of possible initial estimates for the KLLs, sweeping each from 1E-5 to 1E5:

std::cout << "kll0\tkll1\terrors\titerations\tVphase.x[0]\tLphase.x[0]" << std::endl;
double factor(10.0);
for (double kll0=1E-5; kll0<1E5; kll0*=factor) {
  for (double kll1=1E-5; kll1setPristineRecursive();
    feed->resetErrors();
    feed->calculate();
    // TODO automate valid initial point table
    std::cout << kll0 << "\t" << kll1 << "\t" <errors.size() << "\t" <NITER_NLEFlash.value() << "\t" <Q("Vphase.x[0]") << "\t" <Q("Lphase.x[0]") << std::endl;
  } // loop over kll1
} // loop over kll0

This prints a 10×10 grid (each KLL sweeps the ten decades from 1E-5 to 1E4), which looks like this:

kll0    kll1    errors  iterations      Vphase.x[0]     Lphase.x[0]

1e-05   1e-05   0       1       0.5     0.5 
1e-05   0.0001  2       500     0.454166789146  0.5 
...
1e-05	10000	0	4	0.143450341144 	0.999719891117 
...
10000	1e-05	0	4	0.999719891047 	0.143450278799 
...

There are four possible situations:

  1. the liquid-liquid split is found, with the 1st liquid being the organic phase: solution A
  2. the same liquid-liquid split is found, with the 2nd liquid being the organic phase: solution B
  3. there are errors: the initial estimate was off and did not enable the nonlinear algebraic solver to find the solution
  4. it converges, but to the degenerate (trivial) solution, with both phases having the same composition

We can generate several of these grids, with the water content in the feed spanning the entire range of compositions.

This animated GIF shows a slideshow of the results, one frame per feed composition.

Here blue stands for solution A, green for solution B, red for errors, yellow for the spurious single-phase solution and gray for the degenerate solution.

We note the following:

  1. when the initial estimates for the KLLs are close to one another, the degenerate solution is always found
  2. intermediate initial estimates typically cause convergence errors or a spurious single-phase solution
  3. if the alcohol fraction is greater than the water fraction:
    • solution A is found when the initial estimate of the equilibrium factor for the alcohol (KLL[1]) exceeds the one for the water (the 45° sloped boundary of the blue area) by more than a certain threshold, and KLL[1] itself is above another threshold
    • the sloped boundary advances to the right (i.e. the difference threshold decreases) as the water content in the feed increases
    • solution B is found when the difference between KLL[1] and the estimate for the water is below a certain threshold, and KLL[1] itself is below another threshold
    • this boundary advances to the left (i.e. the difference threshold decreases) as the water content in the feed increases
  4. if the water fraction is greater than the alcohol fraction:
    • the shapes of the blue / green areas flip
    • the 45° sloped boundaries recede towards the bottom / top as the alcohol content in the feed decreases

Conclusions: the initial estimates of the equilibrium factors can be used to steer the solver towards making either the first or the second phase the organic phase. For example, to make the second phase the organic phase (solution B), a practical initialization strategy is to set the KLLs for the key organic components (those present in large amounts) to a small value such as 1E-5, and the KLL for water to a large value such as 1E5; the KLLs for the trace components can be left at their default value of 1. A minimal sketch follows.
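
A minimal sketch of that strategy in C++ (the KLL[i] variable paths and the Q accessor on the feed are assumptions, consistent with the scan loop above; adapt them to your flowsheet):

// steer the flash towards solution B: second liquid phase = organic phase
feed->Q("KLL[0]").set(1E5);   // water: strong affinity for the first (aqueous) phase
feed->Q("KLL[1]").set(1E-5);  // key organic component: pushed into the second phase
// trace components: leave their KLLs at the default value of 1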

Posted in C++, Chemeng, Uncategorized | Leave a comment

Running your own kernel from the LIBPF user interface on OS X

During model development, you rapidly produce new versions of the calculation kernel (the command-line executable version of your models).

The easiest thing to do to try them out is to run them from the LIBPF user interface.

Here is a step-by-step howto for running your own kernel from the LIBPF user interface on OS X.

I assume you have received a pre-packaged OS X disk image (dmg) file.


If you mount it (by double-clicking) you’ll see that it contains the UIPF application package (the acronym stands for User Interface for Process Flowsheeting; it’s really just the LIBPF user interface).

Rather than dragging and dropping that from the mounted volume to the Applications folder as described in the LIBPF™ OS X Installation manual, drag and drop it inside your development folder (I assume it’s LIBPF_SDK_osx_1.0.2346 on the Desktop).


We now have to issue some command-line magic, so open a Terminal and cd to the location of your development folder (you may need to adapt this command if your development folder is somewhere else):
cd Desktop/LIBPF_SDK_osx_1.0.2346

Now check the kernel currently configured with the UIPF application:
ls -l UIPF.app/Contents/Resources/kernel

this should return something similar to:
-rwxr-xr-x 1 paolog staff 6423572 24 Mar 23:19 UIPF.app/Contents/Resources/kernel

What we want to do is replace that with the kernel produced by Qt Creator, for example for debug mode:
ls -l bin/mcfcccs/debug/mcfcccs
-rwxr-xr-x 1 paolog staff 23013068 15 Jun 12:01 bin/mcfcccs/debug/mcfcccs

So now delete the currently configured kernel:
rm UIPF.app/Contents/Resources/kernel

and replace it with a symbolic link to the kernel produced by Qt Creator (the three ../ levels are needed because the link target is resolved relative to the link’s own location, UIPF.app/Contents/Resources):
ln -s ../../../bin/mcfcccs/debug/mcfcccs UIPF.app/Contents/Resources/kernel

If you check now what kernel is currently configured with the UIPF application:
ls -l UIPF.app/Contents/Resources/kernel

it should return:
lrwxr-xr-x 1 paolog staff 34 15 Jun 12:04 UIPF.app/Contents/Resources/kernel -> ../../../bin/mcfcccs/debug/mcfcccs

So now you should be all set: when you open the LIBPF user interface by double-clicking the UIPF application package located in your development folder, it will run your latest debug-mode kernel!

Posted in C++, Chemeng, Howtos | Leave a comment

Impressions from the sps ipc drives Italia 2016 fair

The state of confusion that currently prevails where the Internet meets manufacturing was confirmed at the sps ipc drives Italia fair, which took place this week in Parma, Italy.


The confusion starts with the terminology. If you view the encounter of the Internet and industry as dominated by the former, you will use the label IIoT (industrial Internet of things); this seems typical of American companies, especially those with an IT (information technology) background.

If you think that the encounter should be dominated by the industrial culture, you’ll use the Industrie 4.0 label, as most German companies and even the German government do. Digital manufacturing looks like a neutral term, but it is biased towards discrete manufacturing and not very popular in the process industry, which is already quite digital… albeit not connected! There are also the CPS (cyber-physical systems) and cloud labels, or you can sprinkle some smart- prefixes here and there.

And finally, as a consequence of these technological transitions, a reconfiguration should ensue, driving everybody happily towards servitization, i.e. renting their machines with a pay-per-use, machines-as-a-service business model.

As anybody who has been enthusiastic about SOA (Service Oriented Architecture) or the network computer (or any of the dozens of buzzwords that have plagued the industry in the last decades) knows well, not everything that comes out of the marketing gurus’ heads turns into reality. Or it might become real sometime, but who knows when?

For this Internet + manufacturing thing, all stakeholders have many reasons to be quite frightened of the consequences, which you can extrapolate from what has happened since we as consumers embraced the smart-phone revolution:

  1. I am actually dumber, as the phone tells me where to go, what to do, how much to exercise etc.
  2. all my data are sucked out and sold multiple times by third parties
  3. rather than buying phones, I subscribe long-term service-access contracts bundled with some hardware
  4. the major European smart-phone producer Nokia has vanished because hardware is now a commodity
  5. the (American) platform owners Apple and Google win everything.

In the industry, secretive end users are scared of losing control of their data and know-how. Those who handle dangerous substances and processes fear the risk of hackers wreaking havoc. OEMs may sense the danger of being driven to compete on totally flat, global and frictionless digital marketplaces, where their service is totally replaceable by their competitors’, and the only winner is the single biggest player or the owner of the platform itself. And while small end users may benefit from the cloud and machines-as-a-service, because it lowers their cash-flow barriers, by buying smart machines they may actually become dumber, i.e. lose track of how much value those machines add to their business.

Anyway, whatever buzzword they choose, it is a fact that the marketing departments of the big automation and industrial IT providers are pushing hard on those, and the largest among their customers may soon decide to sail into these troubled waters: a large corporation may be confident that its sheer size will allow it to weather the storm.
But enthusiasm is markedly limited in European SMEs, which stick to the generally accepted wisdom that what is good for the big fish is not good for the small fish; and Italian SMEs play it even cooler, being conservative and followers by attitude.

There are exceptions though, and in certain niche applications the impression is that SMEs may actually be much quicker than anyone else in making the jump; if they overcome their fears, the flexibility of the SME wins.
Given the astonishingly quick rate of adoption among consumers, it would seem natural that end users contiguous with the consumer sector would have lower barriers against the cloud. Those may be, for example, OEMs who supply artisans, small food & beverage producers etc. – although I am not able to name examples or lay down quantified figures on the market penetration. What I do have are signals that some SMEs are already working with other SMEs around architectures and business models that you could label Internet + manufacturing, but they do so below the radar, and you won’t find their success stories even in the most exhaustive analyst reports.

In conclusion, if you are an SME and have a business case in mind, please drop us a line at info@simevo.com and we’ll find out together how we can turn your something into a smart-something, along a down-to-earth evolution path.

Posted in Uncategorized | Leave a comment

Debugging LIBPF applications with gdb

GNU debugger (gdb) is the standard command-line debugger on many Unix-like systems for troubleshooting C++ programs.

To prepare for debugging your application, compile it with debugging symbols enabled; for example assuming you want to debug Qpepper and use bjam to build:

cd ~/LIBPF/pepper
bjam debug Qpepper

or if you use qmake/make to build:

cd ~/LIBPF/pepper
qmake
make debug

A typical debugging session starts by launching gdb with the relative path to the executable as a parameter:

cd ~/LIBPF/bin
gdb ./pepper/gcc-4.9.2/debug/Qpepper

Next we typically want to set up a breakpoint at the Error::Error function, which is where the control flow will pass if an exception is thrown; to do that, use the b (breakpoint) command:

b Error::Error

Then you launch your application with the required command-line parameters with the r (run) command:

r new jjj

When the exception is thrown, the debugger will stop at the breakpoint:

Breakpoint 1, Error::Error (this=0xed2080, 
    cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/Error.cc:56
56      Error::Error(const char *cf) : msg_("Error was thrown by function: ") {

From here you can:

  1. examine the call stack with the where command, which will return something like:
    #0  Error::Error (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/Error.cc:56
    #1  0x00000000006097b2 in ErrorObjectFactory::ErrorObjectFactory (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)", ty=0xed09e8 "type jjj not found")
        at ../utility/src/Error.cc:117
    #2  0x00000000007d30c1 in NodeFactory::create (this=0x7fffffffd7ef, type="jjj", defaults=..., id=0, 
        persistency=0x0, parent=0x0, root=0x0) at src/NodeFactory.cc:57
    #3  0x00000000004263ec in createCase_ (type="jjj", defaults=..., error=@0x7fffffffdffc: 32767, svgs=true)
        at src/Kernel.cc:228
    #4  0x0000000000427901 in Libpf::User::createCase (type="jjj", tag="jjj", description="", jcd="", 
        error=@0x7fffffffdffc: 32767) at src/Kernel.cc:317
    #5  0x000000000040e64d in main (argc=3, argv=0x7fffffffe158) at ../user/src/main.cc:189
    

    notice the first column, which is the frame number, and the error message details found as the ty parameter of the function call in frame #1: type jjj not found

  2. jump to the frame that occurred in your own code and not in the library, such as frame #5, using the f (frame) command:
    f 5
    
  3. list the source code around the current execution point with the l (list) command, which will return something like:
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    (gdb) l
    184         std::string options("");
    185         if (argc > 5) {
    186           options = argv[5];
    187         } // if options are passed
    188
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    190         if (error < 0)
    191           quitNow(error);
    192         else
    193           quitNow(caseHandle.id());
    (gdb) 
    

Issuing the same commands repeatedly at the gdb command prompt is common, so it’s handy to enable gdb command history (type the two set lines below, then press Ctrl-D, shown as ^d, to end the cat input):

cat >> ~/.gdbinit
set history save
set history filename ~/.gdb_history
^d

For more debugging tips, check the excellent RMS gdb tutorial or the gdb manual.

Posted in C++, Howtos | Leave a comment

Summary of the A&T fair, 2016 edition

Here is the Affidabilità e Tecnologie (A&T) fair, 2016 edition (held in Torino, April 20-21 2016), summarized by three audiovisual documents:

  1. Robot drives train:
  2. Robot plays golf:
  3. Robot brews coffee:
Posted in Philosophy | Leave a comment

Bash on Windows 10

This week at Build 2016, the yearly developer-oriented conference, Microsoft announced that Windows 10 will be able to run Linux’s Bash shell, by executing the native Ubuntu binary as-is.

Don’t stop at the news headline though: this is not just about Bash, the Linux command shell and scripting language.
All Ubuntu user-space commands can potentially work, including the apt package manager, with which you can tap into the 60000+ software packages available in the Ubuntu repos.
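
For example, once inside the Bash environment you should be able to install packages exactly as on Ubuntu (standard apt commands, shown here for illustration):

sudo apt-get update
sudo apt-get install build-essential git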

More technical details can be found in two blog posts by Dustin Kirkland, an Ubuntu employee who worked with Microsoft on the magic behind it.

This is no virtualization / container technology. It is more about API emulation: Linux system calls get translated in real time into Win32 API calls. No need to recompile the binaries.


It’s an approach that resembles the POSIX subsystem that was part of Windows NT, whose latest (2004) denomination was “Subsystem for UNIX-based Applications” (SUA), deprecated with Windows 8 and Windows Server 2012 and completely removed in Windows 8.1 and Windows Server 2012 R2. I guess it is just a resurrection of this approach.

Even if this technology is aimed at developers, if you think about it, it has certain strategic implications.

On the operating-system competition landscape, this levels the field with Apple OS X, which already had Bash and several package managers (but not apt! and the binaries had to be recompiled!). It is a testament to the outstanding technical excellence of the Debian Linux distribution, which lies at the foundation of Ubuntu. It lowers the attractiveness of Linux on the desktop, as developers can run all their preferred tools from within Windows. It lowers the barriers against migrating to Windows services and solutions developed on Linux technologies and stacks (MAMP, LAMP …): not that this wasn’t possible before, but you had to depend on many more bits and pieces of uncertain trustworthiness. Now it looks like a simpler and well-supported path.

It obsoletes certain technologies designed for similar purposes such as Cygwin and MinGW. It also obsoletes the plethora of ad-hoc installers and Windows-specific binaries for tools such as ActivePerl, git, PostgreSQL, nginx, Ruby, Node.js et cetera.

Finally, on the Open Source / commercial software divide, it demonstrates once more (should there be any need for it) that business can benefit from Open Source: effective immediately, thousands of Open Source enthusiasts are working for the good of Microsoft, with no compensation.

At the moment many questions are still open: when will this technology land on Windows Server (currently it requires installing an app from the Windows Store, which is not always possible)? Will it be available on previous versions of Windows like Windows 7 and 8.1? Will it be integrated with system administration tasks such as installing / uninstalling a service?

Posted in Philosophy | Leave a comment

Modeling a pipe with a large pressure drop

The Pipe model is a lumped-parameter model for pipes. The correlations it uses are applicable only for small pressure drops, i.e. less than 10% of the absolute inlet pressure. If the calculated pressure drop is larger than that, you’ll get a warning.

But what should you do if you have a long pipe, or a pipe with a large pressure drop?

Pipes by Nigel Howe


The solution is to use a MultiStage unit to put together a number of Pipe units in series, thereby effectively discretizing the unit.

Assume this is your flowsheet definition (in the constructor):

addUnit("Pipe", defaults.relay("DN300", "connection piping"));
// ...

addStream("StreamLiquid", defaults.relay("S03", "inlet flow"), "RX", "out", "DN300", "in");
addStream("StreamLiquid", defaults.relay("S04", "outlet flow"), "DN300", "out", "BD", "in");

and this is the pipe unit data (in the setup method):

Q("DN300.de").set(300.0, "mm");
Q("DN300.s").set(5.0, "mm");
Q("DN300.L").set(2000.0, "m");
Q("DN300.h").set(0.0, "m");
Q("DN300.eps").set(0.0457, "mm");
Q("DN300.vhConcentrated").set(3.0 * 0.75);

(beware, this is C++ code! Check a tutorial if you have no clue how process modeling in C++ is possible!)

So to discretize the Pipe unit you’d merely change the addUnit call to create a MultiStage unit (no need to change the addStream statements):

addUnit("MultiStage", defaults.relay("DN300", "connection piping")
  ("nStreams", 1)
  ("typeT", "Pipe")
  ("typeU", "StreamLiquid")
  ("nStage", 30));

The meaning of the options passed to the addUnit command and ultimately to the constructor of the MultiStage unit is:

  • nStreams: this is useful for more complex multi-stream arrangements; in this case each Pipe unit has just one inlet and one outlet, so we set it to 1
  • typeT: the type of the unit operation model used for each “stage”
  • typeU: the type of the stream model which connects the “stages”
  • nStage: the number of stages, i.e. of discretization steps

The model setup becomes:

for (int j = 0; j < I("DN300.nStage"); ++j) {
  std::string stage("DN300:S[" + std::to_string(j) + "]");
  at(stage).Q("de").set(300.0, "mm");
  at(stage).Q("s").set(5.0, "mm");
  at(stage).Q("L").set(2000.0 / static_cast<double>(I("DN300.nStage")), "m");
  at(stage).Q("h").set(0.0, "m");
  at(stage).Q("eps").set(0.0457, "mm");
  if (j == 0)
    at(stage).Q("vhConcentrated").set(3.0 * 0.75);
}

Here we iterate over all the “stages” and set de (external diameter), s (thickness), h (elevation) and eps (roughness) to the same values as before on every discretization “stage”; the L (length) we divide by the number of “stages” (2000 m / 30 ≈ 66.7 m each); finally the vhConcentrated (velocity heads associated with the concentrated pressure drops) we apply only once, in the 1st “stage”.

Done!

Posted in C++, Chemeng | Leave a comment

Where is the SQL database plugin for ODBC (qsqlodbc.dll) in Qt 5.6 ?

It looks like the SQL database plugin for ODBC (qsqlodbc.dll) is missing from the standard Qt 5.6 installer for Windows you download from http://www.qt.io/download-open-source.

You’d expect to find it in C:\Qt\Qt5.6.0\5.6\msvc2015_64\plugins\sqldrivers\ where the sqlite, mysql and postgresql ones are found, but the ODBC plugin is not there.

The issue is known (see QTBUG-49420 and QTBUG-51390) but these bug reports are closed and it seems there will be no fix in the short term.

So here is how you build qsqlodbc.dll from source for Windows 64-bit with Visual Studio 2015, based on the instructions “How to Build the ODBC Plugin on Windows”.

First get the source package qt-everywhere-opensource-src-5.6.0.zip from
http://download.qt.io/archive/qt/5.6/5.6.0/single/ and unzip it in any location, for example C:\qt-everywhere-opensource-src-5.6.0.

Now open a “VS2015 x64 native tools command prompt”, cd to the location of the qt-everywhere-opensource-src-5.6.0 directory, then:

cd qtbase\src\plugins\sqldrivers\odbc
C:\Qt\Qt5.6.0\5.6\msvc2015_64\bin\qmake odbc.pro
nmake

At the end you’ll find qsqlodbc.dll in C:\qt-everywhere-opensource-src-5.6.0\qtbase\plugins\sqldrivers, alongside qsqlodbcd.dll and qsqlodbcd.pdb.

Just copy these three files to the location where the Qt 5.6 installer for Windows should have put them, C:\Qt\Qt5.6.0\5.6\msvc2015_64\plugins\sqldrivers\, and you’ll be done!
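
To verify that the plugin is picked up, you can list the available SQL drivers with a minimal Qt program (“QODBC” should now appear in the output); this is just a sketch, and remember to add QT += sql to the .pro file:

#include <QCoreApplication>
#include <QSqlDatabase>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    // print the list of available SQL driver plugins
    qDebug() << QSqlDatabase::drivers();
    return 0;
}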

Posted in C++, Howtos, Rants | Leave a comment

“api-ms-win-crt-runtime-l1-1-0.dll is missing” error with LIBPFonOPC on Windows Server 2012 R2 64 bit

You may receive the error “The program can’t start because api-ms-win-crt-runtime-l1-1-0.dll is missing” when launching the LIBPFonOPC configurator 1.0.2264 on a fresh install of Windows Server 2012 R2 64 bit.

Related symptom: this error silently appears in the Setup Log of the Event Viewer during the installation of LIBPFonOPC:
Windows update could not be installed because of error 2149842967 "" (Command line: ""C:\Windows\SysNative\wusa.exe" "C:\ProgramData\Package Cache\3ACBF3890FC9C8A6F3D2155ECF106028E5F55164\packages\Patch\x64\Windows8.1-KB2999226-x64.msu" /quiet /norestart")


Here is a hint to the solution: “Excel can’t start because api-ms-win-crt-runtime-l1-1-0.dll is missing”.

So it turns out that the Visual C++ Redistributable for Visual Studio 2015 silently fails to install on a fresh install of Windows Server 2012 R2 64 bit. Just update the OS, then reinstall the Visual C++ Redistributable for Visual Studio 2015 Update 1 from here:

https://www.microsoft.com/en-us/download/details.aspx?id=49984

Posted in Howtos, Uncategorized | Leave a comment

HOWTO make verbosity level accessible for users

LIBPF’s 4 diagnostic levels allow the model developer to fine-tune the verbosity at the global, compilation, unit, function and class-instance level. Of these, two (verbosityGlobal and verbosityInstance) are also available at run-time, but normally not to the model user.

If you want to make them accessible to the user, there are several options. Here I’ll show you how to do it using a couple of real-valued Quantities that can be directly manipulated in the user interface or via the LIBPF™ Model User API.

NOTE: we use Quantities rather than Integers because currently there is no way for the user to change integers at runtime; the user will have to enter an integer value such as 1, 2 or 100 for really high verbosity.

So here we go:

  1. declare two custom variables in your model class:
    Quantity global; ///< global verbosity
    Quantity instance; ///< instance verbosity
  2. initialize them in the model class constructor initializer list:
    DEFINE(global, "global verbosity", 0.0, ""),
    DEFINE(instance, "instance verbosity", 0.0, ""),
  3. register them in the model class constructor body:
    addVariable(global);
    addVariable(instance);
  4. make them user-modifiable in the setup method of the model class:
    global.setInput();
    instance.setInput();
  5. use them during the model calculation to increase / decrease the verbosityGlobal and verbosityInstance variables, by implementing the FlowSheet::pre and FlowSheet::post overrides in your model class:
    void MyModel::pre(SolutionMode solutionMode, int level) {
      verbosityGlobal += global.toDouble();
      verbosityInstance += instance.toDouble();
    }
    void MyModel::post(SolutionMode solutionMode, int level) {
      verbosityGlobal -= global.toDouble();
      verbosityInstance -= instance.toDouble();
    }
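
With that in place, a user can bump the verbosity for a single run. Here is a hypothetical sketch from the C++ side (the Q accessor spelling on the model object is an assumption; from the user interface you would simply edit the two input variables):

// raise global verbosity by 2 for one calculation:
// pre() adds the value to verbosityGlobal before solving, post() subtracts it afterwards
myModel->Q("global").set(2.0);
myModel->calculate();
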
Posted in C++, Chemeng, Howtos | Leave a comment