Industry 4.0 hyper-depreciation (iperammortamento) and software super-depreciation (superammortamento)

Over the next two years (2017-2018), companies supplying machinery eligible for the Industry 4.0 hyper/super-depreciation schemes will have a competitive advantage over competitors that cannot offer these characteristics: customers will prefer to buy machines, systems and devices that qualify for the tax benefit.

For these suppliers of industrial hardware it is therefore a priority to adapt their products and integrate them with software and “cloud” components, so that they qualify for the incentives and remain competitive on the market.

Below you will find a short reader’s guide to the decree introducing the Industry 4.0 hyper/super-depreciation schemes, highlighting the parts where simevo s.r.l. can support you by providing the software and services needed to make your machines “smart” and aligned with the national Industria 4.0 plan.

Do not hesitate to contact us if you wish to start an Industria 4.0 project with us !




As part of the “Piano nazionale Industria 4.0” (national Industry 4.0 plan), the budget bill “Bilancio di previsione dello Stato per l’anno finanziario 2017 e bilancio pluriennale per il triennio 2017-2019” introduced a 250% hyper-depreciation for Industry 4.0 digital capital goods, and a 140% super-depreciation for the related software, in article 3, paragraphs 2 and 3 respectively:

Art. 3. (Extension and strengthening of the rules on the increased deduction for depreciation).

  2. In order to foster processes of technological and digital transformation according to the «Industria 4.0» model, for investments made in the period indicated in paragraph 1 in new tangible capital goods included in the list in Annex A attached to this law, the acquisition cost is increased by 150 per cent.
  3. For taxpayers benefiting from the increase under paragraph 2 who, in the period indicated in paragraph 1, invest in intangible capital goods included in the list in Annex B attached to this law, the acquisition cost of those goods is increased by 40 per cent.
  4. To claim the benefits under paragraphs 2 and 3, the enterprise must produce a declaration by its legal representative pursuant to the consolidated act on administrative documentation (decree of the President of the Republic no. 445 of 28 December 2000) or, for goods each having an acquisition cost above 500,000 euro, a sworn technical appraisal issued by an engineer or an industrial expert enrolled in the respective professional registers, or by an accredited certification body, attesting that the good has technical characteristics such as to include it in the lists in Annex A or Annex B attached to this law and that it is interconnected to the company’s production management system or to the supply network.
  5. Advance tax payments due for the tax period current on 31 December 2017 and for the following one are determined by taking as the previous period’s tax the amount that would have resulted in the absence of the provisions of paragraphs 1, 2 and 3.
  6. The provisions of article 1, paragraphs 93 and 97, of law no. 208 of 28 December 2015 remain in force.

Volume II, Annex A (article 3, paragraph 2), page 237, details the goods eligible for the 250% hyper-depreciation, i.e. the “goods instrumental to the technological and digital transformation of enterprises according to the «Industria 4.0» model”:

Capital goods whose operation is controlled by computerized systems or managed by means of suitable sensors and actuators:

  • ….

… there follows an all-encompassing list of machine tools and machines for transformation, plastic deformation, assembly, joining, welding, packing and packaging, de-manufacturing and re-packaging, robots, additive manufacturing machines, all the way to automated warehouses.

A first set of “base” requirements applies to these machines; these are fairly obvious characteristics:

All the machines listed above must have the following characteristics:

  • control by means of CNC (Computer Numerical Control) and/or PLC (Programmable Logic Controller),
  • interconnection to the factory IT systems with remote loading of instructions and/or part programs,
  • automated integration with the factory logistics system or with the supply network and/or with other machines in the production cycle,
  • simple and intuitive human-machine interfaces, and compliance with the most recent workplace safety, health and hygiene standards.

Finally, there follows a list of “innovative” requirements, systems and devices.

From here on, the points highlighted in yellow are where a collaboration with simevo could come into play !

First of all, the machines listed above must also have at least two of the following characteristics, making them comparable to or integrable with cyber-physical systems (cyber-physical system = the tight integration of and coordination between computational and physical resources):

  • systems for remote maintenance and/or remote diagnosis and/or remote control,
  • continuous monitoring of the working conditions and of the process parameters by means of suitable sets of sensors, and adaptivity to process drifts,
  • integration between the physical machine and/or plant and the modeling and/or simulation of its own behavior while carrying out the process (cyber-physical system),
  • intelligent devices, instrumentation and components for the integration, sensorization and/or interconnection and automatic control of processes, also used in the modernization or revamping of existing production systems,
  • filters and systems for the treatment and recovery of water, air, oil, chemical and organic substances and dusts, with systems signaling the filtering efficiency and the presence of anomalies or of substances alien to the process or dangerous, integrated with the factory system and able to alert the operators and/or stop the activity of machines and plants.

Systems for quality assurance and for sustainability are then treated separately from the machines:

  • coordinate and non-coordinate measuring systems (contact, non-contact, multi-sensor or based on three-dimensional computed tomography) and related instrumentation for verifying the micro- and macro-geometric requirements of the product at any dimensional scale (from large scale down to the micrometric or nanometric scale), in order to assure and trace product quality and to qualify the production processes in a documentable way, connected to the factory information system,
  • other in-process monitoring systems to assure and trace the quality of the product or of the production process, and to qualify the production processes in a documentable way, connected to the factory information system,
  • systems for the inspection and characterization of materials (for example materials testing machines, machines for testing finished products, systems for non-destructive tests or inspections, tomography) able to verify the characteristics of the materials entering or leaving the process that make up the resulting product, at the macro level (for example mechanical properties) or at the micro level (for example porosity, inclusions), and to generate suitable test reports to be entered into the company information system,
  • intelligent devices for testing metal powders, and continuous monitoring systems for qualifying additive manufacturing production processes,
  • intelligent, connected systems for the marking and traceability of production lots and/or of individual products (for example RFID, Radio Frequency Identification),
  • systems for monitoring and controlling the working conditions of the machines (for example forces, machining torque and power; three-dimensional tool wear on the machine; condition of machine components or sub-assemblies) and of the production systems, interfaced with the factory information systems and/or with cloud solutions,
  • tools and devices for the automatic labeling, identification or marking of products, linked to the code and serial number of the product itself so as to allow maintainers to monitor the consistency of product performance over time and to act synergistically on the design process of future products, enabling the recall of defective or harmful products,
  • intelligent components, systems and solutions for the management, efficient use and monitoring of energy consumption,
  • filters and systems for the treatment and recovery of water, air, oil, chemical substances and dusts, with systems signaling the filtering efficiency and the presence of anomalies or of substances alien to the process or dangerous, integrated with the factory system and able to alert the operators and/or stop the activity of machines and plants.

… and the devices for human-machine interaction and for improving the ergonomics and safety of the workplace in a «4.0» logic:

  • workbenches and workstations equipped with ergonomic solutions able to adapt them in an automated way to the physical characteristics of the operators (for example biometric characteristics, age, presence of disabilities),
  • systems for lifting/moving heavy parts or objects exposed to high temperatures, able to ease the operator’s task in an intelligent/robotic/interactive way,
  • wearable devices, equipment for communication between the operator(s) and the production system, augmented reality and virtual reality devices,
  • intelligent human-machine interfaces (HMI) that assist the operator for the safety and efficiency of machining, maintenance and logistics operations.

Finally, again in volume II, Annex B (article 3, paragraph 3), page 240, details the software eligible for the 140% super-depreciation, i.e. the “intangible assets (software, systems and system integration, platforms and applications) connected to investments in «Industria 4.0» tangible assets”.

Note: the points highlighted in orange are where a collaboration with simevo could come into play !

  • Software, systems, platforms and applications for the design, performance definition/qualification and production of artifacts in non-conventional or high-performance materials, enabling design, 3D modeling, simulation, experimentation, prototyping and the simultaneous verification of the production process, of the product and of its characteristics (functional and of environmental impact), and/or the digital archiving, integrated in the company information system, of the information on the product life cycle (EDM, PDM, PLM, Big Data Analytics systems),
  • software, systems, platforms and applications for the design and re-design of production systems taking into account the flows of materials and of information,
  • software, systems, platforms and applications for decision support, able to interpret data analyzed from the field and to show the line operators specific actions to improve product quality and the efficiency of the production system,
  • software, systems, platforms and applications for the management and coordination of production with a high degree of integration of the service activities, such as factory logistics and maintenance (for example intra-factory communication systems, fieldbuses, SCADA systems, MES systems, CMMS systems, innovative solutions with characteristics attributable to the IoT and/or cloud computing paradigms),
  • software, systems, platforms and applications for monitoring and controlling the working conditions of the machines and of the production systems, interfaced with the factory information systems and/or with cloud solutions,
  • software, systems, platforms and applications of virtual reality for the realistic study of components and operations (for example assembly), whether in immersive or visual-only contexts,
  • software, systems, platforms and applications of reverse modeling and engineering for the virtual reconstruction of real contexts,
  • software, systems, platforms and applications able to communicate and share data and information both among themselves and with the surrounding environment and actors (Industrial Internet of Things), thanks to a network of interconnected smart sensors,
  • software, systems, platforms and applications for the dispatching of activities and the routing of products within production systems,
  • software, systems, platforms and applications for quality management at the level of the production system and of its processes,
  • software, systems, platforms and applications for accessing a virtualized, shared and configurable set of resources supporting production processes and the management of production and/or of the supply chain (cloud computing),
  • software, systems, platforms and applications for industrial analytics dedicated to the treatment and processing of the big data coming from IoT sensors deployed in industrial settings (Data Analytics & Visualization, Simulation and Forecasting),
  • software, systems, platforms and applications of artificial intelligence & machine learning allowing machines to display intelligent abilities and/or activities in specific fields, guaranteeing the quality of the production process and the reliable operation of the machinery and/or plant,
  • software, systems, platforms and applications for automated, intelligent production, characterized by high cognitive capability, interaction with and adaptation to the context, self-learning and reconfigurability (cybersystem),
  • software, systems, platforms and applications for the use along the production lines of robots, collaborative robots and intelligent machines for the safety and health of the workers, the quality of the final products and predictive maintenance,
  • software, systems, platforms and applications for managing augmented reality via wearable devices,
  • software, systems, platforms and applications for devices and new human-machine interfaces allowing the acquisition, transmission and processing of information in voice, visual and tactile form,
  • software, systems, platforms and applications for plant intelligence providing mechanisms of energy efficiency and of decentralization in which the production and/or storage of energy can also be delegated (at least in part) to the factory,
  • software, systems, platforms and applications for the protection of networks, data, programs, machines and plants from attacks, damage and unauthorized access (cybersecurity),
  • software, systems, platforms and applications of virtual industrialization that, by virtually simulating the new environment and loading the information onto the cyber-physical systems once all checks are complete, make it possible to avoid hours of testing and of machine downtime along the real production lines.

How to make the digital transformation of manufacturing happen ?

[Originally appeared as guest post on the Industrial IoT/Industrie 4.0 Viewpoints blog]

The Digital Transformation is here: devices, technologies and suppliers are ready to bring manufacturing enterprises to a new level, with increased productivity and more added value.

But if you go for a walk in production now, you’ll probably not see smart sensors, edge analytics-packed machinery, forklift drivers with smart-glasses and foremen yelling orders to their assistive bots.

The macro-level reasons why it is not happening, or is happening only slowly, are covered in many posts on the Industrial IoT/Industrie 4.0 Viewpoints blog: there are cultural issues, a standards mess, and more.

From a strategic point of view it’s really a war between the incumbent-technology ecosystem and the new ecosystem, rather than between the technologies themselves (see Ron Adner and Rahul Kapoor, “Right Tech, Wrong Time”, Harvard Business Review, November 2016).

Besides those reasons, there are also micro-level reasons that come up along the business decision process when a specific digital transformation project is proposed within a manufacturing company.

In the end it’s just like any other project: you spend a given amount of money upfront and hope to get a return on investment (ROI).

The trouble is that an innovative Industrial IoT project will be complex and high-risk, and there are several ways it could fail:

  • it proves impossible to build (technology or organization issues);
  • it does not deliver the promised ROI (lower returns or higher costs than expected);
  • it does not get traction (the target users will not use it / buy it);
  • it turns out to be unmaintainable in the long term (workforce turnover, product cycles).

At least a third of all Information Technology (IT) projects fail (see Lessons From a Decade of IT Failures from IEEE Spectrum), and the share is probably even higher for high-risk projects. These failure rates are unacceptably high for manufacturing, where the average project failure rate is probably below 10%.

So if the proposed project has a good ROI on paper, how to keep these risks under control and make it a success ?

One approach we know from IT is to outsource most of the trouble to a reliable supplier, and get the solution you need with a software-as-a-service (SaaS) arrangement. Off-the-shelf SaaS is currently booming in IT: see what Adobe, Autodesk (http://www.autodesk.com/products/fusion-360/overview), Microsoft, Salesforce and Trello are doing.

With SaaS there is no upfront cost, and pay-per-use scales linearly with the number of users; typical figures are 5-10 € per user per month for consumer-oriented services and 10-200 € per user per month for business-oriented services.

So SaaS is a perfect fit for a small enterprise without an IT infrastructure, or for a larger organization that prefers to keep the internal infrastructure slim. It is also suitable for an innovative project, where user acceptance progresses slowly and the numbers may be initially low.

But all of the SaaS vendors above offer off-the-shelf, standard tools – nobody is offering your-own-IIoT-as-a-service yet ! For that you need tailor-made SaaS.

Giovanni Battista Moroni “Il Tagliapanni”, circa 1570

With tailor-made SaaS, a solution provider will build the IIoT solution based on the requirements of the manufacturing company, with the agreement that:

  1. the provider will not apply their full margin on the upfront costs;
  2. the provider will keep the intellectual property of the solution;
  3. the manufacturing company will perpetually pay for the use of the solution in a SaaS fashion.

With the pay-per-use model the gain for the solution provider will come later if the project is successful, while the OPEX for the manufacturing company will grow gradually as the solution is deployed and gets traction. This setup can slash the CAPEX for the upfront costs due to the initial effort of adapting, integrating and customizing the chosen platforms / technologies by a factor of 2, while creating a strong commitment for the provider to make the project a success.

Of course it’s a generic framework that can be adapted with any option and variant your legal and financial advisors can imagine. For example the pay-per-use business models we know from IT SaaS can be creatively adapted to the OT environment by stipulating per-installation/per-hour fees.

The bottom line is that if you find an agreement with a trusted supplier, with tailor-made SaaS you can share the road towards digital transformation with them, and make it happen for real.


HOWTO migrate tasks from kanboard to phabricator

Kanboard is a Kanban project management tool written in PHP: an excellent lightweight tool to quickly set up a project and organize tasks (think of it as a down-to-earth Trello).

Phabricator on the other hand is a complete suite of software development collaboration tools, which among other things includes a Kanban-like view of the tasks tagged with each project.

If you happen to have to migrate tasks from Kanboard to Phabricator, this guide is for you. But beware: in the spirit of Phabricator’s creators, we have no well-tested tool to offer, just a semi-manual procedure based on Phabricator’s Conduit API.

Log into the server where you have installed Kanboard, and navigate to its data directory:

cd /var/www/kanboard/data

open the sqlite database:

sqlite3 db.sqlite

explore the database schema:

.tables
.schema projects
.schema tasks

find the project (board) you’re interested in:

select * from projects;

in our case it was project 3; list tasks and columns from that project:

select id,column_id,title from tasks where project_id=3 order by column_id;

recognize how the column_id field matches the board columns …
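
To see the column titles explicitly you can also query the columns table; this is a sketch assuming the standard Kanboard schema, where the board columns live in a table named columns (check with .schema columns if in doubt):

select id, title from columns where project_id=3 order by position;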

Now extract the list of tasks in Python dictionary format, one column at a time (you’ll have to do some manual escaping here, e.g. any double quotes in the titles will break the generated Python literals !)

  • backlog (that was column_id 9 for us):
    select '{"title": "'||title||'", "description": """'||description||'"""},' from tasks where project_id=3 and column_id = 9;
    
  • ready (column_id 10):
    select '{"title": "'||title||'", "description": """'||description||'"""},' from tasks where project_id=3 and column_id = 10;
    
  • work in progress (column_id 11):
    select '{"title": "'||title||'", "description": """'||description||'"""},' from tasks where project_id=3 and column_id = 11;
    

Now copy-paste those into the Python script skeleton below, replacing the ellipsis dots:

import requests

phabricator_instance = 'phabricator.example.com'
api_token = 'api-aaaaaaaaaaaaaaaaaaaaaaaaaaaa'
projectPHID = "PHID-PROJ-aaaaaaaaaaaaaaaaaaaa"
tasks_backlog = [ ... ]
tasks_ready = [ ... ]
tasks_wip = [ ... ]


def create_task(s, title, description):
    data = {'api.token': api_token,
            'title': title,
            'description': description,
            'projectPHIDs[]': [projectPHID]}
    url = 'https://' + phabricator_instance + '/api/maniphest.createtask'
    req = requests.Request('POST', url, data=data)
    prepped = s.prepare_request(req)
    resp = s.send(prepped)
    resp.raise_for_status()
    results = resp.json()
    error_info = results['error_info']
    if error_info:
        print('internal: error while creating phabricator task: %s' % error_info)
        return {}
    uri = results['result']['uri']
    task_id = results['result']['id']
    return {"uri": uri, "task_id": task_id}

In this script you also have to modify the phabricator_instance (the FQDN of the Phabricator instance where you want to file the tasks), the api_token (which can be obtained as follows: as a Phabricator admin, create a bot account, then “Edit Settings”, go to “Conduit API Tokens” and click “Generate API Token”) and the projectPHID (the Phabricator ID of the project you want to file your tasks against; one way to look it up is sketched below).
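
If you do not have the project PHID at hand, one way to look it up is via the Conduit API. This is a sketch assuming the legacy project.query endpoint (newer Phabricator releases expose the same data via project.search) and reusing the phabricator_instance and api_token variables from the script above; 'my board' is a placeholder for your project name, and the exact response shape may vary with the Phabricator version:

import requests

# query projects by name; the result is a dict keyed by PHID
resp = requests.post('https://' + phabricator_instance + '/api/project.query',
                     data={'api.token': api_token, 'names[]': ['my board']})
resp.raise_for_status()
for phid, project in resp.json()['result']['data'].items():
    print(phid, project['name'])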

Now you’re all set to manually execute the imports one by one, starting from the rightmost column:

s = requests.Session()
for t in tasks_wip:
    title = t['title']
    description = t['description']
    create_task(s, title, description)

Each time, go to the project workboard in Phabricator and move the newly created tasks to the right column.

This was the starting situation in Kanboard:

And this is the final situation in Phabricator:

Quite a lot of work still to do ! But at least we’ve got titles, descriptions and columns right !


Extend the system partition in a Windows virtual machine running within kvm with file-based virtual disk

The post Extend the system partition in a Windows virtual machine running within kvm/lvm is applicable if the virtual disk is on an LVM volume.

If the virtual disk is file-based, these are the required steps:

  1. Find out which file the virtual machine disk is backed by; assuming disk images are in /var/lib/libvirt/images:
    sudo grep 'var.lib.libvirt.images' /etc/libvirt/qemu/name_of_virtual_machine.xml

    You might see something like:

    <source file='/var/lib/libvirt/images/w7_64_cdev15.dd'/>
  2. Shut down the virtual machine
  3. Resize the disk image file:
    sudo qemu-img resize /var/lib/libvirt/images/w7_64_cdev15.dd +10G
  4. Restart the VM
  5. Extend the system partition using the guest operating system’s specific tool; with Windows 7 and later use the “Extend Volume” action in Disk Management, or the equivalent diskpart extend command (should require no reboot); a sketch follows below.
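
Before restarting the VM you can check the new virtual size with sudo qemu-img info /var/lib/libvirt/images/w7_64_cdev15.dd. Inside the guest, the diskpart equivalent of the Disk Management steps is sketched below; the volume number is an assumption, check the output of list volume first:

diskpart
list volume
select volume 2
extend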

Effect of initial estimates for KLLs on the convergence of liquid-liquid equilibrium calculations

The initial estimates for the KLLs (equilibrium factors) have a big influence on the convergence behavior of liquid-liquid equilibrium calculations.

To highlight this effect, let’s try out something with LIBPF.

We choose the system H2O / 2-ethyl-1-hexanol.

Experimental measurements for the liquid-liquid equilibrium of this system can be found in the publication: Frank Gremer, Gerhard Herres, Dieter Gorenflo, “Vapour-liquid and liquid-liquid equilibria of mixtures of water and alcohols: measurements and correlations”, High Temperatures - High Pressures, 2002, volume 34, pages 355-362. We are also grateful to the authors for providing additional data.

Let’s pick from that source the data point at 120.384 – 129.785°C; here are the water molar fractions:

  • aqueous phase 0.9998 mol/mol
  • organic phase 0.334 mol/mol

Fitting this data-point we get these NRTL binary parameters:

  • alfa = 0.2
  • B12 = 3060.4764877456
  • B21 = -156.0296483827

These parameters can be used to reproduce the chosen experimental data point, see the HOWTO calculate a liquid-liquid separation tutorial.

The program yields these results:

------------------------------------------------------------------------------------
Phase   fraction        Water   ETEX    Water   ETEX
Name    mol frac        mol frac        ndot, kmol/s    ndot, kmol/s
STREAM:Vphase   0.278635989043  0.999484893116  0.000515106883773       27.8492 0.0143527
STREAM:Lphase   0.721364010957  0.30706763201   0.69293236799   22.1508 49.9856
STREAM:Tphase   1       0.5     0.5     50      50
------------------------------------------------------------------------------------
Water Kll: 3.25493405663 
ETEX Kll:  0.000743372524604

Here Vphase is the first liquid phase, and Lphase is the second liquid phase.
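
As a consistency check (this definition is inferred from the numbers above, it is not stated explicitly in the output): the KLL of each component is its mole fraction in the first liquid phase divided by its mole fraction in the second, e.g. for water:

Kll(Water) = 0.999484893116 / 0.30706763201 ≈ 3.2549

which matches the reported value.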

We have used KLL[0] = 1E5 and KLL[1] = 1E-5 as initial estimates, giving the first component (the water) a greater affinity for the first liquid phase. Consequently the solver converges to a solution (let’s call it solution B) where the organic phase is the second liquid phase.

If we set each initial estimate to its reciprocal (1E-5 and 1E5 respectively) we get the same results, but with the phases inverted:

------------------------------------------------------------------------------------
Phase   fraction        Water   ETEX    Water   ETEX
Name    mol frac        mol frac        ndot, kmol/s    ndot, kmol/s
STREAM:Vphase   0.721364010957  0.30706763201   0.69293236799   22.1508 49.9856
STREAM:Lphase   0.278635989043  0.999484893116  0.000515106883773       27.8492 0.0143527
STREAM:Tphase   1       0.5     0.5     50      50
------------------------------------------------------------------------------------
Water Kll: 0.307225886179 
ETEX Kll:  1345.22055484

Here the organic phase is the first phase (let’s call it solution A) and the final KLLs are the reciprocals of those in solution B !

Now let’s try a scan of the range of possible initial estimates for the KLLs, sweeping each from 1E-5 to 1E5:

std::cout << "kll0\tkll1\terrors\titerations\tVphase.x[0]\tLphase.x[0]" << std::endl;
double factor(10.0);
for (double kll0=1E-5; kll0<1E5; kll0*=factor) {
  for (double kll1=1E-5; kll1setPristineRecursive();
    feed->resetErrors();
    feed->calculate();
    // TODO automate valid initial point table
    std::cout << kll0 << "\t" << kll1 << "\t" <errors.size() << "\t" <NITER_NLEFlash.value() << "\t" <Q("Vphase.x[0]") << "\t" <Q("Lphase.x[0]") << std::endl;
  } // loop over kll1
} // loop over kll0

This prints a 10×10 grid which looks like this:

kll0    kll1    errors  iterations      Vphase.x[0]     Lphase.x[0]

1e-05   1e-05   0       1       0.5     0.5 
1e-05   0.0001  2       500     0.454166789146  0.5 
...
1e-05	10000	0	4	0.143450341144 	0.999719891117 
...
10000	1e-05	0	4	0.999719891047 	0.143450278799 
...

There are four possible situations:

  1. the liquid-liquid split is found, with the 1st liquid being the organic phase: solution A
  2. the same liquid-liquid split is found, with the 2nd liquid being the organic phase: solution B
  3. we have errors: the initial estimate was off and the nonlinear algebraic solver could not find the solution
  4. it converges, but the degenerate solution (with both phases having the same composition) is found

We can generate several of these grids, with the water content in the feed spanning the entire range of compositions.

This animated GIF shows a slideshow of the results:

Here the blue color stands for solution A, green for solution B, red for errors, yellow for single-phase and gray for degenerate solution.

We note the following:

  1. when the initial estimates for the KLLs are close to one another, the degenerate solution is always found
  2. intermediate initial estimates typically cause convergence errors or a spurious single-phase solution
  3. if the alcohol fraction is greater than water:
    • if the difference between the initial estimate of the equilibrium factor for the alcohol (KLL[1]) and the one for the water (the 45° sloped boundary of the blue area) is higher than a certain difference threshold, and the initial estimate of the equilibrium factor for the alcohol (KLL[1]) is higher than another threshold, solution A is found
    • the sloped boundary advances to the right (i.e. the maximum difference threshold is decreased) as the water content in the feed increases
    • if the difference between the initial estimate of the equilibrium factor for the alcohol (KLL[1]) and the one for the water (45° sloped boundary of the blue area) is lower than a certain difference threshold and the one for the alcohol (KLL[1]) is lower than another threshold, solution B is found
    • the sloped boundary advances to the left (i.e. the maximum difference threshold is decreased) as the water content in the feed increases
  4. if the water fraction is greater than alcohol:
    • the shape of the blue / green areas flip
    • the 45° sloped boundaries recede to the bottom / top as the alcohol content in the feed decreases

Conclusions: the initial estimates of the equilibrium factors can be used to steer the solver towards making the first or the second phase the organic phase; for example, to make the second phase the organic phase (solution B) a practical initialization strategy is to set the KLLs for the key organic components (those present in large amounts) to a small value such as 1E-5, and the KLL for water to a large value such as 1E5. The KLLs for the trace components can be left at their default value of 1.


Running your own kernel from the LIBPF user interface on OS X

During model development, you rapidly produce new versions of the calculation kernel (the command-line executable version of your models).

The easiest thing to do to try them out is to run them from the LIBPF user interface.

Here is a step-by-step howto for running your own kernel from the LIBPF user interface on OS X.

I assume you have received a pre-packaged OS X disk image (dmg) file.


If you mount it (by double-clicking) you’ll see that it contains the UIPF application package (that acronym stands for User Interface for Process Flowsheeting, it’s really just the LIBPF user interface).

Rather than dragging and dropping that from the mounted volume to the Applications folder as described in the LIBPF™ OS X Installation manual, drag and drop it inside your development folder (I assume it’s LIBPF_SDK_osx_1.0.2346 on the Desktop).


We now have to issue some command-line magic, so open a Terminal and cd to the location of your development folder (you may need to adapt this command if your development folder is somewhere else):
cd Desktop/LIBPF_SDK_osx_1.0.2346

Now check the kernel currently configured with the UIPF application:
ls -l UIPF.app/Contents/Resources/kernel

this should return something similar to:
-rwxr-xr-x 1 paolog staff 6423572 24 Mar 23:19 UIPF.app/Contents/Resources/kernel

What we want to do is replace that with the kernel produced by Qt Creator, for example for debug mode:
ls -l bin/mcfcccs/debug/mcfcccs
-rwxr-xr-x 1 paolog staff 23013068 15 Giu 12:01 bin/mcfcccs/debug/mcfcccs

So now delete the currently configured kernel:
rm UIPF.app/Contents/Resources/kernel

and replace it with a symbolic link to the kernel produced by Qt Creator:
ln -s ../../../bin/mcfcccs/debug/mcfcccs UIPF.app/Contents/Resources/kernel

If you check now what kernel is currently configured with the UIPF application:
ls -l UIPF.app/Contents/Resources/kernel

it should return:
lrwxr-xr-x 1 paolog staff 34 15 Giu 12:04 UIPF.app/Contents/Resources/kernel -> ../../../bin/mcfcccs/debug/mcfcccs

So now it should be all set: when you open the LIBPF user interface by double-clicking on the UIPF application package located in the development folder, it will run your latest debug-mode kernel !
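
As a quick check, you can also launch it from the Terminal with the standard macOS open command:

open UIPF.app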


Impressions from the sps ipc drives Italia 2016 fair

The state of confusion that currently prevails when the Internet comes to manufacturing was confirmed at the sps ipc drives Italia fair that took place this week in Parma, Italy.


The confusion starts with the terminology. If you view the encounter of the Internet and industry as dominated by the former, you will use the label IIoT (industrial Internet of things); this seems typical of American companies, especially those with an IT (information technology) background.

If you think that the encounter should be dominated by the industrial culture you’ll use the Industrie 4.0 label, as most German companies and even the German government do. Digital manufacturing looks like a neutral term, but it is biased towards discrete manufacturing and not very popular in the process industry, which is already quite digital … albeit not connected ! There are also the CPS (cyber-physical systems) and cloud labels, or you can sprinkle some smart- prefixes here and there.

And finally, as a consequence of these technological transitions, a reconfiguration should ensue, driving everybody happily towards servitization, i.e. renting out their machines with a pay-per-use, machines-as-a-service business model.

As anybody who has been enthusiastic about SOA (Service Oriented Architecture) or the network computer (or about any of the dozens of buzzwords which have plagued the industry in recent decades) knows well, not everything that comes out of the marketing gurus’ heads turns into reality. Or it might become real sometime, but who knows when ?

For this Internet + manufacturing thing there are many reasons for all stakeholders to be quite frightened of the consequences, which you can extrapolate from what happened since we as consumers have embraced the smart-phone revolution:

  1. I am actually dumber, as the phone tells me where to go, what to do, how much to exercise etc.
  2. all my data are sucked out and sold multiple times by third parties
  3. rather than buying phones, I subscribe to long-term service-access contracts bundled with some hardware
  4. the major European smart-phone producer Nokia has vanished because hardware is now a commodity
  5. the (American) platform owners Apple and Google win everything.

In the industry, secretive end users are scared of losing control of their data and know-how. Those who handle dangerous substances and processes fear the risk of hackers wreaking havoc. OEMs may sense the danger of being driven to compete on totally flat, global and frictionless digital marketplaces, where their service is totally replaceable by their competitors’, and the only winner is the single biggest player or the owner of the platform itself. And while small end users may benefit from the cloud and machines-as-a-service, because that lowers the cash-flow barriers for them, by buying smart machines they may actually become dumber, i.e. lose control of how much value is added by those machines to their business.

Anyway, whatever buzzword they choose to use, it is a fact that the marketing departments of the big automation and industrial IT providers are pushing hard on them, and the largest among their customers may soon decide to sail into these troubled waters: a large corporation may be confident that its sheer size will allow it to overcome the storm.

But the enthusiasm is markedly limited in European SMEs, which stick to the generally accepted wisdom that what is good for the big fish is not good for the small fish; and Italian SMEs play it even cooler, as they are conservative and followers by attitude.

There are exceptions though, and in certain niche applications the impression is that SMEs may actually be much quicker than anyone else in making the jump; if they overcome their fears, the flexibility of the SME wins.

Given the astonishingly quick rate of adoption among consumers, it would seem natural that end users with a contiguity with the consumer sector would have lower barriers against the cloud. Those may be for example OEMs who supply artisans, small food & beverage producers etc., although I am not able to name examples or lay down quantified figures on the market penetration. What I do have are signals that some SMEs are already working with other SMEs around architectures and business models that you could label Internet + manufacturing, but they do so below the radar, and you won’t find their success stories in even the most exhaustive analyst reports.

In conclusion, if you are a SME and have a business case in mind, please drop us a line at info@simevo.com and we’ll find out together how we can turn your something into a smart-something, along a down-to-earth evolution path.


Debugging LIBPF applications with gdb

GNU debugger (gdb) is the standard command-line debugger on many Unix-like systems for troubleshooting C++ programs.

To prepare for debugging your application, compile it with debugging symbols enabled; for example assuming you want to debug Qpepper and use bjam to build:

cd ~/LIBPF/pepper
bjam debug Qpepper

or if you use qmake/make to build:

cd ~/LIBPF/pepper
qmake
make debug

A typical debugging session starts by launching gdb with the relative path to the executable as a parameter:

cd ~/LIBPF/bin
gdb ./pepper/gcc-4.9.2/debug/Qpepper

Next we typically want to set up a breakpoint at the Error::Error function, which is where the control flow will pass if an exception is thrown; to do that, use the b (breakpoint) command:

b Error::Error

Then you launch your application with the required command-line parameters with the r (run) command:

r new jjj

When the exception is thrown, the debugger will stop at the breakpoint:

Breakpoint 1, Error::Error (this=0xed2080, 
    cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/Error.cc:56
56      Error::Error(const char *cf) : msg_("Error was thrown by function: ") {

From here you can:

  1. examine the call stack with the where command, which will return something like:
    #0  Error::Error (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)") at ../utility/src/Error.cc:56
    #1  0x00000000006097b2 in ErrorObjectFactory::ErrorObjectFactory (this=0xed2080, 
        cf=0xa03dc0  "Node* NodeFactory::create(std::string, Libpf::User::Defaults, uint32_t, Persistency*, Persistent*, Persistent*)", ty=0xed09e8 "type jjj not found")
        at ../utility/src/Error.cc:117
    #2  0x00000000007d30c1 in NodeFactory::create (this=0x7fffffffd7ef, type="jjj", defaults=..., id=0, 
        persistency=0x0, parent=0x0, root=0x0) at src/NodeFactory.cc:57
    #3  0x00000000004263ec in createCase_ (type="jjj", defaults=..., error=@0x7fffffffdffc: 32767, svgs=true)
        at src/Kernel.cc:228
    #4  0x0000000000427901 in Libpf::User::createCase (type="jjj", tag="jjj", description="", jcd="", 
        error=@0x7fffffffdffc: 32767) at src/Kernel.cc:317
    #5  0x000000000040e64d in main (argc=3, argv=0x7fffffffe158) at ../user/src/main.cc:189
    

    notice the first column, which is the frame number, and the error message details found as the ty parameter of the function call in frame #1: type jjj not found

  2. jump to the frame that occurred in your own code and not in the library, such as frame #5, using the f (frame) command:
    f 5
    
  3. list the source code around the current execution point with the l (list) command, which will return something like:
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    (gdb) l
    184         std::string options("");
    185         if (argc > 5) {
    186           options = argv[5];
    187         } // if options are passed
    188
    189         Libpf::User::Handle caseHandle = Libpf::User::createCase(type, tag, description, options, error);
    190         if (error < 0)
    191           quitNow(error);
    192         else
    193           quitNow(caseHandle.id());
    (gdb) 
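From the selected frame you can also inspect the local variables with the p (print) command; for example in frame #5 (main) the type and error variables from the listing above are in scope (standard gdb; the exact output format depends on the pretty printers available for your build):

p type
p error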
    

Issuing the same commands repeatedly at the gdb command prompt is common, therefore it’s handy to enable gdb command history:

cat >> ~/.gdbinit
set history save
set history filename ~/.gdb_history
^d

For more debugging tips, check the excellent RMS gdb tutorial or the gdb manual.


Summary of the A&T fair, 2016 edition

Here is the Affidabilità e Tecnologie (A&T) fair, 2016 edition (held in Torino, April 20-21 2016), summarized by three audiovisual documents:

  1. Robot drives train:
  2. Robot plays golf:
  3. Robot brews coffee:

Bash on Windows 10

This week at Build 2016, the yearly developer-oriented conference, Microsoft announced that Windows 10 will be able to run Linux’s Bash shell, by executing the native Ubuntu binary as-is.

Don’t stop at the news headline though: this is not just about Linux Bash, the command shell and scripting language.
All Ubuntu user-space commands can potentially work, including the apt package manager, with which you can tap into the 60,000+ software packages available in the Ubuntu repos.

More technical details are found in two blog posts by Dustin Kirkland, an Ubuntu employee who worked with Microsoft on the magic behind it.

This is no virtualization / container technology. It is more about API emulation: Linux system calls get translated in real time into Win32 API calls. No need to recompile the binaries.


It’s an approach that resembles the POSIX subsystem that was part of Windows NT, whose latest (2004) denomination was “Subsystem for UNIX-based Applications” (SUA), deprecated with Windows 8 and Windows Server 2012 and completely removed in Windows 8.1 and Windows Server 2012 R2. I guess it is just a resurrection of that approach.

Even if this technology is aimed at developers, if you think about it, it has certain strategic implications.

On the Operating System competition landscape, this levels the field with Apple OS X, which already had Bash and several package managers (but not apt ! and the binaries had to be recompiled !). It is a tribute to the outstanding technical excellence of the Debian Linux distribution, which lies at the foundation of Ubuntu. It lowers the attractiveness of Linux on the desktop, as developers can run all their preferred tools from within Windows. It lowers the barriers against migrating to Windows services and solutions developed on Linux technologies and stacks (MAMP, LAMP …): not that this wasn’t possible before, but you had to depend on many more bits and pieces of uncertain trustworthiness. Now it looks like a simpler and well-supported path.

It obsoletes certain technologies designed for similar purposes such as Cygwin and MinGW. It also obsoletes the plethora of ad-hoc installers and Windows-specific binaries for tools such as ActivePerl, git, PostgreSQL, nginx, Ruby, Node.js et cetera.

Finally, on the Open Source / commercial software divide, it demonstrates once more (should there be any need for it) that business can benefit from Open Source: effective immediately, thousands of Open Source enthusiasts are working for the good of Microsoft, with no compensation.

At the moment many questions are still open: when will this technology land on Windows Server (currently it requires installing an app from the Windows Store, which is not always possible) ? Will it be available on previous versions of Windows like Windows 7 and 8.1 ? Will it be integrated with system administration tasks such as installing / un-installing a service ?
