About articles published in the last April-June Review

Jean-Jacques Lauble
Posts: 74
Joined: Fri Aug 23, 2013 3:49 pm

About articles published in the last April-June Review

Post by Jean-Jacques Lauble » Wed Oct 30, 2013 11:43 am

Hi everybody

In the last SD Review, there were two articles about model quality.

Jack Homer cites Jay Forrester regretting the superficiality and mediocrity of the field.
Was Jay thinking of academic work or of practical work?

As a practitioner I am not able to judge academic work, but concerning practical work I totally agree with him.

But is the problem solvable?

I have regularly pointed out the poor quality of published models, especially at the annual SD conference.

Some years ago, I noticed that among 20 Vensim models published at the annual SD conference, 80% had no units or had unit errors, and only one had a mass balance check (something that is necessary when stocks of physical quantities vary partly due to inflows from, or outflows to, outside the model). I have never seen any published model with tests embedded in the model that can be run automatically, like the Reality Check feature in Vensim. I personally no longer use this feature because it is not compiled, which makes thorough testing too slow. But it is perfectly possible to embed these tests using any software.
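To make this concrete, here is a minimal sketch in Python of what I mean by a mass balance check (the model logic and numbers are placeholders, not taken from any real model): over any interval, the change in a physical stock must equal the accumulated inflow minus the accumulated outflow.

[code]
# Minimal sketch of a mass balance check; placeholder logic, not a real model.
DT = 0.25              # time step (hypothetical unit: months)
STEPS = 400

stock = 100.0          # tons
initial_stock = stock
total_in = 0.0         # accumulated inflow, tons
total_out = 0.0        # accumulated outflow, tons

for _ in range(STEPS):
    inflow = 3.0             # tons/month (stand-in model logic)
    outflow = 0.02 * stock   # tons/month
    stock += (inflow - outflow) * DT
    total_in += inflow * DT
    total_out += outflow * DT

# The check itself: drift beyond rounding error signals a model bug.
drift = abs(stock - (initial_stock + total_in - total_out))
assert drift < 1e-6, "mass balance violated: drift = %g" % drift
print("mass balance check passed")
[/code]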

Of course, whatever I posted as a single member had little effect.

When I started SD years ago, I was astonished to see the huge gap between what modelers were supposed to do and what was actually done and published at the annual SD conference or elsewhere.
So much so that, as a beginner, I suspected the academic recommendations of being purely academic, with no practical interest. With time I changed my mind, and I now follow most of them.

I wanted to ask Jack Homer: what has actually been done over the years to improve the quality of the work presented at the conference, as he mentions it?

I think that if nothing is currently being done to change this situation, I will have no reason to stay a member of the SD Society, for two reasons: I am not interested in mediocrity and superficiality, and I think that supporting an organization that does not succeed in changing this situation is not supporting SD in general.
I will follow Jay Forrester's apparently negative advice, break away from the conventional institution and live on my own; that is the only thing that I hope may be considered, and it will certainly not hurt me.

Best regards.

Jean-Jacques Laublé
Last edited by Jean-Jacques Lauble on Tue Nov 12, 2013 9:12 am, edited 1 time in total.

Jack Harich
Posts: 54
Joined: Mon Jan 12, 2009 10:56 am
Location: Atlanta, Georgia US
Contact:

Re: About articles published in the last April-June Review

Post by Jack Harich » Wed Oct 30, 2013 1:52 pm

Dear Jean-Jacques Laublé,

Excellent points. But let's not forget that SD has had some outstanding successes. The field is moving forward, because there are some modelers who are producing perceptive, innovative, relevant work. The webinar series on "Big Data, System Dynamics, and XMILE" presents some of these models. I attended the latest one, Jack Homer's, and was duly impressed. This is state-of-the-art public interest problem solving at its finest.

Thus the "superficiality and mediocrity of the field" is not necessarily true. What is true is that the field can do better. Young fields like SD can always do better, until they mature. This necessarily takes a long time. Frustration and disappointment at slow progress is, in my opinion, not justified. Our progress is normal.

You ask: but is the problem solvable?

"The problem" you raise is that some models are of low quality. Some of this is obvious, such as units problems. But the most important question to me is whether a model achieves its goal. Does it help a client? Does it solve a problem? Does it serve well to make an educational point? And so on.

The models in conference papers may tend to be the work of beginners, or the quick work of experts making certain points. You might go back to a conference's papers and examine a random sample. What pattern does model quality follow? Is there any correlation with the author's SD expertise? Also, it may be that conference models are not screened and that few attendees care about issues like units. They do care about the points a model is used to make. They will rarely investigate a model used in a conference presentation unless it has particular appeal.

Perhaps SD modelers save their better work for papers and clients.

I'm not an insider, but the SD Society seems to be making a serious effort to improve the quality of SD work.

Witness the new requirements for submissions to the Review. The author guidelines state that submitted articles must "Have impeccable SD (model, formulation, validation, interpretation, etc.) – all SD work should be provided in auditable format in order to allow a more streamlined review process." Furthermore "If possible, models should be documented using the SDM-Doc tool described in Martinez-Moyano I.J. 2012. Documentation for model transparency. System Dynamics Review, 28(2):199-208 and hosted by the System Dynamics Society."

I expect Jack Homer and others can address what else is being done to raise the bar.

Thanks,

Jack Harich

Sarah Boyar
Posts: 16
Joined: Sat Jul 18, 2009 5:22 pm

Re: About articles published in the last April-June Review

Post by Sarah Boyar » Wed Oct 30, 2013 2:42 pm

Dear Jean-Jacques,

Please do not leave. At least, speaking for myself (an MSc in SD, now pursuing a PhD using SD... still a student... plus 8 years of using SD in business and consulting)... I may be mostly silent, but I really value your contribution and everything that you add to these forums.

I agree with you regarding rigor. I've seen through practical experience that it is relatively easy to appear proficient in SD, at least to the untrained eye; and through personal practice, as well as through teaching others, I know that it is extremely difficult to master SD modelling, and difficult even to become proficient in it... It makes me appreciate how easy certain practitioners make it look...

Anyway, as someone who has been similarly disillusioned many times, I say forget what's low quality and focus on what's interesting. Don't dwell on mediocrity. But please stay on and keep posting here. : )

Best wishes,
Sarah

Jean-Jacques Lauble
Posts: 74
Joined: Fri Aug 23, 2013 3:49 pm

Re: About articles published in the last April-June Review

Post by Jean-Jacques Lauble » Thu Oct 31, 2013 1:46 pm

Hi Sarah and Jack

Thank you for your answers.

I must first say that my opinions are largely due to the fact that I am at the same time the modeler and the client. This makes me particularly demanding about rigor. As a modeler I am particularly aware of the many possible errors in a model. I do not have the problem of making a client understand and interpret a model that he has not built himself, or in whose building he has seldom participated.
This certainly explains my marginal ideas. If I adhere to some of Jay Forrester's ideas, it is not necessarily for the same reasons. This is why I would have liked to be better informed about what Jay really thought.

But I must still say that superficiality, and by consequence mediocrity, is very common in this field.

I do not think that SD is a new field after more than 50 years of existence.

Jack says that it is not units that matter but the satisfaction of the client. I agree. But at the conference the readers will mostly be SD people, interested in how the model has been built and analyzed and not so much in the conclusions, unless they are particularly interested in the subject itself.
When I browse the 200 papers submitted to the conference, I eliminate papers that do not comply with some elementary SD rules. Unit checking is one of them. I know that it does not prove that the model is good or bad, but its absence proves that the modeler has no experience of SD modeling.
Anybody who starts to study SD quickly realizes the value of unit testing. Knowing the difficulty of SD modeling, I am not very attracted by work done by an SD beginner, even if he is an expert in his own field.
Secondly, I feel very uncomfortable studying a model without units, or worse, with unit errors. Unit errors prove that there is a high probability that the model is flawed.
I find it very difficult to understand a model that has no units. These two reasons make me automatically discard all work that does not comply with this elementary rule.
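As an illustration of what automated unit checking buys you, here is a minimal sketch using pint, a Python units library (this only shows the principle; Vensim performs its own unit checking internally):

[code]
import pint

ureg = pint.UnitRegistry()

stock = 500 * ureg.kilogram          # a physical stock
adjustment_time = 4 * ureg.month     # a time constant

outflow = stock / adjustment_time    # kilogram/month: dimensionally consistent
print(outflow)

# A unit error is caught immediately instead of silently corrupting results:
try:
    nonsense = stock + outflow       # kilogram + kilogram/month: inconsistent
except pint.DimensionalityError as err:
    print("unit error detected:", err)
[/code]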

But I think that for a work to be professional this is not enough. Most models have physical elements flowing through them. In this case it is quite elementary to build in mass balance checking, as I explained in a preceding post. But this is rarely done.

These two rules are extremely easy to respect with a little practice, and they are very useful for detecting model errors. Why then not apply them? This is what I call either a lack of elementary knowledge or laziness. It is the work of people who largely underestimate the difficulty of the task. It is a perfect example of superficiality and mediocrity. But in my opinion it is still not enough.

There is an excellent feature in Vensim, named Reality Checks, introduced in 1994, that unfortunately nobody uses and that has not been well supported and developed by Ventana. I personally no longer use this feature; I have replaced it with my own automatic tests, which are compiled and run much faster, and which I can launch automatically from Excel VBA, varying structural parameters at will.
To summarize: it is quicker, more powerful, not buggy, and safer. The same can be done in any other language.
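In spirit, my setup looks like the following sketch (written here in Python rather than Excel VBA, with a toy model standing in for the compiled Vensim run; the names and checks are illustrative only):

[code]
# Sketch of an automated test harness: run the model over a grid of
# structural parameters and apply the same battery of checks to each run.

def run_model(growth_rate, capacity, steps=400, dt=0.25):
    """Toy logistic stock model standing in for a compiled simulation."""
    stock = 1.0
    trajectory = []
    for _ in range(steps):
        stock += growth_rate * stock * (1.0 - stock / capacity) * dt
        trajectory.append(stock)
    return trajectory

def run_checks(trajectory, capacity):
    """Battery of automatic tests applied to every run."""
    assert all(x >= 0.0 for x in trajectory), "stock went negative"
    assert all(x <= capacity * 1.01 for x in trajectory), "stock overshot capacity"
    assert abs(trajectory[-1] - capacity) < 0.05 * capacity, "no equilibrium reached"

# Vary structural parameters at will; every combination must pass every check.
for growth_rate in (0.5, 1.0):
    for capacity in (10.0, 100.0):
        run_checks(run_model(growth_rate, capacity), capacity)
print("all parameter combinations passed all checks")
[/code]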

I attach again a paper by Peterson and Eberlein about Reality Checks. Everything in this paper makes sense except the last paragraphs about the number of RCs required. I think that a maximum of one tenth of the number of equations is enough, provided they are well chosen. It is then possible to embed in a model all the recommended tests proposed by the SD literature. From experience, I am totally unable to trust a model I have built myself without a minimum of such automatic tests; without them, I know in advance that a model of a certain size is bugged. Building these tests of course requires the close participation of the people who know the system and of the decision makers.
The total absence of these tests in all the published models I have studied is again a proof of superficiality and mediocrity.

At the St. Gallen SD conference, in a plenary session, a man from IBM attacked SD severely, accusing the field of confusing scientific rigor with magic. He simply did not consider the field a scientific one.

Now about sheer superficiality: anybody who has attended an SD conference has seen the huge number of sessions, often six at the same time, each lasting half an hour, where the speaker barely has time to present the subject of his study before it is time for everybody to rush to the next meeting. At the St. Gallen conference, I had spotted someone unique in the SD community: a businessman building his own models, like me. I attended his presentation and, at the end of it, tried to exchange a few words with him. But after thirty seconds of conversation, he escaped, probably rushing to another meeting.

The Wednesday of the last Boston conference was named 'practitioner day'.
Fine, I thought, but the only session about practice was a roundtable of an hour and a half (which is marvelous compared to the usual half hour), and unfortunately it was at lunch time!
I wonder why it is named practitioner day.

The SD Society prefers quantity over quality. This automatically generates superficiality and mediocrity.

I think that is enough criticism. But I really regret that such a high-potential method, totally scientific (it could be a branch of mathematics), is used so unscientifically.
Of course I judge things from my own point of view and do not have to cope with the constraints of the SD Society. But I am a simple member whose objective is to learn something practical about modeling. SD is not my job; I do not sell it, I buy it.

I want to add that if there are some SD successes, all the better for the people concerned; but as I do not sell SD, I am mainly concerned with my own successes.

To finish this post on a note of optimism, I recognize one positive move: the 'if possible' recommendation to use SDM-Doc for the papers published in the Review. I personally would have removed the 'if possible'. SDM-Doc is a practical tool that I use regularly.

Best regards.

JJ
Last edited by Jean-Jacques Lauble on Tue Nov 12, 2013 9:03 am, edited 1 time in total.

Richard Dudley
Posts: 65
Joined: Sun Jan 11, 2009 1:17 pm
Location: Etna, New York, USA
Contact:

Re: About articles published in the last April-June Review

Post by Richard Dudley » Wed Nov 06, 2013 9:09 am

Only slightly off topic, I provide here a link to a paper I intended to present at our 2008 meeting, which I ended up not attending.

Implicit Minimum Requirements for First Draft Models also Provide a Starting Point for Better Reviews, and Quality, of Academic Models

The purpose was to separate the requirements for a "perfect paper" from the minimum requirements for a paper that reviewers should be willing to examine. To a certain extent, the SD meeting is a place where some people are trying out their first SD presentation. We should be trying to help them, and at the same time trying to make the conference papers better.

Eliot Rich
Posts: 20
Joined: Mon Jan 12, 2009 3:39 pm
Location: University at Albany, SUNY
Contact:

Re: About articles published in the last April-June Review

Post by Eliot Rich » Wed Nov 06, 2013 7:35 pm

Hello Richard:

Thank you for circulating your paper. I suggest refining the concept of a "clearly formulated equation" to include comments embedded in the model that indicate the intent of the modeler, sometimes in aggregate and sometimes at the equation level. Novice modelers should be encouraged to state their intent, even if their formulations are not yet robust. As the model progresses, I suggest capturing the sources for parameters and structure.

Best,

Eliot Rich

Travis Franck
Posts: 34
Joined: Sun Jan 11, 2009 7:48 pm

Re: About articles published in the last April-June Review

Post by Travis Franck » Tue Nov 12, 2013 1:01 pm

Jean-Jacques,

I'm generally sympathetic to your comments and critiques. I too am frustrated that SD, as a 50-year-old field, doesn't have or use basic quality control procedures. With experience in computer science, I feel models should have unit checking (agree with you here) and "unit tests". Reality Checks in SD are similar in nature to Unit Tests (and also Exceptions) from CS. There needs to be a way to test functionality as the model calculates, and sub-models/sub-functionality as the model develops. I'm slowly working on systems for Climate Interactive to use to achieve this level of quality control. If/when it is done, my plan is to share the tools with the community as a resource for the field.
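To illustrate the analogy, here is a minimal sketch using Python's built-in unittest module (the population model is hypothetical, a stand-in for a real SD model run, not anything from Climate Interactive): each Reality-Check-style constraint becomes an automated test.

[code]
import unittest

def simulate_population(birth_rate, death_rate, initial=1000.0,
                        steps=100, dt=0.25):
    """Hypothetical stand-in for an SD model run; returns the stock trajectory."""
    population = initial
    trajectory = [population]
    for _ in range(steps):
        population += (birth_rate - death_rate) * population * dt
        trajectory.append(population)
    return trajectory

class RealityChecks(unittest.TestCase):
    """Reality-Check-style constraints expressed as CS-style unit tests."""

    def test_no_births_means_population_never_grows(self):
        traj = simulate_population(birth_rate=0.0, death_rate=0.05)
        self.assertTrue(all(b <= a for a, b in zip(traj, traj[1:])))

    def test_population_stays_nonnegative(self):
        traj = simulate_population(birth_rate=0.01, death_rate=0.30)
        self.assertTrue(all(p >= 0.0 for p in traj))

if __name__ == "__main__":
    unittest.main()
[/code]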

Travis

Leonard Malczynski
Posts: 96
Joined: Fri Jan 16, 2009 11:12 am
Location: Albuquerque, NM USA

Re: About articles published in the last April-June Review

Post by Leonard Malczynski » Tue Nov 12, 2013 3:04 pm

Everyone,
I have been thinking about and working on model quality for several years. Basically it is a selfish attempt at making re-usable components that our team can understand and re-use.
There are lots of guidelines from the software engineering community on software (model) quality during construction.

Yes, there are excellent guidelines on the modeling process and the presentation of results for publication, but not much on the nuts and bolts of construction.
We have a Best Practices booklet, specific to a particular software platform (search for my name in the 2013 conference proceedings workshops).

I am working on an analysis, similar to Jean-Jacques's, of past conference papers.
I hope to submit a paper on it to the next conference.

An aside: I am familiar with a model that was viewed at some of the highest levels of the US government.
Success was determined to be the fact that it was actually seen and used.
The model failed Geoff Coyle's test:
"To the degree that a model passes tests that it is ‘sound, defensible and well grounded’ it has that degree of validity and, hence, of being good enough for its purpose. If no tests were passed, the model would be completely invalid and hence useless. A model might, however, pass many tests but fail one that is absolutely essential, such as, in system dynamics, dimensional consistency. Such a model would be invalid as one would not know how much confidence could be placed in its outputs.” [Coyle and Exelby, 2000]

Let's keep improving.
Len

Leonard Malczynski
Posts: 96
Joined: Fri Jan 16, 2009 11:12 am
Location: Albuquerque, NM USA

Re: About articles published in the last April-June Review

Post by Leonard Malczynski » Thu Nov 14, 2013 11:57 am

Here is the link to the Best Practices booklet:
http://www.systemdynamics.org/conferenc ... index.html
Look for: Malczynski, Leonard Powersim Studio User Group and Advanced Techniques with Powersim Studio
and click on Supporting.

Jean-Jacques Lauble
Posts: 74
Joined: Fri Aug 23, 2013 3:49 pm

Re: About articles published in the last April-June Review

Post by Jean-Jacques Lauble » Sat Nov 16, 2013 8:50 am

Hi Travis

‘Generally sympathetic’ probably means ‘more often diverting than convincing’.

I do not understand what you mean by:

< I feel models should have unit checking (agree with you here) and "unit tests". Reality
< Checks in SD are similar in nature to Unit Tests (and also Exceptions) from CS. (Is CS computer science?)

What is the difference between 'unit checking' and 'unit tests'?

Reality Checks seem to be a particular case of unit tests, unit tests being more general than Reality Checks?

Can you post a link to where unit tests are explained?

Best regards.
JJ
