Very Basic Question
Moderator: Jim Duggan

 Posts: 16
 Joined: Sat Jul 18, 2009 5:22 pm
Very Basic Question
Hi,
A very basic question, for which I would love a succinct answer or a reference to an academic paper that deals explicitly with how to explain this topic to a client or non-system-dynamicist.
In the model below, if the timestep is 1, the stock drains completely in one period.
If the timestep is 0.0625, the stock never fully drains.
How do I explain this in practical terms, in the context of a concrete, physical model behaving this way?
Is the question I am really asking how to explain the benefits of a simulation model versus an exact analytical solution?
~
Thanks,
Sarah
~~
drain=
Stock/time to drain
~ water/year
~ 
time to drain=
1
~ year
~ 
Stock= INTEG (
-drain,
100)
~ water
~ 
Stock2= INTEG (
drain,
0)
~ water
~ 
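The behaviour described above can be reproduced outside Vensim in a few lines; the following is a hypothetical Python re-implementation of the listed equations (not a Vensim export), Euler-integrating dStock/dt = -Stock/time to drain:

```python
# Hypothetical Python sketch of the model above:
# dStock/dt = -Stock / time_to_drain, integrated by Euler's method.
def simulate(dt, time_to_drain=1.0, stock=100.0, horizon=1.0):
    """Return the stock remaining after `horizon` years of draining."""
    for _ in range(round(horizon / dt)):
        drain = stock / time_to_drain  # outflow, water/year
        stock -= drain * dt            # Euler step removes drain * dt
    return stock

print(simulate(dt=1.0))     # -> 0.0: the whole stock leaves in one step
print(simulate(dt=0.0625))  # -> ~35.6, near the exact 100*exp(-1) ~ 36.8
```

With dt = 1 the single step removes drain * dt = 100, so the stock empties at once; with smaller steps the drain shrinks along with the stock and an exponential remainder always survives.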
Re: Very Basic Question
Hi Sarah
I would ask the client the following question.
I give you two options:
With the first, you have a 100 percent chance of dying in the following year.
With the second, you have a 50% chance of dying in the first half-year and a 50% chance of dying in the next one. Which option do you choose?
It looks evident that with the first option he will be dead at the end of the year, while with the second he always has a 50 percent chance of being alive at the end of any period.
This illustrates that with a model the result changes with the time step, particularly in extreme conditions like a 100 percent chance of dying in a year.
Once he has understood this well, you can show him that the option of a 25% chance of dying per quarter, obtained by setting the time step to a quarter, is more attractive still, and so on.
Attached is the simple model that you can show the client. It is in Vensim but can be read with Vensim PLE or the Vensim Reader.
Regards.
JeanJacques Laublé
 Attachments

 drain.zip
 (10.05 KiB) Downloaded 590 times
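The options above can also be checked numerically. The sketch below (my own illustration, not the attached drain.zip model) compounds a fixed per-period death fraction equal to dt, which is exactly what a first-order drain with a one-year time constant does under Euler integration:

```python
# Fraction of people still alive after one year when a fraction dt of
# those alive dies in each Euler step of length dt (time constant = 1 year).
def survivors(dt, horizon=1.0):
    alive = 1.0
    for _ in range(round(horizon / dt)):
        alive *= 1.0 - dt  # each period kills the fraction dt of the living
    return alive

print(survivors(1.0))   # -> 0.0   : 100% chance of dying within the year
print(survivors(0.5))   # -> 0.25  : two half-years at 50% each
print(survivors(0.25))  # -> ~0.316: four quarters at 25% each
```

As dt shrinks, the survivors approach exp(-1) ≈ 0.368, the continuous-time answer, which is why the stock "never fully drains" with a small time step.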

 Posts: 38
 Joined: Thu Nov 03, 2011 1:28 pm
Re: Very Basic Question
Hi Sarah,
the question you have brought forward touches the heart of the SD methodology: why should we use models in continuous time as opposed to models in discrete time (e.g. just setting dt to 1)? The answer is that the sometimes awkward 'inaccuracy' will not matter, or rather will be offset by the grip we gain on complex reality. If a system contains lots of accumulation and draining processes and many interdependencies (e.g. feedback), the benefit of continuous-time models, with their ease of capturing these system features, should prove much greater than the cost of being off from an exact value that often never exists anyway for the (unknown) model that produces reality.
That said, in the example you have given I would be careful about what is meant by 'time to drain'. The term 'solution' with regard to a system of differential equations is, mathematically speaking, a function. In your case it is the exponential function: one could give an exact function for what should be in the stock and then build this as an instantaneous process using just auxiliaries, with no need to integrate. So the exponential decay process you are describing has a time constant (the time to drain) which, for a first-order differential equation, gives the mean residence time.
Your model is continuously compounding and the fractional rate of decay is a constant. This is the first thing that has to be explained to a customer not familiar with SD: SD models show what is in the stocks for any multiple of DT, approaching continuous time. IMHO, dt = 1 is a very dangerous time step to use; all too easily the model will not work properly at, say, dt = 0.5 or less. So the proper questions for setting up an SD model are: what do you know about the behaviour of the stock (this amounts to asking for the order of the delay, with the pipeline delay as an extreme), and what do you know about the value of the stock after one year? If you know that the stock should be zero after one year and that the process is roughly a third-order delay, then you might fit the time to drain accordingly so that the behavior approaches what you know about reality (you might use a threshold with an IF THEN ELSE function to force the stock value to zero, though that is of course not neat).
As you can see, the time to drain all of a sudden takes on a different meaning for physical, real-world processes that behave like an exponential decay. If what you are trying to do is build a model that represents linear depreciation in a continuous model (the business-economics way of looking at depreciation), then here is a good paper to have a look at:
http://www.systemdynamics.org/conferenc ... 6SCHWA.pdf
If you take a look at figure 4 on page 8 (left-hand side), you can see that the time constant (e.g. your time to drain) is adjusted every DT so that the scrap value (e.g. zero) is reached at the end of the economic life.
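That adjustment can be sketched as follows (my reading of the mechanism described above, not code from the paper): resetting the time constant each step to the remaining economic life makes the drain constant, so the stock depreciates linearly and reaches zero exactly at the end of life.

```python
# Linear depreciation from an exponential-drain structure: the time
# constant is reset every DT to the remaining economic life.
def depreciate(dt, life=1.0, stock=100.0):
    t = 0.0
    while t < life - 1e-9:
        time_to_drain = life - t  # adjusted every DT, as in the figure
        stock -= (stock / time_to_drain) * dt
        t += dt
    return stock

print(depreciate(dt=0.25))    # -> 0.0 at the end of the economic life
print(depreciate(dt=0.0625))  # -> 0.0 regardless of the time step
```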
Hope that helps.
Kind regards,
Guido W. Reichert
PS:
@JJ
The comparison you have given illustrates the above quite nicely. The options you describe (100% probability of death vs. two periods of 50% probability of death) take the probability (e.g. the time constant) as a constant. But it must be variable for the options to be equivalent: the probability for the first half-year should be 50% and that for the second half-year accordingly 100% (a conditional probability, of course, affecting those who have survived up to that point in time). In the end all are dead, as Keynes also noted for the long run.
In a proper SD model the choice of DT should not affect the simulation outcome. It should be a purely technical construct that determines accuracy and speed for a given numerical integration technique.

 Posts: 152
 Joined: Thu Jan 15, 2009 6:55 pm
 Location: Bozeman, MT
 Contact:
Re: Very Basic Question
JJ & Guido, these are great answers.
For some reason I find that people have more trouble with the idea of a time constant than they do with a fractional rate of change. For example, if I tell people to model the outflow of customers from an installed base, with an average customer residence time of 3 years, they're stumped. But if I pose the same problem, with 1/3 of customers lost per year, they have no problem writing an equation.
I think part of the problem is that, as soon as you start talking about lifetimes, people jump to a discrete or pipeline delay perspective. Another part may be that I just don't have a really pithy explanation for the Little's Law equivalence between residence time and fractional loss rate. I've lost my Zen beginner's mind on this. Perhaps some of the educators on the forum can weigh in with helpful language?
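The equivalence can be made concrete with a toy calculation (the names and numbers here are mine, for illustration): dividing by a residence time and multiplying by a fractional loss rate are the same first-order outflow, and in steady state Little's Law ties the stock to inflow times residence time.

```python
# Two phrasings of the same first-order outflow from an installed base.
residence_time = 3.0                  # average customer residence, years
loss_fraction = 1.0 / residence_time  # fraction of customers lost per year

installed_base = 300.0
outflow_time_constant = installed_base / residence_time  # "3-year lifetime"
outflow_fractional = installed_base * loss_fraction      # "lose 1/3 per year"
print(outflow_time_constant, outflow_fractional)  # both 100.0 customers/year

# Little's Law in steady state: stock = inflow * residence time.
inflow = 100.0                  # customers/year joining
print(inflow * residence_time)  # -> 300.0, the equilibrium installed base
```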
Blog: http://blog.metasd.com
Model library: http://models.metasd.com
Work: http://ventanasystems.com/ & http://vensim.com/
Re: Very Basic Question
Hi Guido and Tom
The paradox described comes from an ambiguity.
When someone says there is a 100 percent chance of being dead at the end of the year, that is not well expressed by an exponential decay, because the outcome varies with the time step: the model is not correct, since at the end of the year there is a 25% chance of still being alive if one uses a half-year time step.
See the attached model, which makes a correction that respects the 100% probability of being dead at the end of the year, whatever the time step.
Regards.
JeanJacques Laublé
 Attachments

 draining2.zip
 (11.4 KiB) Downloaded 548 times
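A sketch of such a correction (my own reconstruction, not the attached draining2 model): the per-step death fraction is dt divided by the time remaining, so the conditional probability is 50% for the first half-year and 100% for the second, and everyone is dead at year end whatever the time step.

```python
# Corrected mortality model: the per-step death fraction grows as the
# deadline approaches, so the year-end outcome no longer depends on dt.
def survivors_corrected(dt, horizon=1.0):
    alive, t = 1.0, 0.0
    while t < horizon - 1e-9:
        death_fraction = dt / (horizon - t)  # conditional probability this step
        alive *= 1.0 - death_fraction
        t += dt
    return alive

print(survivors_corrected(0.5))     # -> 0.0: steps of 50%, then 100%
print(survivors_corrected(0.0625))  # -> 0.0: sixteen steps, same outcome
```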

 Posts: 113
 Joined: Mon Feb 16, 2009 5:56 am
Re: Very Basic Question
I realise this is very politically incorrect by our rules, but I have found that real-world folk, even bright ones, are completely mystified by dT, and especially so when they see results for a known period, such as a month, that don't make sense because the flows changed during that period. This leaves them completely disconnected from the normal reporting they get, making SD models feel like something alien rather than a more helpful way of looking at what they already understand.
I therefore [sorry] encourage them to model in small enough time units that any dT-related error will be small relative to the uncertainty in their data.
And after all these years, I have still not had anyone tell me why an annual model with dT = 0.25 is better than a quarterly model with dT = 1, or a quarterly model with dT = 1/3 is better than a monthly model with dT = 1. Yes, yes, I get that strictly you should then test with smaller timesteps, but as I say, by that point you are usually well inside the range of uncertainty in the data and the causal relationships in any case.
I'll get under the table now before the experts start throwing things at me.
Kim
Re: Very Basic Question
Hi Kim
I do not know what makes you fear the experts' judgment. It is the client that pays you.
Regards.
JeanJacques Laublé

 Posts: 152
 Joined: Thu Jan 15, 2009 6:55 pm
 Location: Bozeman, MT
 Contact:
Re: Very Basic Question
Coming back to the original question, I think you need to explicitly discuss the discrete vs. continuous problem with people, using concrete examples that are reasonably close to the kinds of systems that they might be thinking about. Draw some reference modes, e.g. for one light bulb burning out vs. hundreds. A lot of familiar processes will involve heterogeneous agents or chain processes, and therefore won't be first order, so it's useful to discuss that as well. Many things that appear to be discrete (say, bonds) actually aren't so much (due to diversity in terms and redemption behavior).
Water draining is a treacherous analogy, because the outflow behavior is nonlinear, so there is no long exponential tail. See the nice little paper by Pal Davidsen et al.  http://blog.metasd.com/2011/08/limitstobathtubs/
Blog: http://blog.metasd.com
Model library: http://models.metasd.com
Work: http://ventanasystems.com/ & http://vensim.com/

 Posts: 152
 Joined: Thu Jan 15, 2009 6:55 pm
 Location: Bozeman, MT
 Contact:
Re: Very Basic Question
Re Kim's point, I agree that there's no difference in principle between dt = .25 year and dt = 1 quarter.
However, in practice, a lot of people who build models with dt = 1 aren't balancing their units, and therefore the time step enables them to get away with all kinds of things that will blow up when dt <> 1, possibly with other horrible side effects.
So, as long as you check your units and test a smaller dt once in a while, there's no reason to avoid dt = 1. Some of my best friends use dt = 1.
Incidentally, a desire to get away from the discrete view of time, e.g. in the Euler syntax of DYNAMO equations, is part of the reason for Vensim's use of (implicitly continuous) integration to describe stocks.
Industrial Dynamics, Appendices D & O, has some useful comment on this topic.
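The unit trap can be illustrated with a hypothetical toy (the names here are mine): a flow written as an amount per period instead of a rate per unit time gives the same answer only while dt happens to equal 1.

```python
# A flow written as "people per step" instead of "people per year" is
# hidden at dt = 1 and blows up at any other time step.
def final_stock(dt, balanced, horizon=10.0):
    stock, hires_per_year = 0.0, 12.0
    for _ in range(round(horizon / dt)):
        if balanced:
            stock += hires_per_year * dt  # rate [people/year] * dt [year]
        else:
            stock += hires_per_year       # silently treats the rate as
                                          # an amount per step
    return stock

print(final_stock(1.0, balanced=True))    # -> 120.0
print(final_stock(1.0, balanced=False))   # -> 120.0 (bug hidden at dt = 1)
print(final_stock(0.25, balanced=True))   # -> 120.0 (dt-robust)
print(final_stock(0.25, balanced=False))  # -> 480.0 (blows up when dt <> 1)
```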
Blog: http://blog.metasd.com
Model library: http://models.metasd.com
Work: http://ventanasystems.com/ & http://vensim.com/

 Posts: 38
 Joined: Thu Nov 03, 2011 1:28 pm
Re: Very Basic Question
Regarding JJ's injection "why fear experts if it is the client who pays", I tend to hold quite a different opinion, which is, I am afraid, a bit less liberal. Do the reason and reasoning of experts matter, or is it "anything goes that is paid for"? As a consultant I tend to believe that in the long run it is truth and best knowledge that count. Put the other way around: does the client already know everything that he has to pay for, e.g. the solution to his problem? If you are an expert in something and you are called by a client, you will not have the client tell you how to apply the expertise. Even more to the point: psychologists have made great progress at uncovering the many illusions that people and groups fall victim to. Overconfidence, belief in "plausible stories", and sticking to the facts that are more available or more easily understood (e.g. "the model must be right because it 'exactly' meets our numbers") are all very bad reasons to do or advise something.
So the point is: are there good reasons for or against using DT = 1?
Having people recognize "their" numbers, and easy understanding, are good reasons for using DT = 1. And yes, in many cases this might help, but then you are more or less starting to do discrete simulation (which of course we are somehow doing anyway, but that is sophism). You are starting to tell your clients that all it takes is doing it the discrete way, and maybe they start to believe that this is SD. So how do you approach the same client when things turn out the other way and DT needs to be well below 1 (one never knows the end before it comes in modeling)?
As I have tried to make clear in another post (http://www.ventanasystems.co.uk/forum/v ... =15&t=4473) there is a temptation to calculate averages of stocks in order to meet the numbers. None of this is a danger if, say, you have monthly data and everything plays out well above a month; the model should then approach what is called dimensionless time, as far as I know (cf. Bossel's book on systems). If you have the number of customers at the beginning of a period and at the end, then the sales revenue is average sales times average customers. If that matters, you will need some kind of average in the stock, as Kim uses in the strategy dynamics method explained in his latest book.
You do not have to worry about averages in stocks if you use DT simply as what it is: a way to numerically approach accuracy of integration. I tend to use a macro to calculate the average of a flow so that a client will not be confused by irregularities in flows. But I simply try to conceive the model in continuous time; that way I am on the safer side. It does perhaps require a bit of client education, but a good way to start might be to puncture their overconfidence in the absolute truth behind their accounting numbers...
Kind regards,
Guido