Lecture 12 – Data Modeling: ER Diagrams, Mapping – Transcript

At each level of decomposition we should complete the data flow diagram in all respects. We must clearly understand the data that is flowing: we must know exactly what goes from one process to another, or what goes from a data store to a process, and this must be properly labeled. We must also label processes meaningfully. In fact, we mentioned earlier that processes are best named by a verb and an object, and we have seen examples of this while talking about function decomposition. The same naming rules or guidelines should be used for labeling these processes as well as the data stores and data flows. So all components that appear in a data flow diagram must be named meaningfully in order to convey the purpose and the meaning of the diagram.
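To make the naming guideline concrete, here is a minimal Python sketch of how DFD components might be represented so that every process, data store, and data flow carries a meaningful, verb-plus-object style label. All class names and example labels here are illustrative assumptions, not part of the lecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Process:
    number: str   # position in the decomposition, e.g. "1.2"
    name: str     # verb + object, e.g. "Validate Order"

@dataclass(frozen=True)
class DataStore:
    name: str     # e.g. "Customer Orders"

@dataclass(frozen=True)
class DataFlow:
    label: str    # the data that flows, e.g. "validated order"
    source: object  # Process, DataStore, or external entity
    target: object

# Every component is named; an unlabeled flow would be a modeling error.
validate = Process("1.2", "Validate Order")
orders = DataStore("Customer Orders")
flow = DataFlow("validated order", validate, orders)
```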

We continue decomposition and add more and more details. So when do we stop? We stop when the processes have become quite well defined: they are no longer too complex, they can be developed and understood, and they can be briefly described. We also stop when control flow starts surfacing. If a subsequent decomposition is going to introduce looping or repeated execution, or conditional execution, then the control flow has started to surface, and at this point we can stop the decomposition, because data flow diagrams do not show flow of control. It is assumed that the processes are executing, receiving data, and producing outputs; no flow of control is shown explicitly in the data flow diagram.

So we refine processes until they are well understood and no longer complex, all the important data stores have been created, and we have identified what they need to contain. Once we have reached this level, we say that the process refinement is complete. In this successive decomposition, we may go through multiple steps, and at each step we create a data flow diagram for the process we are focusing on for the purpose of decomposition.


DFDs do not show flow of control. This is a very important thing we must remember. DFDs also will generally not show one-time activities, like initializations. We do not show processes which initialize or create files or databases; instead we show processes which are running in a steady state. So a data flow diagram can be imagined in terms of processes which are continuously executing: as soon as they receive data, they produce their output and hand it over to the next process, update a data store, or some such action takes place. So we do not generally show one-time activities but show processes in their steady state.

DFDs show only some important exceptions or errors. These are shown in order to capture specific business requirements for dealing with them. So for example, if the inventory has fallen below a certain level, this may be treated as an exception associated with a business rule that reordering has to be done because our inventory has fallen very low. Such exceptions would be shown. But otherwise, routine errors are generally not shown in the data flow diagram. For example, we will not show things like the airline number given by the customer being wrong, or no city matching the given destination existing in our database. We assume that such errors will naturally be handled by our software, but they are routine errors where data validation has to be performed. These are not shown as part of the data flow diagram, so that we concentrate on the main functions and main processes rather than get distracted by routine exceptions.
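A minimal Python sketch of the distinction drawn above, with purely hypothetical names and an assumed reorder threshold: the reorder-level check is a business rule whose exception flow would be shown in the DFD, while routine input validation is handled in the software but not drawn.

```python
REORDER_LEVEL = 50  # assumed threshold, for illustration only

def update_inventory(item, quantity_issued, stock):
    """Steady-state process: issue stock and raise the business-rule exception."""
    stock[item] -= quantity_issued
    if stock[item] < REORDER_LEVEL:
        # This exception flow ("reorder request") is worth showing in the DFD.
        return {"reorder_request": {"item": item, "current_stock": stock[item]}}
    return {}

def validate_booking(airline_code, known_airlines):
    """Routine validation: handled by the software, not drawn in the DFD."""
    return airline_code in known_airlines
```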

Processes must be independent of each other. Again, here we refer to our rule of thumb that cohesion and coupling are the guidelines we always use for any decomposition. So when we define sub-processes we should ensure that the new sub-processes we have created are cohesive in terms of what they do and that there is minimum interdependence between them. In fact, the only way processes or sub-processes interact with each other is through data. The work of a process should depend only on its inputs and not on the state of another process. So processes are independent in that sense, and this is an important point we must observe when we are doing the refinement. Only needed data should be input to a process. This is again an obvious requirement: a process should receive only the inputs it needs for producing the outputs that are its responsibility.
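One way to picture this independence rule is to write each sub-process as a pure function of its input flows, so its work never depends on another process's internal state. A minimal sketch follows; the function and field names are illustrative assumptions.

```python
def compute_order_total(order_lines):
    """Depends only on its input flow, not on any other process."""
    return sum(line["qty"] * line["unit_price"] for line in order_lines)

def prepare_invoice(customer, order_total):
    """Receives only the data it needs; coupling is limited to these flows."""
    return {"customer": customer, "amount_due": order_total}

# Processes interact only by passing data along flows:
total = compute_order_total([{"qty": 2, "unit_price": 10.0}])
invoice = prepare_invoice("ACME Ltd", total)
```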


As we do refinement we must also ensure consistency across the different levels of refinement. Here is an example where, at the top, we show a data flow diagram in which process F1 has been defined as having input A and producing output B. Now this process F1 itself may be fairly complex, and it may be decomposed into different sub-processes. Since this is process 1, we decompose it into 1.1, 1.2, 1.3, and 1.4 as four sub-processes with certain relationships among them. So this decomposition shows that a complex process such as F1 gets decomposed into four processes, which are named 1.1, 1.2, and so on to indicate that they are part of process 1.

And in this case, consistency among the levels requires that the inputs of the decomposed diagram match the inputs of the parent process. So inputs and outputs must match. On the other hand, new data stores may be shown. For example, at the higher level we did not show a data store, but when we decompose F1, a new data store might surface because it needs to supply some history data or past data to one of the sub-processes. So the important point in refinement is that there must be consistency among levels: the inputs and outputs at level 1 should be the same as the inputs and outputs at level 2.
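This consistency rule is often called balancing, and it can be checked mechanically. Below is a minimal Python sketch, under the assumption that flows are recorded as simple dictionaries: the external inputs and outputs of the child diagram must equal the inputs and outputs of the parent process, while flows to and from new data stores inside the child diagram stay internal and do not take part in the check. The structures and names are illustrative, not from the lecture.

```python
def external_flows(child_flows, internal_nodes):
    """Split the child diagram's flows into those crossing its boundary."""
    ins = {f["label"] for f in child_flows
           if f["target"] in internal_nodes and f["source"] not in internal_nodes}
    outs = {f["label"] for f in child_flows
            if f["source"] in internal_nodes and f["target"] not in internal_nodes}
    return ins, outs

def is_balanced(parent_inputs, parent_outputs, child_flows, internal_nodes):
    """True if the child diagram's external I/O matches the parent process."""
    child_ins, child_outs = external_flows(child_flows, internal_nodes)
    return child_ins == set(parent_inputs) and child_outs == set(parent_outputs)

# F1 takes A and produces B; its decomposition into 1.1-1.4 must do the same.
flows = [
    {"label": "A", "source": "external", "target": "1.1"},
    {"label": "x", "source": "1.1", "target": "1.2"},   # internal flow
    {"label": "B", "source": "1.4", "target": "external"},
]
print(is_balanced(["A"], ["B"], flows, {"1.1", "1.2", "1.3", "1.4"}))  # True
```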
