Lecture – 12 Data Modeling – ER Diagrams, Mapping – Transcript

Then there is another process to which the make reservation process supplies some output and that process is the billing system. Again we can really see that billing system must receive some inputs from make reservation so that the cost of the journey or the cost of the ticket can be calculated. And this billing system will produce a bill for the traveler and will also note that in the accounting file. Subsequently the billing system will also handle the payment from the traveler.

So in this airline reservation system we have defined three processes: make reservation, prepare ticket and billing system. We have identified one external entity, who is the user of this software or this application. And we have identified the data stores. These data stores contain the data relevant to the application. So these data may be related to the flights, or to the customer himself so that we can keep the billing information for him, and also the data about the bookings that we have made. So an airline reservation system would consist of such processes.
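The elements identified above can be captured in a small sketch. This is not from the lecture; it is a minimal illustration, with assumed flow labels, of how the three processes, the external entity, the data stores and the data flows of the airline reservation example might be recorded as simple Python data structures.

```python
# Illustrative only: the flow labels ("reservation request", etc.) are
# assumptions for this sketch, not names given in the lecture.
from dataclasses import dataclass, field

@dataclass
class DataFlowDiagram:
    processes: set = field(default_factory=set)
    external_entities: set = field(default_factory=set)
    data_stores: set = field(default_factory=set)
    flows: list = field(default_factory=list)  # (source, label, destination)

dfd = DataFlowDiagram()
dfd.processes = {"make reservation", "prepare ticket", "billing system"}
dfd.external_entities = {"traveler"}
dfd.data_stores = {"flights", "customers", "bookings", "accounting"}
dfd.flows = [
    ("traveler", "reservation request", "make reservation"),
    ("make reservation", "booking details", "prepare ticket"),
    ("make reservation", "journey cost data", "billing system"),
    ("billing system", "bill", "traveler"),
    ("billing system", "billing record", "accounting"),
]

print(len(dfd.processes))  # 3 processes, as identified in the lecture
```

Listing the components this way makes it easy to check that every flow connects only processes, entities and stores that actually appear in the diagram.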

So you see here that the data flow diagram can be read in terms of external entities, the data that they supply or the results that they receive. And through the names that we have selected for these processes we can try to understand what happens in this application. So again, the naming is very important. We name the bubbles as well as the data stores properly, and when we do that, a data flow diagram can be understood easily without any additional explanation from the analyst. This is the advantage of data flow diagrams: they are understandable on their own.

Now when we start designing or developing the data flow diagram, we can generally show the entire application as a single process. This is the first step in preparing the data flow diagram, and such a diagram, where the entire application is shown as a single process, is called a context diagram. It identifies all the external interfaces of the application we are developing. So the context diagram is a very important step, and the focus here is not so much on the details of the process itself but on its external interfaces. What are the external entities it is going to interact with? What are the outputs it will produce? What are the existing data stores that it might have to interface with, in terms of obtaining or updating data? So this is usually the starting point, and it is also called the fundamental system model or the Level 0 Data Flow Diagram.

So you do the data flow diagramming in steps by successively refining the different processes, by successively decomposing those processes and in this you add more and more details. But the starting point is always the context diagram in which the focus is on the external interfaces of the software.

Here is a simple example of a context diagram in which the whole software application that we are developing is shown as a single process or a single bubble. We identify the users, and the inputs and outputs that the system receives or produces. We also identify existing sources of data. These existing sources contain data which is useful for our application, but they exist outside it. By showing such a data store in the context diagram, we are clearly stating that it is assumed to exist and will not be part of our development and design effort. That is the boundary: we are clearly defining the boundary of the software that we want to develop. We also identify other external interfaces which may be necessary for connecting our application with other applications. These may be messages, or they may be data stores which interface with external systems. So the context diagram is a very important first step in preparing the data flow diagram.
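A context diagram for the airline reservation example might be recorded as below. This is a hedged sketch, not the lecture's own diagram: the single bubble stands for the whole system, and only the external interfaces are listed. The specific input and output names are assumptions made for illustration.

```python
# Level 0 / context diagram: the whole application is one process, and we
# record only its external interfaces, never its internal details.
context = {
    "process": "airline reservation system",      # the single bubble
    "external_entities": ["traveler"],
    "inputs": ["reservation request", "payment"],  # assumed names
    "outputs": ["ticket", "bill"],                 # assumed names
    "existing_data_stores": ["flights"],  # assumed to exist; outside the design boundary
}

# Nothing inside the bubble is modeled at this level.
print(context["process"])
```

Anything listed under `existing_data_stores` lies outside the boundary of the development effort, which is exactly the point the context diagram is meant to make explicit.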

After we have done the context diagram, we now decompose the process into its sub-processes. So here process decomposition comes into the picture. When we do this, we replace the process by its constituent processes. In doing so, we may reveal additional data stores or additional external interfaces. So we are adding more and more details. We also adopt a simple numbering system through which we can readily show the constituent processes of a process which we have decomposed. Generally we use a decimal numbering system: if we are decomposing process 1, then its sub-processes would be numbered 1.1, 1.2, etc. This makes it easy to understand the decomposition relationship between the processes.
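The decimal numbering rule above can be sketched as a small helper. The function name and the sub-process names are hypothetical; the sketch only shows how a parent number such as "1" yields children numbered 1.1, 1.2, and so on.

```python
# Sketch of the decimal numbering convention for process decomposition.
def number_subprocesses(parent_number, subprocess_names):
    """Assign decimal numbers to the sub-processes of a decomposed process."""
    return {f"{parent_number}.{i}": name
            for i, name in enumerate(subprocess_names, start=1)}

# Hypothetical decomposition of process 1, "make reservation":
subs = number_subprocesses("1", ["check availability",
                                 "reserve seat",
                                 "confirm booking"])
print(subs)
# {'1.1': 'check availability', '1.2': 'reserve seat', '1.3': 'confirm booking'}
```

Applying the same helper to "1.2" would yield 1.2.1, 1.2.2, etc., so the number alone tells a reader exactly where a process sits in the decomposition hierarchy.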

At each level of decomposition we should complete the data flow diagram in all respects. We must clearly understand the data which is flowing. We must know what exactly goes from one process to another process, or what goes from a data store to a process, and this must be properly labeled. We must also label processes very meaningfully. In fact, we had earlier mentioned that processes are best named by a verb and an object, and we have seen examples of this while talking about function decomposition. The same naming rules or guidelines should be used for labeling these processes as well as the data stores and data flows. So all components which appear in a data flow diagram must be named meaningfully in order to convey the purpose and the meaning of the diagram.

We continue decomposition and add more and more details. So when do we stop? We stop when the processes have become quite well defined. They are not too complex now; they can be developed and understood, and can be briefly described. We also stop when control flow starts surfacing. If subsequent decomposition is going to introduce looping or repeated execution, or conditional execution, then the control flow has started to surface. At this point we can stop the decomposition, because data flow diagrams do not show flow of control. It is assumed that the processes are executing, receiving data and producing outputs; there is no flow of control shown explicitly in a data flow diagram.

So we refine processes until they are well understood and no longer complex, and until all the important data stores have been created and we have identified what they need to contain. Once we have reached this level we say that the process refinement is complete. In this successive decomposition we may go through multiple steps, and at each step we create a data flow diagram for the process which we are focusing on for the purpose of decomposition.

