Thursday, February 11, 2010

Data Flow Diagram

A data flow diagram (DFD) is a description of the data in a system with respect to the processes performed on that data.

SYMBOLS
1. The process symbol carries the following elements: a process number (identifying the process), a locality (where the activity is happening) and a process name.
Rules:
• Must represent a transformation of data
• Must have data flows into and out of the process
• A process with no output is said to be null

2. The data store symbol comprises the data store number and the name of the data store.
Rules:

• The symbol and its numbering remain the same wherever the data store appears

3. The data flow symbol signifies the movement of data.
Rules:
• A double arrow indicates that activities occur simultaneously
• Data flowing in is not equal to data flowing out

4. The external entity symbol shows the sources and destinations of data (the four symbols and their rules are sketched as a small data structure after this list).
Rules:
• External entities never communicate with each other
• An external entity should not communicate directly with a data store
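
To make the symbol definitions concrete, here is a minimal Python sketch of the four symbols and a check of the two external-entity rules. The class names, fields and the Customer/Orders example are illustrative assumptions, not part of any particular DFD notation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Process:
    number: str    # process number, e.g. "1.2"
    name: str      # process name
    locality: str  # where the activity is happening

@dataclass(frozen=True)
class DataStore:
    number: str    # data store number, e.g. "D1"
    name: str      # name of the data store

@dataclass(frozen=True)
class ExternalEntity:
    name: str      # source or destination of data

@dataclass(frozen=True)
class DataFlow:
    name: str      # the data that is moving
    source: object # a Process, DataStore or ExternalEntity
    target: object

def violates_entity_rules(flow: DataFlow) -> bool:
    """True if the flow breaks either external-entity rule above."""
    entity_to_entity = (isinstance(flow.source, ExternalEntity)
                        and isinstance(flow.target, ExternalEntity))
    entity_to_store = (
        (isinstance(flow.source, ExternalEntity) and isinstance(flow.target, DataStore))
        or (isinstance(flow.source, DataStore) and isinstance(flow.target, ExternalEntity))
    )
    return entity_to_entity or entity_to_store

# An order flowing straight from the Customer into a data store skips the
# process that should transform it, so the check flags it.
customer = ExternalEntity("Customer")
orders = DataStore("D1", "Orders")
print(violates_entity_rules(DataFlow("order details", customer, orders)))  # True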


In order to produce a quality DFD, it is important to analyze the data flow of the system. Data analysis comprises four main questions the analyst ought to answer:


• What processes make up a system?
• What data are used in each process?
• What data are stored?
• What data enter and leave the system?

Data flow analysis tracks the flow of data through business processes and determines how organizational objectives are met. It studies the use of data in each activity and documents the findings in data flow diagrams, graphically showing the relationship between processes and data.

There are two types of data flow diagrams, namely physical data flow diagrams and logical data flow diagrams, and it is important to distinguish clearly between the two:

Physical DFD
Using physical DFDs is desirable for analysts for the following reasons:

1. It is easier to describe the interaction between physical components than to understand the policies used to manage the application. The analyst identifies people, what they do, which documents and forms trigger which activities, and what equipment is used in the processing. The movement of people, documents and information between departments and locations is also identified.

2. Physical data flow diagrams are useful for communicating with users. Users relate easily to people, locations and documents, as they work with these each day. Users may consider logical DFDs abstract because they do not contain these familiar components; with physical DFDs, however, users can quickly identify incorrect or missing steps.

3. Physical DFDs provide a way to validate or verify the user's current view of the system against the way it is actually operating. If there are differences, they will be noted and discussed. It is not unusual to find that what a user thinks is happening differs in an important way from what actually occurs.


Steps in Developing a Physical DFD (using a top-down approach):
1. Make a context-level DFD (Level 0). It illustrates the interaction (data flows) between the system (represented by one process) and the system environment (represented by terminators).
2. Decompose the system in a lower-level DFD (Level 1) into a set of processes, data stores, and the data flows between these processes and data stores.
3. Each process is then decomposed into an even lower-level diagram containing its subprocesses.
4. This approach continues through the subsequent subprocesses until a necessary and sufficient level of detail is reached; a process at that level is called a primitive process ("chewable in one bite"). The structure these steps produce is sketched below.
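
As a rough illustration of that top-down structure, the following Python sketch nests a hypothetical Level 0 (context) process, its Level 1 subprocesses and one further level, and lists the primitive processes at the bottom. All process and terminator names are invented for the example.

# Context level (Level 0): the whole system as one process plus its terminators.
context_level = {
    "process": "0 Order Processing System",
    "terminators": ["Customer", "Warehouse", "Bank"],
    "subprocesses": [                                        # Level 1 decomposition
        {"process": "1 Receive Order", "subprocesses": []},  # primitive
        {"process": "2 Check Credit", "subprocesses": []},   # primitive
        {"process": "3 Fill Order", "subprocesses": [        # decomposed further (Level 2)
            {"process": "3.1 Pick Items", "subprocesses": []},
            {"process": "3.2 Pack and Ship", "subprocesses": []},
        ]},
    ],
}

def primitives(node):
    """Walk the levels and collect the primitive ("chewable in one bite") processes."""
    if not node["subprocesses"]:
        return [node["process"]]
    found = []
    for child in node["subprocesses"]:
        found.extend(primitives(child))
    return found

print(primitives(context_level))
# ['1 Receive Order', '2 Check Credit', '3.1 Pick Items', '3.2 Pack and Ship']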

Steps in Developing a Logical DFD:
1. Develop a physical DFD.
2. Explore the processes.
3. Maintain consistency between processes.
4. Follow meaningful levelling conventions.
5. Ensure that the DFD clarifies what is happening in the system.
6. Remember the DFD audience.
7. Add control on lower-level DFDs only.
8. Assign meaningful labels.
9. Evaluate the DFD for correctness.

Rules:

• Data flow output must be based on data input to the process (this rule is sketched in code after the list).
• Data flows must have names; the name reflects the data flowing between processes, data stores, sources and sinks (destinations).
• Inputs to a process are the only data needed to perform that process.
• A process should be independent of other processes in the system.
• A process should depend only on its own inputs and outputs.
• Processes are always running; they do not start or stop.
• Output from processes can take one of several forms:

a) The input data flow with information added by the process
b) A response or change of data form
c) Change of status
d) Change of content
e) Change in organization
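
The first rule (output must be based on input) can be treated as a simple set check: every data element appearing in an output flow has to come from some input flow or be something the process itself derives. The flow and field names below are hypothetical.

# Data elements carried by the process's input flows (assumed example).
process_inputs = {
    "order details": {"customer id", "item", "quantity"},
    "price list":    {"item", "unit price"},
}

# Data elements carried by its output flows; "due date" appears from nowhere.
process_outputs = {
    "invoice": {"customer id", "item", "quantity", "unit price", "total", "due date"},
}

derived = {"total"}  # values the process is assumed to compute itself

available = set().union(*process_inputs.values()) | derived
for flow, elements in process_outputs.items():
    missing = elements - available
    if missing:
        print(f"Flow '{flow}' uses data not present in any input: {missing}")
    else:
        print(f"Flow '{flow}' is fully based on the process inputs.")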


Deriving the Logical View from Physical DFD

Physical DFDs are a means to an end, not an end in themselves. They are drawn to describe an implementation of the existing system for two reasons:

• To guarantee a correct understanding of the current implementation (users are generally better able to discuss the physical system as they know it, through people, workstations and days of the week).

• The implementation itself may be a problem or limiting factor; changing the implementation, rather than the system concept, may provide the desired results.

A logical view steps back from the actual implementation and provides a basis for examining the combination of processes, data flows, data stores, inputs and outputs without concern for the physical devices, people or control issues that characterise the implementation.

Steps:

• Show actual data needed in a process, not the documents that contain them.
• Remove routing information; that is, show the flow between procedures, not between people, offices or locations (see the sketch after these steps).
• Remove references to physical devices.
• Remove references to control information.
• Consolidate redundant data stores.
• Remove unnecessary processes, such as those that do not change the data or data flows.
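
As a small illustration of two of those steps, the sketch below strips the physical routing annotations from flows and folds two redundant physical stores into a single logical store. The flow names, the "Order file cabinet"/"Orders spreadsheet" stores and the "D1 Orders" label are all made up for the example.

# Physical flows: (data, from, to, routing detail that the logical view drops).
physical_flows = [
    ("order details", "Sales clerk", "1 Receive Order", "white copy, Form 101"),
    ("order details", "1 Receive Order", "Order file cabinet", "filed by week"),
    ("order details", "1 Receive Order", "Orders spreadsheet", "copied on Mondays"),
]

# Two redundant physical stores consolidated into one logical data store.
store_aliases = {"Order file cabinet": "D1 Orders", "Orders spreadsheet": "D1 Orders"}

logical_flows = set()
for data, source, target, _routing in physical_flows:
    source = store_aliases.get(source, source)
    target = store_aliases.get(target, target)
    logical_flows.add((data, source, target))

for flow in sorted(logical_flows):
    print(flow)
# The two physical filings collapse into one flow into "D1 Orders", and the
# routing annotations ("white copy", "copied on Mondays") disappear.
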
• Maintain Consistency between Processes


When developing DFDs in more detail, it is important to maintain consistency between levels. Consistency contributes to reliability, and reliability in a DFD is significant in developing a system; in other words, consistency is one of the factors that can affect the success of the system built from the DFD. No new inputs or outputs to a process should be introduced at a lower level that were not identified at a higher level. However, within a process, new data flows and data stores may be identified.
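
That balancing rule can be checked as a set comparison between a parent process and its exploded child diagram: the child may add internal flows and stores, but its external inputs and outputs must match the parent's. The flow names below are hypothetical.

# Flows shown on the parent process at the higher level.
parent_inputs = {"order details", "credit status"}
parent_outputs = {"invoice", "rejection notice"}

# Flows crossing the boundary of the exploded child diagram. New flows and
# data stores that stay inside the child are allowed and are not listed here.
child_inputs = {"order details", "credit status"}
child_outputs = {"invoice", "rejection notice", "audit log"}  # "audit log" is new

def unbalanced(parent, child):
    """Flows that appear at the lower level but were never identified above."""
    return child - parent

print(unbalanced(parent_inputs, child_inputs))    # set()
print(unbalanced(parent_outputs, child_outputs))  # {'audit log'} -> inconsistency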

• Follow Meaningful Levelling Conventions


Levelling refers to the handling of local files (those that are used within a process). The details that pertain only to a single process on a particular level should be held within the process. Data stores and data flows that are relevant only to the inside of a process are concealed until that process is exploded into greater detail.

• Add Control on Lower-Level Diagrams Only


The logical data flow diagrams developed to this point do not include control information. No mention has been made of how to handle errors or exceptions. Although this information is necessary in the final analysis, it should not be a concern while identifying the overall data flow. The secondary diagrams (below second or third level) show error and exception handling in the process being exploded.

Some physical control information is unnecessary in logical DFDs. Copy numbers or annotations for documents (e.g. Copy 1, Copy 2, Shipping copy or Accounting copy), procedural orders (e.g. Find the record; Review the record; Annotate the record), or day-of-the-week triggers (e.g. Do on Monday; Do the last day of the month) do not belong with the logical and data aspects of requirements determination. The important elements for understanding a process during logical data flow analysis are not document copy numbers but descriptions of the data needed to perform the process.


Evaluating Data Flow Diagrams for Correctness

A quality DFD treats correctness as its primary characteristic. A correct DFD indicates that the analyst understands the data flows in the client's business processes. Shared understanding between the clients and the developer is the key factor in producing a good system, and developing a good system starts with its models, particularly diagrams such as the DFD.

It is essential to evaluate all DFDs carefully to determine if they are correct. Errors, omissions and inconsistencies can occur for several reasons, including mistakes in drawing the diagrams. But the presence of what appears to be an error may in fact point out a deficiency in the system or a situation in which users are not aware of how certain processes operate.

These questions are useful in evaluating data flow diagrams; a sketch after the list shows how a few of them can be checked automatically:

• Are there any unnamed components in the data flow diagram (data flows, processes, stores, inputs or outputs)?

• Are there any data stores that receive input but are never referenced?

• Are there any processes that do not receive input?

• Are there any processes that do not produce output?

• Are there any processes that serve multiple purposes? If so, simplify by exploding them into multiple processes that can be studied more easily.

• Are there data stores that are never referenced?

• Is the inflow of data adequate to perform the process?

• Is there excessive storage of data in a data store (more than the necessary details)?

• Is the inflow of data into a process too much for the output that is produced?

• Are aliases introduced in the system description?

• Is each process independent of other processes and dependent only on the data it receives as input?
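
Several of these questions become mechanical once the diagram is written down as data. The sketch below uses an invented encoding (sets of process and data store names plus (name, source, target) flow tuples) and flags unnamed flows, processes with no input or no output, and data stores that are never referenced.

processes = {"1 Receive Order", "2 Check Credit", "3 Fill Order"}
data_stores = {"D1 Orders", "D2 Customers", "D3 Old price list"}

# Flows as (name, source, target); an empty name means the flow is unnamed.
flows = [
    ("order details", "Customer", "1 Receive Order"),
    ("order details", "1 Receive Order", "D1 Orders"),
    ("credit status", "D2 Customers", "2 Check Credit"),
    ("", "2 Check Credit", "3 Fill Order"),   # unnamed flow
]

unnamed = [f for f in flows if not f[0]]
no_input = [p for p in processes if not any(t == p for _, _, t in flows)]
no_output = [p for p in processes if not any(s == p for _, s, _ in flows)]
unreferenced = [d for d in data_stores
                if not any(d in (s, t) for _, s, t in flows)]

print("Unnamed flows:", unnamed)                  # the flow out of 2 Check Credit
print("Processes with no input:", no_input)       # []
print("Processes with no output:", no_output)     # ['3 Fill Order']
print("Unreferenced data stores:", unreferenced)  # ['D3 Old price list']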


References:
http://en.wikipedia.org/wiki/Data_flow_diagram
http://hubpages.com/hub/What-is-a-data-flow-diagram
http://74.125.155.132/search?q=cache:U6PxQ9Uy2iQJ:repository.binus.ac.id/content/D0602/D060265475.doc+evaluating+data+flow+diagrams&cd=6&hl=en&ct=clnk&gl=pha
