With reference to assignments 8 and 9, what characteristics does an analyst (you) examine when evaluating DFD quality? (1500 words)
The Data Flow Diagram is the most commonly used process model. A Data Flow Diagram (DFD) is a graphical system model that shows all of the main requirements for an information system in one diagram: inputs and outputs, processes, and data storage. With a DFD, everyone working on a development project can see all aspects of the system working together at once. A DFD is also easy to read because it is a graphical model and because there are only five symbols to learn. End users, management and information systems workers can typically read and interpret a DFD with minimal training.
In evaluating DFD quality, a high-quality set of DFDs is readable, is internally consistent and accurately represents system requirements. Accuracy of representation is determined primarily by consulting users and other knowledgeable stakeholders. A project team can ensure readability and internal consistency by applying a few simple rules to DFD construction. Analysts can apply these rules while developing the DFDs or during a separate quality check after preparing DFD drafts.
An analyst must know how to minimize complexity. People have a limited ability to manipulate complex information. If too much information is presented at once, people experience a phenomenon called information overload. When information overload occurs, a person has difficulty understanding the information. The key to avoiding information overload is to divide information into small, relatively independent subsets. Each subset should contain a comprehensible amount of information that people can examine and understand in isolation. A layered set of DFDs is an example of dividing a large body of information into small, independent subsets. Each subset can be examined in isolation. The reader can find additional detail about a specific process by moving down to the next level, or can see how a DFD relates to other DFDs by examining the next-higher level.
An analyst can avoid information overload within any single DFD by following two simple rules of DFD construction: the rule of 7 ± 2 and interface minimization. The rule of 7 ± 2, also known as Miller's number, derives from psychology research showing that the number of information 'chunks' a person can remember and manipulate at one time varies between five and nine. A larger number of chunks causes information overload. Information chunks can be many things, including names, words in a list, digits or components of a picture. Applications of the rule of 7 ± 2 to DFDs include the following: a single DFD should have no more than 7 ± 2 processes, and no more than 7 ± 2 data flows should enter or leave a process, data store or data element on a single DFD. These rules are general guidelines, not unbreakable laws; DFDs that violate them may still be readable, but violations should be treated as a warning of potential problems.
Minimization of interfaces is directly related to the rule of 7 ± 2. An interface is a connection to some other part of a problem or description. As with information chunks, the number of connections that a person can remember and manipulate is limited, so the number of connections should be kept to a minimum. Processes on a DFD represent chunks of business or processing logic. They are related to other processes, entities and data stores by data flows. A single process with a large number of interfaces may be too complex to understand. This complexity may show up directly on a process decomposition as a violation of the rule of 7 ± 2. An analyst can usually correct the problem by dividing the process into two or more subprocesses, each of which should have fewer interfaces.
Pairs or groups of processes with a large number of data flows between them are another violation of the interface minimization rule. Such a condition usually indicates a poor partitioning of processing tasks among the processes. The way to fix the problem is to reallocate the processing tasks so that fewer interfaces are required. The best division of work among processes is the simplest, and the simplest division is the one that requires the fewest interfaces among processes.
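These two rules lend themselves to a mechanical check. The following is a minimal sketch in Python, assuming the DFD has been captured as a mapping from each process name to the data flows that enter or leave it; the representation and all names are illustrative, not a standard format:

MILLERS_UPPER_BOUND = 9  # 7 + 2, the top of Miller's range

def check_complexity(dfd):
    """Warn about diagrams or processes that exceed Miller's number.

    dfd maps each process name to the list of data flows entering
    or leaving that process.
    """
    warnings = []
    if len(dfd) > MILLERS_UPPER_BOUND:
        warnings.append(f"Diagram has {len(dfd)} processes; move detail down a level.")
    for process, flows in dfd.items():
        if len(flows) > MILLERS_UPPER_BOUND:
            warnings.append(f"Process '{process}' has {len(flows)} interfaces; "
                            "consider dividing it into subprocesses.")
    return warnings

# A process with ten interfaces triggers a warning.
dfd = {
    "Fulfil order": ["order", "stock level", "invoice", "pick list",
                     "backorder", "credit status", "shipping label",
                     "confirmation", "return authorisation", "refund"],
    "Maintain catalogue": ["new item", "price change"],
}
for warning in check_complexity(dfd):
    print(warning)

Because the rules are guidelines rather than unbreakable laws, the check reports warnings instead of rejecting the diagram outright.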
An analyst can detect errors and omissions in a set of DFDs by looking for specific types of inconsistency. Three common and easily identifiable consistency errors are as follows: differences in data flow content between a process and its process decomposition, data outflows without corresponding data inflows, and data inflows without corresponding outflows. A process decomposition shows the internal details of a higher-level process in a more detailed form. In most cases, the data content of flows to and from a process at one DFD level should be equivalent to the content of data flows to and from all processes in its decomposition. This equivalency is called balancing, and the higher-level DFD and the process decomposition DFD are said to be in balance. Data flow names can vary among levels for a number of reasons, including decomposition of one combined data flow into several smaller flows. Thus, the analyst must be careful to look at the components of data flows, not just data flow names. For this reason, detailed analysis of balancing should not be undertaken until data flows have been fully defined. Unbalanced DFDs may be acceptable when the imbalance is due to data flows that were ignored at the higher levels. For example, diagram 0 for a large system usually ignores details of error handling, such as when an item is ordered but is later determined to be out of stock and discontinued by its manufacturer.
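Once the data flows have been fully defined, the balancing comparison itself can be sketched in code. The fragment below is a minimal illustration, assuming each flow name maps to its component data elements; all of the names are invented for the example:

def balanced(parent_flows, child_flows, definitions):
    """True when the same data elements cross the parent process
    boundary and the boundary of its decomposition."""
    def elements(flows):
        return {element for flow in flows for element in definitions[flow]}
    return elements(parent_flows) == elements(child_flows)

definitions = {
    "new order": ["customer-name", "item-number", "quantity"],
    "order header": ["customer-name"],
    "order line": ["item-number", "quantity"],
}

# The parent level shows one combined flow; the decomposition splits
# it into two smaller flows. The levels balance because the element
# content is equivalent, even though no flow name matches.
print(balanced(["new order"], ["order header", "order line"], definitions))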
Another type of DFD inconsistency can occur between the data inflows and outflows of a single process or data store. By definition, a process transforms data inflows into data outflows. Analysts can sometimes spot black holes and miracles simply by examining the DFD; in other cases, close examination of the data dictionary or process descriptions is required.
In a logical DFD, data should not be needlessly passed into a process. The following consistency rules can be derived from these facts: all data that flows into a process must flow out of the process or be used to generate data that flows out of the process, and all data that flows out of a process must have flowed into the process or have been generated from data that flowed into it.
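Here is a minimal sketch of these two rules, assuming the analyst records each process's input elements, its output elements, and which outputs are derived from which inputs; the element names are illustrative:

def check_conservation(inputs, outputs, derivations):
    """derivations maps each computed output element to the set of
    input elements it is generated from."""
    problems = []
    used = set()
    for element in outputs:
        if element in inputs:
            used.add(element)
        elif element in derivations:
            used.update(derivations[element])
        else:
            problems.append(f"Output '{element}' neither flowed in nor was derived.")
    for element in inputs:
        if element not in outputs and element not in used:
            problems.append(f"Input '{element}' is passed in needlessly.")
    return problems

print(check_conservation(
    inputs={"item-number", "quantity", "unit-price", "customer-name"},
    outputs={"item-number", "order-total"},
    derivations={"order-total": {"quantity", "unit-price"}},
))
# Flags only 'customer-name', which the process never uses.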
A DFD is drawn with a small set of symbols. A data store symbol, for example, carries a data store number and the name of the data store; its function is to designate the storage of data on the diagram. The symbols are:
• The square is an external agent (a person or organization outside the boundary of the system that provides data inputs or accepts data outputs)
• The rectangle with rounded corners is a process (named “Look up item available” and can be referred to by its number, 1)
• A process defines rules (algorithms or procedures) for transforming inputs into outputs
• The lines with arrows are data flows (representing movement of data). The example shows two data flows between Customer and process 1: a process input named “Item inquiry” and a process output named “Item availability details”
• The flat three-sided rectangle is a data store (a file or part of a database that stores information about a data entity)
A data flow is a collection of data elements. A data flow definition is a textual description of the data flow's content and internal structure. It lists all of the elements; for example, a “New Order” data flow consists of Customer-Name, Customer-Address, Credit-Card-Information, Item-Number and Quantity. The elements often coincide with the attributes of data entities included in the ERD, plus computed values. Algebraic notation is an alternative to the list; it describes the data elements on the data flow plus the data structure, for example: New Order = Customer-Name + Customer-Address + Credit-Card-Information + Item-Number + Quantity.
A data type description can be a string, integer, floating point number or Boolean. Sometimes a very specific written description is needed, e.g. for special codes (code A means ship immediately, code B means hold for one day, and code C means hold shipment pending confirmation). The definition may also record the length of an element (usually for strings) and maximum and minimum values (for numeric values). The data dictionary is the repository for definitions of data flows, data stores and data elements.
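As an illustration, a dictionary entry for a single data element might be recorded as in the sketch below; this uses a Python dataclass, and the field names are assumptions for the example rather than a standard dictionary layout:

from dataclasses import dataclass
from typing import Optional

@dataclass
class DataElement:
    name: str
    data_type: str                   # string, integer, floating point, Boolean
    length: Optional[int] = None     # usually recorded for strings
    minimum: Optional[float] = None  # for numeric values
    maximum: Optional[float] = None
    codes: Optional[dict] = None     # special coded values, if any

shipping_code = DataElement(
    name="Shipping-Code", data_type="string", length=1,
    codes={"A": "ship immediately",
           "B": "hold for one day",
           "C": "hold shipment pending confirmation"})
quantity = DataElement(name="Quantity", data_type="integer",
                       minimum=1, maximum=999)
print(shipping_code)
print(quantity)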
A data store on the DFD represents a data entity on the ERD, so no separate definition is needed, just a note referring to the ERD for details. If a data store is not linked to an ERD, a definition is provided as a collection of elements (as is done for data flows).
Guidelines/Gumption Traps:
(Places where DFDing can go astray)
1. System boundary establishment is an important judgment call. External entities aid in determining where the boundary is established. An interfacing system can be shown as an external entity. It may be necessary to dictate the input of the external entity to assure system control. For example, customers may be required to submit orders or refund requests containing specific information, which may require that the system aid in completion of a form. Use of output such as reports by management may require some agreement on the actions to be performed, which may mean the entity becomes part of the system, not external to it. When in doubt, include the external entity's activities as processes within the system and then evaluate the result with those concerned.
2. Label your processes carefully and vividly. A process that is labeled "Produce Report" and has the output "Report" tells a reviewer very little. If you have trouble labeling anything on the diagram, it is often because you do not have adequate understanding. Choose names carefully.
3. Think logical, not physical. Ignore media, color, font, layout, packaging, time, sequencing, etc. Think "what", not "how". Something logical can be implemented physically in more than one way. Including "when", "where" and "how" means you are getting physical. (See the sketch after this list.)
4. Think data, not control, flow. Data flows are pathways for data. Think about what data is needed to perform a process or update a data store. A data flow diagram is not a flowchart and should not have loops or transfer of control. Think about the data flows, data processes, and data storage that are needed to move a data structure through a system.
5. Concentrate first on what happens to a "good" transaction. Systems people have a tendency to lose sight of the forest because they are so busy concentrating on the branches of the trees.
6. Reviewers will not be convinced by confusion. A quality data flow diagram will be so simple and straightforward that people will wonder what took you so long.
7. Data store to data store, external entity to external entity, and external entity to data store connections usually do not make sense. Data flows with an arrowhead on each end cause confusion in labeling. Do not use them.
8. Do not try to put everything you know on the data flow diagram. The diagram should serve as index and outline. The index/outline will be "fleshed out" in the data dictionary, data structure diagrams, and procedure specification techniques.
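To make guideline 3 concrete, here is a minimal sketch of stripping the physical "how" from process names to leave the logical "what"; all of the names are invented for illustration:

# Each physical description maps to the logical work it performs.
physical_to_logical = {
    "Clerk keys order into terminal": "Record order",
    "Nightly batch job posts orders": "Update order file",
    "Print three-part invoice on line printer": "Produce invoice",
}
for physical, logical in physical_to_logical.items():
    print(f"{physical} -> {logical}")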
What are some of the advantages of using DFD analysis? Here are several:
• Data flows and process consequences. Note how this representation of the data characteristics of banking operations enables us to start at any point in the operation (e.g., deposits, withdrawals, or bill payment), and follow the consequences of that activity through to the point where all appropriate account balances have been adjusted and reconciled. Wherever we start in the process, we can understand the processing steps that the bank would need to take to complete the relevant transaction(s) and to inform its constituents of the results.
• Data inputs and outputs. The DFD also makes it possible to understand what data are needed to provide appropriate inputs to any processing step. If, for example, we were to build an information system to support this individual's banking activities (in the days before Quicken and/or Microsoft Money), we would need to understand exactly what data items are represented by data flows such as "Monthly Statement", "Pay earned", "Withdraw or transfer", and other arrows shown in the diagram.
• Simplifying complexity by isolating process components. Note how the DFD would make it easier to capture the detail of such data flows. By isolating "Withdraw or Transfer" within the larger scheme of the banking process, the DFD makes it possible to consider the details of the data items included in this flow without reference to the flows affecting other processing steps. All of the flows affecting withdrawals (e.g., processing step 3.0, "Withdraw funds from account") are isolated as entering or leaving processing step 3.0. At the time that DFDs were developed, this shift towards modularizing data flows and processing elements represented a major step forward in enabling systems analysts to add useful structure to process representations rapidly and easily.
Disadvantages of data flow diagrams
• A DFD is likely to take many alterations before agreement is reached with the user
• Physical considerations are usually left out
• A DFD can be difficult to understand because it is ambiguous to users who have little or no systems knowledge
Steps in drawing DFDs
1. Make a list of all business activities and use it to determine the various external entities, data flows, processes and data stores
2. Create a context diagram that shows the external entities and the data flows to and from the system (a sketch of recording these levels appears after this list)
3. Do not show any detailed processes or data stores at this level
4. Draw diagram zero, the next level, to show processes, but keep them general; show data stores at this level
5. Create a child diagram for each of the processes in diagram zero
6. Check for errors and make sure the labels you assign to each process and data flow are meaningful
7. Develop a physical DFD from the logical DFD; distinguish between manual and automated processes, describe actual files and reports by name, and add controls to indicate when processes are complete or when errors occur
8. Partition the physical DFD by separating or grouping parts of the diagram in order to facilitate programming and implementation
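A minimal sketch of how the levels produced by steps 2 through 5 might be recorded, using plain Python dictionaries; the numbering convention (process 0 for the whole system, 1, 2, ... on diagram zero, 1.1, 1.2, ... on child diagrams) follows common DFD practice, while the order-system names are illustrative:

# Step 2: the context diagram shows only external entities and the
# flows crossing the system boundary.
context_diagram = {
    "process": "0 Order system",
    "external_entities": ["Customer", "Warehouse"],
    "flows": [("Customer", "0", "new order"),
              ("0", "Customer", "confirmation"),
              ("0", "Warehouse", "pick list")],
}

# Step 4: diagram zero decomposes process 0 and introduces data stores.
diagram_0 = {
    "processes": ["1 Record order", "2 Check stock", "3 Ship order"],
    "data_stores": ["D1 Orders", "D2 Inventory"],
}

# Step 5: each child diagram reuses its parent's number as a prefix,
# which keeps the leveling traceable.
diagram_1 = {"processes": ["1.1 Validate customer", "1.2 Price order"]}

print(context_diagram["process"], "->", diagram_0["processes"])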
How to develop a logical data flow diagram
Below are guidelines for developing data flow diagrams:
1. Develop a physical DFD
2. Explore the processes for more detail
3. Maintain consistency between processes
4. Follow meaningful leveling conventions
5. Ensure that the DFDs clarify what is happening in the system
6. Remember the DFD audience
7. Add controls on the lower-level DFDs only
8. Assign meaningful labels
9. Evaluate the DFDs for correctness
Dos and don'ts of external entities
• External entities never communicate directly with each other; if they did, no process (and therefore no system) would be needed between them
• External entities should not communicate directly with data stores; data in files and databases must always be moved in or out by a process
Evaluating Data Flow Diagrams for Correctness
It is essential to evaluate all DFDs carefully to determine if they are correct. Errors, omissions and inconsistencies can occur for several reasons, including mistakes in drawing the diagrams. But the presence of what appears to be an error may in fact point out a deficiency in the system or a situation in which users are not aware of how certain processes operate.
These questions are useful in evaluating data flow diagrams (two of them are automated in the sketch after the list):
•Are there any unnamed components in the data flow diagram (data flows, processes, stores, inputs or outputs)?
•Are there any data stores that are input but never referenced?
•Are there any processes that do not receive input?
•Are there any processes that do not produce output?
•Are there any processes that serve multiple purposes? (If so, simplify by exploding them into multiple processes that can be better studied.)
•Are there data stores that are never referenced?
•Is the inflow of data adequate to perform the process?
•Is there excessive storage of data in a data store (more than the necessary details)?
•Is the inflow of data into a process too much for the output that is produced?
•Are aliases introduced in the system description?
•Is each process independent of other processes and dependent only on the data it receives as input?
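Two of these questions, unnamed components and unreferenced data stores, are easy to automate. The following is a minimal sketch, assuming flows are held as (source, destination, label) tuples with illustrative names:

def checklist(flows, data_stores):
    findings = []
    referenced = set()
    for source, destination, label in flows:
        if not label:
            findings.append(f"Unnamed flow between '{source}' and '{destination}'.")
        referenced.update((source, destination))
    for store in data_stores:
        if store not in referenced:
            findings.append(f"Data store '{store}' is never referenced.")
    return findings

flows = [("Customer", "1 Record order", "new order"),
         ("1 Record order", "D1 Orders", "")]   # missing label
print(checklist(flows, data_stores=["D1 Orders", "D2 Old invoices"]))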
Diagramming mistakes: Black holes, grey holes, and miracles
A second class of DFD mistakes arises when the outputs from one processing step do not match its inputs. It is not hard to list situations in which this might occur (a detection sketch follows the list):
• A processing step may have input flows but no output flows. This situation is sometimes called a black hole [3].
• A processing step may have output flows but no input flows. This situation is sometimes called a miracle.
• A processing step may have outputs that are greater than the sum of its inputs, i.e., its inputs could not produce the output shown. This situation is sometimes referred to as a grey hole.
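Black holes and miracles can be spotted from structure alone; grey holes require comparing flow content, as in the conservation check earlier. A minimal sketch with illustrative names:

def scan(processes, flows):
    """flows are (source, destination) pairs."""
    findings = []
    for process in processes:
        has_input = any(destination == process for _, destination in flows)
        has_output = any(source == process for source, _ in flows)
        if has_input and not has_output:
            findings.append(f"'{process}' has inputs but no outputs: black hole.")
        if has_output and not has_input:
            findings.append(f"'{process}' has outputs but no inputs: miracle.")
    return findings

flows = [("Customer", "1 Record order"),
         ("2 Print report", "Manager")]
print(scan(["1 Record order", "2 Print report"], flows))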
When one is trying to understand a process during the course of an interview (and consequently drafting DFDs at high speed), it is not hard to develop diagrams with each of the above characteristics. Indeed, scanning DFDs for these mistakes can raise questions for use in further process analyses (e.g., "Where do you get the data that allows you to do such-and-such...").
DFDs are not flow charts
A last class of DFD mistakes is somewhat more difficult to identify. Many of us have had prior experience developing flow charts. Flow chart diagrams can be useful for describing programming logic or understanding a single sequence of process activities. It is important to recognize, however, that DFDs are not flow charts. Flow charts often show both processing steps and data "transfer" steps (e.g., steps that do not "process" data); DFDs only show "essential" processing steps. Flow charts might (indeed, often do) include arrows without labels; DFDs never show an unnamed data flow. Flow charts show conditional logic; DFDs don't (conditional decisions appear at lower levels, always within processing steps). Flow charts show different steps for handling each item of data; DFDs might include several data items on a single flow arrow.
Data flow diagrams can assist in
• Isolating the component parts of a business process, reducing the analytical complexity involved in determining the specifications that process support software would have to meet.
• Shifting the focus of a process description to the data flows and processing steps that the process represents.
• Identifying data-related process characteristics that could be candidates for process design improvements.
• Identifying data stores that isolate entities that could be further developed using entity-relationship analysis.
General Data Flow Rules
1. Entities are either sources of or sinks for data inputs and outputs, i.e. they are the originators or terminators of data flows.
2. Data flows from Entities must flow into Processes
3. Data flows to Entities must come from Processes
4. Processes and Data Stores must have both inputs and outputs (What goes in must come out!)
5. Inputs to Data Stores only come from Processes.
6. Outputs from Data Stores only go to Processes. (A sketch applying these rules follows.)
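A minimal sketch applying rules 2 through 6, assuming each node has been classified as an entity, process, or store; the classification scheme and all names are assumptions made for the example:

def check_flow_rules(kinds, flows):
    """kinds maps node names to 'entity', 'process' or 'store';
    flows are (source, destination) pairs."""
    findings = []
    # Rules 2, 3, 5 and 6: at least one end of every flow is a process.
    for source, destination in flows:
        if kinds[source] != "process" and kinds[destination] != "process":
            findings.append(f"Illegal flow: {source} ({kinds[source]}) -> "
                            f"{destination} ({kinds[destination]}).")
    # Rule 4: processes and data stores need both inputs and outputs.
    for node, kind in kinds.items():
        if kind in ("process", "store"):
            if not any(destination == node for _, destination in flows):
                findings.append(f"{kind.title()} '{node}' has no input.")
            if not any(source == node for source, _ in flows):
                findings.append(f"{kind.title()} '{node}' has no output.")
    return findings

kinds = {"Customer": "entity", "1 Record order": "process", "D1 Orders": "store"}
flows = [("Customer", "1 Record order"),
         ("1 Record order", "D1 Orders"),
         ("D1 Orders", "Customer")]    # store-to-entity violates rule 6
print(check_flow_rules(kinds, flows))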
Reference:
http://books.google.com.ph/books?id=-ot62DeCKO4C&pg=PA234&lpg=PA234&dq=characteristics+of+an+analyst+in+evaluating+dfd&source=bl&ots=V0yZMyRzSx&sig=1mBZj2FGtrLcwIjEzazBzwQNmDM&hl=tl&ei=8RURTIKEKca5rAffh4jaBA&sa=X&oi=book_result&ct=result&resnum=5&ved=0CCkQ6AEwBA#v=onepage&q=characteristics%20of%20an%20analyst%20in%20evaluating%20dfd&f=false