Manual and Automated Systems Examination



If the determination has been made to develop an automated system, the analyst must evaluate the various issues of automation. These include, among others: Should micro, mini, or mainframe systems be developed? Should they be on-line or batch? Should they be stand-alone or integrated? Should the firm build the application in-house, or should it attempt to buy a package commercially? Other issues include resident machine size, package evaluation, and selection.

This chapter discusses the various parameter decisions, tradeoffs, advantages, and disadvantages for the issues discussed above. In addition, where appropriate, there are lists of questions and issues which need to be addressed.

Top-Down versus Bottom-Up Analysis

Top-down analysis

Top-down analysis is a term used to describe analysis which starts with a high level overview of the firm and its functional areas. This overview should be a complete picture of the firm, but it should be general rather than very detailed. Once the overview has been developed, the analyst produces successively more detailed views of specific areas of interest. This process of developing more and more detailed views is called decomposition.

Top-down analysis is the method used by most major commercial methodologies and is considered to be the most thorough form of analysis. The order of the life cycle phases discussed earlier is based on a top-down analysis.

The difference between top-down and bottom-up analysis can be illustrated in the following manner (see Figure 18.1). Top-down analysis takes a finished product and attempts to find out how it works. The product is taken apart; the atomic parts used to create it are examined and documented, subassembly by subassembly. Bottom-up analysis starts with the gathering of all the atomic parts; the analyst then attempts to figure out what they will look like when they are all assembled and what the completed product can and should do.

The top-down method of analysis is usually accomplished in a phased manner. Many times the various phases are worked on by different teams, or by different people from the same teams. The work is assigned to correspond to the perceived skill levels required (Figure 18.2). The senior analysts work on the higher levels, while the junior analysts typically work on the lower levels.

The advantages of top-down analysis are:

  1. The detailed analysis tends to be more complete and provides greater opportunity for identification of the interweaving of processing threads.
  2. Duplication of activity, overlapping function and processing, and inconsistency of activity are more readily apparent when looking at overviews than they are when starting with the detail level.
  3. Top-down tends to provide more perspective and to highlight problems of organization and overall work flow, as well as opportunities for work flow streamlining.
  4. Top-down tends to highlight overall data usage and data needs more easily than does the bottom-up approach.
  5. Once completed, the overview analysis can serve as the basis for many differing application development projects, and it usually requires little more than periodic updating.

The disadvantages of top-down are:

  1. It tends to be more difficult and time consuming, and thus more expensive. This is due to the additional levels of analysis and the additional work.
  2. It looks at areas outside those of the user-sponsor, which the user-sponsor may find difficult to deal with and to fund.
  3. The benefits to the user-sponsor tend to be less obvious and longer in coming than with bottom-up.
  4. Top-down tends to require more contact, support, and information from senior management, since the highest levels of analysis concentrate at their level. Senior managers sometimes express impatience with this kind of "fishing" activity, regarding it as something with little relevance which could be dealt with at lower levels.

Bottom-up analysis

Since many application projects are very specific in their focus and are operational in nature, the analysis for these projects may start with the clerical or operational activities which are their primary focus. Here the functions are well known, as are the problems and user requirements.

Bottom-up analysis and development is well suited to the operational environment and is the favored method for organizations in the first and second stages of data processing growth.

Bottom-up development has the following advantages.

  1. Since the work is localized and focused, it is much more limited in the early stages. Bottom-up projects are able to home in quickly on satisfying the user-sponsor's needs.
  2. Being more focused, bottom-up development tends to stay within the bounds of the user area, further limiting the amount of work necessary. This limitation on work makes bottom-up projects faster and less expensive.
  3. It is more closely suited to user-initiated user-specific application development projects.

The disadvantages of bottom-up projects are:

  1. Their limited scope tends to preclude activities which cross user process and functional boundaries. This also limits the analyst's ability to identify and correct processing redundancies and data usage anomalies.
  2. Because the focus of any given project is narrow, there tends to be significant rework in related projects as the incomplete pictures of previous analyses are updated and viewed from the differing perspectives of new user areas.
  3. Since bottom-up analysis focuses on the operational areas of the firm, the analyst's view is both narrow and vertical; it does not permit the analyst to view the impact of the particular operational area on other operational areas.

The advantages and disadvantages of top-down and bottom-up analysis are presented in Figure 18.3.

On-line versus Batch Systems

Early systems development, being limited by both technological availability and analytical experience, tended to duplicate the sequential batch processing which itself was a holdover from the early manufacturing experiences. Work flows were treated as step-by-step processing governed by strict rules of precedence.

The methods of data entry and automated input required strict controls to ensure that all inputs were received and entered properly. These controls were necessary because of the time delays between acquisition of the source data and their actual entry into the automated files.

Additionally, because of the number of processing steps, such as keypunching, sorting, and collating of the data, occurring prior to the actual machine processing, the possibility and probability of data being incorrect or of items being lost were rather high. To alleviate these problems, analysts developed additional manual and automated steps and inserted them into the processing streams: collecting input items into groups, called batches, and developing control totals on both the number of items and the quantity and dollar amounts of those items. As each batch was collected and verified, it was processed against the master files. This type of processing tended to become somewhat start-stop in nature. Batches tended to be processed together, which required further controls on the number of batches and on overall batch totals. See Figure 18.4 for a description of the sequence of batch processing.
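The batch-control technique described above can be sketched as follows. The field names are illustrative assumptions, and amounts are kept in integer cents to avoid floating-point rounding; the idea is simply that a batch is accepted only when the totals computed from the entered items match the operator-keyed control totals.

```python
# Sketch of batch balancing: control totals keyed from the source
# documents are compared against totals computed from the items
# actually entered. Field names are illustrative assumptions.

def batch_totals(items):
    """Compute control totals for a batch of transaction items."""
    return {
        "item_count": len(items),
        "amount_total": sum(item["amount"] for item in items),
    }

def batch_balances(items, expected_count, expected_amount):
    """A batch is accepted only when both control totals match."""
    totals = batch_totals(items)
    return (totals["item_count"] == expected_count
            and totals["amount_total"] == expected_amount)

batch = [{"amount": 12500}, {"amount": 4050}, {"amount": 995}]  # cents
assert batch_balances(batch, 3, 17545)       # in balance: accept
assert not batch_balances(batch, 3, 20000)   # out of balance: reject
```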

Since processing could not be complete until all batches were processed, any activities on the file as a whole, or on all transactions, tended to wait for the last batch. In some cases, the batch processing only verified the inputs, and transaction-to-file processing waited until all batch work was completed. This was necessary because the master files were usually maintained in an order different from that of the randomly collected transactions. These randomly accumulated transaction files themselves had to be sorted into the same order as the master files before processing.
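The sort-then-merge pattern this paragraph describes can be sketched as a single sequential pass over the two sorted streams; the record layout (a numeric key and a balance field) and the rule for applying a transaction are illustrative assumptions.

```python
# Sketch of the classic sequential master-file update: transactions
# are first sorted into master-file key order, then the two sorted
# streams are merged in one pass. The record fields are assumptions.

def sequential_update(master, transactions):
    """Apply sorted transactions to a master list sorted by key."""
    txns = sorted(transactions, key=lambda t: t["key"])
    updated, unmatched = [], []
    ti = 0
    for record in master:
        record = dict(record)  # build the new master generation
        # pass over transactions whose key precedes this master record
        while ti < len(txns) and txns[ti]["key"] < record["key"]:
            unmatched.append(txns[ti])
            ti += 1
        # apply every transaction matching this master key
        while ti < len(txns) and txns[ti]["key"] == record["key"]:
            record["balance"] += txns[ti]["amount"]
            ti += 1
        updated.append(record)
    unmatched.extend(txns[ti:])  # transactions with no master record
    return updated, unmatched

master = [{"key": 1, "balance": 100}, {"key": 2, "balance": 50}]
txns = [{"key": 2, "amount": -10}, {"key": 1, "amount": 25},
        {"key": 3, "amount": 5}]
new_master, rejects = sequential_update(master, txns)
```

Note that the old master list is left untouched; as in the classic tape-based cycle, the update produces a new generation of the file plus a reject stream of unmatched transactions.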

This mode of processing became so ingrained into the development mentality that initial on-line processing continued to mimic batch processing, in that groups of transactions were aggregated and entered on a screen by screen basis.

Batch transaction processing is suited to both sequential and direct processing; however, any environment where the files are maintained in a sequential medium mandates that all processing be in batch form. Where the master files are maintained in random-access-based files, true on-line processing can occur.

On-line processing is usually characterized by transaction-at-a-time designs, where the transaction data directly updates the master file in a random manner (Figure 18.5). In this mode, the user is presented with a screen which allows the entry of a single transaction of data. That data is verified independently and applied to the master file. On-line processing is random in nature and is based upon transaction arrival, while batch processing waits for a sufficient number of transactions to arrive to make up a batch.

Batch processing takes advantage of the fact that task setup usually takes as much time as, if not more than, the actual processing. Thus if multiple transactions can be processed with one setup, time will be saved. This is the assembly line theory. On the other hand, on-line is similar to the artisan method, where one person performs a complete sequence of tasks.

In many cases however, batch processing is required by business rules and policies. For instance:

  1. The business may dictate that certain orders have priority when going against inventory. Since orders are received on a random basis during the day, it is necessary to collect all orders and, just before going against inventory, sort them into priority order.
  2. In demand deposit accounting, the business rules usually state that all deposits are processed first, followed by any special instructions (i.e., stop orders), followed by certified checks, followed by normal checking activity. Again, since these transactions are received randomly during the day, they must be accumulated, sorted, and processed in the correct sequence at the end of the day.
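The demand-deposit rule above can be sketched as a priority sort over the day's accumulated transactions; the transaction-type names and their priority ranks are illustrative assumptions.

```python
# Sketch of end-of-day sequencing for demand deposit accounting:
# transactions accumulate in arrival order during the day, then are
# sorted into business-rule priority before processing. The type
# names and priority ranks are illustrative assumptions.

TYPE_PRIORITY = {
    "deposit": 0,          # all deposits first
    "stop_order": 1,       # then special instructions
    "certified_check": 2,  # then certified checks
    "check": 3,            # then normal checking activity
}

def end_of_day_sequence(transactions):
    # sorted() is stable, so transactions of the same type keep
    # their original arrival order within each priority class
    return sorted(transactions, key=lambda t: TYPE_PRIORITY[t["type"]])

day = [{"id": 1, "type": "check"}, {"id": 2, "type": "deposit"},
       {"id": 3, "type": "certified_check"}, {"id": 4, "type": "deposit"}]
run = end_of_day_sequence(day)  # deposits 2 and 4 first, check 1 last
```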

Issues Affecting the On-line versus Batch Decision

  1. Any business rules or conditions which necessitate either batch or on-line processing
  2. Any transaction type priorities which may affect the decision
  3. Any user needs for rapid access to data during the working day
  4. Any data movement problems
  5. User proximity to the processing center or to other users
  6. User computer sophistication or computer literacy
  7. Any "windowing" requirements or other timing requirements
  8. Availability and quality of communications facilities
  9. Volume of data to be processed
  10. Complexity of the data to be processed
  11. Cleanliness of the data to be processed
  12. Any resource constraints which preclude either on-line or batch processing

Mainframe, Mini, and Micro Systems

Prior to the late 1970s and early 1980s, the choice of automated system implementation environment was limited to centralized hardware, which bore the labels "mainframe" and "minicomputer." These labels usually referred to distinctions in both size and power. The "mini" label was usually applied to that hardware which was obtained for standalone or "turnkey" systems. A turnkey system was a standalone system which was acquired as a complete package of hardware and software.

As the performance of systems increased in terms of both throughput and capacity, the labels came to refer both to the size of the machines themselves and to the vendors who manufactured them. Generally, minicomputers were manufactured by the smaller hardware vendors. What these boxes had in common, however, was their need for special conditioned rooms and operations staffs. As the technology of the manufacturing process improved, even the size distinction blurred. "Mainframe" became the label for very large powerful boxes and "mini" became the label for all others. Figure 18.6 shows mainframe price, size, and performance curves.

With the advent of the micro- or desktop computers, new, previously infeasible applications and uses for computers became practical. "Micro" is the label applied to that machine known variously as the personal workstation, personal computer, or desktop. These machines, which originally were little more than glorified calculators or intelligent terminals compared to the mini and mainframe, and which were isolated from the mainstream data processing environment, were designed to run packaged products, such as word processing, database, and spreadsheet packages.

The versatility and relatively low cost of these small machines have made them ideal for user applications. While small in size, they have achieved extensive power and capacity. The evolution of the mainframe and minicomputers took about 25 to 30 years to reach their present state; the microcomputers, by contrast, have taken less than 10 years to reach a point where they rival their larger cousins in terms of speed, capacity, and availability of software. Figure 18.7 shows microcomputer price, size, and performance curves. In the next 10 years, one can expect that these microcomputers will exceed all but the largest and most powerful supercomputers in speed and capacity and will have data storage capability to rival many present-day machines. Figure 18.8 is a comparison of microcomputer versus mainframe price-performance curves.

The rapid development of these very small machines has opened up new areas of automation within companies and has placed many firms back into the first stages of a new data processing growth cycle. Additionally, applications which were previously only available on mainframes have been made available on the micros, leading to intense reautomation efforts in an effort to take advantage of these inexpensive, personal machines.

The development of packaged applications for record keeping and analysis and for routine and highly specialized business processing support has made these machines relatively common office tools. As their capacity and speed increase and as their cost decreases, more and more applications will be found for them. The analyst must seriously consider this new and wide range of machinery when looking to create a practical business solution for the client-user area.

Although microcomputers were originally designed as stand-alone, personal machines, the tediousness of manual data entry has caused both the business and personal user communities to demand and get the capability to move data to and from the mainframes on a direct, automated basis. Although there are currently format and speed restrictions, it is conceivable that in the very near future data will move freely and quickly between the two environments, opening up vast opportunities for cost-effective automation for the user areas. The dual mode capability, local and remote, plus the growing ability to network, that is, interconnect, these machines, many of them with common libraries and common data storage, will further open up these machines to application use. Figure 18.9 shows the multiple modes of microcomputer environments.

Micro, Mini, or Mainframe Issues

  1. Volume of data
  2. Size of ongoing files
  3. Type of processing
  4. Number of potential users
  5. User location
  6. Estimated length of processing cycle
  7. Existing mainframe capacity
  8. User sophistication and computer literacy
  9. Special software or hardware requirements
  10. Reporting volumes
  11. Type and location of existing hardware
  12. Internal expertise
  13. Any data sharing requirements
  14. Data entry volumes
  15. Any special communications requirements
  16. Processing complexity

Integrated Systems versus Stand-Alone

Integrated systems are those which attempt to look at the corporate environment from a top-down viewpoint or from a cross-functional and cross-business-unit perspective. To illustrate, an integrated system would be one which looks at human resources, rather than treating payroll and personnel as separate processes, or at general ledger rather than at balance sheet, accounts payable, and accounts receivable, etc. Integrated systems are modeled along functional, business, and strategic lines rather than along process and operational lines. Figure 18.10 compares stand-alone and integrated systems.

Integrated systems recognize the interdependency of user areas and try to address as many of these interrelated, interdependent areas as is feasible. Integrated systems are usually oriented along common functional and data requirement lines. Integrated systems require top-down analysis and development, both because it is easier to determine overall requirements that way and because integrated development demands an understanding of the interdependencies and interrelationships among the various applications which must be hooked together to achieve integration. The scope and requirements of integrated systems are difficult to analyze and generally require more time to develop.

Since integrated systems cross functional, and thus user, boundaries, many user areas must be involved in both the analysis and subsequent design and implementation phases. A multi-user environment is much more difficult to work with because even though the system is integrated, the users normally are not. Each user brings his or her own perspective to the environment, problems, and requirements, and these differing perspectives may often conflict with each other. The analyst must resolve these conflicts during the analysis process, or during the later review and approval cycles. In addition to the conflicting perspectives, there are normally conflicting system goals, and, more important, conflicting time frames as well.

Stand-alone systems, by contrast, are usually those which are self-contained and are designed to accomplish a specific process or support a specific function. Stand-alone systems are usually characterized by a single homogeneous user community, limited system goals, and a single time frame.

There are no guidelines which distinguish stand-alone systems from integrated ones. In fact, stand-alone systems may also be integrated in nature. There are also no size or complexity distinguishing characteristics.

The decision as to whether to attempt to develop an integrated system or to develop a stand-alone system is dependent upon the following issues.

  1. The location of the firm on the growth cycle. Firms in the early stages of the cycle should not attempt integrated systems, whereas firms in the late stages should always strive for them.
  2. The ability to assemble all interested and relevant users, and the ability to get them to agree to participate, to compromise where necessary, and to jointly fund the project.
  3. The commitment from user management to devote the time and resources to a complete top-down analysis.
  4. The ability to get all users to share the stored data and the tasks of data acquisition and maintenance.
  5. User understanding of the processes and functions which the system would ultimately be designed to service.

Make versus Buy Decisions

Once the list of change requirements and specifications is completed and the list of proposed system change solutions (not the design itself) has been identified and prioritized, one last set of tasks faces the analysis team: to determine whether it is feasible to obtain from a commercial vendor a prepackaged application system which will accomplish a large enough number of changes, in a viable, acceptable, and cost-effective manner, as opposed to attempting the design and development of a system using internal resources.

This analysis is called the "make versus buy" decision. In many cases, obtaining a pre-built package proves extremely cost-effective.

The make or buy decision may also extend to retaining an outside firm to custom-build a package to the user's specifications.

Using the analysis documentation of the current system, which is the base upon which to apply the changes, and the change requirements and specifications developed in the Examination and Study phase, the analysis and design team (including the user representatives) must examine, and arrive at an answer to, each of the following make-versus-buy issues.

Make-versus-Buy Issues

  1. Costs associated with developing the system in-house
  2. Costs associated with acquiring a system externally
  3. Any exceptional or specialized requirements which are unique to the firm
  4. Number of externally available packages
  5. Time frame in which the user needs the system
  6. Development time estimates versus time to evaluate, select, install, and modify an external package
  7. Availability of internal personnel to develop the needed system in-house
  8. The degree of comfort the analyst and user have in the firmness of the specifications for the system
  9. The expected life of the proposed system
  10. The presence of any proprietary information about the firm's operations or the user's system, which the firm may be unable or unwilling to release to outside firms or non-employees
  11. Expected volatility of proposed system
  12. Absence of specialized development or functional expertise within the firm which might be needed to develop the proposed system

Package Evaluation and Selection

If the decision has been reached to buy, the analyst must recognize that any system that is acquired, rather than custom-built, will not fully meet the firm's needs. Pre-built packages are by their very nature generalized to suit the largest number of potential customers. This implies that while most packages will address the basic functional requirements, some percentage of the needed user functionality will be missing. Each package will have its own configuration of supported functions and these may not be the same from package to package. Those functions which are supported, basic or otherwise, can also be implemented in sometimes radically different ways, and the depth and comprehensiveness of the functional support can also differ radically.

Additionally there may be some specialized company functional needs which may not be addressed by any vendor package. The analyst can also expect that each package will support not only different functional requirements but may also be designed to operate in markedly different types of businesses. For instance a financial package designed for manufacturing organizations will have different design characteristics from one designed for a financial or service organization. This difference in functionality may make comparative product evaluation difficult if not impossible.

Most packages were originally designed for a specific company in a specific industry and were then generalized for commercial sale. These packages may have been designed by the specific firm itself or designed for the firm by an outside service consultant. Depending upon its origins, the implementation may vary from very good to very poor, and the documentation can be expected to vary a great deal. In any case, because of their origins as custom systems, one can expect the implementation to bear a strong imprint of the original users.

Industry surveys indicate that a functional and procedural "fit" of 30 to 40 percent is considered average. This means that only 30 to 40 percent of the package's capability will exactly match the company's requirements. The analyst must assess the closeness of the fit between the functionality the firm needs and the functionality the package delivers. The analyst must also assess the closeness of the procedural implementation to the way the firm currently does business and assess the impact of either modifying the firm to conform to the package requirements or modifying the package to conform to the firm's requirements.
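One simple way to quantify the functional fit discussed above is the fraction of required functions the package supports exactly. The requirement and package function lists below are illustrative assumptions, chosen so the result lands in the "average" range the surveys describe.

```python
# Sketch of a functional-fit calculation: the percentage of the
# firm's required functions that a candidate package matches.
# The function names in both lists are illustrative assumptions.

def functional_fit(required, supported):
    """Percentage of required functions the package supports."""
    required, supported = set(required), set(supported)
    if not required:
        return 0.0
    return 100.0 * len(required & supported) / len(required)

required = {"gl_posting", "ar_aging", "ap_matching", "multi_currency",
            "budgeting", "consolidation", "audit_trail", "tax_reporting",
            "cost_centers", "recurring_entries"}
packaged = {"gl_posting", "ar_aging", "ap_matching", "audit_trail"}

fit = functional_fit(required, packaged)  # 4 of 10 matched -> 40.0
```

A procedural fit could be scored the same way, with each "function" replaced by a step in the firm's current work flow.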

In addition, the analyst must "look beneath the covers" at how the system works, not only what it does. Many packages come with their own forms, coding structures, processing algorithms, and built-in standards and policies. Many of these package internals are not changeable, and the analyst must determine the degree to which the user is willing to accept them (Figure 18.11). The analyst must also examine in detail the vendor of the product, the vendor's service and reliability, and any vendor restrictions on the company's use of the product. The lack of available literature and of available detailed evaluation information may make this evaluation process very time-consuming.

Examining the Buy Option

Some issues to consider when examining the buy option are

  1. The system's maintainability
  2. The availability of training
  3. The level and quality of the documentation
  4. Viability of the vendor
  5. The experiences of other customers
  6. Frequency of vendor maintenance and modification
  7. Vendor's support capability, including problem resolution, "hotline" service, etc.
  8. The ease with which the system can be enhanced by internal personnel
  9. The impact of any company modification on any system warranties, guarantees, or service contracts
  10. The relative currency of the software and hardware base of the package
  11. Any related offerings by the same vendor
  12. Any items or functions promised for future delivery
  13. Testing and benchmark periods
  14. Acquisition options, i.e., lease, franchise, license, purchase
  15. The impact of any proprietary products or processing on the company
  16. Any restrictions on disclosure, resale, etc.
  17. Site licensing versus single Central Processing Unit (CPU) licensing versus companywide licensing
  18. Duplication restrictions on software, documentation, or manuals
  19. The willingness of the vendor to "customize" the package to the firm's specifications
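A common way to compare candidate packages against issues such as those above is a weighted scoring matrix. The criteria chosen, their weights, and the 1-to-5 scores below are all illustrative assumptions; in practice the analysis team and the users would agree on these before scoring any vendor.

```python
# Sketch of a weighted scoring matrix for package comparison:
# each criterion gets a weight, each package gets a score per
# criterion, and the weighted sums are compared. All weights and
# scores here are illustrative assumptions.

def weighted_score(weights, scores):
    """Sum of weight * score over all criteria; higher is better."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"maintainability": 5, "documentation": 3,
           "vendor_viability": 4, "training": 2}

package_a = {"maintainability": 4, "documentation": 3,
             "vendor_viability": 5, "training": 2}
package_b = {"maintainability": 3, "documentation": 5,
             "vendor_viability": 3, "training": 4}

a = weighted_score(weights, package_a)  # 5*4 + 3*3 + 4*5 + 2*2 = 53
b = weighted_score(weights, package_b)  # 5*3 + 3*5 + 4*3 + 2*4 = 50
best = "A" if a > b else "B"
```

The value of the matrix is less the final number than the discipline it imposes: every package is scored against the same criteria, and disagreements surface as disputes over specific weights rather than over vague overall impressions.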

Client/Server-Specific Issues

  1. Availability of the product for current operating environments
  2. Availability of the product for network installation
  3. Availability of vendor maintenance or third-party maintenance
  4. Vendor development assumptions
  5. Vendor product upgrade history
  6. Compatibility of the proposed product with other client/server tools and applications
  7. Vendor use of standard versus nonstandard development tools
  8. Product "look and feel"
  9. Product use of Graphical User Interface (GUI) facilities
  10. Maximum user restrictions, if any
  11. Data sharing restrictions, if any
  12. User concurrent access or use restrictions, if any
  13. Stability of the vendor and the vendor's workforce
  14. Number of copies of product documentation supplied
  15. Quality and depth of on-line help facilities for the product
  16. Quality and depth of context-dependent help facilities, if any
  17. Availability of on-line tutorial facilities for the product
  18. Number and type of user-modifiable product options
  19. Product use of network facilities such as printers, file servers, e-mail, and schedulers

A Professional's Guide to Systems Analysis, Second Edition
Written by Martin E. Modell
Copyright © 2007 Martin E. Modell
All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the author.