A SYSTEMS APPROACH TO FOLLOW-UP

Synopsis

The purpose of this chapter is to explain how follow-up fits into the "big picture." To do so, the co-authors differentiate the collection and analysis of outcomes data from several other activities and services. To understand where follow-up fits, it also is important to understand what should not be expected of the lead agency. This kind of differentiation is necessary when initially implementing follow-up and defining the central entity's boundaries. By taking a systems approach to follow-up, stakeholders will be alert not only to the potential range of the central entity's functions but also to its limitations.

Brief Introduction to Systems Theory1

In Chapter I we suggest that virtually all efforts to deliver services can be evaluated most effectively on the basis of relevant outcomes achieved by former program participants or customers. In this chapter and all subsequent ones, the co-authors focus more narrowly on follow-up for education, training, workforce development and welfare-to-work programs. Thus, the balance of this Guide will concentrate on follow-up as an integral part of an overall employment and training system. But first, we provide a brief introduction to Systems Theory as an analytic framework that should help explain how follow-up activities fit into the "big picture" of employment and training.

Systems Theory is a way of organizing information to help explain and predict organizational behavior. Systems Theory uses basic constructs of structure and function to illuminate relationships and processes in the social sciences. This approach breaks down complex human systems into smaller elements that are easier to observe, then explains them in terms drawn from the more mathematically precise and mechanistic physical sciences.

Basic terminology of Systems Theory

All systems have boundaries. Everything outside a system's boundaries is called its environment. Many aspects of a system's external environment also may have order and purpose; they, too, may be conceptualized as separate systems. In defining a particular system, it is necessary to show how its boundaries separate it from the environment and differentiate it from other, closely related systems.

Several units inside a system may have distinct identities of their own. They may have well-ordered relationships, clearly delineated lines of authority, well-established decision rules and routine operating procedures. They may be highly visible to people in the system's external environment. Just as an understanding of a system's boundaries is important, it also is critical to identify subsystems, their roles within the larger system and how they differ from other subsystems.

Boundaries are not impermeable. Every system -- every subsystem -- is susceptible to influences from its external environment. Stress and disturbances enter from the environment and must be processed.2 Four analytic constructs are used to explain how complex and purposeful interactions take place among actors in a system or between those actors and people in the external environment. The first analytic construct of Systems Theory is inputs. Inputs consist of demands and supports (or expectations and resources respectively).3 The second concept is conversion. This construct deals with the way actors inside the system receive inputs, interpret them, interact with each other to evaluate inputs and weigh their alternatives. Outputs are the resultant decisions, behaviors or services (i.e., the outwardly observable actions of a system). The fourth construct is feedback which deals with the reactions of people in the external environment to consequences and impacts wrought by a system's outputs. Feedback is especially crucial in Systems Theory because -- along with new disturbances in the environment -- external reactions to a system's outputs alter the relative balance of demands and supports. Feedback, in turn, becomes another kind of input that requires a system to respond. The way it reacts to stress, disturbances and feedback shapes a system's nature and the probability that it will achieve its purpose.
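
For readers who find it helpful to see the four constructs in concrete form, the sketch below models one input-conversion-output-feedback cycle. It is offered only for illustration; the class names and the toy "demands" and "supports" are ours, not terms drawn from Systems Theory or from any partner agency.

```python
# Illustrative sketch only: a toy model of one Systems Theory cycle.
# All class and function names are hypothetical, not taken from this Guide.
from dataclasses import dataclass, field

@dataclass
class Inputs:
    demands: list[str]      # expectations placed on the system
    supports: list[str]     # resources supplied to the system

@dataclass
class System:
    name: str
    outputs: list[str] = field(default_factory=list)

    def convert(self, inputs: Inputs) -> list[str]:
        """Conversion: actors weigh demands against available supports and decide what to do."""
        decisions = [f"respond to '{d}'" for d in inputs.demands[: len(inputs.supports)]]
        self.outputs.extend(decisions)
        return decisions

def feedback(outputs: list[str]) -> Inputs:
    """Feedback: the environment reacts to outputs, producing a new mix of inputs."""
    return Inputs(demands=[f"adjust {o}" for o in outputs], supports=["renewed funding"])

# One cycle: inputs -> conversion -> outputs -> feedback -> new inputs
cycle_inputs = Inputs(demands=["train dislocated workers", "report outcomes"],
                      supports=["block grant"])
system = System("employment and training system")
outs = system.convert(cycle_inputs)
next_inputs = feedback(outs)
print(outs, next_inputs.demands)
```

The point of the sketch is simply that feedback re-enters the system as a new set of inputs, so the cycle never truly ends.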

The Employment and Training System4

The first order of business in centralizing follow-up is to decide who's in and who's out. Answers to several filtering questions will help each state set criteria for deciding which specific programs should be served by its central follow-up entity. Which programs can be served together logically and which cannot? Which programs share a common mission? Which promise or are intended to produce similar outcomes? Which are expected to respond to the same kinds of stress and disturbances emanating from the external environment? Which coordinate their responses to mutually perceived external demands? Which programs currently exchange participant information or demonstrate effective collaboration in other ways? Which programs are required by law or regulation to show evidence of articulation? Which share resources or are tied together in a common funding stream? Which are linked closely together -- even informally -- in the public's mind?

Logical considerations

Answers will vary from one state to another. However, Florida and Texas conduct follow-up on virtually the same set of programs. That is because efforts are underway in both Florida and Texas as well as in other states to create a seamless employment and training system. All states are being encouraged to do so through federal initiatives. Recent amendments to federal legislation, for example, now bring workforce development programs funded under the Job Training Partnership Act (JTPA) and programs funded under the Carl D. Perkins Vocational and Applied Technology Education Act much closer together. Federal and state legislative initiatives to create consolidated Human Resource Investment Councils and to integrate service delivery at the local level through One-Stop centers bring additional programs into the logical mix: Employment Services, Job Corps, Food Stamp Employment and Training (FSE&T), Adult Education, Literacy and the Job Opportunities and Basic Skills Training (JOBS) program. Both Texas and Florida also include inmate training in their centralized follow-up efforts.

Programs that any state chooses to serve through centralized and integrated follow-up should share a common mission -- though they may differ according to specific goals, objectives, strategies, funding streams and target populations. The mix of state and federal programs listed above seems to meet this minimal criterion for inclusion. Taken together, they constitute an employment and training system. Insofar as they share a common mission and deliver logically related services, the programs listed above can be distinguished readily from other clusters of programs such as health care delivery, transportation, public safety and national security. The boundaries of this collection of programs are defined clearly by statutes and regulations. There is a large degree of consistency from one state to the next in the connectivity among the programs listed above. Their interrelationships are set forth in conforming amendments and cross-references, legislatively or administratively mandated collaboration and articulation, and common funding streams.5 In some states, turfism may cause a few stakeholders to quibble about what should be covered by centralized follow-up, but -- by and large -- the programs listed above are perceived by taxpayers, elected public officials and prospective customers as logically related.

Practical considerations

We have presented this brief overview because the boundary or "domain" issue must be settled in order to reduce a follow-up entity's duties and tasks to manageable proportions.6 While it might be desirable in theory to establish core performance measures and a common data collection methodology across all publicly-funded programs, there are practical limits on how far the principle of integration can be stretched. We strongly suggest starting small then cautiously and deliberately expanding a central entity's customer base and breadth of services. Grand schemes for all-inclusive follow-up should be tabled -- at least during the first few years of operation -- to avoid unnecessary confusion. Integrating and automating follow-up for a handful of employment and training programs will be a huge undertaking without trying to make the central entity "all things to all people." Focus first on a logically related set of programs and serve them well.

The long-term effectiveness of a central follow-up entity depends in large part on establishing a solid reputation for customer service. An entity that is limited to doing follow-up for closely related programs is more likely to have the sharpness of focus required to build that reputation. A follow-up entity that fails to establish a solid reputation from the outset will find itself concerned constantly with its own survival. If that occurs, questions of expansion become moot. On the other hand, if it does its job well from the outset, a central follow-up entity inevitably will find other partner agencies knocking on its door. Once a solid reputation has been established for serving the major employment and training programs, the management team can revisit the boundary and domain issues and consider expanding the lead agency's services and activities.

By taking a patient and cautious approach, the central follow-up entity will be in a better position to control the terms and conditions of its own growth. In this Guide, the co-authors call this slow and deliberate approach "system building."

Subsystems within the larger employment and training system

The most conspicuous feature of the employment and training system is the effort to impart knowledge, skills and abilities required in the workplace (i.e., the education and training delivery subsystem). This particular service delivery subsystem can be divided further according to various concepts. For example, distinctions often are made:

between public sector service providers and private for-profit service providers;

between first chance and second chance programs;

by mode of delivery (e.g., classroom versus on-the-job training); and

by level of service (e.g., secondary versus postsecondary).

The second prominent element of the employment and training system is the labor exchange subsystem. This includes job-development activities by partner entities to get employers to post job openings with the state employment security agency (SESA) or through publicly supported "real-time" forums such as America's Job Bank, and efforts to encourage job-seekers to use a SESA's services, such as individual job-matching and referral, as well as publicly supported electronic resume repositories such as the Talent Bank. Labor exchange activities also may include job-search training for groups or on an individual basis.

Third is the transition support subsystem which includes efforts to help students and program participants make a successful transition into the world of work or from one job to another. Transition support includes temporary income maintenance through Unemployment Insurance (UI) and ancillary services such as life-skills training (e.g., training to prepare and stick to a household budget), substance abuse counseling, uniform and/or tools-of-the-trade purchases, child care assistance and transportation vouchers or tokens. Most of these transition support services can be provided to dislocated workers, displaced homemakers, former welfare recipients, ex-offenders and first-time labor force entrants to help them as they look for work. With the exception of UI benefits, these services also might be provided to customers for a specified period after they find work. Such supportive services often can help those who are employed keep their jobs long enough to achieve financial independence under a "work first" model while they are weaned gradually from welfare dependency.

Education and training delivery, labor exchange and transition support are only the tip of the iceberg -- activities at the core of the employment and training system that are highly visible to people in the external environment. This system also includes other activities that may be less obvious to external observers. On the front end, policy-makers engage in conversion as they translate demands and supports into action plans and resource allocation decisions (herein, the planning subsystem). Planning activities may be nearly invisible to casual observers outside the employment and training system. Another subsystem consists of arrangements for holding service providers accountable for their performance. This accountability subsystem also may be relatively invisible to casual observers. Lastly, there are mechanisms for communicating information about the employment and training system to people in the external environment. These mechanisms are called the feedback loop or information delivery subsystem. A citizen will pay more or less attention to this information as his/her need for employment and training services waxes and wanes under a life-long learning model and as his/her economic security fluctuates in an increasingly volatile labor market.

A state's central follow-up entity probably will have no direct involvement in the three aspects of the employment and training system that are most conspicuous to external observers. Follow-up staff do not actually deliver education and training services nor do they provide transition support for participants moving into the world of work. Seldom will they meet customers of the SESA's labor exchange services face-to-face. The central entity will work "at arm's length" from actual customers as it gathers data about each of these functions in order to explain results achieved by participants who flow through the system. On the other hand, the lead agency's follow-up efforts will be integral to any state's accountability subsystem and -- to a lesser extent -- to the information delivery and planning subsystems.

The Integral Role of a Central Follow-Up Entity in the Accountability Subsystem7

"Program accountability" is deeply rooted in and analogous to general accounting. In fact, the oldest and most rudimentary form of follow-up is the standard audit. In the private sector, an auditor's first task is to determine if a firm's financial transactions were legal and recorded properly. To remain viable, however, any business must do more than meet its legal requirements and keep accurate books. Therefore, audits in the private sector also are done to assess a company's overall productivity, its management's effectiveness, how much profit was made and what the shareholders are owed in return for their investments.

There are definite parallels between follow-up for publicly-funded programs and conventional audit practices in the private sector; there also are notable differences. Administrators of publicly-funded programs are required to keep financial records and participant files. Both sets of documents are subject to audit. Just as in the private sector, publicly-funded programs are audited to detect fiscal improprieties and monitored for technical compliance with applicable laws and regulations.8 However, if public review is limited to issues of fiscal management and technical compliance, an employment and training program could fail to meet its objectives yet survive an audit so long as proper procedures were followed.9 An old adage in the social sciences says, "What you measure is what you get." If all you measure is accuracy and technical compliance, you may get tidy bookkeeping; you don't necessarily get results. The problem is that publicly-funded programs are not designed to produce profits -- the yardstick by which efficiency and effectiveness are measured in the private sector. In the absence of profits, former participants' outcomes may be construed as the "currency" of publicly-funded programs. Outcomes rather than profitability are the subject of performance-based audits -- the yardstick of program accountability in the employment and training system.

Déjà vu

In a sense, program administrators and follow-up staff are retracing ground that already has been covered. During the 1960s and 1970s, the Johnson Administration gave federal grants-in-aid to state and local governments to conduct a "War on Poverty" and President Nixon promoted revenue sharing through block grants as part of the "New Federalism." Déjà vu! Block grant proposals were introduced in 1995-1996 by Representative William Goodling and Senator Nancy Kassebaum for funding workforce development and related educational programs. Although the Goodling/Kassebaum proposal did not pass, various educational and workforce development reauthorization proposals pending before the 105th Congress again envision program consolidation and funding through block grants. (See, for example, H.R. 1835, the Employment, Training and Literacy Enhancement Act of 1997.) These proposed changes in intergovernmental fiscal arrangements and resurrection of the block grant concept have caused public servants and citizens once again to rethink the way federally-funded/state-administered programs are reviewed.

A look backward at the seminal developments in program accountability at the federal level in the 1970s can shed light on where follow-up at the state level might be headed. In particular, it is useful to examine changes in the role the General Accounting Office (GAO) assumed regarding program accountability in the early 1970s. By reviewing the GAO's evolution, one can get a feel for the connection between closely related elements of accountability. On that basis, a state can develop criteria for drawing the boundaries of a central follow-up entity that can respond effectively to external demands while earning the support of citizens and service providers alike.

The GAO was created by the Budget and Accounting Act of 1921. Its primary function is to "authenticate" financial claims and demands made on or by the federal government as a necessary step in the "settling of accounts." That is, the GAO verifies accounts and authorizes disbursements from the United States Treasury. The GAO gradually assumed increased responsibility -- particularly with passage of the Legislative Reorganization Act of 1946 -- for examining government procurement practices. Such an expansion made sense because so much more was at stake. The federal government had become one of the private sector's biggest customers during World War II. As federal expenditures continued to grow in the post-war era, so did the opportunities and temptations for misappropriation. Therefore, the GAO performed spot audits of specific programs under suspicion at the request of Congress as a whole or at the request of Congressional subcommittees. These spot audits were seen as a means of detecting and preventing embezzlement and theft.

As grants-in-aid and revenue sharing came into vogue, Congress passed another Legislative Reorganization Act. Comptroller General Elmer Staats interpreted the 1970 Act as a call for the GAO to do "more than certify the completeness and adequacy of financial statements" submitted on federally-funded programs. Staats asserted that accountability requires a review and analysis of the results achieved by government-sponsored programs relative to their costs. During Staats's tenure as Comptroller General, the scope of GAO reviews included:

fiscal accountability which is concerned with fiscal integrity, disclosure and compliance with applicable laws and regulations;

managerial accountability which deals with the economical and efficient use of personnel and resources; and

program accountability which is concerned with the results or benefits being achieved and whether programs are meeting their intended objectives with due regard to costs.10

Moving from top to bottom in that list, each aspect of accountability in the public sector represents an incremental expansion of the audit function -- albeit each is analogous to some aspect of the auditor's role in the private sector. Each is a logical extension of the one preceding it. There is only a fine line between performing an audit to detect fraud (per fiscal accountability) and an effort to detect waste and inefficiency (per managerial accountability), and only another fine line from there to a cost-benefit analysis of the results achieved (per program accountability).

Expansion of the GAO's role in the 1970s was not without controversy. Each step carried the GAO's activities further from a narrow definition of the audit function and beyond the typical auditor's technical expertise in accountancy. Each step expanded the criteria for determining what kinds of data, documents and observations are relevant. Ironically, the materials considered relevant became more ambiguous with each step. Fiscal audits, for example, typically deal with the simple arithmetic of ledgers, the fixed and widely accepted practices of accountancy and the very precise and detailed circulars that govern procurement and disbursements made by public entities. On the other end of the scale, program accountability requires a determination of "intent" where, in many cases, authorizing legislation does not define desired outcomes clearly and/or desired outcomes expressed in one piece of legislation are contradicted by those expressed in other acts and regulations.

To put the controversy over the Comptroller General's authority in perspective, Ira Sharkansky placed audit activities of the GAO during Staats's tenure along a one-dimensional array. (See Table I.) As Dr. Sharkansky noted, "The move from 'fiscal' to 'managerial' to 'program' accountability involves increasing emphasis on operations at the far end of the scale. This array also coincides with the range of antagonists aroused by each activity. The broader the function -- and the more it differs from the narrowest meaning of 'fiscal audit' -- the more actors feel that the auditor is overstepping his proper function."11

Table I

Ira Sharkansky's Analysis of Activities Undertaken by GAO Auditors in the 1970s:
a functional approach to drawing the boundaries of an accountability subsystem

Scope of the Auditor's Role: the activities below are arranged from the narrowest construction of that role (activity 1) to the broadest construction (activity 9).

Activity or Function:

1) verifying financial reports;

2) approving accounting procedures;

3) identifying expenditures that exceed or lie outside of authorizations;

4) preventing or recovering payments made for illegal purchases;

5) assessing the accomplishment of administrative goals;

6) operationally defining program goals for the purposes of systems analysis and assessment of accomplishments;

7) advising policy-makers about program accomplishments as part of their concern with reauthorization or reappropriations for existing programs;

8) advising policy-makers with respect to initial program planning;

9) taking the lead in advising decision-makers about the overall direction policy should take (i.e., becoming involved directly in defining desirable ends).

The boundary issue posed for the GAO by Sharkansky in the early 1970s was never resolved formally through clarifying legislation or case law. Rather, a combination of factors led the Comptroller General to focus more on fiscal and managerial audits than on program accountability.12 Successors to Comptroller General Staats continued to interpret the GAO's mission broadly to include program accountability but they engaged in performance audits far less frequently -- usually only at the specific request of Congress, a Congressional committee or an individual member of Congress.

Several trends and initiatives in the late 1980s and the 1990s have revived national interest in program accountability per se.13 Taxpayers increasingly are concerned about getting their money's worth from employment and training programs.14 Once again, the GAO is in the thick of things as it issues episodic but widely read reports on inefficient program management and the ineffectiveness of some programs in meeting their performance objectives.15 Nonetheless, the GAO is not in a position to satisfy redoubled public demands that programs be held accountable. It lacks the resources to investigate every federally-funded program in sufficient depth to do much beyond assuring their fiscal integrity. At the same time, the public has grown skeptical about the capacity and objectivity of service providers to police themselves -- despite rhetoric to the contrary advocating a return to local control and site-based decision-making.16

Given renewed public demand for program accountability, the central questions as we approach the 21st Century are: 1) Who should audit program performance? 2) How often should performance audits be done? and 3) How should performance audits be conducted? Questions 2 and 3 are relatively easy to answer and are addressed in other chapters of this Guide. It will suffice at this point to say that program performance should be monitored as often as resources permit but no more often than the longest data collection cycle among the participating agencies' management information systems. (We call this the principle of the "slowest common denominator."17) Secondly, the process should be valid, reliable and cost-effective.
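
A minimal sketch of the "slowest common denominator" principle follows. The data sources and cycle lengths shown are hypothetical examples, not actual agency reporting periods; the point is simply that the reporting cadence is governed by the slowest source.

```python
# Sketch of the "slowest common denominator" principle: performance is reported
# no more often than the longest data collection cycle among the participating
# agencies' management information systems. Cycle lengths below are invented.
mis_cycles_in_months = {
    "UI wage records": 3,            # quarterly
    "JTPA SPIR": 12,                 # annual
    "postsecondary enrollment": 6,   # twice a year
}

reporting_cycle = max(mis_cycles_in_months.values())
print(f"Follow-up reporting cycle: every {reporting_cycle} months "
      f"(limited by the slowest source)")
```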

The hardest of the three questions by far is, "Who should audit program performance?" There probably is no single correct answer. In a federal system predicated on the principle of checks and balances, it is inevitable that several entities will presume they have the authority to review program performance.18 While the GAO, the Office of Management and Budget (OMB) and the Inspectors General for various cabinet-level departments are likely to keep federal fingers in performance monitoring, state officials probably will have to play a larger role in program accountability.19 In recent years, states have been granted more latitude to experiment and innovate to determine how to serve a wide variety of employment and training program customers better. (As David Osborne noted, states have become "laboratories of democracy."20) With greater latitude for planning and innovation extended to the states comes greater responsibility for documenting what they accomplished in programs where federal dollars flowed through their hands.

Each state already has several entities with authority to do episodic performance audits parallel to those done by the GAO: a State Comptroller of Public Accounts; an audit division under the State Treasurer; a legislative budget board (LBB); audit and/or management information system divisions of the respective partner agencies engaged in employment and training, etc. Their overlapping jurisdictions and the episodic nature of their reviews may result in unnecessary duplication of effort while failing to ensure that no employment and training program falls through the gaps.21 Thus, unresolved boundary issues posed by Sharkansky for the GAO at the federal level in the 1970s must now be addressed by states interested in implementing integrated and centralized follow-up.

Criteria drawn from the GAO experience

The co-authors of this Guide share the opinion that a central follow-up entity should be created in each state. That entity should be responsible for gathering outcomes data for all employment and training programs that are funded to any extent with tax dollars (including non-public institutions and private service providers whose tuition charges and fees may be covered by federally-guaranteed student loans or state-issued vouchers). States are advised to review Sharkansky's analysis of the politics of accountability when deciding what kind of entity is best suited to monitor program performance and where that entity should be housed. Arguments for a separate, centralized follow-up entity based on efficiency, objectivity, detachment and cost-effectiveness are addressed elsewhere in this Guide. In this section, we focus on "relevance-of-expertise" as the determining factor in deciding where to house follow-up and what functions it should perform.

Avoid pure accounting functions

Of the nine GAO activities identified by Sharkansky (on page 35), the first four are clearly in the accountant's realm of expertise: verifying financial reports; approving accounting procedures; identifying expenditures that exceed or lie outside authorizations; and preventing or recovering payments made for illegal purchases.

At the state level, these functions are in the domain of a Comptroller, a Treasurer, the LBB and/or partner agencies' fiscal audit divisions. While follow-up may be housed within any of these entities alongside accountants and lawyers, the GAO experience indicates that research done under the rubric of program accountability requires different kinds of expertise: economists, statisticians, political scientists, computer programmers, systems analysts and substantive area specialists.22

The lead agency also may require a wholly different kind of ethos; that is, a follow-up entity needs to be perceived as a partner in continuous program improvement. That necessary posture might be undermined if the entity chosen to house follow-up is stereotyped with the "I gotcha" mentality commonly associated with those who do audits to detect fraud or to uncover illegal activities. Our analogy to the GAO experience is worth mining further for additional gems of insight into the spirit or outlook a state's central follow-up entity should adopt. According to a comprehensive review of the GAO, "The alleged emphasis on [identifying deficiencies] was for many years the underlying criticism directed at the GAO's reviews. . . it had been felt that the degree to which staff members were able to come up with agency shortcomings had a significant bearing upon their advancement in the organization. . . [No matter how] sincere GAO executives might have been in their intention to deemphasize the faultfinding approach, the stress on [uncovering deficiencies] was deeply inbred and resistant to significant change."23 States are advised to consider the reputation of existing entities engaged in related audit-like functions before housing follow-up activities with them. The follow-up entity needs to view partner agency or service provider inadequacies in perspective and avoid "deficiency findings" as its raison d'être. The central entity should -- in research terms -- entertain the null hypothesis that any given program is working as intended until overwhelming evidence is amassed to the contrary. Staff should not turn a blind eye to any faults they uncover through follow-up nor should they assume a posture of complacency. Rather, they should assume that even the best performing program can be improved continuously provided that management decisions are driven by valid and reliable outcomes data.

The presumption held by follow-up staff ought to be that local providers are driven by a bona fide desire to serve employment and training customers well. While substandard program performance might be uncovered through analysis of outcomes data, the follow-up staff's first instinctive reaction should be to search for contextual variables (such as labor market conditions in the service provider's locale and the mix of population served) or other extenuating circumstances that might explain why a program did not meet its performance standards. On the assumption that poorly performing programs can be improved and salvaged, follow-up staff should believe that the best ways they can help their partners are by: a) giving early warnings of potential problems; and b) sharing information about the "best practices" of programs that exceed standards.

Avoid overtly political activities

Activities at the other end of Sharkansky's scale (i.e., initial program planning and defining the overall direction policy should take) require a special expertise that is sensitive to the needs of citizens, public opinion and the interplay of partisan politics. Responsibility for functions on the political end of Sharkansky's scale should remain in the hands of elected officials who are answerable directly to the voters. Unless specifically asked for their opinions, follow-up staff should not encroach on the domain and prerogatives of legislators and elected officers or policy-advisory councils in the executive branch of state government.

Drawing lines between collecting data, analyzing data and redressing program deficiencies

The primary function of a central follow-up entity is to document the outcomes achieved by persons served through employment and training programs. Collection of outcomes data falls squarely within the fifth function of accountability on Sharkansky's scale (i.e., "assessing the accomplishment of administrative goals"). Outcomes are the name of the game; thus, follow-up is integral to efforts to hold service providers accountable for program performance.

This fifth function of accountability in Sharkansky's scale implies a broader range of activities than merely collecting outcomes data. Before service providers can be held accountable, they must be made aware of the outcomes their customers are expected to achieve. That means someone has to set and disseminate information about performance standards. Next, outcomes data must be compared to performance standards to determine if a program's former participants did achieve desired results. This is called "program evaluation." Last, but certainly not least, "accountability" implies that mechanisms are in place to redress every detected failure or substandard performance. For example, the fourth function of accountability identified by Sharkansky (i.e., preventing or recovering payments for disallowed costs) is the means of redressing illegal procurement found during a fiscal audit. Corrective action plans, sanctions (such as forced reorganization) and closure are some of the common means of redress in program accountability. They are the functional equivalents of restitution in fiscal accountability.
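
The sketch below illustrates the comparison step in its simplest form: measured outcomes are checked against applicable performance standards, and any redress is left to other entities. The measures, standards and figures are invented for illustration and do not reflect any state's actual standards.

```python
# Hedged sketch of the evaluation step described above: outcomes are compared
# with performance standards; corrective action plans, sanctions or closure are
# handled by other parties. All measures, standards and figures are invented.
standards = {"employment_rate": 0.70, "avg_quarterly_earnings": 3500.0}

def evaluate(program: str, outcomes: dict[str, float]) -> dict[str, bool]:
    """Return, for each measure, whether the program met the applicable standard."""
    return {measure: outcomes.get(measure, 0.0) >= floor
            for measure, floor in standards.items()}

findings = evaluate("Provider A welding program",
                    {"employment_rate": 0.64, "avg_quarterly_earnings": 3900.0})
print(findings)   # e.g. {'employment_rate': False, 'avg_quarterly_earnings': True}
```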

Each state must decide what services (if any) the central follow-up entity should provide beyond data collection. Should it merely collect data then pass them to other entities to use in their program evaluations? Each state also must decide what authority the follow-up entity should have for redressing program shortcomings and failures. That is, how deeply involved should the central follow-up entity be (if at all) in setting performance standards for the programs it studies? What role should the lead agency have (if any) in closing programs, imposing sanctions, devising corrective action plans and/or determining how to reward programs with bonus and incentive dollars? Lastly, follow-up will be caught up in an even larger question: some will ask whether it is even possible to structure a follow-up entity (or any entity at the state level) with the political will and/or political clout to confront well-entrenched local service providers and actually deactivate their programs or cut their funding if they perform poorly.

A state might limit its central follow-up entity's activities to data collection and release of non-evaluative/non-judgmental, descriptive reports. That is, the central follow-up entity's role in the accountability subsystem might culminate in generating descriptive statistics without rendering evaluative judgments about program outcomes. In Texas, for example, the follow-up entity gathers outcomes data then describes what happened to former students and program participants only in terms of post-exit employment rates, average earnings, transfer rates into postsecondary education and training, etc. -- without indicating whether or not any given program met applicable performance standards. The central follow-up entity in Texas refrains from classifying outcomes as "successful" or "unsuccessful" (e.g., former program participants are listed only as "located" or "not located" via automated record linkages).
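
The following sketch illustrates this descriptive, non-evaluative style of reporting. The records, field names and rates are invented; they are not drawn from Texas's actual files or report formats.

```python
# Illustrative sketch of descriptive, non-evaluative reporting: only rates and
# averages are computed, exiters are labeled "located" or "not located," and no
# program is classified as successful or unsuccessful. Records are invented.
exiters = [
    {"located": True,  "employed": True,  "quarterly_earnings": 4100.0, "enrolled": False},
    {"located": True,  "employed": False, "quarterly_earnings": 0.0,    "enrolled": True},
    {"located": False, "employed": False, "quarterly_earnings": 0.0,    "enrolled": False},
]

located = [e for e in exiters if e["located"]]
employed = [e for e in located if e["employed"]]

report = {
    "located_rate": len(located) / len(exiters),
    "post_exit_employment_rate": len(employed) / len(located) if located else None,
    "average_quarterly_earnings": (sum(e["quarterly_earnings"] for e in employed) / len(employed))
                                   if employed else None,
    "transfer_rate": sum(e["enrolled"] for e in located) / len(located) if located else None,
}
print(report)
```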

Evaluations, per se, in Texas are the responsibility of various policy-advisory councils, the Legislative Budget Board, partner state agencies and the State Comptroller. Those other entities have the authority in Texas -- both de jure and de facto -- as well as the political will and clout to redress problems detected through program follow-up. Those entities set and enforce standards, allocate resources, award bonus and incentive dollars, devise corrective action plans and sanctions, reorganize or deactivate poorly performing programs. The central follow-up entity in Texas lacks authority in all of these areas. It merely provides the data that drive policy without getting drawn into the politics of decision-making.24

Florida's central entity, on the other hand, may be thrust further into the arena where poor program performance is redressed. The Florida legislature recently passed an act that ties vocational education and training funds directly to service provider performance.25 Under such legislation, Florida's lead agency will be asked to engage in activities comparable to the seventh accountability function identified by Sharkansky (i.e., "advising policy-makers about program accomplishments as part of their concern with reauthorization or reappropriations for existing programs"). That makes follow-up in Florida more of a "high stakes" game than it currently is in Texas.

Anticipate unintended consequences

As performance-based funding is implemented in Florida and other states, some programs that prospered under enrollment-driven funding formulae may find their dollars reduced significantly.26 On the other hand, programs that perform well despite being underbudgeted may realize dramatic funding increases under a revised formula. While the intent of performance-based funding is to hold vocational education and training providers accountable for serving the career development needs of students and adult learners, service providers are apt to perceive themselves either as "winners" or "losers" under any new formula for allocating dollars. (To paraphrase an old axiom from the social sciences, "It's a matter of whose ox is being gored.") Some service providers will respond by discontinuing poorly performing programs voluntarily. Others may take corrective action on their own to improve program performance. However, every state that switches from enrollment-driven to performance-based funding should expect that service providers who perceive themselves on the losing end may search for a convenient scapegoat.

The question eventually boils down to this: "Who will take the heat?" Legislators commonly pack their bags and leave town after each session. Administrators and executive officers in charge of allocating vocational dollars can assert correctly that their hands are tied by an allocation formula set by the legislature. By default and because of its visibility, an entity that gathers outcomes data (namely the lead follow-up agency) may be the lightning rod for protests against changes wrought by the adoption of performance-based funding. The amount of "heat" a follow-up entity is likely to take depends on how its role in the accountability subsystem is defined. (See Table II.)

Table II
The Connection Between a Follow-Up Entity's Role in the Accountability Subsystem and the Likely Reactions of Service Providers

Alternative Scenarios and Likely Service Provider Reactions

Scenario #1: The follow-up entity is limited to publishing descriptive statistics without issuing evaluative findings or judgments, while partner agencies and other stakeholding groups actually evaluate programs, allocate resources and terminate or sanction poorly performing programs (acting wholly or in part on the basis of outcomes data provided by the follow-up entity).
Likely reaction: If a program's performance descriptions are disputed, the service provider is likely to engage in supplemental follow-up to fill gaps in the data collected by the central entity. The service provider would then introduce evidence from its own supplemental follow-up to entities which actually evaluate programs, allocate resources and close or sanction programs.

Scenario #2: The follow-up entity is responsible not only for describing program outcomes but also for evaluating programs and publishing findings regarding program performance relative to applicable standards.
Likely reaction: Service providers are apt to lobby the central follow-up entity to change its evaluations of poorly performing/substandard programs based on outcomes data gathered through their own supplemental follow-up.

Scenario #3: The follow-up entity participates directly with partner agencies in allocating funds on the basis of the outcomes data it collects and analyzes.
Likely reaction: Service providers whose poorly performing programs face budget cuts or termination may use administrative processes to dispute the central entity's data collection and analysis methods.

Scenario #4: The follow-up entity participates directly with partner agencies in closing or imposing sanctions against poorly performing or substandard programs.
Likely reaction: The central follow-up entity may have to justify its data collection procedures and analytic methods as well as its involvement in and authority for policy-making should a service provider elect to litigate over issues of program closure and/or sanctions.

In sum, other states may choose to give their central follow-up entities varying degrees of responsibility in the accountability subsystem for interpreting follow-up data and acting on their analysis to redress poorly performing programs. In part, this decision may depend on whether or not the lead follow-up agency or some other entity (if any) in the state has sufficient political will and clout to put real teeth into the principle of program accountability. In part, this decision may depend on the expertise of the individuals who initially staff the central follow-up entity. Follow-up staff undoubtedly will have the knowledge, skills and ability to go beyond descriptive statistical analysis. They may be fully qualified to render evaluative judgments if given standards by partner agencies to apply to the results they uncover. As policy-makers contemplate the reauthorization and reappropriation of existing programs, they may have sufficient regard for the expertise and reputation of follow-up staff to ask them to rank-order service providers or classify specific programs as meeting, exceeding or failing applicable performance standards.

We strongly recommend, however, that the follow-up entity should not be responsible for directly redressing problems through enforcing standards, allocating resources, devising corrective action plans or sanctioning poorly performing programs. The lead agency should not even be involved directly in awarding bonus and incentive dollars because to do so may cause hard feelings among those who were not rewarded. Such activities might put the follow-up entity in an adversarial role vis-à-vis the partner agencies and service providers on whom it depends for former participants' seed records. In short, a follow-up entity can be insulated from combative relationships with partner agencies and service providers if its role in the accountability subsystem is limited (as in either Scenario #1 or Scenario #2) to handing off outcomes information to other parties who, in turn, allocate resources or impose sanctions.

Additional responsibilities in the accountability subsystem that may be assigned to the central follow-up entity by default

On page 38 of this Guide, we advise follow-up staff against encroaching on the prerogatives of elected officials and policy-advisory councils. A central follow-up entity should not get involved in initial program planning, setting program performance standards and determining the overall direction policy should take. Nonetheless, a central follow-up entity may be drawn by default into grey areas between "searching for legislative intent" and policy-making. In particular, the lead agency may be called upon to "operationally define program goals for the purposes of systems analysis and assessment of accomplishments" (i.e., the sixth function of accountability identified by Sharkansky -- p. 35). While it is within the purview of elected officials and advisory councils to decide conceptually what should be measured, politicians and blue-ribbon panels are prone to describe desired outcomes in sweeping and catchy terms that "play well in Peoria." They may express their intentions broadly and in non-technical terms that the average voter can understand and appreciate. Meanwhile, they may leave it to others to "sweat the details" -- such as actually deciding how to measure performance.

We underlined the word "operationally" in the paragraph above to emphasize the narrow latitude that legitimately might be given to a central follow-up entity. This usage connotes that the lead agency will take its cues from high level policy-makers regarding what to measure. However, it assumes that authority inevitably will be delegated to follow-up staff expressly or implicitly -- by necessity or by default -- to determine precisely: what indicators are most valid and reliable; what units of analysis will balance usefulness and cost-effectiveness; what sources can be tapped efficiently for relevant data; and what timeframe will prove most practical. As the litany above indicates, operationalizing broad expectations is a technical function that follow-up staff may be better suited to do because of their training in empirical research methods and their intimate knowledge of the contents of partner agencies' management information systems.
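
To suggest how narrow that latitude is, the sketch below shows one hypothetical "operational definition" record capturing the indicator, unit of analysis, data source and timeframe discussed above. The field names and values are examples only, not definitions adopted by any state.

```python
# Hypothetical sketch of an "operational definition" record capturing the
# details follow-up staff must pin down for each broadly stated goal.
# All field values are invented examples.
from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    stated_goal: str          # the broad outcome expressed by policy-makers
    indicator: str            # the measurable proxy chosen by follow-up staff
    unit_of_analysis: str     # e.g., program exiter, cohort, service provider
    data_source: str          # the MIS or administrative file to be tapped
    timeframe: str            # the measurement window

placement = OperationalDefinition(
    stated_goal="increased proficiency in skills needed to enter the workforce",
    indicator="employed in any quarter within one year of exit",
    unit_of_analysis="program exiter",
    data_source="state UI wage records",
    timeframe="four quarters after exit",
)
print(placement.indicator)
```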

As a state sets up its central follow-up entity, it will see wide variance in the degree to which desired outcomes have been defined operationally for its employment and training programs. A state's JTPA program, for example, probably will have well-defined performance measures and a well-developed traditional survey method for collecting outcomes data. That is because the U.S. Secretary of Labor is required by statute and regulation to specify the minimum performance measures each state must address in its annual Standard Program Information Report (SPIR) as a condition of receiving funds under Titles IIA and III.27 Operational definitions of performance measures used by community and technical colleges also may be specified fully because public postsecondary institutions are required to report student outcomes under Perkins and Student Right-to-Know legislation as well as under guidelines issued by regional accreditation boards.

On the other hand, a central follow-up entity may need to take a proactive role in operationally defining the way desired outcomes will be measured for some of the programs it studies. Heretofore in Texas, for example, performance measures for Adult Education have been overly broad and ill-defined.28 Under a legislative mandate to conduct follow-up on behalf of Adult Education in Program Year 1995-1996, the CDR (formerly Texas SOICC) had to act on its own to interpret what outcomes adult learners are expected to achieve. The officially announced objectives of Adult Education in Texas offer little guidance. See, for example, the language below taken directly from the state's master plan for Adult Education:

Outcome: Adult learners [should be able to] demonstrate increased proficiency in the academic skills needed to enter the workforce and/or progress in the high performance workplace of the 21st Century.

Measure: Assessment demonstrates student progress toward collaboratively defined workforce proficiencies.

Outcome: Adult learners [should be able to] demonstrate improved capacity to participate responsibly and productively as lifelong learners.

Measure: Assessment demonstrates student progress toward collaboratively defined real world competencies.

Nowhere in any official publication could Texas's central follow-up entity find operational definitions for several key terms such as "high performance workplace" or "real world competencies." Nor has anyone at the state level decided precisely who should be involved in "collaboratively defining" these terms. Although research has been funded by the National Skills Standards Board (NSSB) and the Texas Skills Standards Board, these efforts have not yet resulted in a definitive list of "academic skills needed to enter the workforce." Follow-up staff could not find an authoritative list of "workforce proficiencies."

In the absence of clear operational definitions, Texas's lead agency gathered the same kind of outcomes data for Adult Education as it did for the other programs it studied. Although staff could not determine if former Adult Education students were employed in "high performance workplaces," standard record linkage techniques could be used to determine if they were employed and how much they earned per quarter. These data could be treated as crude indicators that adult learners had acquired suitable "workforce proficiencies" and necessary "academic skills to enter the workforce" without staff getting caught up in constructing and validating skill-assessment instruments. Staff also avoided the fruitless and thankless task of determining what might constitute "responsible and productive" participation in lifelong learning. Standard record linkages already at the central follow-up entity's disposal also could be used to determine which former Adult Education students passed the GED exam and subsequently sought additional education and training at public postsecondary institutions.
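
The following simplified sketch illustrates the kind of automated record linkage described above. The identifiers (shown as fake ID strings rather than Social Security numbers), files and figures are invented and do not reflect actual Texas data structures.

```python
# Simplified sketch of automated record linkage: exiter identifiers are matched
# against UI wage records, GED results and postsecondary enrollment files to
# derive crude outcome indicators. All identifiers and data are invented.
adult_ed_exiters = {"ID001", "ID002", "ID003"}

ui_wage_records = {"ID001": [3200.0, 3500.0], "ID003": [1800.0]}   # quarterly earnings
ged_passers = {"ID002", "ID003"}
postsecondary_enrollees = {"ID002"}

for exiter in sorted(adult_ed_exiters):
    wages = ui_wage_records.get(exiter)
    indicators = {
        "employed": wages is not None,
        "avg_quarterly_earnings": (sum(wages) / len(wages)) if wages else None,
        "passed_GED": exiter in ged_passers,
        "entered_postsecondary": exiter in postsecondary_enrollees,
    }
    print(exiter, indicators)
```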

The Texas experience is not unique. Although the National Literacy Act of 1991 (NLA) requires each state to submit a detailed plan -- including what it intends to use as "Indicators of Program Quality" for evaluating adult education programs -- this authorizing legislation does not provide federal officials with any precise criteria for determining the adequacy of a state's proposed "quality indicators." Thus, desired outcomes for Adult Education across the nation seldom are defined operationally. The ambiguous descriptions of desired outcomes and ill-defined measures for Adult Education in Texas were sufficient to survive scrutiny by the U.S. Department of Education.

Prominent groups of practitioners -- including the Institute for the Study of Adult Literacy at Penn State University and the National Institute for Literacy (NIFL) -- admit that there are problems with the operational definitions of desired Adult Education outcomes. In the words of Andy Hartman, Executive Director of the NIFL, "A system that cannot assess its own progress cannot improve itself and we cannot measure progress without first agreeing on what we are trying to achieve."29 But at this point, stakeholders in the Adult Education community are still studying the problem. A National Adult Literacy Survey (the NALS) and several parallel state surveys have been conducted to establish benchmarks and to stimulate "coherent discussion of appropriate assessment strategies." As of this writing, however, Adult Education stakeholders have not achieved consensus. At best, definitive steps are promised in the near future. For example, the NIFL's timetable indicates that in the summer of 1997 it will begin "develop[ing] performance indicators that translate . . . into a clear set of results for adult literacy."

Legislators appear to be impatient with ongoing studies and pleas that they not "micro-manage" Adult Education.30 Congress is unlikely to wait patiently for the Adult Education community to complete those studies. On behalf of the voters and taxpayers, legislators demand immediate accountability and seem inclined to base judgments on the best available evidence. If the bill proposed by Representatives McKeon and Kildee to reauthorize Adult Education is passed by the 105th Congress, language therein will endorse a pragmatic approach much like that taken by Texas's central follow-up entity. House Resolution 1385 would require Adult Education programs to document "placement or retention in or completion of postsecondary education, training or employment" as core performance indicators. Thus, while practitioners continue to study the situation, construct and validate tools to assess relevant learning gains, and strive for collegial consensus, state follow-up entities across the nation may be given the green light by Congress to look at labor market outcomes and pursuit of postsecondary education and training as crude indicators of "preparedness" achieved by former Adult Education students.

While it would be preferable for partner agencies to hand over clear definitions of desired outcomes and well-developed performance measures to guide a central follow-up entity's collection of outcomes data, it appears that staff may have to supply operational definitions in the interim for some programs they are required to follow. While we have used examples from Adult Education to illustrate our point, be advised that other programs participating in centralized follow-up also may lack adequate operational definitions of their desired outcomes.

This is especially likely to be the case for any program in the first few years after the implementation of systemic reform such as welfare-to-work, school-to-work, etc.31 No matter how well-meaning its authors, nearly every piece of reform-minded legislation:

a) documents poor results achieved by old ways of delivering a service; then
b) "rallies the troops" to innovate and experiment;
c) demands swift results -- all without fully specifying desired outcomes; and
d) leaves it to entities "downstream" to figure out how to achieve results.

Most reforms must go through several iterations before arguments can be settled regarding what reasonably can be expected and how those expectations can be quantified.32 The more sweeping the reform, the longer it takes for the dust to settle.

It is inevitable that each state's lead agency will be handed the task of conducting follow-up on well-meaning programs before they are fully fleshed out. Follow-up staff, therefore, must walk a fine line between "discovering legislative intent" and making policy in a vacuum. Hopefully, decisive action by Congress, state legislatures, partner agencies and service providers will relieve follow-up entities of the need to operationally define key terms before they can get down to business. But until others take decisive action, follow-up entities will probably continue to perform this essential function by default.

Subsystem Interactions and Constraints

Subsystems within the employment and training system are interdependent. Decisions made in one realm narrow the parameters for decision-making and action in every connected subsystem. A state will design its central follow-up entity first and foremost as an instrument of program accountability, but the boundaries and limits it draws around the lead agency's primary role will constrain its options in other domains. For example, a central follow-up entity's ability to contribute to the strategic planning process and to deliver information to people in the external environment through the feedback loop will be affected by the role it is assigned in the accountability subsystem. Keep the idea of interactions and constraints in mind as we describe the additional functions a lead agency might fulfill in the subsystems listed below. Be advised that some options for participating in strategic planning and information delivery may be eliminated by virtue of more fundamental decisions made regarding a central entity's role in program accountability.

Follow-Up as Part of the Information Delivery Subsystem or Feedback Loop33

People in the employment and training system's external environment (taxpayers, potential customers, prospective students, economic development specialists, etc.) need to be informed. Before they can react rationally to the employment and training system, they must know what happened as a result of decisions that were made and the services that were delivered to prior cohorts.

To meet the information needs of its citizens and policy-makers, each state must decide for itself how much responsibility to give the central entity for disseminating outcomes data:

-- under what authority
-- how authoritatively
-- to whom
-- in what format(s) and
-- through what channel(s).

In defining the follow-up entity's role in information delivery, each state must take into consideration the needs of its target audience(s) as well as the nature of its competition.

Implications of "official" recognition (or lack of official recognition)

In Florida, the central entity publishes reports and makes them public. In fact, Florida law stipulates that while education and training providers have the option of publishing results from their own supplemental follow-up, they must reference the FETPIP's reports as official and authoritative alongside their own "unofficial" studies. In short, the legislature intentionally thrust Florida's follow-up entity into a prominent role in the information delivery subsystem by giving it a "leg up" on the competition.

In Texas, follow-up reports issued by the central entity have not yet been declared "official" by the state legislature. Its reports enjoy no "leg up" over those of other parties who release information that competes for public attention. For now, each entity that provides employment and training services in Texas is allowed to generate its own performance reports based on outcomes data gathered by the central follow-up entity in combination (at the service provider's discretion) with any auditable and verifiable data gathered via its own supplemental follow-up. Service providers also are allowed to make claims about their program performance in separate marketing materials without reference to any data whatsoever from Texas's central follow-up entity. In fact, where neither the results of automated follow-up nor their own supplemental follow-up indicate a high probability of successful outcomes, some service providers in Texas still base their recruitment literature on anecdotal information about the stellar (but non-representative) achievements of individual former participants.

Other ways of ensuring that outcomes data are used to facilitate informed choice in policy-making and individual decision-making

Because it has no legislatively-created advantage in competing for public attention, Texas's central follow-up entity takes a different approach from that of Florida's lead agency. Given that it is older and that its role in information delivery is defined by explicit legislative mandate, Florida's central follow-up entity produces highly standardized reports. Texas, on the other hand, continues to experiment with a variety of report formats in an effort to determine which will be most effective in getting stakeholders to use outcomes data voluntarily to drive policy-making and personal choices. As the Texas follow-up entity matures and as employment and training professionals come to appreciate the value of valid and reliable outcomes data, the weight and importance accorded its reports grow by custom and usage. Nonetheless, service providers in Texas, for now, are not required to "go public" with the information gathered on their behalf and delivered to them by the state's central follow-up entity.

This does not mean that reports by Texas's central follow-up entity are ignored in official circles. Without the express backing of its legislature, the central follow-up entity in Texas must rely on other ways to ensure that valuable follow-up information gets into the hands of policy-makers and individuals making personal choices. Taking a direct route, the lead agency publishes and distributes a limited number of copies of its annual reports that describe program outcomes in very broad terms. Distribution of these final reports is limited largely to service providers, legislators, policy-advisory councils and partner agencies at the state and substate level. Printed copies of the central follow-up entity's annual reports are made available to the public through the Texas State Library System and the national archives maintained by the Ohio State University's Education Department (called ERIC).

In addition to direct deliveries to policy-makers and service providers and general public release of its reports, Texas's central follow-up entity is developing an automated Consumer Report System (CRS). The CRS will deliver service providers' performance history data directly to prospective customers in user-friendly formats to facilitate fair and meaningful comparisons and to promote informed choice. The first version of the CRS was developed for stand-alone computers and local area networks and is being installed in One-Stop centers across the state. A second version will be developed for the Internet to make outcomes-based performance data more universally available.

Thus, service providers in Texas are not totally at liberty to publish whatever they please about their own outcomes-based performance. Knowing that the lead agency disseminates outcomes data and service provider performance information widely -- especially to prospective students and adult learners -- service providers understand that their own reports would be challenged publicly if the information they release differs significantly from the results of automated follow-up. Knowing that informed consumers may "vote with their feet," service providers try diligently to correct problems uncovered via follow-up even though findings reported by Texas's central entity compete with the service providers' own reports for public attention without the advantage of being labeled "official."

Implications of targeting different audiences through the information delivery subsystem

The primary audience for the Texas follow-up entity's information delivery efforts differs from the audience primarily targeted by Florida's lead agency. Policy-makers constitute the primary target audience for reports generated by Florida's lead agency. The Texas follow-up entity's primary target audience consists of prospective students and adult learners, dislocated workers and welfare-to-work program participants. While Florida's policy-makers are presumed capable of interpreting information in statistical and tabular formats, the primary target audiences for follow-up information in Texas are less likely to be statistically literate or to have a sophisticated understanding of the broader issues that can be addressed with follow-up data. Moreover, the primary audience in Texas will access follow-up information directly via the Internet or in self-directed mode in a One-Stop center's resource room. Therefore, a great deal of attention is paid in Texas to developing multiple presentation formats to address a few simple, commonly asked questions in ways that are suited to a wide variety of learning styles and lower levels of statistical literacy. By contrast, the role of Florida's lead agency in information delivery is more focused on developing a wide variety of very sophisticated and in-depth statistical "applications" to answer the kinds of questions policy-makers are more likely to ask.

This is not to say that Florida ignores the needs of prospective students, welfare-to-work clients and workforce development program participants, nor that Texas ignores the needs of state and substate planners, program administrators or service providers. Rather, the two states' follow-up entities have slightly different emphases in their information delivery efforts because the different status accorded their reports by the respective state legislatures dictates different strategies to ensure that outcomes data are used to drive rational decision-making. These differences in information delivery roles are accentuated by the fact that Florida's central follow-up entity is housed in that state's Department of Education while Texas's follow-up unit is operated by Career Development Resources. The former has well-established channels for communicating information to policy-makers and service providers; the latter is more deeply involved in disseminating information directly to prospective students, workforce development program participants, welfare-to-work clients and intermediaries in counseling and case management.

Any other state that adopts a centralized and automated approach to follow-up probably will emphasize delivery of information to a specific customer group from the outset. If states follow our suggestion (p. 30), each new central follow-up entity will not attempt to be "all things to all people." Rather, each will have a particular focus or emphasis that is determined largely according to: a) which group of stakeholders was first to promote or provide start-up dollars for centralized and automated follow-up; b) where the central follow-up entity is first housed; 34 and c) the degree to which the reports it issues are accorded official legislative recognition. After an initial start-up period, a state's central entity gradually will diversify the audience(s) it targets for information delivery. Meanwhile, it probably will continue to improve the sophistication of its reports and vehicles for delivering follow-up information to its first group of customers. In short, what the state's central entity chooses or is required to do first probably will remain its forte and the hallmark of its public perception until it has matured fully as an institution. For several years, then, the central entity may be perceived by people in the external environment as concentrating its efforts to deliver follow-up information to one particular type of customer -- even as management and staff make conscientious attempts to serve other customer groups equally well.

Judging only on the basis of the way outcomes data are packaged and delivered, an outside observer would assume that there are vast differences between the central follow-up entities in Florida and Texas. In truth, these differences are rather trivial -- simply matters of emphasis. Both entities were conceived to play virtually the same role in their respective states' accountability subsystems. Therefore, on a day-to-day basis, the two entities function in remarkably similar ways. Any major differences will be found only at the margins where the two entities are involved to varying degrees in their respective states' planning and information delivery subsystems -- secondary functions which, by and large, are derivative and incidental to their common primary function.

The co-authors of this Guide see diversity of emphases among all follow-up entities as advantageous. In effect, it creates an informal division of labor. Each state's lead agency can concentrate its development efforts along the lines of its forte and, thereby, establish a "best practice" model that it can share with other states. The various lead agencies should enter into voluntary reciprocal arrangements. So long as the lead agencies share ideas with one another, states are spared from reinventing the wheel and duplicating the efforts of other states working to establish best practices in their particular niches.

Florida, for example, generously shared its policy-maker oriented "applications" with Texas; Texas eagerly adopted and adapted many of them. Texas will reciprocate by making the shell of its customer-oriented CRS available to Florida and other states. Meanwhile, Oregon's Shared Information System (SIS) is developing a "mobility continuum" 35 to identify the relative contributions of each service in the mix provided to welfare clients with multiple barriers that qualify them for participation in several employment and training programs. Florida and Texas anxiously await the chance to feast on the fruits of Oregon's efforts.

The Role of Follow-Up in the Planning Subsystem36

Data on program performance and former program participants' outcomes may be of great value to planners in the employment and training system -- for both long-range, strategic purposes and for effectively targeting delivery of services to eligible participants in the near-term. Participation in the planning process by follow-up staff again will depend on: a) where the central entity is housed; b) previous decisions about the central entity's other roles in the accountability and information delivery subsystems; and c) the degree to which a state's strategic and operational planning guidelines are data-driven.

Data-driven planning

Planning means formulating a course of action in an orderly fashion to achieve desired outcomes at some point in the future. The conversion process in the employment and training system consists of two types of planning. Operational planning means deciding how to deliver services to eligible customer groups in the near future. Strategic planning involves anticipating who will need services and what kind of services they will need in the long run. These two types of planning also are distinguished according to the latitude given planners and the scope of the issues they address.

The "near future" horizon for operation planning usually is construed to mean "during the current program or fiscal year" or "only so far into the future as funds have, in theory, been authorized -- even if those funds have not yet be appropriated." Since many federal programs are budgeted on a biennial cycle, operational plans may set a course of action for two years in advance. Plans for the second year of the biennium may consist of a straight-line extension of the first year's plan with contingency statements primarily hedged on the anticipation that funding will be provided at the current or promised level.

Operational planners work within established parameters of: 1) existing eligibility criteria that define which customers must be served; 2) the current repertoire of services; and 3) known budgetary constraints. Operational planning focuses on the question: "To whom can we afford to provide which services, and when?"

Strategic planning usually looks beyond the period for which funds already have been budgeted. Moreover, strategic planners are not necessarily constrained by existing parameters. They may anticipate changes in eligibility criteria and concomitant changes in customers' needs and expectations. The existing repertoire of services might not address the needs of new customers or the changing needs of existing customers; therefore, strategic planners are given more latitude to think about adding, deleting and revising the services to be offered. To some extent, they also are free to think about ideal courses of action "as if money were no object," calculate the likely costs and then "back into" a proposed budget.

Strategic planning also embraces broader issues. How can capacity be built to serve more customers? How can current requirements be waived or revised so we can experiment with innovative suggestions for providing customers better services? How can our services be coordinated with the services of partner agencies so we can avoid duplication of effort while ensuring that none of our mutual customers fall through the cracks?

Whether for strategic or operational purposes, planning a course of action for the future must be done in an orderly fashion. Unfortunately, much of what purports to be planning consists of speculation and conjecture -- known euphemistically as "scientific wild ass guessing" (SWAG). In other cases, well-entrenched service providers muscle their way through the planning process to ensure that funds continue to flow to them without being tied directly to any definitive course of action -- much less to prescribed outcomes or rigorous performance measurement. In still other cases, plans amount to wishful thinking. Some employment and training professionals, for example, have issued high-sounding and well-meaning "plans" akin to the tag line of a popular movie, Field of Dreams: build a highly skilled workforce and they will come; i.e., employers will create new high-wage jobs to utilize the knowledge, skills and abilities you impart. The problem is that wishful thinking rests on untested assumptions, and "plans" based thereon often express desired outcomes without providing a coherent course of action to achieve them.

Given that all data -- by their very nature -- describe past events or conditions, how can they be used to drive plans for any point in the future, near-term or long-range? The use of hard data allows us to proceed in an orderly fashion, to engage in educated guessing. Historic data in time-series are used to identify broad trends and to build empirical models inductively that explain what happened in the past. A planner devising a course of action for the future can begin by extrapolating from current trends. ("All other things being equal, here is what is most likely to happen.") Our models tell us deductively where to look for change factors37 and interaction effects among multiple trends. For example, we can plot the aging of the "Baby-Boomers" and we know where most members of that generation are employed currently. Then we can use extrapolations of retirement and worker mortality rates to make reasonable forecasts about the different impacts these factors will have on various sectors of the economy and in specific occupational clusters. Laying these data-driven analyses alongside trend lines that extrapolate occupationally-specific demand growth and turnover rates into the future, we can refine our forecasts about the labor market's need for replacement workers as well as net changes in occupational employment. Lastly, we can simulate alternative futures by inserting hypothetical data on change factors into "what/if" adjustments to forecasts derived from our original empirical models. By combining these data-driven techniques, our educated guesses usually can be expressed in terms of defensible forecasts at an acceptable level of confidence (i.e., within a reasonable margin of error).38
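
To make the process concrete, the following sketch -- written in Python with entirely invented employment figures and a hypothetical retirement adjustment, not drawn from any actual state data system -- illustrates the bare bones of extrapolating a trend, attaching a crude margin of error and applying a "what/if" adjustment for a known change factor. It is offered only as a minimal illustration of the logic, not as a production forecasting model.

    # Illustrative sketch only; hypothetical data, not a production forecasting model.
    # Requires Python 3.10+ for statistics.linear_regression.
    import statistics

    # Hypothetical time-series: occupational employment counts for five prior years.
    years = [1992, 1993, 1994, 1995, 1996]
    employment = [10400, 10650, 10910, 11200, 11460]

    # Fit a simple linear trend to the historic series.
    slope, intercept = statistics.linear_regression(years, employment)

    def extrapolate(year):
        """All other things being equal, the straight-line extension of the trend."""
        return intercept + slope * year

    baseline_1999 = extrapolate(1999)

    # Crude margin of error taken from the residuals of the fitted trend.
    residuals = [actual - extrapolate(yr) for yr, actual in zip(years, employment)]
    margin = 2 * statistics.stdev(residuals)

    # "What/if" adjustment: suppose retirements remove an extra 1.5% of incumbents.
    retirement_factor = 0.015   # hypothetical change factor
    adjusted_1999 = baseline_1999 * (1 - retirement_factor)

    print(f"Baseline 1999 forecast: {baseline_1999:,.0f} +/- {margin:,.0f}")
    print(f"With hypothetical retirement adjustment: {adjusted_1999:,.0f}")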

The difference between planning as educated guessing and SWAG-style planning is that a conscientious effort is made in the former to: 1) identify one's underlying assumptions; 2) specify the confidence interval or error term; and 3) continuously improve one's model to reduce the error term and increase everyone's confidence in the resultant forecasts. Any improvement in our knowledge and understanding of past events, present conditions and change factors should result in a proportionate reduction of error in our forecasting and planning models. The accuracy of our knowledge and the depth of our understanding hinge on data issues. Accuracy can be increased in part by imposing tighter quality controls on the variables being collected pursuant to the model currently in use. Our depth of understanding can be enhanced by testing the underlying assumptions and subsequently refining our models to include variables that prove to be more pertinent40 or by using more discriminating units of measure for some of the variables.41

Effectively targeting the delivery of services

Planners are required to use labor market information (LMI) in targeting the delivery of federally-funded/state-administered employment and training services in each substate region for optimal effect. The object is to create a better match between the supply of appropriately trained workers and occupational employment demand. In the 1960s, the Manpower Development and Training Act (MDTA) called for the use of "information regarding reasonable prospects of employment in the community and elsewhere. . . in providing vocational guidance and counseling to students and prospective students in determining the occupations for which persons are to be trained." As MDTA was replaced in turn by CETA and JTPA, similar requirements have been carried over from one program to the next. Now, according to §141(d)(1) of the Job Training Partnership Act,

"Training provided with funds made available under this Act shall be only for occupations for which there is a demand in the area served or in another area to which the participant is willing to relocate, and consideration in the selection of training programs may be given to training in occupations determined to be in sectors of the economy which have a high potential for sustained economic growth."

Provisions in §125(b)(1) of that Act imply that education and training programs funded with Perkins dollars should be responsive to the same occupational employment demand projections that are done for JTPA planning purposes. Even in programs where participants receive no education and training services (e.g., in the labor exchange operated by a state's employment security agency), sound forecasts of labor market demands help job-developers determine which size classes of employers in which sectors of the economy are most likely to have job openings to post. To that end, §125(b)(1) of the Act requires that the JTPA system and programs operated with Wagner-Peyser funds work from the same labor market forecasts.

To help states meet their requirements for targeting the delivery of JTPA, Perkins and Wagner-Peyser services effectively, federal funds are allocated each year for: collecting relevant data; infusing those data into forecasting and planning models; conducting research and pilot projects designed to improve those models; automating and distributing the forecasting model; and rendering technical assistance to states to encourage them, to the extent possible, to standardize their efforts to estimate future training supply and occupational employment demand. Many of the core data elements used in a standardized approach to labor market forecasting are collected through joint federal-state initiatives overseen by the Bureau of Labor Statistics under authority delegated to it by the Secretary of Labor pursuant to §462(a) and §462(c)(3) of the JTPA.

The core data elements have been organized by the NOICC into a common file structure called the Occupational Labor Market Information Database (OLMID).42 The NOICC-SOICC network distributes that database to the states along with an application layer (the micro-OIS) designed for use by program administrators and planners. Some states have refined the micro-OIS application layer and have added data elements to the OLMID to improve their forecasting and planning capabilities. Other states have developed their own automated LMI tools.

Whether using the micro-OIS, a derivative of the micro-OIS or comparable tools of their own, most states work from a common model for forecasting labor market demands and planning the delivery of employment and training services. 43 44 The model uses a logical succession of filters to target areas of occupational employment demand that should be addressed according to current employment and training system guidelines. First, using time-series data on multiple indicators, the model rank-orders segments of the economy according to their probability of exhibiting sustained demand growth. The next filter is applied to each industry that is likely to meet or exceed an employment demand growth threshold set by the user according to planning guidelines. In the second filter, an industry-to-occupation crosswalk (the SIC-to-OES matrix) is used to identify staffing-patterns in these growth industries. Occupations with significant employment in growth industries are passed through the next filter. In the third stage, occupations are ranked according to prevailing statewide wages.45 The model filters out those which are unlikely to pay enough to allow caretakers to support their families at or above a particular level of economic security input by the user according to the guidelines for the particular program being planned. 46 Next, the model eliminates occupations for which education and training requirements exceed the upper threshold of the program being planned.47 Jobs which can be performed on the basis of a simple demonstration or less than one month of specific vocational preparation also are eliminated on the assumption that clients can obtain employment in low-skill occupations without receiving education and training at public expense. 48 Lastly, the model uses the State Training Inventory (STI) to identify which institutions in a particular planning region purport to offer education and training related to the occupations that make it to the target list.
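
The filtering logic just described can be illustrated with a minimal sketch. The Python fragment below uses invented industry growth rates, staffing patterns, wages and training requirements -- none of them drawn from the actual OLMID, micro-OIS or SOCRATES files -- simply to show how candidate occupations pass through successive, user-set thresholds.

    # Minimal sketch of the successive-filter logic; all records and thresholds are invented.

    industry_growth = {"health services": 0.031, "eating places": 0.012, "apparel mfg": -0.004}
    staffing_pattern = {   # toy industry-to-occupation crosswalk
        "health services": ["registered nurse", "nurse aide", "medical records tech"],
        "eating places": ["food prep worker"],
    }
    median_wage = {"registered nurse": 17.50, "nurse aide": 6.10,
                   "medical records tech": 9.25, "food prep worker": 5.40}
    months_of_training = {"registered nurse": 24, "nurse aide": 2,
                          "medical records tech": 12, "food prep worker": 0}

    # Thresholds a user would set according to the guidelines for the program being planned.
    GROWTH_THRESHOLD = 0.02    # Filter 1: sustained demand growth
    WAGE_FLOOR = 7.00          # Filter 3: family-supporting wage
    TRAINING_CEILING = 24      # Filter 4: program's upper limit on required preparation
    TRAINING_FLOOR = 1         # Filter 4: exclude jobs learned by simple demonstration

    growth_industries = [ind for ind, g in industry_growth.items() if g >= GROWTH_THRESHOLD]

    candidates = {occ for ind in growth_industries for occ in staffing_pattern.get(ind, [])}

    targets = [occ for occ in sorted(candidates)
               if median_wage[occ] >= WAGE_FLOOR
               and TRAINING_FLOOR <= months_of_training[occ] <= TRAINING_CEILING]

    print("Target occupations:", targets)
    # A final step would look each target up in a State Training Inventory to see
    # which regional institutions purport to offer related education and training.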

Follow-up data can be used to improve this basic labor market forecasting and planning model chiefly because the unit of analysis in follow-up is the individual rather than the county, region or state. Moreover, follow-up efforts focus on recent exit cohorts from specific employment and training programs rather than on the universe of workers or job-seekers.

In forecasting which segments of the economy will exhibit sustained employment demand growth, the most commonly used planning model does not differentiate between the industrial employment of senior incumbent workers and that of individuals who recently exited education and training programs. While that lack of differentiation may be acceptable for forecasting labor market activities in general, a model intended for use in targeting the delivery of education and training services ought to include data on, and assign more weight to, the labor market experiences of individuals who exited the pipeline recently. Their experiences are more relevant to the outcomes likely to be achieved by subsequent exit cohorts than are the experiences of senior incumbent workers. The point is that curriculum development (or revision) must respond to current and future employment demands as indicated by the job placements of recent exit cohorts rather than to circumstances that prevailed further in the past, when senior incumbent workers first obtained employment, or to conditions they faced over the years as they advanced up an industry career ladder. Follow-up data on the industry of placement for recent exit cohorts can be used as an additional variable in the forecasting model and can be assigned more or less weight depending on the mission and eligible customer base of the program being planned.
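
A minimal sketch of that weighting idea, with invented industry shares and a purely illustrative weight, might look like the following.

    # Illustrative only: blend the all-worker industry distribution with the
    # distribution of recent exit cohort placements, weighting the latter more
    # heavily when planning education and training services.

    all_workers = {"health services": 0.20, "retail trade": 0.35, "manufacturing": 0.45}
    recent_exiters = {"health services": 0.40, "retail trade": 0.30, "manufacturing": 0.30}

    COHORT_WEIGHT = 0.6   # hypothetical; set according to the program's mission

    blended = {ind: (1 - COHORT_WEIGHT) * all_workers[ind]
                    + COHORT_WEIGHT * recent_exiters[ind]
               for ind in all_workers}

    for ind, share in sorted(blended.items(), key=lambda kv: -kv[1]):
        print(f"{ind:20s} {share:.2f}")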

The SIC-to-OES staffing-pattern matrix also is based on state-level data which combine the experiences of senior incumbent workers with those of individuals who recently exited the education and training pipeline. The staffing-pattern of new hires across all industries may look quite different. Moreover, staffing-patterns in any given segment of the economy may be subject to regional variation. To the extent that a particular planning effort is intended as a guide for targeting the training of entry-level or first-time job-seekers in a specific region, the distribution of job placements by SIC code for a recent exit cohort of that region's program exiters may paint a more realistic portrait of potential outcomes.

Conditions of occupational employment experienced by recently hired persons may differ significantly from those experienced by senior incumbent workers. Without benefit of prolonged experience on the job and/or accrued earnings gains, their wages may fall short of prevailing wages. Moreover, prevailing wages for any given occupation may vary widely from one substate region to the next. Among senior incumbent workers, historic patterns of bias in female and minority hiring may still be evident in aggregate statistics despite legislation, case law and incentives to eliminate such biases in recent years. More detailed knowledge of the experiences of recent exit cohorts is essential if the planning model is used to drive regional curriculum development and informed choice in career decision-making, where individuals may be especially concerned with the substate location of, and regional variance in, occupational employment opportunities.

A State Training Inventory that merely lists the fields of study offered institution-by-institution cannot tell planners where they can get "the most bang for the buck" as they try to determine where eligible clients should be referred for education and training services. Follow-up data on placement rates, earnings-at-placement and the training-relatedness of placements -- if organized by service provider and field of study -- can increase the utility of the STI and, in turn, can improve operational planning.50
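
By way of illustration, the sketch below attaches invented follow-up outcomes to a toy State Training Inventory so that providers within a field of study can be compared; the provider names, fields and figures are hypothetical.

    # Illustrative only: attach follow-up outcomes (invented figures) to a toy
    # State Training Inventory so planners can compare providers within a field.

    sti = [  # (provider, field of study) pairs a real STI would list by region
        ("Alpha Community College", "licensed vocational nursing"),
        ("Beta Technical Institute", "licensed vocational nursing"),
    ]

    followup = {  # outcomes by (provider, field): placement rate, median quarterly earnings
        ("Alpha Community College", "licensed vocational nursing"): (0.82, 5600),
        ("Beta Technical Institute", "licensed vocational nursing"): (0.64, 4300),
    }

    for provider, field in sti:
        rate, earnings = followup.get((provider, field), (None, None))
        if rate is None:
            print(f"{provider} / {field}: no follow-up data")
        else:
            print(f"{provider} / {field}: placement rate {rate:.0%}, "
                  f"median earnings at placement ${earnings:,}")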


Improving strategic planning

Whereas an operational planner must focus on devising a course of action that will meet today's performance requirements, a strategic planner may question whether existing standards are unattainable or too low to prod program improvement. Data provided by a central follow-up entity on the actual outcomes achieved by recent cohorts of program completers and leavers can serve as benchmarks for realistic expectations. If the central entity uses a longitudinal design, follow-up data also can be used to keep planners from misjudging program results in the short run by focusing more attention on the likely "delayed" and "long-range" benefits of the services rendered. Strategic planning models can be built to take follow-up data into account in: a) setting and periodically revising performance standards; b) adjusting standards to regional realities; and c) establishing general guidelines for procuring education and training services on behalf of clients through performance-based contracts.
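
One simple way follow-up benchmarks might feed such a model is sketched below, with invented statewide and regional placement rates; the adjustment rule shown (scaling the standard by the ratio of regional to statewide observed outcomes) is only one of many possibilities a state could adopt.

    # Illustrative only: use follow-up benchmarks (invented figures) to adjust a
    # statewide performance standard to regional realities.

    statewide_standard = 0.65            # required placement rate
    statewide_benchmark = 0.62           # actual rate observed via follow-up
    regional_benchmark = {"Region A": 0.71, "Region B": 0.55, "Region C": 0.63}

    for region, observed in regional_benchmark.items():
        # Scale the standard by the ratio of regional to statewide observed outcomes.
        adjusted = statewide_standard * (observed / statewide_benchmark)
        print(f"{region}: adjusted standard {adjusted:.2f}")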

Whereas an operational planner plots the delivery of a particular existing service, strategic planners look at the bigger picture. What is the appropriate mix of services? Are current service locations optimally distributed to avoid both unnecessary duplication of effort and geographic gaps in reasonable access? Are any labor market needs left unaddressed by the current total mix of services?

Where follow-up for all employment and training programs is integrated and where common definitions and data collection methods are used, fair comparisons can be made across programs and providers. For example, results achieved by the lowest-cost service (i.e., labor exchange) can be treated as a baseline. All other things being equal, what was the value added by more expensive services such as vocational training or transition support? Such information is vital to efforts by strategic planners to determine the optimal mix of services. This will become an increasingly important consideration if categorical grants that fund specific services are replaced by block grants with latitude given to each state in deciding how to allocate those funds most effectively across the entire gamut of services.
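
A minimal sketch of such a value-added comparison, using invented placement rates and per-participant costs, follows.

    # Illustrative only: treat the lowest-cost service as a baseline and express
    # other services in terms of outcome gain for the added expense.  All figures invented.

    services = {
        # service: (placement rate, cost per participant)
        "labor exchange": (0.45, 150),
        "vocational training": (0.70, 3200),
        "training + transition support": (0.76, 4100),
    }

    base_rate, base_cost = services["labor exchange"]

    for name, (rate, cost) in services.items():
        if name == "labor exchange":
            continue
        gain = rate - base_rate
        added_cost = cost - base_cost
        print(f"{name}: +{gain:.0%} placement rate for ${added_cost:,} more per participant")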

Follow-up studies can gather information on the service delivery location and subsequent job placements by worksite location. Comparisons of these two geographic variables provide valuable information about the mobility of recent program completers and leavers as they search for work. Is training provided close to where the related jobs are? Proximity between the two is important if education and training providers are expected to collaborate with employers to keep the curriculum responsive to the needs of the latter. Proximity also is important if students are expected to incorporate related work-based experiences into their education and training. Conversely, analysis of geographic mobility is important to strategic planners if relocation assistance is to be part of state-provided transition support. This information also is important to economic development specialists. Evidence of industries "importing" workers trained elsewhere can alert planners to missed opportunities to serve their own students and adult learners. In industries where availability of a trained workforce is a key to site-selection for business start-up or expansion, current "importation" of workers also may alert entrepreneurs to limitations on their potential for further economic development. Lastly, an excessive exodus of program completers may have a bearing on support for local decision-makers and service providers (and, perhaps, the employment and training system as a whole). Residents of a taxing district, for example, might resent having their dollars spent by a local community college to train students and adult learners for jobs in some other region. They may want to "see their tax dollars working closer to home" -- benefits to their statewide economy notwithstanding.
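
A crude sketch of such a comparison appears below; the ZIP codes are invented, and matching three-digit ZIP prefixes is used only as a stand-in for whatever regional geography a state actually adopts.

    # Illustrative only: compare where training was delivered with where completers
    # were later found working.  ZIP codes and counts are invented.

    placements = [
        # (training-site ZIP, worksite ZIP) for individual completers
        ("78701", "78701"), ("78701", "78745"), ("78701", "75201"),
        ("79901", "79901"), ("79901", "79936"),
    ]

    def same_region(zip_a, zip_b, prefix_len=3):
        # Crude proxy: treat matching 3-digit ZIP prefixes as the "same region."
        return zip_a[:prefix_len] == zip_b[:prefix_len]

    local = sum(1 for train_zip, work_zip in placements if same_region(train_zip, work_zip))
    print(f"{local}/{len(placements)} completers found working in the training region")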

In gathering information on the employment of former program participants, a central follow-up entity inevitably will come across jobs that do not fit into any existing occupational taxonomy. 51 After misspellings, abbreviations, acronyms and firm-specific idiosyncratic titles for conventional occupations have been eliminated, some of the remaining job titles may, in fact, represent emerging employment opportunities. These residual titles can be used in data-driven curriculum development to ensure that sufficient numbers of skilled workers are trained by the time demand in an emerging occupation reaches critical mass. In this way, follow-up can provide early warning -- serving as a barometer of changes wrought by the deployment of new technologies, shifting consumer tastes, recent decisions in labor law and/or widespread adoption of the human resource management theory du jour.
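
The clean-up and flagging step might be sketched as follows; the taxonomy, alias table and reported titles are all invented and stand in for whatever occupational coding structure a state actually uses.

    # Illustrative only: flag reported job titles that survive basic clean-up but
    # still match nothing in the occupational taxonomy in use.

    taxonomy = {"registered nurse", "computer programmer", "webmaster"}
    aliases = {"r.n.": "registered nurse", "rn": "registered nurse",
               "progammer": "computer programmer"}   # misspellings, abbreviations

    reported_titles = ["RN", "Progammer", "Webmaster", "Genomics Data Curator"]

    residuals = []
    for raw in reported_titles:
        cleaned = raw.strip().lower()
        cleaned = aliases.get(cleaned, cleaned)   # resolve known aliases first
        if cleaned not in taxonomy:
            residuals.append(raw)

    print("Possible emerging titles to review:", residuals)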


Endnotes

1 By way of analogy, the human body could be considered a system. An individual body is composed of subsystems that overlap (e.g., the cardiopulmonary "subsystem" and the respiratory "subsystem," etc.). The body works as a system to respond to inputs from its environment. When the nervous "subsystem" senses a decrease in a room's temperature (a "stress" or "disturbance"), it sends a message to the digestive "subsystem" to help maintain normal body temperature by burning calories (a "conversion process") contained in the food (a "support") that has been consumed. It sends another message to the skeletal-muscular "subsystem" to walk to the thermostat and turn up the heat (an "output"). The furnace reacts by heating the room thus affecting the body's external environment, perhaps altering the person's comfort level again to the point where additional response may be required (i.e., "feedback").

See D.F. Aberle, et al., "The Functional Prerequisites of a Society" in Ethics Vol. 60 (1950) pp. 100-111; Talcott Parsons, Structure and Process in Modern Science (Glencoe, IL: The Free Press, 1950); Robert K. Merton, Social Theory and Social Structure (Glencoe, IL: The Free Press, 1957); David Easton, "An Approach to the Analysis of Political Systems" in World Politics Vol. 9 (1957) pp. 383-400; also see Easton's A Framework for the Analysis of Political Systems (Englewood Cliffs, NJ: Prentice Hall, 1965); William C. Mitchell, "The Polity and Society: a Structural-Functional Approach," in the Midwest Journal of Political Science Vol. 2 (1958) pp. 403-420. These materials are summarized effectively in the Introduction to Thomas Jahnike and Sheldon Goldman, The Federal Courts as a Political System (New York City, NY: Harper and Row, 1971).

2 Ignoring stress and disturbances emanating from the external environment certainly is an option, but not a particularly viable one according to Systems Theory.

3 Some "supports" can strengthen a system's capacity to respond to demands (e.g., fiscal resources, the skills of professionals recruited to serve in the system and diffuse popular opinion). However, "supports" - according to Systems Theory - can be absent, withdrawn or even "negative" (as in adverse public opinion). While the concept of "negative supports" sounds contradictory, it is useful in understanding forces which diminish a system's capacity to respond to its environment.

4 The co-authors construe "employment and training" broadly to include "first chance" programs (i.e., public education/K-12, plus postsecondary education and training) and "second chance" programs (such as JTPA, JOBS, Adult Education and Literacy, prison-based education and training and Food Stamp E&T as well as the job-search and job-matching activities of the Employment Service division of each state's employment security agency).

5 Some of the programs may be known by state-specific acronyms. In Louisiana, for example, programs funded under the federal Job Opportunities and Basic Skills Training (JOBS) program once were known as "Project Independence." Recently, Louisiana renamed them the Family Independence and Work (or FIND Work) program. This Guide, however, is not intended to inventory state acronyms. As follow-up is centralized in each state, its design team is advised to trace the flow of dollars to their source in order to distill program intent from the original authorizing legislation or appropriations bills and the minutes of related Congressional hearings. Regardless of unique acronyms, each state should be able to spot which of its programs fit logically into the employment and training system as described in this chapter.

6 The co-authors envision two possible approaches to follow-up. The first scenario described herein is the least likely and is offered as a "straw man." In the first approach, a single, centralized follow-up entity would be created with broad responsibilities across programs that may or may not share roughly the same mission. In this scenario, one central follow-up entity would be responsible globally for facilitating record linkages as a means of gathering outcomes data. The entity probably would be comprised of several subdivisions or "silos" with each focusing on its own set of very closely related programs.

The other scenario would unite public education (K-12), postsecondary education and training, workforce development, the Employment Service and welfare-to-work programs into a common follow-up system. This more limited combination of programs in a single follow-up system seems logical because they share a common mission (i.e., to provide education and training, remediation, and retraining in a nearly seamless fashion under a "lifelong learning" model). Administrators and practitioners from these programs are accustomed to interacting with one another. Moreover, federal laws and regulations require articulation among these programs. (Indeed, federal funds are set aside expressly for their articulation.) While other types of programs could benefit from using comparable record linkage techniques, they might be better served by different follow-up entities. For example, a state's commission for the deaf, its commission for the visually impaired, its rehabilitation department and its mental health and mental retardation agency might be served jointly by a follow-up entity other than the one that serves employment and training programs. Corrections, probation and parole might be served by another follow-up entity; public health and vital statistics by another; and public safety (e.g., state and local law enforcement agencies, fire departments, emergency medical service providers and disaster relief agencies) by yet another.

The boundary issue can be confusing because various programs' services and activities often overlap. Education and training services, for example, are offered to eligible participants through a state's schools for the deaf and visually impaired, the prison system and the rehabilitation department as well as through the public schools and postsecondary institutions. While it is not our place to specify the scope of another state's follow-up efforts, we recommend that each should define its central entity's boundaries and domain clearly to ensure that its duties and tasks are confined to manageable proportions.

Adding to the confusion over the boundary issue is the distinction between public and private-for-profit service providers (not to mention the fact that a myriad of employment and training services are provided through joint public/private ventures or by quasi-public entities and non-profit community-based organizations that occupy the grey area between the public and private sectors). In drawing the boundaries of the central follow-up entity's operations, each state must decide if it has the authority to compel participation by private-for-profit service providers, the capacity to entice their voluntary participation or the resources to serve them in either case.

7 This analogy showing the parallels to bookkeeping and audit practices in the private sector draws heavily on several articles in Bruce Smith (ed.), The New Political Economy: The Public Use of the Private Sector (New York City, NY: John Wiley and Sons, 1975): Ira Sharkansky, "The Politics of Auditing" pp. 278-318; Elmer B. Staats, "New Problems of Accountability for Federal Programs" pp. 46-67; Michael D. Reagan, "Accountability and Independence in Federal Grants-in-Aid" pp. 181-211; and Joseph Pois, "Trends in General Accounting Office Audits" pp. 245-318.

8 Intake information is reviewed, for example, to determine if every participant who received services truly had been eligible. Other common questions include: Were services procured competitively and without conflict of interest? Was the rate-of-expenditure paced properly during each phase of the program year? Did expenditures in any category, particularly for administration, exceed the budgeted line-item? Have backup documents been submitted in support of every voucher? Was all travel necessary, duly authorized and reimbursed at or below permissible government rates? None of these questions, however, relates directly to program performance.

9 This presumes that a program's goals and objectives are defined clearly in the first place. In many instances, even this most basic presumption is false.

10 Paraphrasing Staats in Bruce L.R. Smith, op. cit. at pp. 63-64 (emphasis added).

11 Paraphrasing Sharkansky in Bruce L.R. Smith, op. cit. at p. 284.

12 First and foremost, funds and staff were stretched to their limits in meeting the GAO's longest standing and least disputed obligation to do fiscal audits of and settle accounts for all federally-funded programs at regular intervals. Managerial accountability and (to an even greater extent) program accountability required the Comptroller General to hire other kinds of experts (e.g., economists, statisticians, systems analysts and subject matter specialists) rather than accountants and lawyers. Funds for such experts usually were made available to the GAO on an ad hoc basis whenever Congress made special requests for performance audits.

Secondly, by the mid-1970s there were more than enough financial scandals and instances of waste to preoccupy the GAO. Vice President Agnew, for example, resigned amid charges of fiscal improprieties that occurred when he was governor of Maryland. At the same time, rumors of $400 toilet seats and $1,000 crescent wrenches brought the Pentagon's procurement practices under close scrutiny. As one former Comptroller General once wrote, under the crush of issues clamoring for their attention, it is easier for the general public to understand a count of paper clips than the performance objectives of complex programs jointly operated under federal and state initiatives. Having exhausted its attention span by demanding investigations of fraud and waste, the general public had little energy left to pressure its Congressional representatives to get the GAO to investigate program effectiveness.

Thirdly, much of the burden for performance monitoring has been shifted to program administrators in the field through regulations that specify the collection of outcomes data in the respective management information systems and inclusion of such items in annual compliance reports. See, for example, the annual Standard Program Information Reports (SPIR) required by the Secretary of Labor for JTPA programs.

Fourthly, other entities stepped into the void to do performance audits as GAO concentrated more on fiscal and managerial accountability. The Office of Management and Budget (OMB) operates at the request of the executive branch while the Inspectors General do performance audits at the request of their respective cabinet secretaries. State comptrollers, legislative budget boards and state agency audit divisions, etc. have become more active in monitoring program performance. Service providers also attempted to police themselves either on their own or through professional association and accrediting board standards.

13 Part of the movements for both budget tightening and welfare reform is the insistence that employment and training programs accomplish more with less. The public is relatively unmoved by "good intentions" and inclined now to apply the "bottom line" mentality of the marketplace to employment and training programs. See, for example, David Osborne and Ted Gaebler, Reinventing Government (Reading, MA: Addison-Wesley, 1992) especially chapters 5 and 10.

14 Increasing demands for accountability in employment and training programs in the 1990s will be discussed in greater detail in the next chapter.

15 For a recent example of a managerial audit by the Comptroller General, see "Multiple Employment Training Programs" (Washington, DC: General Accounting Office, 1994); for a recent example of a performance audit, see "Proprietary Schools: Millions Spent to Train Students for Oversupplied Occupations" (Washington, DC: General Accounting Office, 1997).

16 The Pew Higher Education Roundtable (Robert Zemsky, Senior Editor), "To Dance with Change," Vol. 5, Number 3, Section A of Policy Perspectives (April 1994) and the charge given to the National Postsecondary Education Collaborative by the National Center for Education Statistics, 1996. See also Marc Anderberg and R.D. Bristow, "Career Majors in Texas Public Education" (Austin, TX: Texas State Occupational Information Coordinating Committee, 1996) pp. 11-13.

17 For example, if the UI wage records are the primary source of labor market outcomes data and if employers submit reports on covered workers on a quarterly basis, then it does no good to schedule follow-up on any shorter interval -- even if, hypothetically, the central entity is asked to study self-paced, modularized short-course training programs that batch process their reports on completers and leavers once a month or on a weekly basis.

18 See the chapters by Michael D. Reagan and Joseph Pois in Bruce L.R. Smith, op. cit. and the chapter by Harvey C. Mansfield entitled "Independence and Accountability for Federal Contractors and Grantees," pp. 319-335.

19 Rather than looking at sloppy record-keeping and management practices that leave too much room for error and the potential for misappropriation, Inspectors General typically look at overtly illegal activities such as fraud. See, for example, the summary of recent studies in "OIG Proposals: 1998 Reauthorization of the Higher Education Act" (Washington, DC: United States Department of Education, Office of Inspector General, 1997). On the other hand, the Office of Management and Budget usually looks at broader issues such as revision of classification systems or procurement guidelines across the board in order to reduce duplication of effort, achieve more consistency among like programs and gain tighter control over all programs.

20 David Osborne, Laboratories of Democracy (Boston, MA: Harvard Business School Press, 1990) especially Chapter 10.

21 In many cases, state agencies that funnel federal employment and training dollars to local entities accept service providers' evaluations of their own performance; e.g., Adult Education in Texas. A unit within the responsible state agency may verify service providers' self-evaluations or perform on-site monitoring on an ad hoc basis (where misappropriation is suspected) or on a random/rotating basis (getting around to review each service delivery site once during some predetermined -- but rather lengthy -- interval). A state's legislative budget board may want common data elements reported on several programs at regular intervals but, more often than not, they look at gross indicators such as number of participants served. They tend to concentrate on processes and outputs rather than on outcomes. State Comptrollers General may do performance audits of specific programs on an ad hoc basis. The point is that few states continuously review all employment and training programs using common operational definitions of the outcome variables and a uniform data collection methodology.

22 Comptroller Staats, for example, discovered that his staffing-pattern needs changed as the GAO assumed a greater role in program accountability. He had to hire economists, computer specialists, systems analysts and substantive area experts, etc.

23 Joseph Pois, op. cit. at pp. 263-264 (passim).

24 For a more detailed description of the boundaries of Texas's central follow-up entity, see Marc Anderberg and Richard Froeschle, "Roles and Responsibilities in a Performance Measurement System: Description, Prescription and Policy-Making" Vol. 1, Number 4 of the Beyond the Numbers Occasional Paper Series (Austin, TX: Texas SOICC, June 1997).

25 Senate Bill 1688 (Florida State Legislature, 1997).

26 Service providers can "borrow" the difference between their annual allocation under the new rule and their prior year's funding level on the proviso that they "earn back" the loaned amount by improving their outcomes-based performance. This provision may postpone but will not eliminate adverse reactions by service providers who ultimately lose funding and perceive themselves to be on the losing end.

27 Directives from the U.S. Secretary of Labor regarding JTPA reporting procedures are known as Training and Employment Information Notices or TEINs. These notices are used to flesh out the details of requirements the Secretary is authorized to impose on state grant recipients either by statute or regulation.

28 For a more detailed analysis of Texas's performance measures for Adult Education and the degree to which they have been operationally defined, see Marc Anderberg, "Final Report on Automated Student and Adult Learner Follow-Up for Program Year 1995-1996" (Austin, TX: State Occupational Information Coordinating Committee, 1997) at pp. 8-12.

29 Andy Hartman, "Equipped for the Future," NIFL News Vol. 4, Number 2 (Spring 1997) p.2.

30 Staff report, "Voc Ed Directors Watch Governors' Authority," Vocational Training News Vol. 28, No. 18 (May 1, 1997).

31 In Texas, service providers first perceived the Tech Prep initiative as a means of attracting students to vocational and technical education. There was a rush to recruit large numbers of students for Tech Prep before genuine changes were made in the curriculum. Early in the implementation of School-to-Work programs in Texas, this "rush to recruit" appears to be happening again. See Marc Anderberg, "Waiting for Data to Ripen: The Case of Premature Expectations for Tech Prep Programs" (unpublished presentation to the National Tech Prep Network Conference in Nashville, TN, October 3, 1997; forthcoming as a monograph in the Texas State Occupational Information Coordinating Committee's Beyond the Numbers Occasional Paper Series). See also Anderberg op. cit. (Final Report, 1997) at pages 103-109.

32 Per conversations and email exchanges between Marc Anderberg and William Morrill, former Assistant Secretary of Education, now President of MathTech, Inc. and consultant to Mathematica on the national evaluations of both Tech Prep and School-to-Work. See also, William A. Morrill, "UI Wage Data Workshop" (unpublished minutes of the September 3, 1997 meeting sponsored by MathTech, Inc. of Princeton, NJ.).

33 For an expanded discussion of follow-up as integral to the feedback loop, see Marc Anderberg, "Automated Student and Adult Learner Follow-Up: Final Report for Program Year 1992-1993" (Austin, TX: Texas SOICC, August 1993) pp. 13-18; Richard Froeschle, "Creating an Information-Based, Market-Driven Education and Workforce Development System: The Role of Labor Market and Follow-Up Information" Vol. 1, Number 2 of the Beyond the Numbers Occasional Paper Series (Austin, TX: Texas SOICC, July 1996); and Richard Froeschle and Marc Anderberg, "The Anatomy of an LMI System for the 21st Century: The Role and Practices for Transactional Analysis and Descriptive Statistics in a Comprehensive LMI System" (Washington, DC: Employment and Training Administration, forthcoming).

34 In Florida, the State Legislature took the initiative in promoting automated and centralized follow-up largely to improve program planning and administration. In Texas, on the other hand, automated follow-up initially was used by a consortium of public community and technical colleges as a way of documenting the successful outcomes achieved by their former students. Because start-up dollars in Texas came from the state's federal vocational and technical education grant, a large part of the initial focus was on the post-exit achievements of "special populations" per reporting requirements under the Perkins Act. Community and technical college administrators realized that well-documented successful outcomes could be used in the recruitment process. From the outset, there was a more natural affinity in Texas between follow-up and career development based on informed choice -- particularly for "special populations." These differences in emphasis among progenitors of the two systems are reinforced by the respective states' choices regarding where their central follow-up entities are housed. Florida's central follow-up entity is housed in that state's Department of Education, an agency geared primarily toward strategic planning and program evaluation. Texas's central follow-up entity is operated by the State Occupational Information Coordinating Committee which has a long history of delivering labor market information to support career counseling and operational planning.

The impetus for automating and centralizing follow-up may come from quite disparate sources. That makes other scenarios equally probable in other states. For example, the impetus might come from your State Job Training Coordinating Council. If that is the case, the follow-up entity might be housed in the agency that administers the state's JTPA grants. Information delivery by the central follow-up entity initially might be more focused on generating compliance reports to be forwarded to federal agencies. Information delivery to substate planners, service providers, and/or prospective students and workforce development program participants might occur as a belated afterthought. States whose automated follow-up begins with a particularly narrow focus will mature, and they probably will turn to other states for advice or models of best practices in order to "back fill" the other kinds of functions their fellow lead agencies perform.

35 Per conversation and e-mail exchanges between the co-authors of this Guide and Tom Lynch of Oregon's Shared Information System. For a description of Oregon's integrated follow-up efforts, see Tom Lynch, et al., "SIS: Shared Information System" (Salem, OR: Oregon Employment Department, January 1997).

36 The same sources listed in endnote 33 also contain more detailed analyses of the role outcomes data can and should play in the planning process.

37 In Systems Theory, these change factors would include known or anticipated sources of input (stress, disturbances, demands and supports -- i.e., expectations and resources) and feedback.

38 Unfortunately, some paraprofessionals without training in statistics, econometrics or model-building are hired as planners. They assume extrapolation consists of forecasting that circumstances in one year will be identical to or a linear (straight-line) extension of conditions in the preceding year. As a colleague of the co-authors once noted, "The tools most frequently used by some planners are the photocopier, a bottle of white-out and a typewriter. They reproduce last year's plan, white-out references to last year's dates, then type in new dates corresponding to the upcoming program year." When using the term "extrapolation" herein, the co-authors of this Guide are referring to a much more rigorous process of projecting trends into the future and building confidence intervals around them by adjusting for the effects that known change factors are likely to produce.

What is at stake here is best expressed in the phrases "reasonable margin of error" and "defensibility of the model." While we all may claim 20/20 hindsight, no one can guarantee the accuracy of their forecasts for future events. In fact, the longer a forecast's timeframe, the more complex the events being predicted and the more multivariate the causal factors, the lower the probability of generating an accurate forecast. Nonetheless, taxpayers are being asked to invest heavily in planned employment and training activities that are supposed to yield future returns. Politicians are staking their reputations on the success of the plans they bless. Service providers' livelihoods are at stake. Individual customers and clients entrust their future economic security in large part to the judgments of planners. Short of guaranteeing accuracy, what can planners do to make a convincing and defensible case for others to buy into their plans?

Given what is at stake, SWAG planning or planners' lazy reliance on photocopiers and white-out is, in the co-authors' opinion, irresponsible. Bending to the will of entrenched vendors may serve the providers well, but that approach to planning does not serve the best interests of the taxpayers or customers of the employment and training system.

40 New variables, for example, can be added one at a time through stepwise regression until the process arrives at the "least squares" (or best-fitting) solution; i.e., the regression equation having the highest degree of predictive validity when used ex post facto to forecast events known to have occurred.
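As an illustration only, the sketch below implements a simple forward version of this procedure: candidate predictors (the names and data are hypothetical) are added one at a time, keeping whichever most improves the least-squares fit, until no remaining candidate improves it appreciably. A fuller treatment would also validate the chosen equation ex post facto against outcomes known to have occurred.

    # A minimal sketch of forward stepwise selection with hypothetical data.
    import numpy as np

    def r_squared(columns, y):
        """R^2 of an ordinary least-squares fit of y on the given columns (plus intercept)."""
        X = np.column_stack([np.ones(len(y))] + columns)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

    def forward_stepwise(candidates, y, min_gain=0.01):
        """Add one candidate variable per round, keeping the largest R^2 gain."""
        selected, best_r2 = [], 0.0
        remaining = dict(candidates)  # name -> 1-D array of observations
        while remaining:
            scores = {}
            for name, col in remaining.items():
                trial = [candidates[s] for s in selected] + [col]
                scores[name] = r_squared(trial, y)
            best = max(scores, key=scores.get)
            if scores[best] - best_r2 < min_gain:
                break  # no remaining candidate improves the fit enough
            selected.append(best)
            best_r2 = scores[best]
            del remaining[best]
        return selected, best_r2

    # Hypothetical example: two informative predictors and one pure-noise column.
    rng = np.random.default_rng(0)
    m = 60
    candidates = {"entry_wage": rng.normal(8, 2, m),
                  "job_openings": rng.normal(100, 25, m),
                  "noise": rng.normal(0, 1, m)}
    y = 0.5 * candidates["entry_wage"] + 0.02 * candidates["job_openings"] + rng.normal(0, 0.5, m)
    print(forward_stepwise(candidates, y))  # prints the selected names and the final R^2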

41 Assume you are a realtor who wants to estimate how much a family will spend to buy a house. You could use knowledge of the family's income in making the forecast. Your forecast is likely to be more precise if you use an interval scale (actual income in dollars) to measure the family's income rather than a nominal scale (yes, they have an income/no, they don't) or an ordinal scale (above average income/below average income or high/medium/low income). The more "discerning" the scale used to measure the relevant independent variables in your forecasting model, the greater their capacity to explain variance in the dependent variable. This handy rule of thumb is used later in this chapter to explain how the ZIP code of a former student's worksite (obtained through an employer follow-up survey) provides a deeper understanding of geographic mobility among job-seekers than does the nominal variable, "found to be working in Texas (yes/no)."
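The point can be demonstrated with simulated data. In the sketch below (a hypothetical illustration, not real survey data), the same underlying income information explains noticeably more of the variance in house price when measured in dollars than when collapsed to a three-level ordinal code or a two-level above/below-average indicator.

    # A minimal sketch of how scale of measurement affects explained variance.
    import numpy as np

    rng = np.random.default_rng(42)
    m = 500
    income = rng.normal(45_000, 12_000, m)            # interval scale: dollars
    price = 2.5 * income + rng.normal(0, 20_000, m)   # hypothetical house prices

    def r_squared(x, y):
        """R^2 of a one-predictor least-squares fit (with intercept)."""
        X = np.column_stack([np.ones(len(y)), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

    interval_r2 = r_squared(income, price)                                              # dollars
    ordinal_r2 = r_squared(np.digitize(income, [35_000, 55_000]).astype(float), price)  # low/medium/high
    binary_r2 = r_squared((income > income.mean()).astype(float), price)                # above/below average

    print(f"interval R^2={interval_r2:.2f}  ordinal R^2={ordinal_r2:.2f}  binary R^2={binary_r2:.2f}")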

42 Plans are underway to supersede the OLMID with a new occupation-oriented database structure, the America's Labor Market Information System-Database (ALMIS-D), with links to a skills-based structure in machine-readable form (called the O*NET) that is destined to replace hard copies of the Dictionary of Occupational Titles (DOT).

43 This widely used model is best described in a regional planning context by W.L. McKee and N.L. Harrell, Targeting Your Labor Market: First Edition (Austin, TX: Texas State Occupational Information Coordinating Committee, 1989), Chapters 7 and 8. The planning model described in theory in that workbook is virtually identical to the guidelines issued by the State of Texas for planning JTPA Title IIA and Title III services as well as for regional planning by Quality Workforce Planning Committees and their successors under the Tech Prep and School-to-Work initiatives. See Mark Butler and Marc Anderberg, Regional Quality Workforce Planning Directors' Handbook (Austin, TX: Texas Education Agency, 1992). Building on the OLMID, the Texas SOICC added data elements and developed its own interface (together known as the SOCRATES system). SOCRATES takes a user through the logical process described by the authors cited in this note. More technical details on how the automated model works, its assumptions, its data sources and limitations are provided in John Romanek (ed.), The SOCRATES System: Technical Reference/User Documentation/Training Manual (Austin, TX: Texas State Occupational Information Coordinating Committee, 1991).

44 The Texas Higher Education Coordinating Board (THECB) requires proof of sufficient occupational employment demand to justify requests for state or federal dollars to fund new postsecondary vocational/technical or Tech Prep program offerings. The THECB does not require Texas's public postsecondary institutions to use the standard planning model to generate supporting occupational employment demand forecasts. However, the THECB does accept forecasts generated by the standard model as prima facie evidence of demand. In practice, any postsecondary institution that submits any other kind of evidence in support of a request for program start-up dollars faces a more difficult burden of proof.

The Texas State Board of Education (SBOE) periodically issues a statewide list of priority demand occupations. Although local education agencies (K-12) in Texas have sufficient autonomy to ignore the statewide priority list, most independent school districts use it voluntarily as a guide when planning their vocational and technical program offerings. In devising the priority list, the SBOE begins with a list of demand occupations generated by the standard planning model, then narrows that list to a subset for which entry-level employment can be obtained on the basis of appropriate training at the secondary level. The final priority list may contain occupations that were not identified through the use of the standard planning model.

Here again, forecasts generated by the standard model constitute prima facie evidence of sufficient demand. Those using different kinds of evidence to justify the inclusion of other occupations on the priority list face a more difficult burden of proof.

As a result of coordination and collaboration prompted by both federal and state mandates, planning for services in Texas funded with JTPA, Carl Perkins and Wagner-Peyser dollars is driven across the board by the same forecasting model. While illustrations herein rely on co-author Anderberg's experiences as a regional Quality Workforce Planning director, the state's JTPA LMI analyst and labor market economist for the SOICC, the Texas experience is not unique. The same federal mandates and initiatives will require comparable reliance on a common forecasting model among each of the partner agencies in any state's employment and training system. In all probability, each state also will engage in coordinated education and training program planning under parallel mandates contained in its own Human Resource Investment legislation or under an executive order by its governor.

45 Statewide data do not differentiate between the prevailing occupational wages paid to senior incumbent workers and those paid to new hires. Moreover, statewide prevailing wage data do not take regional differences into account.

46 The JTPA system, for example, sets a minimum pay threshold. Other workforce development and welfare-to-work programs may skip this step because, under "work first" reforms, clients are expected to take immediate responsibility for at least some portion of their own economic security, with provisions for obtaining the education and training they need during their remaining years of eligibility to move from a low-wage, entry-level job to one that pays enough to give them financial independence.

47 Workforce development programs typically provide eligible participants funding for education and training up to and including the Associate degree level. Thus, the user can set parameters in the model to exclude occupations that require a baccalaureate, post-baccalaureate or professional degree or more than two years of specific vocational preparation. This filter is based on the Specific Vocational Preparation Time (SVPT) established for occupations in the Dictionary of Occupational Titles, which are then crosswalked to equivalent occupational titles in the OES taxonomy.
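The wage and preparation-time screens described in endnotes 46 and 47 amount to simple filters over the demand-occupation list. The sketch below illustrates them with hypothetical records; the field names, wage figures and SVP cutoff are illustrative assumptions, not the actual OLMID or SOCRATES record layout.

    # A minimal sketch of the pay-threshold and preparation-time filters,
    # applied to hypothetical occupation records.
    from dataclasses import dataclass

    @dataclass
    class Occupation:
        title: str
        entry_wage: float  # hourly entry wage, in dollars (illustrative)
        svp: int           # DOT specific vocational preparation code, 1-9 (illustrative)

    def screen(occupations, min_wage=6.50, max_svp=6):
        """Keep occupations meeting the pay threshold and the preparation-time limit.

        SVP codes of 7 and above correspond roughly to more than two years of
        specific vocational preparation, so they are screened out here.
        """
        return [o for o in occupations if o.entry_wage >= min_wage and o.svp <= max_svp]

    demand_list = [
        Occupation("Licensed Vocational Nurse", 9.25, 6),
        Occupation("Fast Food Worker", 4.75, 2),   # fails the pay threshold
        Occupation("Civil Engineer", 14.50, 8),    # exceeds the preparation limit
    ]
    print([o.title for o in screen(demand_list)])  # -> ['Licensed Vocational Nurse']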

48 Under recent welfare reforms, this filter can be skipped for programs that do not provide public assistance to cover education and training or that provide education and training assistance only after the client has first obtained work. The basic assumption of the "work first" model is that the needs of many customers can be met through the labor exchange function without providing them more time-consuming (and expensive) education and training services, and that some clients may not have the "capacity to benefit" from education and training services even if offered to them.

49 These observations originally were made in Anderberg, op. cit. (1993), pp. 13-26 and were restated in condensed form in Appendix C of William D. Witter, Targeting Your Labor Market: Second Edition (Austin, TX: State Occupational Information Coordinating Committee, 1995).

50 Training-relatedness, procurement of education and training services, and the benefits of longitudinal research alluded to in this section are discussed in more detail elsewhere in this Guide.

51 Use of follow-up data in identifying emerging occupations is discussed in more detail in the chapter in this Guide on "Process Considerations" and in Terry Ramsey (ed.), Emerging and Evolving Occupations in Texas (Austin, TX: Texas State Occupational Information Coordinating Committee, 1996) -- especially the Technical Reference Appendix by Terry Ramsey, Duane Whitfield, Jay Pfeiffer and Marc Anderberg.

