Monday, October 9, 2017

Managing quality: TQM in libraries

Riggs, Donald E. “Managing Quality: TQM in Libraries.” Library Administration & Management v. 7 (Spring 1993): pp. 73-78.
Beginning in the late 1970s, continuing through the 1980s, and gaining greater momentum in the early 1990s, total quality management (TQM) has had a significant impact on American industries. TQM began receiving attention from non-profit organizations (e.g., city governments, hospitals, and universities) in the mid-1980s.

One of the first indicators of the importance of TQM occurred in the 1950s when W. Edwards Deming, a statistician, tried to convince the leaders of American businesses they should commence using the principles of quality improvement. Not finding a receptive audience here in the United States, Deming went to Japan and began a revolution from shoddy products and services to those designed for zero defects.

Quality is a difficult term to define. It could be described as the “rail on which the train runs.” Like some other things, one knows it when one sees it. My dictionary defines quality as “any of the features which make something what it is.” Customer/user satisfaction is as good a definition of quality as any.

Why is there a cry for improved quality? Customers/users believe they are shortchanged when they have paid for inferior products and services. Price tags continue to get larger while products and services fall short of expectations.

Why TQM for libraries?
No matter how a library’s management fabric is cut, it becomes abundantly clear that nearly every aspect of a library can still be improved. The installation of TQM in libraries should not imply that the staff has not been engaged in a continuous improvement process. Quite the contrary. TQM provides a systematic, formalized process for focusing on improvements. It is a process that manages by facts, uses tools for analyzing and measuring work, and evaluates progress on a regular basis.

Libraries are essentially service organizations. They have internal and external users. Internally, the reference staff, for example, depends on the work of the technical services staff. Externally, there are users who expect quality services from the library. Based on TQM’s heavy emphasis on user satisfaction, it is an excellent tool for the management of libraries. Who can argue with a library’s intent to offer improved services for the user through a methodical, systematic approach?

A requisite ingredient for a successful TQM program in the library is commitment from the director. Without this commitment, TQM is nothing more than another buzzword. Before accepting this responsibility, the director should know, among other things, how much TQM is going to cost the library and how much time will be required for training the staff. For it to work, managers throughout the library must also share the director’s commitment to and interest in TQM. They must be convinced that TQM is the correct management program for their library and, subsequently, they must devote the necessary time and resources to make it successful. In addition to “talking the talk,” library managers should practice the TQM principles in their daily work lives.

Strategic planning and TQM
Before implementing TQM, the library should already have a strategic plan in place. Can a library begin TQM and concurrently formulate a long-range, strategic plan? Yes, but it will be difficult in terms of reallocating staff time and sequencing the work. A strategic plan lends credibility to the quality improvement process. Mission statements provide the library staff with long-term projections and philosophical directions. Goals and objectives, respectively, specify the broad and more precise intentions of the library. Strategies, in turn, offer the library staff possible courses of action for realizing the goals and objectives. The attributes of a well-designed plan are critical to the success of TQM. “A ship without a rudder” is the best way to describe a TQM program that does not have a supporting strategic plan.

TQM is a complex undertaking; it requires a thoughtful introduction, a thorough training program, and continual library-wide communication. Ideally, before implementing TQM, an orientation session should be held for the entire library staff. After the orientation, specialized training must be held for those participating in TQM. This training does not have to be offered to the entire library staff during the early stages of TQM, but it is crucial to provide it for those who are going to serve on TQM teams.

Staff serving as team leaders should undergo training that prepares them to manage the project team: calling and conducting meetings; assigning administrative details; orchestrating team activities; creating and maintaining channels that enable team members to do their work; and communicating the work of the team with the rest of the library. The library director and the assistant directors should take the team leader training as early as possible.

Assisting the team leader on a respective project will be a facilitator whose responsibilities include observing the team’s progress, evaluating how the team functions, and using these findings to help the team improve its processes. The facilitator’s role is to help move along the team’s work—coaching team members in needed skills and tools—but not to participate directly in the team’s activities. (Peter R. Scholtes, The Team Handbook: How to Use Teams to Improve Quality (Madison, Wisconsin: Joiner Associates, 1988), 3-13)

Team leader and facilitator training can take between three and five working days to complete. If the library’s parent organization has made a commitment to TQM, perhaps the training programs could be shared financially. Otherwise, the library should expect to pay for all of the training. The cost per person depends on the number being trained at one time and who does the training. The library may decide to do its own training after a few staff members have completed the training sessions.

Training zeroes in on the principles and tools of TQM; the process is given heavy emphasis. Participants engage in exercises that delineate problems, extract the root causes of problems, perform work simulations, conduct evaluations, and provide feedback on how to improve the various processes discussed. Taking several staff members away from their regular work for large blocks of training time may have a major impact on various staff areas. In lieu of asking the staff who are receiving the training to leave their work for three to five consecutive days, it may be better to have the training broken down into two or more sessions distributed over a period of a few weeks. Training is paramount and truncating the necessary time for it will result in serious long-term, negative repercussions.

Targeted areas: identification and selection
Normally, an entire library department/unit is not targeted as a candidate for TQM. For example, one should not focus on a project that studies how to improve the hiring of new employees, but on a smaller part or process such as the employment of entry-level clerks. Selecting the first areas for application of TQM may be very sensitive. If the director arbitrarily selects an area without much forethought, the staff in the respective department may come to a fast conclusion that it is not performing up to par and has been singled out as a problem area. Nothing could be more demoralizing to a department/unit than for the director to announce that the respective area has a process in dire need of TQM and, consequently, has been selected to be the very first entity to begin using TQM.

Several alternatives should be considered during the selection of areas; they include asking the entire library to suggest processes that could benefit from TQM, or asking volunteers (involved in a respective process) to participate as the first project teams. Various staff can be involved in selecting the processes that may benefit from TQM. It is important that the initial areas selected result in “success stories.” They should provide a model that other areas can replicate. Beginning the TQM program with poorly selected areas could mean its early demise. Common errors in selecting projects include selecting a process that no one is really interested in; selecting a desired solution instead of a process; selecting a process in transition; and selecting a system to study rather than a process. When selecting the first area for TQM, the library should choose a process that has a direct impact on its users; has a time cycle that can be reduced; is relatively simple, with clearly defined starting and ending points; is something a large number of staff agree is important; and has a lot of visibility. Each library should formulate criteria that can be used in identifying and selecting processes that will benefit from TQM. No single formula will work for every library; each library has to customize its own quality-oriented infrastructure.

After the target areas have been selected, the next step is to establish respective teams to address the specific challenges. Not all team members have to come from the respective areas; some may come from other areas in the library, and some may come from outside the library. Should the department/unit head chair the team? Not necessarily. Quality improvement teams are the basic building blocks of the quality improvement process. They come in three major kinds:
  1. A functional team: library staff from a single work unit. The team is ongoing and its membership is voluntary.
  2. A cross-functional team: staff from more than one functional area who work on targets for improvement that cut across functional lines. The team is ongoing and its membership is voluntary.
  3. A task team: staff from one or more functional areas who solve a particular problem, after which the team disbands. Membership is selected on the basis of the qualifications required.

Assessing the current situation
After the library identifies an area that it assumes can be improved by using the principles of TQM, the respective team has to determine its own reasons for working on this particular area. It may want to survey the users of the products/services of the area, interview individuals from the work area, determine how much improvement is needed, describe the current processes/procedures used in the problem area, and establish quality-improvement indicators. Data on all aspects of the problem area should be collected. These data will be useful in developing the quality-improvement indicators. Brainstorming is a good way to explore a broad range of options; it will be useful in generating ideas, garnering participation by all team members, and encouraging creativity. The team should also stratify the problem area from various viewpoints. After the situation has been stratified, it will be specific enough to analyze.

The analysis phase should primarily address the root causes of the problem area. These causes have to be verified by data, and the root causes with the greatest impact must be identified. Team members are to present ways to discover the “cause and effect” components of the problem area. The following are examples of tools that may be used during the analysis:
  1. “Fishbone analysis,” so named because the diagram resembles a fishbone, is a good tool for diagramming the categories of potential causes (or solutions) of the problem. Subcategories are drawn off the main categories. It is an effective tool for studying processes and situations, but should not be used for planning purposes.
  2. A Pareto chart depicts a series of bars whose heights reflect the frequency or impact of problems. The bars are arranged from left to right in descending order of height, the tallest (most important) first. The Pareto principle holds that 80 percent of the trouble comes from 20 percent of the problems. Pareto charts narrow down which causes to address first, and they are good tools for building consensus among team members. They assist in the search for significance: arranging data on a Pareto chart highlights the “vital few” in contrast to the “trivial many.”
  3. A scatter diagram plots points against a horizontal axis (the x axis) and a vertical axis (the y axis), allowing the team to see whether there is a relationship or correlation between two characteristics. The scatter diagram is often used for further examination of the elements isolated in the fishbone diagram.
Other tools frequently used in analyzing root causes are the histogram, control chart, and dot plot.
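The Pareto ordering described above is simple enough to sketch in a few lines of code. The complaint categories and counts below are invented for illustration; a real team would tally its own problem data.

```python
from collections import Counter

# Hypothetical tally of interlibrary-loan complaints, one entry per incident.
# The category names are illustrative, not taken from the article.
complaints = (
    ["late delivery"] * 42 + ["wrong item"] * 18 + ["damaged item"] * 7
    + ["billing error"] * 5 + ["lost request"] * 3
)

def pareto_table(items):
    """Return (category, count, cumulative %) rows, tallest bar first."""
    counts = Counter(items).most_common()   # descending frequency
    total = sum(n for _, n in counts)
    rows, running = [], 0
    for category, n in counts:
        running += n
        rows.append((category, n, round(100 * running / total, 1)))
    return rows

for category, n, cum in pareto_table(complaints):
    print(f"{category:15s} {n:3d}  {cum:5.1f}%")
```

Reading down the cumulative column shows the 80/20 effect directly: here the two tallest bars already account for 80 percent of all complaints, so they are the "vital few" to address first.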

After the root causes of the problem have been identified, the next step is to select countermeasures (proposed solutions) to the root causes. Some solutions may appear to be obvious, but this rarely occurs. Much care has to be taken in choosing a solution; in making the choice, the team should work from its database (its research options), be as creative as the issue allows, and be diligent in pursuing not just an adequate answer but the “right” answer (QualTech Quality Improvement Program, Team Leader Course Participant Workbook (Miami, Fla.: Florida Power & Light Company, 1987), 4). One highly effective technique is to compare the alternative countermeasures. Each countermeasure should be rated in terms of effectiveness and feasibility: what will each of them involve in terms of people, funding, space, and time? After the countermeasures have been chosen, they can be judged by checking to see whether the root causes have been reduced or eliminated, the quality-improvement indicators have been satisfied, user needs have been met or exceeded, and cost benefits have been achieved.
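The effectiveness-and-feasibility comparison can be sketched as a small scoring table. The countermeasures and their 1-5 ratings below are hypothetical; a team would supply its own from discussion and data.

```python
# Hypothetical countermeasures with team-assigned 1-5 ratings.
countermeasures = {
    "add evening staff":       {"effectiveness": 4, "feasibility": 2},
    "streamline request form": {"effectiveness": 3, "feasibility": 5},
    "automate status notices": {"effectiveness": 5, "feasibility": 4},
}

def rank(options):
    """Order countermeasures by combined score (effectiveness x feasibility)."""
    scored = [(name, r["effectiveness"] * r["feasibility"])
              for name, r in options.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for name, score in rank(countermeasures):
    print(f"{score:2d}  {name}")
```

Multiplying the two ratings is just one plausible weighting; the point is that an explicit matrix forces the team to defend each score rather than argue from impressions.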

The team should develop an action plan to implement the countermeasures; the plan should answer who, what, when, where, and how. An action plan is a technique that catalogs all the things that must be done to ensure a smooth and objective trial of the solution or improvement (Ibid., 4-20). Moreover, the plan must contain standards of behavior that will prevent the root causes from recurring. The standards and solutions should become part of the respective library area’s daily work, and they should be considered for replication in other areas that have identified problems. Periodic checks, with assigned responsibilities, have to be built into the action plan.

Continuous improvement is at the core of TQM. This ongoing improvement process refers to all activities that fall under the purview of TQM. Countermeasures, for example, should be evaluated to see if they require fine tuning or a major overhaul after a specific time. In a sense, nothing in a TQM-driven library should escape some scrutiny to see if improvement can be brought forth in the various processes. Even the problem-solving activity itself should be put under the microscope to see what was done well, what could be improved, and what could be done differently.

Deming translates his methodology for evaluation as the Plan, Do, Check, Act (PDCA) system and describes it in four steps:
  1. Organize an appropriate team that can study a process and decide what change might improve it.
  2. If enough data are not available, tests or studies need to be performed with support from the group. Make the necessary changes, preferably on a small scale.
  3. Observe the effects.
  4. Determine what was learned. Repeat the test if necessary, perhaps in a different environment. Look for side effects.
      (Mary Walton, The Deming Management Method (New York: Putnam, 1986), 86).
After a library has had TQM in place for a couple of years, an external visiting team (people with TQM experience) should be invited to evaluate the library’s TQM activities. The visiting team should seek answers to questions like: “What went right?” “What went wrong?” “Is TQM transforming the culture of the library, and, if so, how?”

Points to remember
  • TQM is user focused. If the user is not the centerpiece of the TQM endeavor, then the library is missing the target.
  • TQM attacks the process, not the people. Deming believes 85 percent of the problems are traceable to the process itself, and just 15 percent to the people (W. Edwards Deming, Out of the Crisis (Cambridge, Mass: MIT Center for Advanced Engineering Study, 1985), 21). Library staff, even at the lowest levels, should be empowered to make day-to-day decisions. 
  • Without commitment from the library director, TQM will have a short life. 
  • Implementing TQM will require additional resources. 
  • A large investment will have to be made in training. A new or renewed culture of quality is one of the dividends realized by TQM libraries. 
  • Instant gratification should not be expected from TQM. Expect at least two or three years before its benefits are evident. Organizations that have failed in their TQM quest did not allow enough time for the benefits to evolve. TQM requires patience; it is not instant pudding. 
  • The TQM principle of continuous improvement will enhance the library’s opportunity to offer more value-added services. A new psychology of value will evolve among library users. 
  • The success of TQM depends largely on how well its philosophy and expected benefits are communicated within the entire library. 
Libraries stand to gain much from using the principles of TQM. Given the service orientation of libraries, it is heartening to see a commitment to improve service further and make user satisfaction a top priority. TQM should not be perceived as a panacea; it is simply another management technique that focuses on continuous improvement in a formal, systematic manner. Its emphasis on benchmarking and reducing cycle time, for example, is commendable and appropriate for managing libraries. Focusing on cycle time means speeding up the total time, start to finish, that it takes to complete a library transaction. Who can argue with reducing cycle time in libraries and becoming more responsive to our users?

Are libraries a good fit for TQM? Deming responds by stating that “service organizations need quality improvement even more than business or industry.”
(W. Edwards Deming, Quality, Productivity and Competitive Position (Cambridge, Mass.: MIT Center for Advanced Engineering Study, 1982), 235). An all-out quality commitment could possibly require some libraries to move from entrenched habits (e.g., we’ve always done it this way) toward creating a somewhat new attitude about user services. In closing, I offer the following “quality creed”: “We shall strive for excellence in all endeavours. We shall set our goals to achieve total customer (user) satisfaction and to deliver defect-free, premium-value products on time, with service second to none.” (author unknown)

Monday, October 2, 2017

Can management by objectives and total quality management be reconciled?

Passi, Wolfgang J. “Can Management by Objectives and Total Quality Management Be Reconciled?” Total Quality Management, March 1993, Vol. 4, Issue 2, p. 135, 7 p.
Management by objectives (MBO) goes back to 1954, when Peter Drucker (Drucker, P. F. (1954) The Practice of Management (London, Pan Books)) proclaimed it the management system of the future. It was credited with the flourishing of Western business in the 1950s and 1960s, and it seems like common sense:

The tasks to be carried out are described.

They are associated with expected outcomes.

These are quantified, and time constraints and conditions are specified.

The people entrusted with carrying out the tasks are accountable for achieving these defined, measurable objectives.

The system is based on a hierarchical structure in which objectives, sub-objectives, and sub-sub-objectives are passed ‘down’ in a cascade, so that each employee is governed through a set of performance standards or work quotas. Reddin (1977) defines MBO as: “The establishment of effectiveness areas and effectiveness standards for managerial positions and the periodic conversion of these into measurable time-bounded objectives linked vertically and horizontally and with future planning.” (Reddin, W. J. (1977) Effective MBO (New York, McGraw Hill))

Unfortunately, MBO often encourages quantity over quality, short-term over long-term success, controls over dynamic improvements and innovation, the emphasis on results rather than on the processes leading to them, individual accountability rather than scrutiny of the systems within which staff work, individual performance over team work, and inward orientation over customer orientation. Since all this is happening despite warnings from interpreters of MBO, the system appears to be somewhat conducive to these pitfalls.

Over the last couple of decades the success rate of MBO-led companies and organizations has become less spectacular in comparison with a new management philosophy which establishes the relationship between quality and competitiveness and which has been championed by Dr W. Edwards Deming and other consultants (Scherkenbach, W. W. (1987) The Deming Route to Quality and Productivity (Washington DC, Ceep Press); Deming, W. E. (1982a) Quality, Productivity and Competitive Position (Cambridge, MA, MIT Centre for Advanced Engineering Study); Deming, W. E. (1982b) Out of the Crisis (Cambridge, MA, MIT Centre for Advanced Engineering Study)). It has, in particular, helped Japan to overtake the US economically. This philosophy is called total quality management (TQM).

TQM seeks continuous improvement in the quality of performance of all the processes, products and services of an organization. It emphasizes the understanding of variation, the importance of measurement, the role of the customer, and the involvement of employees at all levels of an organization in the pursuit of such improvement. An important feature of this philosophy is that management plays a key role and carries the responsibility for the bulk of mistakes, defects and waste.

When comparing modern texts on MBO and TQM, there seem to be no striking contradictions, only shifts of emphasis. However, MBO is built on old, ingrained attitudes, and some modern aspects are inadequately emphasized. Although MBO could claim flexibility at the time of its formation, it now appears rigid when contrasted with TQM. Table 1 contains a brief comparison of the two management systems.

Table 1. Comparison of MBO versus TQM

  Aspect                     MBO                                TQM
  Company culture            Emphasis on financial management   Emphasis on customer satisfaction (quality)
  Focus                      Result orientation                 Process orientation
  Organizational structure   Hierarchy                          Matrix (network)
  Strategy                   Three steps: (1) set objectives,   Four steps: as in MBO, plus an
                             (2) direct their attainment,       additional improvement step
                             (3) measure results
  Operation                  Setting of numerical objectives    Designing desired outcomes (quality)
                                                                into performance systems

Never the Twain?
Is there a question of choice between the two systems, exchanging one for the other? I think not. According to my own experience and that of others, TQM ideas may be used to further develop and modernize MBO. However, this will make attitudinal changes necessary, which is difficult within an established system. TQM thinking would improve the traditional MBO system in several areas, as outlined below.

Dedication to quality and improvement
Although MBO includes improvement in its system, in particular concerning organizational and staff performance, progress and competitiveness are founded on financial strategies. There is some talk of customer orientation. The term ‘quality’ is often missing in the indices of MBO texts (Drucker, P. F., (1979), Management (London, Pan Books); McConkey, D. D., (1983), How to Manage by Results (New York, AMACOM)). The TQM principle of continuing improvement is much more comprehensive. Dedication to quality in TQM becomes the organizational culture, with much greater emphasis on the processes leading to the ‘results’. Thus, the organization, the ‘system’, becomes much more ‘staff friendly.’ By tailoring quality precisely to the customer’s needs, TQM is also by far more ‘customer friendly’ than MBO.

We must not forget that MBO was formulated at a boom period, when manufacturers and providers of service could sell almost any product or service because of great demand, with only limited need for ‘quality’.

Organizational structure
MBO is based on the outdated hierarchical system, with the general manager at the top of a pyramid and managers at the tops of the sub-pyramids. There is continuous talk of ‘up’ and ‘down’. The reliance on status and superior-subordinate relationships bears the historic shackles of the master-slave (later servant) polarity. This attitude also ignores the fact that most modern economies have comparatively highly educated workforces whose potentials are inadequately tapped.

Management as a separate layer is counterproductive—a manager ‘sitting on top of a pyramid’ can be perceived as a burden. If we have to stick to the pyramidal structure, a useful manager would be one that supports the tip of an inverted pyramid. Managerial positions distinguish themselves by their authority and power to provide resources and to modify the system within which staff work, to enable them to achieve optimum results.

This is not to say that proponents of MBO have not seen the writing on the wall. McConkey (1983) notes: “The traditional, hierarchical form of organisation structure, usually portrayed as a pyramid, will be replaced by ‘matrices’ or ‘networks’ of teams formed to achieve the needs of the organisation.”

Management strategy
MBO states three main steps: (1) establish objectives, (2) direct their attainment, (3) measure the results. This strategy can be represented by a circle which allows comparison of ‘what was wanted’ with ‘what has been achieved’. According to TQM, this is inadequate. MBO underemphasizes the need for a review step which allows improvement of similar processes in the future. Lifting the quality of processes within a category to ever higher levels can be perceived as an upward moving spiral. TQM relies on the participation of the entire staff for continuous improvements everywhere in an organization. Simply ‘doing a job according to the book’ is not enough!

However, it must be acknowledged that with MBO a balanced, participative management style replaces the authoritative one. Because of clear definitions of authority, decisions can be made at the point nearest the action, at the time when action is to take place. This is helped by in-depth delegation, which safeguards that the right information is passed on at the right time, to the right place. Clear accountability prevents ‘decision drift’ and procrastination. At the same time, modern MBO encourages change, and keeps policies and procedures flexible and to a minimum. Controls are tight but minimal. All this ensures a measure of dynamism.

Planning and processes
MBO must be credited with replacing fuzzy objectives with well-defined ones; these are based on goals which in turn emanate from a mission statement. Subordinates participate in objective setting and planning. Results are not (officially) used for immediate apportionment of blame but serve as yardsticks by which the employees can ‘measure their own performances’.

Unfortunately, followers of MBO have gone too far, placing too much emphasis on numbers. Often, the chosen numerical objectives are purely fictitious and often adequate groundwork has not been done. It is interesting to note that McConkey (1983) also lists the tendency to ‘quantify everything’ as one of the pitfalls of MBO.

Proponents of TQM are in total opposition to numerical quotas and performance standards. There is no question that it is always possible to improve on a performance, but what is the setting of a numerical objective supposed to achieve, unless we can calculate the outcome from our inputs? Situations where staff sabotage the achievement of an objective are rare; as a rule, the opposite is the case. Hence, if someone is unable to achieve the target, what advantage is there in making this person feel bad? Or do we want to encourage the setting of ‘safe’ objectives? Or, if the target can be surpassed, are we to gain from discouraging overachievement?

Focus on numerical objectives often distracts from the real goals and can even be harmful. There is also, by necessity, a bias in the setting of the so-called ‘performance indicators’ to prove achievement. Since not everything can be measured, management must select them. What if a dieter simply assessed ‘progress’ in losing weight by standing on scales, ignoring body build and state of health?

In line with its axiom that quality must pervade the entire organization, TQM demands the designing of all processes and systems in such a way that the desired results are assured. Building the necessary ‘quality’ into the entire system ensures that delegated tasks have adequate support. There is also less risk of compartmentalization and segmentation of work.

TQM thus focuses overwhelmingly on the planning and design of systems and processes instead of numerical objectives. It is not surprising to find the opposite approach in MBO texts.

Assessment of results
This is the area where TQM contradicts MBO most. MBO is dedicated to the performance appraisal of individual staff, TQM to the continuing critical assessment and improvement of systems. Arguments against performance appraisal are as follows:

(1) With performance evaluation, one may be inclined to disregard that employees work within a group. The individual results are influenced by all members of the group.

(2) With performance evaluation, one may be led to overlook the fact that employees work within a system (for which management is responsible). Mager and Pipe (Mager, R. F. & Pipe, P. (1984) Analyzing Performance Problems (Belmont, CA, Lake Publishing Company)) list 12 points to be clarified concerning the system within which the employee works, before the issues of ‘sanctions’ and ‘rewards’ should be tackled. The necessity of investigating possible system failures, and the fact that ‘management’ is responsible for 80% (or more) of failures, are even discussed in MBO texts. Nevertheless, ‘sanctions’ and ‘rewards’ regularly appear as the alternatives in performance evaluation flowcharts!

(3) With performance evaluation, one may tend to ignore that employees work within variability and instability. No two jobs are precisely the same. If the variability of a system is due to ‘common causes’ inherent in the system, and not to any ‘special cause’, employee performance will be randomly scattered around the average independently of the employee’s efforts! A particularly unfair situation may arise where employees are singled out, often unconsciously, for the performance of certain tasks (because of their special skills or aptitudes) but are assessed by the number of tasks performed and not the difficulties associated with them.

(4) Employees are invariably evaluated within an evaluation system that is biased and inconsistent. The evaluator is by necessity under pressure to grade employees as average, above average or below average. The evaluation also depends on current company policy and evaluator philosophy, no matter whether or not these can be sustained. The performance of the employee is also strongly influenced by the Pygmalion and Galatea effects, which have been investigated extensively (Livingston, J. S. (1969) Pygmalion in management, Harvard Business Review, July-August, pp. 81-89; Rosenthal, R. (1973) The Pygmalion effect lives, Psychology Today, September, pp. 56-63; Babad, E. Y., Inbar, J. & Rosenthal, R. (1982) Pygmalion, Galatea and the Golem, Journal of Educational Psychology, 74, pp. 459-474). The first effect is connected with the phenomenon of a subordinate’s performance rising or falling to meet the manager’s expectations; the second concerns a person’s performance rising or falling according to his/her own expectations. In addition, employees themselves may overestimate the influence they may have had on an outcome.
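The common-cause argument in point (3) has a simple statistical reading: if individual output stays within the control limits computed from the system's own variation, the week-to-week differences carry no information about individual effort. The weekly figures below are invented to illustrate this, not drawn from the article.

```python
import statistics

# Illustrative weekly counts of catalogued items for one staff member;
# invented numbers showing ordinary common-cause scatter.
weekly_output = [48, 52, 50, 47, 53, 49, 51, 50, 46, 54]

mean = statistics.mean(weekly_output)
sigma = statistics.stdev(weekly_output)
upper, lower = mean + 3 * sigma, mean - 3 * sigma

# Every point inside the 3-sigma limits: the variation is common-cause,
# so ranking weeks (or people) by these numbers would reward pure chance.
in_limits = all(lower <= x <= upper for x in weekly_output)
print(f"mean={mean:.1f}, limits=({lower:.1f}, {upper:.1f}), all in limits: {in_limits}")
```

Only a point falling outside such limits would signal a ‘special cause’ worth investigating; everything inside them is the system talking, not the employee.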

Other disadvantages associated with performance assessment are given below.
(1) No one likes assessments.
(2) Competition dampens cooperation, teamwork and the inclination to help others.
(3) Systems are likely to be squeezed.
(4) Spontaneity is discouraged; important tasks not listed as objectives are neglected or at least delayed; this may be particularly detrimental in research and development.
(5) If assessments are to mean more than a toss of the coin, they are prohibitively time consuming.

Some benefits are often listed as results of individual performance appraisal:
(1) Staff receive direction.
(2) Staff receive feedback on their performance.
(3) Staff training needs can be developed.
(4) Communication between management and staff is fostered.
(5) Staff can be promoted and rewarded according to performance.
All these are better achieved through other means.

Promotions and rewards
A person’s inclination or capability to take on a wider range of responsibilities or more demanding work is often not visible in a given job situation and thus cannot be detected by performance appraisal. It is better to give staff special assignments, or to allow them to show their talents in improvement projects. Listening to ‘customers’ is sometimes revealing; they may prefer the services of certain people. In general, a flat organizational culture is preferable. In an age where most mindless or repetitive jobs are carried out by machines, it is becoming increasingly difficult to decide which job is more demanding than another. Indeed, if there is no machine for a mindless job, staff assigned to it may have to be paid more than others in less boring jobs.
The Deming school states:
To take something as unreliable and capricious as performance appraisal and use it as a basis for salary increases, wages, bonuses, merit pay, etc., turns the reward system into a form of lottery. Flipping a coin would be more fair.
Personnel policies:
(1) must not reward people for being lucky or punish people for being unlucky;

(2) must not induce fear or create barriers; rather fear must be driven out and barriers broken down;

(3) must not tamper with anyone’s system of internal motivation;

(4) must encourage everyone to work together to accomplish the transformation;

(5) must foster a climate of teamwork and trust.
Incentive plans. Proponents of MBO sometimes favour such plans as superior to fixed salaries because incentives can be better tied to performance. Yet there are serious risks associated with incentives as a means of eliciting higher performance. Attainment of short-term benefits may take precedence over long-term benefits. Quantity may be pushed at the expense of quality. The system may be squeezed, short-cuts taken, successes faked. Ultimately, customers may become dissatisfied and markets may be lost. However, incentives may be useful for achievements outside ordinary job performance (improvements, novel concepts, gaining of customers, etc.).

If we wanted to adhere to the MBO approach, but introduce the progressive thoughts of TQM, what would we have to do?

(1) Base long-term competitiveness and success primarily on quality management instead of financial considerations.

(2) Replace any hierarchy with a horizontal matrix where individuals and teams relate to each other as providers and receivers of all services. (This system does not end at the doors of the organization or company but extends to all external suppliers of materials and services as well as to all external customers.)

(3) Extend the MBO strategy by including a review step for continuing improvement.

(4) Thoroughly study system performance. This may be expressed numerically.

(5) Beware of introducing numerical targets into objectives. Go for the ‘optimum achievable’ under the current system. If this optimum should prove unsatisfactory, improve the system.

(6) Replace performance appraisals of individual staff with performance appraisals of systems.

(7) Award salaries according to ‘market rates’, seniority or the overall prosperity of the organization.

Adopting these recommendations will not only improve staff morale and cooperation, but also help an organization to become more prosperous and more competitive!

If we introduce TQM into MBO by putting into practice these recommendations, have we not changed the salient features of MBO? I believe that only redirecting performance appraisals from people to systems would be a fundamental alteration. MBO showed itself superior to the management systems preceding it and has been refined over the past decades. Introducing the ideas of TQM would mean just another step in the evolution of MBO.

Monday, September 25, 2017

How situational leadership fits into today’s organizations

“How situational leadership fits into today’s organizations.” Supervisory Management, Feb. 96, Vol. 41, Issue 2, p. 1, 3 p.
The trend toward consensus-driven decision making is causing managers and supervisors to question when they can make independent decisions. The answer may lie in situational leadership, a concept that is far from new. It recognizes that the best managers are those who know when to lead by consensus but who are also confident making independent decisions whenever and wherever appropriate.

Under situational leadership, managers adapt their leadership and decision making styles to the situation, the time, and the people involved.

“Make it so” Management
There is nothing wrong with a team leader who makes the final decision, even within a team structure. This is evident from the reaction to the leadership style of Jean-Luc Picard, captain of the Starship Enterprise (think “team leader” or “manager”) in the TV sci-fi program Star Trek: The Next Generation. A manager/trainer who is a fan of the show told me, “He is the kind of leader most managers would like to believe they are.”

In their book Leadership Lessons from Star Trek: The Next Generation (Pocket Books, 1995), authors Dr. Wess Roberts and Bill Ross demonstrate how richly Jean-Luc Picard illustrates the leadership qualities that are indispensable for managers today.

When the crew and ship face a crisis, Picard brings his direct reports together to discuss their options, listens empathetically to their ideas, then decides on the option that he believes has the best chance of success. He does not get drawn into the details. He empowers his crew to carry out the plan he has chosen with three words that have become familiar to every fan of the show: “Make it so.”

Mastering four styles of leadership
Interestingly, authors John D. W. Beck and Neil M. Yeager refer not only to Jean-Luc Picard but also to Captain James T. Kirk (from an earlier series built around the same sci-fi theme), along with some real-life executives, in their more traditional management book The Leader’s Window (John Wiley & Sons). Their book identifies four leadership styles to be used to create high-performing teams.
  1. Directing. Directing involves making decisions for those new to their responsibilities to help them avoid mistakes. 

  2. Problem solving. “The leader seeks input from those people who have to live with the consequences of decisions, meets only with those people who need to be involved, runs effective meetings when they are needed, and makes assignments that speed up the decision-making process,” write Beck and Yeager. 

  3. Developing. Another term might be “facilitating,” since a leader whose primary style falls into this category would listen to team responses to questions raised, paraphrase key points, be aware of both verbal and nonverbal communication, and summarize the issues for the group. 

  4. Delegating. By delegating authority as well as responsibility, say Beck and Yeager, the manager “empowers members of the team to make decisions and take actions in areas where they have expertise and are motivated to follow through.” Abdication is the extreme of delegating, where the manager provides no presence and staff feel “they have been left out on a limb.” 
In which categories do our two Star Fleet captains primarily fall (a question many supervisors/Star Trek fans might wonder)? Beck and Yeager believe they primarily operate within Styles 2 and 3. “When there is a problem that requires their attention,” write Beck and Yeager, “Kirk and Picard summon experts and engage in intensive problem solving to find a resolution. If the problem is of a highly technical nature regarding an aspect of the ship they know little about, they form a team of experts and support its efforts in solving the problem.”

The bottom line
Whether a decision is better made by a group is best determined by these factors:

The need to buy in. Goal setting is something best done in group settings in which members commit to the final decisions reached. Here, consensus-based decision making may be critical, for it increases the likelihood of success. The more people participate in the decision-making process, the more ownership they have in the outcome, and, therefore, the harder they work.

The creativity involved. If you need a new view on an old problem, then you might want to bring together a group. But this doesn’t necessarily mean that the final decision should be made by the group, although the information that comes from the group members should strongly influence the final decision.

The timeline involved. If time is scarce, then a decision is best made by the team leader.

The need for a decision that reflects the bigger picture. Team members may have some idea about the situation, but they may not see the problem as their supervisor does, from a strategic view. Supervisors should not let the current movement toward teams cause them to devote staff time to unnecessary meetings or to abdicate their leadership role. After all, being decisive is as much a characteristic of a good leader as are cooperation and a sense of teamwork.

Monday, September 18, 2017

Quality is not a quick fix

Freeston, Kenneth. “Quality is not a quick fix.” Emergency Librarian Volume 22 (May/June 1995), pp. 14-19.

Remember when problem solving was the rage in educational journals and workshops? We all thought that if we could just teach ourselves and kids how to solve problems, our schools and our world would be better places. We produced students and teachers who could generate a multitude of solutions. Regrettably, many of us forgot the importance of problem finding, the critical first step to the problem-solving process.

The quality movement is gaining popularity as a solution. Signals of the pursuit of quality now appear in journals, popular media and a smattering of national organizations ready to train people in the latest solution. While there is mounting evidence that only quality-oriented organizations can survive in the future, unless we go about our business of change in dramatically different fashion from our past attempts, the quality movement in schools will be doomed to the same familiar failings of other annual trends and quick fixes. Well-meaning educators will adopt quality as a solution before spending time articulating the problems it addresses.

Organizational leaders throughout the world are achieving significantly improved results by applying the quality sciences to their organizations. Each leader would tell us that this process is, simply put, hard work. Once understood, the work of Deming, Juran, Crosby, Glasser and a host of other experts substantially improves organizational culture and outcomes. Often, when these quality science tenets are applied to the educational setting, they are mistakenly seen as quick-fix solutions by superintendents, school boards, teachers and parents, and are not recognized as the core element necessary to restructure our schools.

A commonly used phrase applies here: people who know where they are going are more likely to get there. When going in the direction of quality, educators need to anticipate the formidable obstacles that block the way. This process reveals as much about the deep resistance to change that is present in schools as it does about school improvement. Obstacles block desired paths: they are not reasons to stop movement. Educators who spend the time finding the problems, the obstacles, will have a better understanding of how to achieve quality improvement.

The teacher-librarian is in a unique position to assist with an understanding of the obstacles outside the school system. For too long, schools have not stayed current with changes in the outside world. Teacher-librarians are aware of the increasing pace of change – the amount and type of information available. It is here that the teacher-librarian should assume a leadership role, assisting colleagues in their search for appropriate information, to gain an understanding of the hurdles to quality education.

The word Quality itself
The first hurdle is often the term quality itself, which is seen by many as a platitude, a hollow phrase with no substance or meaning. Regarded as laudable, quality is widely perceived as being unobtainable, as are truth, beauty and justice. The word is used freely by advertisers for everything from sophisticated electronics to second-rate products. As a result, the term has no meaning to people who hear it applied to management theory for the first time.

When applied to organizations, quality is quite difficult to define. Those who understand and apply quality know that slogans and superficiality have no place in a quality setting. To gain educators’ acceptance, we have to move beyond the notion that quality is undefinable and that “we know it when we see it.” The essence of quality is substance. A consensus is now emerging on the definition of quality as a clear system of continuous improvement that meets customer needs. Only after training and application do these terms carry their intended meaning.

After displaying an initial interest in quality, many people quickly give up trying to learn more about it once they confront the bulky and difficult-to-understand language – emanating from management writers – that currently describes the quality sciences. Prematurely, many decide that the idea cannot be applied to schools.

Although achieving quality is very hard work, maintaining it is even harder. Workers, whether in schools or corporations, work harder and smarter when the work meets their needs.

Corporate world as the model
Skeptical of a school improvement model that comes from faltering American corporate structures, educators are reluctant to apply quality to schools. Many of us do not look at corporate life in America as an example of success, either in terms of results or of ethics. On closer examination, however, we find that it is that failing of corporate culture that the theories of W. Edwards Deming and others address (Walton, 1986).

Joel Barker has popularized the work of Thomas Kuhn regarding the importance of paradigms in the way we think about change (Barker, 1989). One of the reasons so many American corporations fail is that they do not recognize that marketplace paradigms have changed (Dobyns, 1992). A generation ago, the company that won was the company that made the most product; now the winning company makes the best product. In conventional marketplaces, the seller retained power over product design and manufacturing. In actuality, the buyer always had the power, and therein lies the paradigm shift. The buyer now expresses that power through the desire to purchase quality. Companies that have understood the paradigm of customer satisfaction – whether a low-technology company such as Lands End or a high-technology company such as Motorola – have achieved remarkable success.

What is the American response to foreign companies that embrace quality first? We bash them. We blame them. We think they are the cause of economic downturns.

Through the direct leadership of W. Edwards Deming in the 1950s, Japanese governmental and corporate leaders adopted the notion of quality and propelled themselves into a leadership position in the world marketplace. At the same time, American corporate leaders rejected Deming’s thinking and concentrated on issues that were tangential to quality. In a classic example of wrong-headed thinking, some American corporate leaders now blame Japan for the failing American corporate structures. This kind of blaming is wrong-headed because limiting the import of quality products will not help the American corporate structure, the economy, or the consumers. Even tax cuts, as psychiatrist William Glasser points out, are not the solution (Glasser, 1991). Given the choice, American consumers will spend their newfound dollars on quality products, thus deepening recessionary trends for countries that do not make the best.

Deming’s ideas work, but they encounter resistance when applied to schools. Some of that resistance resides in the language used by him and other management theorists to explain quality; some of it comes from perceived weakness in the American corporate structure. Much of the resistance, however, resides in two areas: leadership and change.

Leaders of quality organizations must live and breathe the essence of quality. In every action they take, every decision they make, they are role models for the rest of the organization. Although a quality school is not a top-down setting, such a school will not come into being unless the school leader is the champion of quality. In my view, two of Deming’s 14 points are critically important to leaders: constancy of purpose and self-evaluation.

Deming asserts that 94 percent of the problems that exist within an organization are within management’s power to solve. Yet those who occupy leadership positions in our schools are perhaps the single greatest obstacle to implementing a quality approach to the teaching and learning process. School leaders are so overwhelmed by financial, political and statutory constraints on their actions that they perceive themselves as powerless to effect real change in schools.

Over the past decade, schools across the country developed mission statements. Generally in narrative form and written by broad-based committees, these statements tend to read as a list of platitudes. Once written, these well-intentioned efforts often play no continuing role in schools. Specifically, school and instructional practices remain unexamined for consistency with the mission. In a quality school, constancy of purpose is the critical factor. Whether in Sitka, Alaska; Johnson City, New York; Madison, Wisconsin; or La Joya, Texas, schools have a constancy of purpose. The leader articulates that purpose endlessly to all internal and external customers.

Early systems of management theory that were based on inspection of workers failed because the inspection model assumed that fear would motivate the workers to higher levels of productivity. Someone was watching, rating and ranking. In a quality school, leaders drive out the fear by eliminating inspection for staff and program evaluation. Collecting information is important to making better decisions, but that information cannot be gathered usefully in a culture characterized by fear and mistrust. To optimize the school’s mission, every aspect of its work should be critically self-evaluated. In schools, the obstacles to a self-evaluation process are considerable, given the public’s concern over student performance and the widespread political pressure for school improvement.

These changes hold interesting consequences for recent initiatives in our profession, such as school-based management. Such efforts at collaborative decision making in schools are good, but taken alone, they are short-range quick fixes without a leadership commitment to constancy of purpose and self-evaluation.

Just another change
We are victims of our own scattered and disjointed attempts to change. We read an article, attend a workshop, or hire a consultant and get excited because we mistakenly think we have found the answer. In reality, all we have found is a short-term solution, one that lasts only until the next workshop. Unless schools shatter the norms that work against quality, we will continue to use impulse reactions to ill-defined problems.

Schools across the country are staffed with educators who we think do not need to change. By conventional measures, their students perform well. Our past successes guarantee us nothing, however, when change occurs (Barker, 1989). Remember that the Swiss invented the quartz watch, but because it did not meet their definition of a watch, they gave the patent away to Texas Instruments and Seiko. It is because of this resistance to change that the teacher-librarian must impress upon the staff how thoroughly the world of information has been altered over the last decade. Technology and communication have experienced the most radical changes, and because of this, instruction must change too.

Judy-Arin Krupp and other experts on adult development provide valuable insight into the effects adult development stages have on school culture (Krupp, 1981). Schools that expanded during the growth-oriented era of the 1960s now find themselves with a majority, in some places as high as 75 percent, of teachers over the age of 50. Adult development theorists have a lot to say about how these older professionals approach change: they wait it out. Annually, these teachers experience the unbridled enthusiasm of younger teachers and new administrators who attempt to win support for the latest trend. How often have we seen them greet new ideas with the mellow, seasoned response of “this too will pass”. Look at back volumes of educational journals, and you will discover that it is the rhetoric we frequently associate with change that has caused the skepticism of our senior and experienced faculties.

One year at a time
The conventional planning process for schools has always been limited to a year-to-year basis. Schools everywhere are funded on annual budgets and, therefore, have to justify the existence of programs and changes. State legislatures convene annually and change the bureaucratic requirements that reign over local school systems. Boards of education require annual reports and other rituals based on a year-to-year approach to planning. Even something as pedestrian as a teacher’s planning book contains only enough space for one year.

Partly because of this orientation and a 10-month year, time passes too quickly for teachers. Shortly after the frantic rush of concluding one school year, we begin the frantic rush of preparing for another. The symbolism of this short-range planning is obvious; its effects are disastrous. This pattern of thinking leads well-intentioned people to quick fixes. We mistakenly seek closure as a goal. Remediation and special education practices perpetuate this idea in their emphasis on short-range instructional planning. As quality-oriented educators, we can begin to make improvements in our schools when we drop the year-to-year pattern of thinking about our problems.

Think of a goal or want that you achieved recently. What was your immediate reaction? For most people, a void or emptiness follows the short-lived satisfaction. New needs, wants and goals surface. It is this flow of goal/achievement/new goal that characterizes continuous improvement, a long-range approach to planning that is a core concept of quality.

Although similar to elements of strategic planning and other problem-solving models, continuous improvement is a cycle of planning, doing, studying and planning again. The process never stops. It begins with a valid statement of wants that is then filtered through beliefs and profound knowledge before the action planning begins. This plan-do-study approach characterizes the difference between continuous improvement and a blitz of quick fixes.
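The plan-do-study cycle described above can be sketched as a simple loop. This is a hypothetical illustration, not from the article; the function names and numbers are invented. Each study step feeds the next plan, which is what distinguishes continuous improvement from a one-shot fix:

```python
# Hypothetical sketch: the plan-do-study cycle as a loop. In practice the
# cycle never stops, so "closure" is never the goal; here we run three turns.
def continuous_improvement(state, plan, do, study, cycles=3):
    for _ in range(cycles):
        goal = plan(state)        # state a want, filtered through beliefs
        result = do(state, goal)  # act on the plan
        state = study(result)    # evaluate; this becomes the next plan's input
    return state

# Toy usage: nudge a numeric "quality score" upward a little each cycle.
final = continuous_improvement(
    state=60,
    plan=lambda s: s + 5,                   # aim slightly higher each time
    do=lambda s, goal: (s + goal) / 2,      # partial progress toward the goal
    study=lambda result: round(result, 1),  # record what actually happened
)
print(final)  # 67.5 after three cycles: 62.5 -> 65.0 -> 67.5
```

The point of the sketch is structural: the output of study is the input to the next plan, so improvement is a cycle, not a finish line.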

I know that already
Deming asserts that we need to base decisions on profound knowledge. When first applied to schools, this is interpreted as gathering an understanding of existing research. The teacher-librarian should be the role model for lifelong learning, using the latest retrieval systems and instructing colleagues who are less familiar in their use. The research process itself is modeled by the teacher-librarian, utilizing an information skills structure such as the “Big Six Skills” by Eisenberg and Berkowitz.

Veteran teachers have a wealth of experience that is often overlooked when constructing a knowledge base. Schools need to look inside, as well as outside, when gathering knowledge. Data searches are valuable; but, when consulted and engaged, senior educators can also be excellent resources for the change process.

Collecting the right information and using it to plan and evaluate improvement is essential. Expertise in this area often exists, untapped, in a school’s community. In Newtown, Connecticut, community advisory groups are a regular part of the improvement process. When bringing its mathematics curriculum in line with NCTM standards, the school district contacted area corporations and asked them to nominate to an advisory group people whose jobs required a high degree of mathematical competence. Experts emerged in fields ranging from laser technology to statistics. Once convened, the advisory group validated the need to alter mathematics instruction and assisted the district in making the changes.

“I know that already” is the death knell for change in a school. With information doubling every two to three years (Roberts & Hay, 1989), we can’t possibly “know that already” very often or for much longer. Once we develop experience in basing change on knowledge and shared values (constancy of purpose), we will move schools forward.

By continuing to model the gathering, synthesizing and evaluation of information from a variety of sources, teacher-librarians will find themselves key players in the restructuring process.

Students don’t value school
In the fashion of Lake Wobegon, many schools throughout the country meet traditional expectations well. However, good enough is no longer good enough. In quality schools, the entire bell-shaped curve shifts to the right, with learners at all levels of performance improving their achievement through the establishment of higher standards once quality is embraced.

Phil Schlechty, president of the Kentucky-based Center for Educational Leadership in School Reform, sends a wake-up call to senior faculties and educational leaders throughout the country when he observes that high schools are places where young people come to watch older people work (Schlechty, 1989). Students, whom Schlechty refers to as knowledge workers, take on a different posture in quality schools. The problem becomes defined as: how do we convince students that learning adds quality to their lives?

To move our students toward a commitment to lifelong learning, it is essential to provide them with the appropriate information skills. The success that students experience in learning will provide the motivation to continue (achievement motivation).

Following the research done by psychiatrist William Glasser in American high schools (Glasser, 1990), the faculty and students of the Newtown, Connecticut, high school surveyed its student body on issues of quality (Freeston, 1992a). Alarmingly, students in Newtown are similar to students in Glasser’s research. Like students everywhere, they know when they produce quality work. Ask them, and they’ll tell you they don’t do it very often, and when they do, it’s on the field or in the orchestra (Table 1). We have not been effective at teaching students that learning adds quality to their lives.

Table 1: Student survey results*

How would you characterize the level of effort you normally expend in your class? 6
What level of effort are you capable of maintaining in your class over a marking period? 8
How many students do you know are doing their best possible work most of the time? 4
Looking at other students, how hard do you think most of them are working? 5
In what activity or class is your best effort demonstrated in the present school year? Over 50% cited music/athletics

*Mean student responses on a scale of 0 to 10; 0 is low, 10 is high.

Deming asserts that we have to drive the fear out of organizations. One way of driving out fear is to reduce or eliminate inspection-driven, coercive models of evaluation for students and staff, and replace them with the power and validity of self-evaluation.

Recent assessment developments, such as the New Standards Project, will provide more comprehensive measures of student accomplishment, because they call for the student to self-evaluate. Schools that embrace continuous improvement collect information and regularly use it to make better decisions. There is an openness to data, not a fear of it. There is hunger for ever-changing techniques based on new information. Information is not feared, hidden, or manipulated.

It’s not my fault
Educators everywhere in America are bombarded by complaints of diminishing student achievement. These attacks have led many of us to respond in a defensive way by pointing to the changed nature of the learner. The changed nature of the family and the deplorable conditions in which children live (Table 2) do indeed shatter the American myth of the Norman Rockwell family.

Table 2
Every 26 seconds a child runs away from home.
Every 13 seconds a child is reported neglected or abused.
About every minute an American teenager has a baby.
Every 9 minutes one of our children is arrested for a drug offense.
Every 40 minutes one of our children is arrested for drunken driving.
Every 3 hours a child is murdered.
Every 53 minutes one of our children dies from poverty.

Growing numbers of schools now understand what changes are necessary to restructure. These changes have little or nothing to do with the student or with family or personal problems. We have to see these deplorable social conditions as context, not product. Unless we are truly going to restructure, when we say all children will learn, we probably should add a footnote: unless you happen to come from a broken home. We need to recognize the changed nature of the student and forcibly change the way we teach (Freeston, 1992b). In how many schools do we together openly debate a collective belief system? In how many schools do we publicly commit to the achievement of high-risk, high-stakes standards for all students? In how many schools do we acknowledge that all people (teachers, students and parents) choose behavior to meet their basic needs? In how many schools do we make meeting or exceeding those basic needs the heart of our mission?

A Question of culture?
Introductory economics classes traditionally examine a nation’s or region’s natural resources as a predictor of economic success. In truth, countries such as Japan, South Korea, and Switzerland are startling examples of countries with few natural resources that, nevertheless, enjoy enormous worldwide economic success (Dobyns, 1992). That is a paradigm shift, fueled by a focus on quality, which, ironically, is an American perspective.

Popular media commentaries suggest that Japanese workers and American workers come from radically different cultures. These cultural differences, it is often argued, explain the difference in performance between the Japanese and their American counterparts. Although clearly there are cultural differences between America and Japan – as there are between most countries in the world – if we continue to see culture as the reason for differential achievement, we miss the point of the quality sciences. Quality is cross-cultural. The greatest irony in this debate is that we taught the Japanese to produce quality and now we buy it.

Certainly cultural issues bear on motivation. In our culture and many others, internal motivation is a well-documented catalyst for action. Yet, schools still treat people as though external motivation were an effective means of eliciting desired outcomes. Glasser has convinced many leaders that the reason Deming’s 14 points work is that they are actually rooted in what psychologists call “control theory”. Oversimplified, control theory holds that, as individuals, we seek to satisfy wants that come from our desire to meet basic human needs as Glasser and others define them. In “stimulus-response theory,” by contrast, the stimulus sets the standard and is an external focus for change. People and organizations change best when they are internally motivated to do so. Leaders who continue to behave as though stimulus-response theory were effective face insurmountable obstacles to quality. They just can’t get there.

Inherent in all these obstacles is the issue of attitude change and the difficulties it poses for school improvement. There is a fundamental resistance to the term customer, common in business, as it applies to schools. Teachers do not readily perceive themselves as suppliers of a service (teaching) or a product (learning) to a customer base.

The customer orientation, although different in schools from business, holds that we do what we do in schools in order to meet someone’s needs. Why else would we teach, if it were not to fill a need, individual or societal? The debate about whether schools have internal or external customers is specious, because we have too many customers. To start the process, pick one. Collect information to determine the needs, collect more information to see if the needs are being met, then identify the areas of improvement to be undertaken. Start.

The ramifications this holds for teacher-librarians are great. The change that has occurred from being a keeper of materials toward being a facilitator in a learning laboratory is mammoth. An analysis of the environment will change the way we look at students. They are becoming much more active and involved customers of information. Teacher-librarians must consider how students develop strategies to acquire information; extract appropriate information; use the best information; integrate that information into a presentable form; and evaluate the final product. Teacher-librarians must consider the role technology will play in changing the library resource center. With the appearance of computer local area networks (LANs), it is clear that information may be shared and is not necessarily available only in the library resource center.

We must be prepared to help students become knowledge navigators in a sea of information. The library resource center should be perceived as the information center of the school – the whole school community should be using this resource to cultivate successful users in an information age.

What lies behind the obstacles? Although certainly not a quick fix or panacea, quality management holds answers to questions that are at the center of the school reform debate. By establishing, together, a system of core beliefs, teachers, administrators, students and parents can ask themselves, when faced with difficult choices, “What do we believe?”, and use the answer to make better choices. Through the concept of continuous improvement, schools will less frequently be in a defensive position reacting to external criticism. Instead, educators can work together to establish and maintain a constancy of purpose and break the cultural norms of autonomy and independence that impede collaborative decision making. When educators collect information and understand the statistical importance of variance, they use knowledge and beliefs to make better decisions. Through the establishment of higher student achievement outcomes, which result from a quality orientation, performance increases are more likely for all students.

We must acknowledge the psychological reality of internal motivation and use it as an accelerant for school improvement. When a school system works together to establish a constancy of purpose, openly operates to continuously improve the teaching and learning process, collects information to make decisions and strives daily to meet or exceed the needs of its students, it achieves quality improvement.
Barker, J. (1989). The business of paradigms. [Videotape]. Burnsville, MN: Chart House Learning Corporation.

Crowley, J. (1994). Developing a vision: Strategic planning and the library media specialist. Westport, CT: Greenwood.

Dobyns, L. (1992). Quality or else. PBS special broadcast.

Freeston, K. (1992a). Other people’s theories. Education Week, 10 (23), p. 22.

_____ (1992b). Getting started in TQM. Educational Leadership, 50 (3), 10-13.

Glasser, W. (1984). Control theory. New York: Harper and Row.

_____ (1991, Winter). The quality society: The economics of control theory. Institute for Reality Therapy Newsletter, pp. 3-6.

Hay, L. & Roberts, A. (1989). Curriculum for the millennium: Trends shaping our futures. Southport, CT: Connecticut Association for Supervision of Curriculum Development.

Krupp, J. (1981). Adult development. A manuscript available from Judy-Arin Krupp, 40 McDivitt Drive, Manchester, CT 06040.

Schlechty, P. (1990). Schools for the 21st century. San Francisco: Jossey-Bass.

Walton, M. (1986). The Deming management method. New York: Putnam.

Monday, September 11, 2017

Recognition and Situational Leadership II

Blanchard, Kenneth; Nelson, Bob. “Recognition and Situational Leadership II.” Emergency Librarian, Mar/Apr 97, Vol. 24, Issue 4, p. 38.
Situational leadership was first developed by Ken Blanchard and Paul Hersey over 25 years ago. This article emphasizes helping managers recognize employees at different stages of development for their efforts and achievements. Situational Leadership II advocates that the best managers provide the amount and kind of direction and support that best fit the developmental level of the employee.

Here is an overview of the four development styles that make up Situational Leadership II and the corresponding type of recognition that would be most effective at each level.

Enthusiastic Beginner (D1). This is where everyone starts a new job. Already motivated, enthusiastic and excited about the opportunity to do something new, this person needs little support from a manager. However, what the person does need is information about the job – what exactly is needed, how best to approach the task, what a good job looks like, etc. How to recognize: The manager can recognize the enthusiastic beginner by providing specific answers to keep them on track. These novice employees need attention, specific direction and redirection.

Disillusioned Learner (D2). This stage of a job occurs when “the honeymoon is over”. The initial excitement of the job has worn off and some aspects of the job have proven more difficult than originally anticipated. Since the employee is still learning, the difficulties are especially frustrating because they have not yet performed satisfactorily and have little to show for their effort to date. How to recognize: Because they are still learning the job, the manager needs to “catch them doing things right”. Praisings that are sincere and specific, as well as timely acknowledgements of progress towards the desired goal, reinforce desired performance. The best praisings are done personally, face-to-face with the employee, but written praisings are also effective. And don’t forget to redirect. Get them back on track toward the desired end result.

Capable but Cautious Contributor (D3). Having successfully completed the task only once has not given the employee enough time to gain confidence in his or her ability. As a result, employees tend to be overly cautious. How to recognize: A manager of an employee in this stage of development needs to provide clear, specific positive recognition to the employee for the achievement of the desired performance. The best methods are often things that have little or no cost (see list).

No-cost ways to recognize employees
  • Personal thank you for doing a good job 
  • Written thanks for doing a good job 
  • Public praise at staff, department or company meeting 
  • Reference in company, industry or community publication 
  • Photo on a “Wall of Fame” 
  • Designated parking spot 
  • Time off 
  • Certificate of appreciation 
  • Special celebration or lunch 
  • Appreciation day
The manager needs to encourage the individual to repeat the performance and must continue to be available when needed. At the D3 level, recognition for achieving a goal or task is the best form of reinforcement.

Self-Reliant Achiever (D4). At this stage of development an employee has demonstrated competence and commitment in doing the job and has essentially become self-managed on the given task. How to recognize: High performers need recognition too, or else they may come to feel taken advantage of – that is, not valued for the contribution they consistently make to the organization. Their needs have shifted so that although they may still appreciate a sincere thank you for a job well done, they are apt to feel even more appreciated if you use a “higher order” incentive. Asking the person to train others on the job they have learned to do so well, granting them more autonomy in their job, providing a chance to select future assignments, involving them in decisions that affect their jobs or increasing their visibility in the organization are all appropriate.

Remember too that the recognition you give a high performer serves a second purpose as well: It sends a message to others in the organization that “this is the type of performance that gets noticed around here.” It also provides an opportunity for high performers to thank others in the organization.

“All behaviour is a function of its consequences.” Managers can harness the power of this statement by providing recognition and rewards to positively reinforce desired behaviour and performance.

Monday, September 4, 2017

Strategic planning to avoid bottlenecks in the age of the Internet

Penniman, W. David. Strategic planning to avoid bottlenecks in the age of the Internet. Computers in Libraries, Jan 99, Vol. 19, issue 1.
When the early libraries of Mesopotamia and Egypt were in their heydays, their respective staffs must have been concerned with dramatic changes in materials, the growth of sources, and the demands of users. They probably also worried about adequate support for their efforts and a lack of appreciation of what they did for society. For them, these worries were undoubtedly no less bothersome than the worries and concerns of today’s librarians. They, like us, had concerns about the future. And, like us, they had no better grasp of how to accurately predict the future. (If they did, they would undoubtedly have been appalled at the sad fates of their libraries and might have given up right then and there.)

Putting planning into perspective
We might be equally dismayed if we had perfect insight into the future. We have to contend with increasing costs of materials, increasing sources of materials (many of these sources now electronic), and increasing options of (and competition for) delivery channels to our users. All of these lead to a dismaying array of demands from a planning perspective. At the university level, for example, we must plan for and maintain three kinds of libraries:
  • The library of the past, which focused on building collections and providing direct physical access to printed materials
  • The library of the present, with extraordinary added costs of inflation, automation, and for many, the preservation of decaying material
  • The library of the future that we must plan for, and that includes not only the development of new ideas, but the implementation of new prototypes for publishing, acquiring, storing, and providing access to information through new technology and new attitudes about such things as ownership and access (Billy E. Frye, “The University Context and the Research Library,” Library Hi Tech 40 (1992): 27-37)
Building on past assumptions
At the same time that these three types of libraries are being maintained, we are seeing significant strains on the physical structures we call “libraries”. The design considerations of today are different from those of yesterday (W. David Penniman, “Tomorrow’s Library.” Computer Methods and Programs in Biomedicine 44, nos. 3/4 (1994): 149-153). Unfortunately, the fact is that, for too many institutions, the library of the 21st century has already been built and it is too late to do any planning for the structures themselves. During the 1970s and 1980s we completed new academic libraries at the rate of almost 20 per year. We added to or renovated about half as many each year. In any given year there were about 100 academic library building projects in progress. In the 1990s the rate of completion of academic libraries has held constant while additions and renovations have doubled. To quote Library Journal, “There doesn’t seem to be financial concerns in the construction/design industries when it comes to building libraries. Academic libraries in particular seem little affected by the economic or political climate.” (Bette-Lee Fox, et al., “Building a Brighter Tomorrow.” Library Journal (December 1992): 51)

Most organizations will have to live with those decisions that were made years or even decades ago and attempt to serve their users on the basis of those assumptions. The assumptions made in the 1970s could not have been nearly as insightful as those in the 1980s regarding technology. We were only beginning to think in terms of mainframes, dumb terminals, and centralized databases at that time.

The ‘80s reflected new technologies and more emphasis on stand-alone systems as well as new networking concepts. The design ideas of the ‘70s no longer seemed valid. More space was needed for CD-ROM or other disc-based systems, and more room was needed to pull cables through undersized cableways. Buildings only a few years old seemed ill-fitted for the tasks and services required. Now, in the late ‘90s, we have the Internet and the World Wide Web, and who could have guessed the revolution they would bring to society in general and to libraries in particular?

Years from now, the same may be said about the new structures and systems some of you are contemplating. Will your assumptions seem shortsighted and naïve? They may if you fall into the trap of planning for a specific technology or structure. Rather, plan for a specific (and more stable) mission and vision for your institution, and then use the available technology and structures of the times to fulfil these relatively stable elements.

Planning with a clear mission and vision
I believe our ability to predict the full impact of a specific technology (let alone broader technological evolution) is sorely limited, whereas our abilities to articulate a useful mission, to envision a future that enhances that mission, and to use technology now available to eliminate impediments to our mission and vision are not nearly as limited.

Think in terms of bottlenecks
We can plan for a future we wish to create and work toward that future. I believe this is true despite the fact that we cannot begin to predict the full impact of such innovations as the Internet (and may not be able to until it has become part of our history). Since we cannot wait for a historical perspective of today’s events, we need to think in terms of current impediments or bottlenecks to our mission and vision, and how to eliminate those bottlenecks. This puts us far more in control than trying to grasp at the elusive trajectory of technology.

Look, for example, at the amazing result of the lowly spinning wheel and how it eliminated bottlenecks as described by that wonderful interpreter of history in terms of connections of technology and societal change, James Burke. (See the sidebar entitled “The Spinning Wheel and Bottlenecks—A Historical Example.”)

We have to think clearly in terms of bottlenecks to a well-articulated mission and vision, and we have to think clearly in terms of partnerships and alliances to overcome such bottlenecks. We have to learn from the historical perspective offered by such marvellous individuals as James Burke and seek out those bottlenecks that stall our vision.

We have faced and continue to face five major bottlenecks, or impediments, to the vision I have suggested:

1. Lack of accessibility—We must provide information independent of where it is kept. In addition, we must provide better means of retrieving information held in our books and journals—especially our books. And, we must make our libraries accessible from anywhere.

2. Outdated materials—By the time material reaches our libraries much of it is old. The library, to be useful to decision making, must not be bypassed in the delivery of current information.

3. Higher costs—The cost of materials continues to rise for libraries, and a wider variety of sources appear every day. In addition, price as a means of protecting intellectual property is an increasing barrier.

4. Insufficient storage capacity—The growing cost of space requires that better storage methods be found and that preservation and accessibility for future generations be considered as well. Consider the short technological half-lives of some storage media, e.g., seven-track tape and 40- and 80-column punch cards.

5. Unavailability of materials—This is distinguished from accessibility in that some necessary material may not ever be “published” due to cost, space, time, or proprietary considerations.

The Spinning Wheel and Bottlenecks—A Historical Example
An interesting example of the connection between a new technology, bottlenecks, and major societal change is found in the tale of the spinning wheel. This device, invented in China about a thousand years ago, ultimately uncorked the bottleneck of thread production, giving rise to an abundance of cloth, which led to an abundance of clothes, which led to an abundance of rags, and, since paper can be made from linen rags, cheaper paper.

In the period before the 14th century in Europe, paper was difficult to produce due to a scarcity of rags. Books were, therefore, in short supply because as an alternative to paper, it took one or two hundred sheep or calves to make the parchment to form one good, thick Bible.

The bottleneck was not the copying of the book (done then by scribes who were relatively plentiful) but the cost of the material on which to write. With an abundance of rags, paper became cheap, and the bottleneck shifted to the scribe who, in writing the book by hand, could no longer keep up with the paper supply. Thus arose the need for a faster way of copying printed material, and this historical need gave rise to the printing press.

The printing press led to an abundance of affordable books. Affordable books have allowed us to build the astonishing array of libraries which we now enjoy. But the analysis doesn’t stop there, for cheap books and public libraries in which they can be found are key elements in the diffusion of knowledge within a society.

Now, societies tend to stratify and stabilize on the basis of the stratification of knowledge (another bottleneck). With increased access to information and diffusion of knowledge to the lower strata of society, the evolution of democracy as we now know it was possible. Democracy then could be said to be the result of the steady though not especially rapid elimination of bottlenecks beginning with the spinning wheel over a thousand years ago.
(Burke, James. Connections. Boston: Little, Brown, 1978) How do we begin to look purposefully at today’s bottlenecks to the mission of our libraries and use technology as a tool for overcoming these bottlenecks?

Look for alliances to overcome the bottlenecks
No single institution can address these challenges alone. Therefore, I suggest that the way to attack them is via partnerships and alliances. The consortia of libraries by region and/or type is one such alliance to increase affordability and accessibility. But what about others? Consider the alliances of publishers and libraries to bring the material to the user in electronic mode and complement (or sometimes completely bypass) the hard-copy delivery mechanism.

I believe that alliances are built on enlightened self-interest. This requires that you and your ally have a shared vision or at the very least that you understand each other’s visions and missions fully. If you can’t answer “What’s in it for me?” not only for yourself but also for those you seek to form alliances with, you probably won’t be effective in creating the alliance—even though it might be the most effective way to bring that troublesome bottleneck tumbling down. The path is strewn with failed partnerships where the end might have been laudable, but the price was just too high (and price is not always measured in dollars; it can be ego, perceived independence, etc.). But, when done right, the results can be amazing and can help all parties succeed.

Look, for example, at the amazing and continuing success of OCLC, an alliance of libraries now worldwide in scope. Not all alliances, however, are that grand. Simple collaborations with other components of your organization (infrastructure units such as telecommunications or computing, for example) may be the most significant efforts you can implement to pursue your mission and reduce bottlenecks to achieve your vision.

Thinking strategically to be considered ‘strategic’
What do I mean when I suggest that you should be thinking strategically? To understand this phrase, you must understand what I mean by strategies. I maintain that strategies are the policies (written or unwritten) that guide organizational decisions, and they are tied inextricably to the nature, direction, and basic purpose of an organization (or individual). These policies are connected to an organization’s mission and vision, but are more about how the mission and vision are achieved. They are not always explicit (or written), but strategies can (and must actively) be deduced from the actions of people and organizations crucial to you. This deductive process goes on all the time in the minds of your users, for example. They think about what is important to their efforts, and if your strategies seem to be to their benefit, then your organization is important to them as well. Because your organization’s importance is best measured in the minds of your customers, or key stakeholders, your strategies must align with theirs. If your actions (and strategies) communicate that you are vital to their interests and strategies, then your organization is “strategically positioned” with respect to them.

When marketing professionals talk about “positioning” they never mean what the company thinks of a product or service but what a potential customer thinks. You must do the same thing. You must put yourself in the shoes of your customers and other key individuals. Remember the sample mission that said “making users more effective in a competitive environment”—if you are not vital, in your customers’ eyes, then you are not strategically “positioned” with respect to them. Simple as that is to describe, it is not so simple to execute, for this alignment is fundamentally a communication problem. You can’t operate effectively without explicitly portraying the value you contribute to your users. You can’t “not communicate.” Inaction as well as action “positions” you in the minds of your customers and other important stakeholders.

It is important to be positioned well with both your “users” and your “choosers” (i.e. those who choose what you will get in the way of resources). Users are only one of the key groups. Choosers may not be the direct users of your services, but they are certainly just as vital: board members, senior executives, other managers, user group representatives, etc.

In short, you must have a clear understanding of your own strategies, though it is not always easy to keep them consistent and explicit. Even more crucial is to have a clear understanding of the strategies of the community in which you reside and which you serve. You must align your strategies with the strategies of your key stakeholders (and there may be many different types). And, finally, you must always make explicit the value of your organization to those you serve, and that is primarily a communication challenge.

Mashing gophers and smashing bottlenecks
To borrow from a popular desktop sign, “Don’t let the bottlenecks wear you down.” I have been arguing for partnerships or alliances as one means of battling the bottlenecks. In many arcades there is a “gopher game” in which the player wields a padded mallet against gophers popping up randomly from a variety of holes. The faster the player mashes the gopher back into the hole, the faster the next gopher pops up. But some of these games have two mallets and can be played by a pair of gopher mashers. Then the gopher is really in trouble. That’s what I like about partnerships.

What other strategies are available to address the bottlenecks in this age of the Internet? Certainly user education is one. But one that I like even more is provider education, and by that I mean learning from the user. Listening to the user via focus group interviews, exit interviews, user groups, surveys, advisory boards (selected to represent both those who use your services and those who decide how much resource you will get) is important—more important in many ways than having them listen to you.

Don’t fight battles on all fronts. Select the most significant obstacles and evaluate them against current resources and available technology. By looking at both cost of eliminating the bottleneck and payoff when the bottleneck is diminished or gone, you can select your strategies for optimum use of your time, energy, and other resources.

I used the phrase “age of the Internet,” which is just a shorthand way of saying a time of rapid change. Really, when haven’t we lived in an age of rapid change? My grandmother, who died at the age of 93, saw more change than I think I will even if I live that long. As a new bride, she rode from the church in a horse and buggy used by her husband, a country doctor, to make his rounds. Before she died, she was flying on a jet passenger aircraft to visit relatives in California she had never seen before. What change could I ever experience, Internet notwithstanding, that could equal that? Perhaps a vacation on the moon would be comparable.

Short of such a vacation, your pursuit of change should be consistent with your (unchanging) vision and mission. So spend your time now on that aspect of planning and then the “age of Internet” will be just one more phase in your own process of eliminating bottlenecks.

By W. David Penniman
W. David Penniman currently serves as professor in the School of Information Science at the University of Tennessee and is the director of the Center for Information Studies. He is also a consultant to senior management in information systems, resources, and services. He holds an undergraduate degree in engineering from the University of Illinois and a Ph.D. in behavioural science from Ohio State University.