"It's a Guy Thing": About a Boy Review

Zachary Woodruff's review of "About a Boy", titled "It's a Guy Thing" and published on September 8, 1998 in the "Tucson Weekly", examines what, in Woodruff's opinion, makes the book entertaining and enjoyable yet not memorable or influential. The reader is presented with a short summary of the book's main storyline, focusing on why and how Marcus is an awkward kid, on Marcus's sources of misery and on how Will fits into Marcus's life. Will is able to assist him thanks to Will's own immaturity, which he employs as a tool to teach Marcus, whose development exceeds his age, how to be a child.

The paradox of a grown man teaching a child how to in fact be a child is hinted at in the book's title, "About a Boy". Upon reading the book, one realizes that the "boy" the title refers to is more likely to be Will than Marcus, which stresses Nick Hornby's cleverness. Woodruff also points out Hornby's cleverness in naming the protagonist Will Freeman, as the name is an obvious description of the character himself. If one were to break the name down and truly look at the words' definitions, one would find that Will Freeman does in fact live proudly by the lifestyle of a free man with free will.

Another main aspect Woodruff underlines is that Hornby did not aim for a clichéd or extreme novel, which actually contributes to its appeal. As opposed to other novels, the central relationship in the book is not one between a man and a woman but rather between a man and a child. Another unique quality is that although one usually has preset notions concerning certain events in the book, they turn out nothing like one had expected, which makes the novel less predictable than others and leaves the reader in astonishment.

Hornby very rarely goes overboard with the plot as a whole, although the opportunity to do so presents itself more than once. Furthermore, Woodruff highlights the effect of the informal, relaxed prose Hornby uses, apparently with the intent to illustrate the protagonist and his way of thinking. What stands out about the novel as a whole is not its brilliance or wild imagination but rather its lack of those qualities, and yet the reader is still able to connect with the characters on a certain level.

Consequently, the book is temporarily effective and pleasurable, yet it is unable to build up to a long-term, deep idea for the reader. In order to grasp the reader's attention and genuine interest, Woodruff truly engages with the review he is writing and does so for the reader as well. Rather than taking a solemn approach, he opens the text with a sarcastic yet clever remark on the protagonist's name: "This week's winner for the most obvious fictional character name goes to Will Freeman." (L. ) Because a hidden message is unveiled that not many readers would have noticed before reading this review, it leaves the reader dazed and bemused from the start. Throughout the review Woodruff seems playful and uses irony, and although that could be misinterpreted as "low quality", it is in fact more inviting. Woodruff is able to get all of his views across in a way that the reader can understand and identify with his criticism. Another feature of the language that contributes to Woodruff's easy-flowing, down-to-earth review is its register.

The level of the language used is relatively colloquial. Woodruff applies words like "dammit", "man", "butt-kicking" etc., which are considered slang or taboo words. Short forms, which are not to be used in formal language, also appear in Woodruff's review: "doesn't", "it's" etc. Other than that the language is fairly simple and comprehensible. The use of such informal and easy language makes this review accessible to anyone and everyone. Woodruff also draws on expressions used daily that easily stress the point one is attempting to make, and yet he is proficient in adding his own style, e.g. "cooler-than-thou". It derives from the expression "holier-than-thou", which is used to describe a person who thinks they are more morally righteous than others. So "cooler-than-thou" describes someone or something that believes itself to be better or cooler than others. Through the general style portrayed by Woodruff, the review is playfully serious and allows certain hidden aspects and perspectives to surface. As Woodruff suggested in his review, Hornby attempts to demonstrate Will's character through the style of the prose the personal narrator uses in the book, e.g.: "Will wrestled with his conscience, grappled it to the ground and sat on it until he couldn't hear a squeak out of it. Why should he care if Marcus went to school or not? Ok, wrong question. He knew very well why he should care whether Marcus went to school. Try a different question: How much did he care whether Marcus went to school or not? Answer: not a lot. That was better. He drove home." (Chapter 20, p. 159) This specific paragraph begins with a metaphor that allows us to really recognize how deeply Will is struggling with his conscience and yet how he is able to overcome its persistence and shut it up.

Most of the sentences are elliptical and some are even incomplete, serving the purpose of truly projecting the way Will, of all people, thinks. One also finds oneself posing questions in one's head and contemplating them, as is shown here as well. As one looks closely at the flow of Will's thoughts, one recognizes that he is one to do whatever possible in order to avoid responsibility or the need to intervene in another person's life. He is confronted with emotions of caring for another person in this paragraph but finds a sneaky way to escape those emotions and trick himself into not caring.

He goes from asking himself whether or not he cares if Marcus goes to school to asking to what extent he cares, therefore making it excusable for him to just "drive home." Also, Will's thoughts seem to be constantly jumping around from emotions to facts to escape mechanisms; he first states a question, then comments on that question, and so on. Will is in a constant battle between his objectivity and his feelings. In conclusion, since the narrator knows Will's thoughts it is omniscient, but on the other hand, the narrator's knowledge is limited, as other people's thoughts and actions are only seen, described and often interpreted through Will's eyes.

Hence Hornby allows the reader to be linked to Will and understand what he is thinking, but also provides the reader with an opinion of Will as an observer. Zachary Woodruff's review states at the end that "…the book doesn't leave you with much to remember after you've finished." I must say that I do not fully agree with this statement. It is justified to say that the book's overall impact on the reader is not vast; however, it does convey multiple messages concerning varied, imperative and inalienable matters and truths in life.

The focal points are the issues one faces when growing up, bullying, adults who are incapable of accepting responsibility, single parents, what it means to be an adult, children losing their innocence in dysfunctional families, the need for love and shared human relationships, and the fact that no man is an island. An adult and his maturity are not defined by age but by experience. Therefore those who lack experience, and who while growing up lack the opportunities to learn to be responsible, may never truly reach a decent level of maturity. In order to obtain the ability to be responsible one must be trained; it is a skill one must develop.

To truly be responsible one must have acquired consciousness and awareness of one's maturity level. The ability to be an adult at any age, or not to be an adult at an age one would consider customary, is projected through Marcus's maturity and his ability to analyze and understand situations only mature people could, and through the fact that Will, despite his real age, was just as immature as any teenager and unwilling to take responsibility for anything for the majority of his life. Another message the book holds for the reader concerns the difficulties of growing up, which can take place at any age.

To grow up is to encounter new emotions and thoughts, to view issues from different perspectives and to endure pain and hurt in order to distinguish between good and bad. One begins to learn the "unwritten rules" of life and must develop certain skills in order to survive. The attainment of such skills primarily begins at home and is later further developed according to the child's exposure. Consequently, if the family is in some way dysfunctional, these skills may either never develop or develop too fast from a different source, causing the child to miss out on its childhood.

Every family faces its own difficulties and issues, and parents constantly come face to face with obstacles; in the end, the child's ability to grow up in a healthy and natural way depends on the parents' capability and strength when faced with these obstacles. Many parents go through separation, divorce, abuse and many other things that may result in the parents no longer being compatible, leading to single parenthood. How one deals with being on one's own also fully relies on the parent as an individual. Single parents are in a difficult position, be it by choice or by force.

Naturally one is supposed to have both a mother and a father when raising children, and while the absence of one of those parents may not have permanent negative results on the child and may even be better for it, it will definitely make the job for the present parent much more difficult. Depending on the nature of the break-up, the parent is carrying around emotions that may weigh on the relationship between the parent and the child. Due to these emotions the parent may not be competent enough to fulfill the roles required of a parent (e.g. to be an instructor, friend, supporter etc.), leading to the child being under more pressure to be the adult and take care of the parent. This is projected in the book as Marcus's mother, Fiona, is depressed and attempts to take her own life. As Marcus has to witness such an act, he is put under more pressure as a child than he should be, constantly worrying about how to take care of his mother. He takes on this role of an adult because he feels he needs to, and at the same time he is stripped of the innocent perspective one has on life as a child and is confronted with life's ugliness, pain and obscurity too early.

Bullying is another issue presented in the book that plays a crucial role in the development of children. According to a website focusing on bullying, bullying is defined as "a pattern of behavior whereby one person with a lot of internal anger, resentment and aggression and lacking interpersonal skills chooses to displace their aggression onto another person, chosen for their vulnerability with respect to the bully, using tactics of constant criticism, nit-picking, exclusion, isolation, teasing etc. with verbal, psychological, emotional and (especially with children) physical violence." There are many forms of bullying, some of which are mentioned above, and it is not only the bullied who need to be taken into consideration but also the bullies themselves. The targets of bullies are usually those who cannot fend for themselves and who are considered vulnerable due to their small size or low self-esteem. All these are problems that should be addressed and worked on, but the bully too is usually faced with a severe problem he is not consciously aware of or does not want to acknowledge, and he therefore expresses his frustration through the most obvious means.

Bullying has caused depression and even suicide amongst children as young as twelve years of age. Its effects are severe and, if not dealt with immediately, could have lifelong consequences. The book also succeeded in highlighting the importance of the need for love and shared human relationships. Relationships teach a person certain things and help a person take on certain fundamental qualities that are required in life. Through relationships and love one learns to be selfless, to share, trust, relate and connect. Also, relationships can be a source of role models or confidence, e.g. the relationship between a child and a parent.

Moreover, a relationship between a man and a woman, be it romantic or merely friendly, can offer a point of reference and motivation. All relationships help a person become who he is and allow a person to thrive, surrounded by people who care about him and believe in him. As exemplified in "About a Boy", Marcus and Will's relationship had a huge impact on who they were as individuals and in fact helped them become who they were meant to be, just as Rachel and Will's relationship broadened Will's understanding of life and his ability to care for someone as deeply as he did for her.

Finally, the last message the book holds for its readers is the fact that "no man is an island". As portrayed in all the above-mentioned messages of the book, everything and everyone is connected. All things take place as a consequence of something prior. The circle of life is that everyone gives and takes; we are all connected through a chain that is never-ending. We each have the ability to affect others; it is a power we each possess and also one that must be used carefully and responsibly. Many, like Will, attempt to go through their whole lives without affecting those around them and without being affected by anyone or anything.

However, this is inescapable, and though it may make life more difficult or cause more stress, it is what makes life worth living: the ability to change others' lives and the ability to be changed yourself. Living in isolation, trying to live on an island, defies the whole purpose of life, and as projected in the book, when one embraces this chain it can affect one's life in the best way. But one must also remember that "no man is an ocean" either, for we must not allow every single surrounding to affect us as individuals.

One must find a gray area between being an island and being an ocean and find a way to embrace the world yet at the same time not let it control you. I believe that although "About a Boy" is not an especially powerful book, it holds some of life's realities and leaves us with many issues that we must take into consideration. It was able to open our eyes to truths one must be aware of. Because of this book I was encouraged to research such exceedingly crucial problems and learned a great deal about how to avoid them and how to deal with them. Therefore the book, for me, definitely served the purpose that makes me find it vital to read.

Measuring the Cost of Quality Management

Economic Case for Quality: Measuring the Cost of Quality for Management
by Gary Cokins

The quality movement has used the term cost of quality (COQ) for decades. But few organizations have actually adopted a reliable and repeatable method for measuring and reporting COQ and applied it to improve operations. Is the administrative effort just not worth the benefits, or is there a deeper problem with the methodology for measuring COQ?

What COQ Should Do

At an operational level, quality management techniques effectively identify waste and accelerate problem solving for tactical issues related to process improvement.

For many organizations, quality management initiatives have prevented financial losses from customer defections caused by quality problems or from waste and inefficiencies. At a more strategic level, however, has quality management reached an adequate level of support from senior executives? Unfortunately, the avoidance of reduced profits from quality initiatives is not widely measured or reported by organizational financial accounting systems. As a result, organizations cannot easily quantify the magnitude of benefits in financial terms—and the language of money is how most organizations operate.

In short, there has been a disconnect between quality initiatives and bottom-line profits to validate any favorable impact on profitability and costs.

In 50 Words Or Less
• Although management prefers to have fact based data and reasonable estimates to evaluate decisions and prioritize spending, financial measurements generally aren't used to validate quality's impact on profitability and costs.
• Activity based cost/management systems are effective ways to account for the hidden costs of poor quality.

Why Traditional Accounting Fails

One of the obstacles affecting quality management and other initiatives has been the accounting field's traditional emphasis on external reporting. The initial financial data are captured in a format that does not lend itself to decision making. It is always risky to invest in improving processes when true costs are not well established. This is because management lacks a valid cost base against which to compare the expected benefits from improving or reengineering the process.

In The Process-Centered Enterprise, Gabe Pall says: "Historically, process management has always suffered from the lack of an obvious and reliable method of measurement that consistently indicates the level of resource consumption (expenses) by the business processes at any given time—an indicator which always interests executive management and is easily understood. The bottom line is that most businesses have no clue about the costs of their processes or their processes' various outputs." [1]

When the costs of processes and their outputs can be measured adequately, two things can happen:
1. It can gain management's attention and give management confidence the accounting data are reliable business indicators.
2. Management can more reliably assess the different value of processes and how they contribute to the overall performance of the business.

The accountant's traditional general ledger is a wonderful instrument for what it is designed to do: post and summarize transactions into specific account balances. But the cost data in this format (salaries, supplies, depreciation) are structurally deficient for decision support, including measuring COQ. They disclose what was spent but not why or who or for what. Expense data must be transformed into the costs of the processes that traverse across the departmental cost centers reported in a general ledger system—and ultimately transformed into the costs of products, services and customers that uniquely consume the costs of various processes.

Bring Facts, Not Hunches

To some people, it is obvious better management of quality ultimately leads to good performance, which in turn should lead to improved financial health of an organization. These people believe if you simply improve quality, good things, such as happier customers and higher profits, automatically will fall into place. Other types prefer having fact based data and reasonable estimates for evaluating decisions and prioritizing spending.

They do believe in quality programs, but in complex organizations with scarce idle resources, they prefer to be more certain of where it is best to spend discretionary money. Some quality managers have become skeptical about measuring COQ. They have seen increasing regulations and standards, such as the ISO 9000 series, in which installing any form of COQ measurement is perceived as more of a documentation compliance exercise for certification to a standard rather than a benefit to improve performance. Veterans of quality management believe quality just for quality’s sake—meaning conformance to a standard—is not sufficient.

They say quality should be viewed as a condition in which value entitlement is realized for customers, suppliers, employees and shareholders in every aspect of a relationship. There always will be debates among shareholders, customers, employees, taxpayers and environmentalists about trade-offs, but the methods of COQ measurement can help convert debates into agreements. [2]

[Figure 1: Levels and Scope of Quality Costs. Concentric levels: error free costs, costs of quality, lost profits, postponed profits, customer incurred costs and socioeconomic costs, surrounded by purchased goods and services.]

Quantification Methods Exist

The lack of widespread tracking of COQ in practice is surprising because the tools, methods and technologies exist to do it. A research study investigating the maturity of COQ revealed the major reason for not tracking COQ was management's belief it lacks sufficient value. [3] Other major reasons are a lack of knowledge of how to track costs and benefits of COQ and a lack of adequate accounting and computer systems.

Categorizing Quality Costs

Almost every organization realizes anything less than the highest quality is not an option. High quality is simply an entry ticket for the opportunity to compete or exist. Attaining high quality is a must. Anything less will lead to an organization's terminal collapse. To some people, quality costs are quite visible and obvious. To others, quality costs are understated. These people believe many quality related costs are hidden and go unreported. Figure 1 illustrates several levels of non-error free quality costs.

This article's scope is the figure's inner concentric circles—those costs cited in the organization's financial profit and loss reporting. Examples of these obvious financial costs and lost income opportunities include rework, excess scrap material, warranties and field repairs. These error related costs can be measured directly from the financial system. Spending amounts are recorded in the accountant's general ledger system using the chart of accounts, but other types cannot be measured directly from the financial system. Sometimes the quality related costs include the expenses of an entire department, such as an inspection department that arguably is solely quality related. However, as organizations flatten and eliminate layers and as employees multitask more, it is rare for an entire department to focus exclusively on quality. COQ related work is thus part but not all of its work. The hidden poor quality costs, represented in Figure 1's inner COQ concentric circles, are less obvious and more difficult to measure. For example, a hidden cost would be those hours a few employees spend sorting through paperwork resulting from a billing error.

Although these employees do not reside in a department dedicated to quality related activities, such as inspection or rework, that portion of their workday was definitely quality related. These costs of correcting errors are not reflected in the chart of accounts of an accounting system—and are referred to as hidden costs.

Given the advances in today’s data collection, data warehousing, data mining and activity based cost/management (ABC/M) system implementations, these reasons begin to look like lame excuses. The technology is no longer the impediment for reporting COQ it once was. ABC/M systems are typically implemented to accurately report costs of products, services, channels and customers by replacing broadly allocated indirect expenses with cost drivers having cause and effect relationships, such as the number of inspections.

Hence, customer caused costs and the process costs they consume can be reported with an audit trail back to the resources those expenses came from.

Value of Data

Providing employee teams both obvious and hidden quality related costs is valuable for performance improvement. Using the data, employees can gain insight into causes of problems. These hidden and traditional quality related costs can be broadly categorized as:
• Error free costs: costs unrelated to planning, controlling, correcting or improving quality.

These are the did-it-right-the-first-time costs.
• COQ: costs that could disappear if all processes were error free and all products and services were defect free. COQ can be subcategorized further as:
• Conformance: costs related to prevention and appraisal to meet requirements.
• Nonconformance: costs related to internal or external failures, including detective appraisal work from not meeting requirements.

[Figure 3: Cost of Quality Subcategories. Each work activity cost gets tagged as either error free or COQ; COQ splits into conformance (prevention, appraisal) and nonconformance (internal failure, external failure).]

There is a distinction between internal and external failure costs: Internal failure costs are detected prior to the shipment or receipt of service by the customer; customers usually discover errors that lead to external failure costs. An oversimplified definition of COQ is the costs associated with avoiding, finding, making and repairing defects and errors—assuming all defects and errors are detected. COQ represents the difference between the actual costs and what the reduced cost would be if there were no substandard service levels, failures or defects. Simple examples of these categories for a customer invoicing process might be as follows:
• Error free: first time through work without a flaw.
• Prevention: training courses for the invoicing department; programming error checking in the invoicing software.
• Appraisal: reviews of invoices by supervisors.
• Internal failure: wrong prices or customer quantities posted; correction of typographical errors.
• External failure: rework resulting from a customer dispute of an invoice.
Figure 2 portrays, in financial terms, how an organization's sales, profits, purchased materials and COQ expenses might exist.

In principle, as the COQ expenses are reduced, they can be converted into higher bottom-line profits.

[Figure 2: Sales – Costs = Profits. Sales, minus purchased items, minus costs (labor, supplies and overhead, consisting of error free costs, conformance COQ and nonconformance COQ), equals profits.]

Using ABC/M Systems

Figure 3 illustrates how quality attributes for COQ categories can be tagged or scored into increasingly finer segments of the error free and COQ subcategories. Attributes are tagged to individual work activities belonging to various processes that already have been costed using ABC/M. Each of the categories can be further subdivided. Figure 4 shows examples of subcategories for work activities one additional level below the four major categories of COQ.

For example, value stream mapping is an essential tool of the lean management movement. By tagging work activity costs with these subcategories, more robust information can be provided than by simply classifying a cost as value added and nonvalue added. Subcategorization of COQ provides far greater and more reliable visibility of costs without the great effort required by traditional cost accounting methods.

Because all the resource expenses can be assigned to the activity costs, 100% of the activities can be tagged with one of the COQ attributes. This is because it is feasible to measure the costs of work activities, typically with estimates, using the principles of ABC/M. Invasive time sheets are not required for ABC/M systems. The attribute groupings and summary rollups also are automatically costed. Life would be nice in an error free world, and an organization’s overall costs would be substantially lower relative to where they are today. But all organizations will always make mistakes—the goal is to manage mistakes and their impact.
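The tagging and rollup mechanics described above can be sketched in a few lines of Python. This is only an illustration under assumed inputs, not the article's own system or any particular ABC/M product: the activity names, dollar amounts and tag labels are hypothetical and simply mirror the COQ categories already introduced (error free, prevention, appraisal, internal failure, external failure).

```python
# Minimal sketch of tagging ABC/M activity costs with COQ attributes and
# rolling them up. Activities, amounts and tags are hypothetical examples.
from collections import defaultdict

# Each activity cost (already calculated by an ABC/M system) gets one COQ tag.
activities = [
    ("post customer invoices",        52_000, "error_free"),
    ("invoicing staff training",       3_000, "prevention"),
    ("supervisor review of invoices",  4_000, "appraisal"),
    ("correct mispriced invoices",     6_000, "internal_failure"),
    ("rework disputed invoices",       9_000, "external_failure"),
]

def rollup(activity_costs):
    """Summarize tagged activity costs into COQ categories and totals."""
    by_tag = defaultdict(float)
    for _name, cost, tag in activity_costs:
        by_tag[tag] += cost
    conformance = by_tag["prevention"] + by_tag["appraisal"]
    nonconformance = by_tag["internal_failure"] + by_tag["external_failure"]
    return {
        "error_free": by_tag["error_free"],
        "conformance_coq": conformance,
        "nonconformance_coq": nonconformance,
        "total_coq": conformance + nonconformance,
        "total_expenditure": sum(cost for _n, cost, _t in activity_costs),
    }

if __name__ == "__main__":
    summary = rollup(activities)
    for key, value in summary.items():
        print(f"{key:>20}: ${value:,.0f}")
```

Because every activity carries exactly one tag, the error free and COQ totals always reconcile to 100% of the expenditure pool, which is the reconciliation property the article returns to later.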

COQ reporting communicates fact based data—in terms of money—to enable focusing and prioritizing to manage mistakes. Organizations that hide their complete COQ continue to risk deceiving themselves with the illusion of effective management. It may be easier to think of the sum total of all of the cost categories—error free and COQ—equaling total expenditures during a time period less purchased material costs.

Investment Justification of Quality Initiatives

Using before and after histograms, Figure 5 illustrates how to manage quality related costs.

Ideally, all four COQ cost categories should be reduced, but the cost of prevention initially might have to be increased prudently to dramatically decrease the costs of and reduced penalties paid for nonconformance COQ categories. This makes COQ more than just an accounting scheme—it becomes a financial investment justification tool. It is widely believed that as failures are revealed—for example via complaints from customers—the root causes should be eliminated with corrective actions. A rule of thumb is that the nearer the failure is to the end user, the more expensive it is to correct.

The flip side is that it becomes less expensive—overall—to fix problems earlier in the business process. As failure costs are reduced, appraisal efforts also can be reduced rationally. Figure 5 demonstrates this overall improvement. Not only are nonconformance COQs significantly reduced, but the level of prevention and inspection costs, which some classify as nonvalue added, are also reduced. The $20,000 of COQ from the before case in Figure 5 has been reduced to $11,000 in the after case.

[Figure 4: Examples of Cost of Quality Components]
Conformance:
• Prevention: quality education, process design, defect cause removal, process change, quality audit, preventive maintenance.
• Appraisal: test, measurements, evaluations and assessments, problem analysis, inspection, detection.
Nonconformance:
• Internal failure: scrap, rework, repairs, unscheduled and unplanned service, defect removal, lost process time.
• External failure: returned products, billing reduction from customer complaints, field repair call, warranty expenses, legal exposure and costs, liability claims, poor availability, malfunction, replacement, poor safety, complaint administration.

This good work can result in more requests for orders and higher sales without any changes in the staffing level. The original "before" error free costs have remained the same, at $80,000, hence a $9,000 savings.
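As a quick sanity check on the dollar figures quoted above, the before and after arithmetic can be written out as a short, hedged Python sketch. The only numbers taken from the text are the $80,000 of error free costs (unchanged in both cases), the $20,000 of COQ before and the $11,000 after; the function and variable names are merely illustrative.

```python
# Arithmetic behind the before/after comparison cited in the text:
# error free costs stay at $80,000 while total COQ drops from $20,000 to $11,000.
# (Purchased material costs are excluded, as the article nets them out.)
def coq_comparison(before, after):
    """Return total spend and savings for a before/after COQ scenario,
    where each scenario is a dict with 'error_free' and 'coq' amounts."""
    spend_before = before["error_free"] + before["coq"]
    spend_after = after["error_free"] + after["coq"]
    return {
        "spend_before": spend_before,            # $100,000
        "spend_after": spend_after,              # $91,000
        "savings": spend_before - spend_after,   # $9,000, as stated in the text
    }

result = coq_comparison(
    before={"error_free": 80_000, "coq": 20_000},
    after={"error_free": 80_000, "coq": 11_000},
)
for name, value in result.items():
    print(f"{name}: ${value:,}")
```

Run as-is, it prints the $9,000 savings the article attributes to the improvement, with staffing and error free work held constant.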

Benefits of Including Total Expenditures

Starting a COQ measurement by assuming a 100% inclusion of the total incurred expenditures of Figure 1's inner concentric circles (not the opportunity costs) and then subsequently segmenting those expenses between the error free costs and COQ provides three benefits: It reduces debate, increases employee focus and integrates COQ with the same financial reporting data used in the boardroom.

Reduces debate: With traditional COQ measures, people can endlessly debate whether a borderline activity, such as expected scrap produced during product development, is a true COQ. Including such a cost as COQ may reduce a measure that is of high interest. By excluding that expense, it becomes hidden among all the other total expenditures of the organization. By starting with the 100% expenditure pool, every expense reported in the general ledger accounting system will fall into some category and always be visible.

Increases employee focus: By defining categories into which all costs can be slotted, it is hoped organizations will focus much less on their methods of measurement and more on their organizations' problems and how to overcome them.

Integrates COQ with the same financial report data used in the boardroom: When traditional and obvious COQ information is used, only portions of the total expenditures are selected for inclusion and some portions are not reported. This invites debate about arbitrariness or ambiguity. However, when 100% of expenditures are included, the COQ plus error free costs reconcile exactly with the same data used by executive management and the board of directors.

Executives like to see managerial accounting data reconciled and balanced with their financial accounting reports. There is no longer any suspicion some COQ has been left out or the COQ data are not anchored in reality. By starting with 100% expenditures, the only debate can be about misclassification—not omission.

Quantification

A formal COQ measurement system provides continuous results. In contrast to a one-time assessment, it requires involvement by employees who participate in the business processes.

More important, these employees must be motivated to spend the energy and time, apart from their regular responsibilities, to submit and use the data. For such a COQ system to be sustained longer term, the system requires the support and interest of senior management as well as genuinely perceived utility by those using the data to solve problems. Regardless of the collection system selected, it is imperative to focus analytical and corrective time and energy on the area of failure costs. As Joseph Juran discussed in his popular article "Gold in the Mine," much mining still can be performed. [4] This mining should be considered a long-term investment because failure costs when starting a quality management program usually constitute 65 to 70% of an organization's quality costs. Appraisal costs are normally 20 to 25%, and prevention costs are 5%.

[Figure 5: Conformance Related Cost of Quality. Before and after histograms, drawn to scale: purchases of $90 and error free costs of $80 in both cases; prevention, appraisal, internal failure and external failure components sum to COQ = $20 in the before case and COQ = $11 in the after case.]

Continuous Improvement

Tagging attributes against COQ categories is obviously a secondary purpose for measuring costs. The primary purpose of costing is to simply learn what something costs. Costing data are for measuring profit margins, focusing on where the larger costs are that may be impacted or estimating future costs to justify future spending decisions (for example, return on investment). In short, managerial accounting transforms expenses collected in the general ledger into calculated costs. Expenses are purchases of resources. In contrast, costs are the uses of that spending and are always calculated.
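The expenses-versus-costs distinction in the preceding paragraph, and the cost-per-output idea picked up in the next one, can be illustrated with a toy sketch. The accounts, activities, driver percentages and volumes below are assumptions, not the article's data or any real ABC/M software: general ledger expense balances are traced to work activities through driver shares, and an activity cost is then divided by its output volume to yield a unit cost such as a cost per processed invoice.

```python
# Toy illustration: transform general ledger expenses into calculated activity
# costs using assumed cause-and-effect driver percentages, then compute a unit
# cost per output. All accounts, activities and percentages are hypothetical.
ledger_expenses = {"salaries": 60_000, "supplies": 8_000, "depreciation": 12_000}

# Share of each expense account consumed by each work activity (assumed).
driver_shares = {
    "process invoices":  {"salaries": 0.50, "supplies": 0.25, "depreciation": 0.40},
    "inspect shipments": {"salaries": 0.30, "supplies": 0.50, "depreciation": 0.35},
    "correct errors":    {"salaries": 0.20, "supplies": 0.25, "depreciation": 0.25},
}

# Calculated costs: the uses of the spending, derived from the expense balances.
activity_costs = {
    activity: sum(ledger_expenses[acct] * share for acct, share in shares.items())
    for activity, shares in driver_shares.items()
}

invoices_processed = 4_000  # assumed output volume for the period
cost_per_invoice = activity_costs["process invoices"] / invoices_processed

print(activity_costs)                     # calculated costs, not ledger expenses
print(f"cost per processed invoice: ${cost_per_invoice:.2f}")
```

The driver shares for each account sum to 1.0, so 100% of the ledger expenses end up assigned to activities, in line with the cause-and-effect tracing the article recommends over broadly averaged volume allocations.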

Many organizations arbitrarily base allocation of indirect expenses on broadly averaged volume factors (direct hours to make a product), but the proper rule is to trace and assign indirect expenses based on a one-to-one cause and effect relationship. When an organization has good cost accounting, it then can use calculated costs, such as the cost per processed invoice, as a basis for comparison. In short, the unit cost per each output of work is computed, and then these data are usable for external and internal benchmarking. In benchmarking studies, there often can be a bad case of apples-to-Oreos comparison.

That is, consistency is lacking or unrecognized regarding which work activities or outputs should be included in the study.

Integrating Quality Costs With Operations

An ABC/M methodology and system introduces rigor and is sufficiently codified and leveled for relevancy to remove this nagging shortcoming of benchmarking. The quality movement has been a loud advocate for measuring things rather than relying on opinions. It would make sense for measuring the financial implications of quality to become an increasingly larger part of the quality management domain. The addition of valid costing data will give the quality movement more legitimacy. ANSI/ISO/ASQ Q9004-2000 suggests financial measurement as an appropriate way to assess "the organization's performance in order to determine whether planned objectives have been achieved." [5] I hope there will be increased coordination among the quality, managerial accounting and operations systems.

REFERENCES
1. Gabe Pall, The Process-Centered Enterprise, St. Lucie Press, 2000, p. 40.
2. Mikel J. Harry, "A New Definition Aims to Connect Quality With Financial Performance," Quality Progress, January 2000, p. 65.
3. Victor E. Sower and Ross Quarles, "Cost of Quality Usage and Its Relationship to Quality Systems Maturity," working paper series, Center for Business and Economic Development, Sam Houston State University, November 2002, pp. 0-12.
4. J. M. Juran and Frank M. Gryna, Juran's Quality Handbook, fourth edition, McGraw-Hill Book Co., 1951.
5. ANSI/ISO/ASQ 9004-2000, Quality Management Systems—Guidelines for Performance Improvement, ASQ Quality Press, 2000.

GARY COKINS is a manager at the SAS Institute, Cary, NC. He earned an MBA from Northwestern University's Kellogg School of Management in 1974. Cokins is also a quality management speaker and author on advanced cost and performance management. He is a member of ASQ and its quality cost committee.

Cokins' latest book is Performance Management: Finding the Missing Pieces to Close the Intelligence Gap.

Borrowings: English Language and Word

Plan

Introduction
Part I. Lexico-Semantic Characteristics of Different Types of Borrowed Elements in English
1.1. The definition of the term "borrowed word"
1.2. The semantic features of types of borrowed elements in English
1.2.1. Translation loans
1.2.2. Semantic loans
1.2.3. Etymological doublets
1.2.4. Hybrids
1.2.5. International words
1.3. Assimilation of borrowings, its types and degrees
Part II. Textual Characteristics of Types of Borrowed Elements in Modern English
Conclusions
Resume
Bibliography
Electronic Sources

Introduction

Etymologically the vocabulary of the English language is far from being homogenous. It consists of two layers – the native stock of words and the borrowed stock of words. Numerically the borrowed stock of words is considerably larger than the native stock of words.

The topicality of the investigation lies in the fact that native words comprise only 30% of the total number of words in the English vocabulary, which is why the problem of borrowing is very prominent in linguistics and needs to be studied. The native words form the bulk of the most frequent words actually used in speech and writing. Besides, the native words have a wider range of lexical and grammatical valency, they are highly polysemantic and productive in forming word clusters and set expressions. The most effective way of borrowing is direct borrowing from another language as the result of contacts with other nations.

However, a word may also be borrowed indirectly, not from the source language but through another language. [12] When analyzing borrowed words one should distinguish between two terms – source of borrowing and origin of borrowing. The first term is applied to the language from which the word was immediately borrowed and the second to the language to which the word may be ultimately traced. The closer the two interacting languages are in structure, the easier it is for words of one language to penetrate into the other. [6] The subject matter of this Course Paper is to identify the types of borrowed elements in Modern English along with their lexico-semantic aspects. There are different approaches to classifying the borrowed stock of words. The borrowed stock of words may be classified according to the nature of the borrowing itself as borrowings proper, translation loans and semantic loans. The novelty of the problem arises from the necessity of a profound scientific investigation of the existing types of borrowed elements in Modern English.

The main aim of the Course Paper is to summarize and systematize the lexical and semantic peculiarities of borrowed elements in Modern English. The tendency of the English language to borrow extensively can be traced through the centuries. [26] Thus, one can confidently claim that borrowing is one of the most productive sources of enrichment of the English vocabulary.

Part I. Lexico-Semantic Characteristics of Different Types of Borrowed Elements in English

1.1. The definition of the term "borrowed word"

Borrowed words or loanwords are words taken from another language and modified according to the patterns of the receiving language.

In many cases a borrowed word, especially one borrowed long ago, is practically indistinguishable from a native word without a thorough etymological analysis. The number of borrowings in the vocabulary of a language and the role played by them are determined by the historical development of the nation speaking the language. A substantial amount of all English words have been borrowed from other languages. These words are usually called "loanwords", since they are not native English words. In Merriam-Webster's Online dictionary the word "loanword" is defined in this way: "a word taken from another language and at least partly naturalized." [17] Naturalized means in this case "to introduce into common use or into the vernacular". Sometimes it is done to fill a gap in vocabulary. When the Saxons borrowed Latin words for "butter", "plum", "beet", they did it because their own vocabularies lacked words for these new objects. For the same reason the words "potato" and "tomato" were borrowed by English from Spanish when these vegetables were first brought to England by the Spaniards. But there is also a great number of words which are borrowed for other reasons.

There may be a word (or even several words) which expresses some particular concept, so that there is no gap in the vocabulary and there does not seem to be any need for borrowing. However, a word is borrowed because it supplies a new shade of meaning or a different emotional colouring though it represents the same concept. This type of borrowing enlarges groups of synonyms and helps to enrich the expressive resources of the vocabulary. That is how the Latin "cordial" was added to the native "friendly", the French "desire" to "wish", the Latin "admire" and the French "adore" to "like" and "love". [29] Historical circumstances stimulate the borrowing process each time two nations come into close contact. The nature of the contact may be different. It may be wars, invasions or conquests, when foreign words are imposed upon the conquered nation. There are also periods of peace when the process of borrowing is due to trade and international cultural relations. When words migrate from one language into another they adjust themselves to their new environment and get adapted to the norms of the recipient language.

They undergo certain changes which gradually erase their foreign features, and, finally, they are assimilated. Sometimes the process of assimilation develops to the point when the foreign origin of a word is quite unrecognizable. It is difficult to believe now that such words as "dinner", "cat", "take", "cup" are not English by origin. Others, though well assimilated, still bear traces of their foreign background. "Distance" and "development", for instance, are identified as borrowings by their French suffixes, "skin" and "sky" by the Scandinavian initial "sk-", "police" and "regime" by the French stress on the last syllable. [8] Loanwords are often even more widely known than native words since their "borrowing served a certain purpose, for example to provide a name for a new invention". An example of such a borrowing is "pizza". Since the Italians were those who introduced pizzas in England, the English borrowed the word from them. The word "loanword" is in fact a type of loanword itself. The word comes from the German word "Lehnwort", which means precisely loanword. In this case, the meaning of the word has been borrowed into the English language, but instead of using the German words (Lehn + Wort), the English equivalents are used. This type of borrowing is called a calque. As this example shows us, there are different kinds of borrowings, and they can be divided into subgroups. These subgroups will be discussed later in the essay. [32]

The word "borrow" is often used in literature on loanwords to indicate that a language uses a word that originally comes from another language. In this paper the term will also be used, even though the word is somewhat misleading. The word "borrow" indicates that the item borrowed will be returned, and since this obviously is not the case, "borrow" may not be the best metaphor in this particular case. In order for loanwords to enter a language it is necessary that some people of the "borrowing" language are bilingual. These people have to be able to understand and to some extent speak the "lending" language so that words can be borrowed from that language. Borrowings enter a vernacular in a very natural way. The process starts when bilingual people of a certain language community begin using words from another language.

These people often choose to use certain foreign words because they feel that these words are more prestigious than their native ones. Borrowed words are adjusted in the three main areas of the new language system: the phonetic, the grammatical and the semantic. [27] The lasting nature of phonetic adaptation is best shown by comparing Norman French borrowings to later (Parisian) ones. The Norman borrowings have for a long time been fully adapted to the phonetic system of the English language: such words as "table", "plate", "courage", "chivalry" bear no phonetic traces of their French origin.

Some of the later (Parisian) borrowings, even the ones borrowed as early as the 15th century, still sound surprisingly French: “regime”, “valise”, “matinee”, “cafe”, “ballet”. In these cases phonetic adaptation is not completed. Grammatical adaptation consists in a complete change of the former paradigm of the borrowed word. If it is a noun, it is certain to adopt, sooner or later, a new system of declension; if it is a verb, it will be conjugated according to the rules of the recipient language. Yet, this is also a lasting process.

The Russian noun "пальто" (coat) was borrowed from French early in the 19th century and has not yet acquired the Russian system of declension. The same can be said about such English Renaissance borrowings as "datum" (pl. data), "phenomenon" (pl. phenomena), "criterion" (pl. criteria), whereas earlier Latin borrowings such as "cup", "plum", "street", "wall" were fully adapted to the grammatical system of the language long ago.

1.2. The semantic features of types of borrowed elements in English

By semantic adaptation is meant adjustment to the system of meanings of the vocabulary.

Sometimes a word may be borrowed "blindly", for no obvious reason: it is not wanted because there is no gap either in the vocabulary or in the group of synonyms which it could fill. Quite a number of such "accidental" borrowings are very soon rejected by the vocabulary and forgotten. But some "blindly" borrowed words managed to establish themselves due to the process of semantic adaptation. The adjective "large", for instance, was borrowed from French in the meaning of "wide". It was not actually wanted, because it fully coincided with the English adjective "wide" without adding any new shades or aspects to its meaning.

This could have led to its rejection. Yet, "large" managed to establish itself very firmly in the English vocabulary by semantic adjustment. It entered another synonymic group with the general meaning of "big in size". Still bearing some features of its former meaning, it is successfully competing with "big", having approached it very closely, both in frequency and meaning. [21] Semantic borrowings are such units when a new meaning of a unit existing in the language is borrowed. It can happen when we have two related languages which have common words with different meanings, e.g. there are semantic borrowings between Scandinavian and English, such as the meaning "to live" for the word "to dwell", which in Old English had the meaning "to wander". Semantic borrowing can appear when an English word is borrowed into some other language, develops there a new meaning and this new meaning is borrowed back into English, e.g. "brigade" was borrowed into Russian and formed the meaning "a working collective" ("бригада"). This meaning was borrowed back into English as a Russian borrowing. The same is true of the English word "pioneer". [18]

1.2.1. Translation loans

By translation loans we indicate borrowings of a special kind.

They are not taken into the vocabulary of another language more or less in the same phonemic shape in which they have been functioning in their own language, but undergo the process of translation. A translation loan is a form of borrowing from one language to another whereby the semantic components of a given term are literally translated into their equivalents in the borrowing language. English "superman", for example, is a loan translation from German "Übermensch". [1] It is quite obvious that it is only compound words (i.e. words of two or more stems) that can be subjected to such an operation, each stem being translated separately: "masterpiece" (from Germ. "Meisterstück"), "wonder child" (from Germ. "Wunderkind"), "first dancer" (from Ital. "prima-ballerina"). Translation loans (calques) are words and expressions formed from the material already existing in the English language but according to patterns taken from another language by way of literal word-for-word or morpheme-for-morpheme translation, e.g. "chain smoker". [14] Calque entails taking an expression, breaking it down into individual elements and translating each element into the target language word for word. For example, the German word "Alleinvertretungsanspruch" can be calqued to "single-representation-claim", but a proper translation would result in "Exclusive Mandate".

Word-by-word translations usually have comic value, but can be a means to save as much of the original style as possible, especially when the source text is ambiguous, or undecipherable to the translator.

1.2.2. Semantic loans

A semantic loan is a process of borrowing semantic meaning (rather than lexical items) from another language, very similar to the formation of calques. In this case, however, the complete word in the borrowing language already exists; the change is that its meaning is extended to include another meaning its existing translation has in the lending language.

Calques, loanwords and semantic loans are often grouped roughly under the phrase "borrowing". Semantic loans often occur when two languages are in close contact. A semantic loan is a borrowing where "the meaning of a foreign word is transferred onto an existing native word". An example of a semantic loan is the word "God". The word is a native English word and existed in Old English as well, but the Christian meaning it has today was borrowed from the Romans and their religion when they came to the British Isles.

One example is the German semantic loan "realisieren". The English verb "to realise" has more than one meaning: it means both "to make something happen/come true" and "to become aware of something". The German verb "realisieren" originally only meant the former: to make something real. However, German later borrowed the other meaning of "to realise" from English, and today, according to Duden [1], it also means "to become aware of something" (this meaning is still considered by many to be an Anglicism).

The word "realisieren" itself already existed before the borrowing took place; the only thing borrowed was this second meaning. (Compare this with a calque, such as antibody, from the German Antikörper, where the word "antibody" did not exist in English before it was borrowed.) A similar example is the German semantic loan überziehen, which meant only to draw something across, before it took on the additional borrowed meaning of its literal English translation overdraw in the financial sense. Semantic loans may be adopted by many different languages: Hebrew kokhav, Arabic nagm, Russian zvezda, Polish gwiazda, Finnish tähti and Vietnamese sao all originally meant "star" in the astronomical sense, and then went on to adopt the sememe "star", as in a famous pop or film artist, from English. [22]

1.2.3. Etymological Doublets

The words originating from the same etymological source, but differing in phonemic shape and in meaning, are called etymological doublets. Doublets or etymological twins (or possibly triplets, etc.) have the same etymological root but have entered the language through different routes.

Because the relationship between words that have the same root and the same meaning is fairly obvious, the term is mostly used to characterize pairs of words that have diverged in meaning, at times making their shared root a point of irony. [23] Some of these pairs consist of a native word and a borrowed word: "shrew", n. (E.) – "screw", n. (Sc.). Others are represented by two borrowings from different languages: "canal" (Lat.) – "channel" (Fr.), "captain" (Lat.) – "chieftain" (Fr.). Still others were borrowed from the same language twice, but in different periods: "travel" (Norm. Fr.) – "travail" (Par. Fr.), "cavalry" (Norm. Fr.) – "chivalry" (Par. Fr.), "gaol" (Norm. Fr.) – "jail" (Par. Fr.).

A doublet may also consist of a shortened word and the one from which it was derived: "history" – "story", "fantasy" – "fancy", "defence" – "fence", "shadow" – "shade". For example, English pyre and fire are also doublets. Subtle differences in the resulting modern words contribute to the richness of the English language, as indicated by the doublets frail and fragile (which share the Latin root fragilis): one might refer to a fragile tea cup and a frail old woman, but a frail tea cup and a fragile old woman are subtly different and possibly confusing descriptions. [19] Another example of nearly synonymous doublets is aperture and overture (the commonality behind the meanings is "opening"), but doublets may develop divergent meanings, such as the opposite words host and guest, which come from the same PIE root and occur as a doublet in Old French hospes, before having been borrowed into English. Doublets also vary with respect to how far their forms have diverged. For example, the resemblance between levy and levee is obvious, whereas the connection between sovereign and soprano is harder to guess synchronically from the forms of the words alone.

Etymological twins are usually a result of chronologically separate borrowing from a source language. In the case of English, this usually means once from French during the Norman invasion, and again later, after the word had evolved. An example of this is warranty and guarantee. Another possibility is borrowing from both a language and its daughter language (usually Latin and some other Romance language). Words which can be traced back to Indo-European languages, such as the Romance “beef” and the Germanic “cow”, in many cases actually do share the same proto-Indo-European root.

The forward linguistic path also reflects cultural and historical transactions; often the name of an animal comes from Germanic while the name of its cooked meat comes from Romance. Since English is unusual in that it borrowed heavily from two distinct branches of the same linguistic family tree, it has a relatively high number of this latter type of etymological twin. [28] The changes a loan word has had to undergo depending on the date of its penetration are the main cause for the existence of the so-called etymological doublets.

They differ to a certain degree in form, meaning and current usage. Two words at present slightly differentiated in meaning may have originally been dialectal variants of the same word. Thus we find in doublets traces of Old English dialects. Examples are whole (in the old sense of 'healthy' or 'free from disease') and hale. The latter has survived in its original meaning and is preserved in the phrase hale and hearty. Both come from OE hāl: the one by the normal development of OE ā into ō, the other from a northern dialect in which this modification did not take place.

Similarly there are the doublets raid and road; their relationship remains clear in the term inroad, which means 'a hostile incursion', 'a raid'. The verbs drag and draw both come from OE dragan. [20] The words shirt, shriek, share, shabby come down from Old English, whereas their respective doublets skirt, screech, scar and scabby are etymologically cognate Scandinavian borrowings. There are also etymological doublets which were borrowed from the same language during different historical periods. Sometimes etymological doublets are the result of borrowing different grammatical forms of the same word, e.g. the comparative degree of Latin "super" was "superior", which was borrowed into English with the meaning "high in some quality or rank". The superlative degree (Latin "supremus") is in English "supreme", with the meaning "outstanding", "prominent". So "superior" and "supreme" are etymological doublets. [16]

1.2.4. Hybrids

A hybrid word is a word which etymologically has one part derived from one language and another part derived from a different language. The most common form of hybrid word in English is one which combines etymologically Latin and Greek parts.

Since many prefixes and suffixes in English are of Latin or Greek etymology, it is straightforward to add a prefix or suffix from one language to an English word that comes from a different language, thus creating a hybrid word. Such etymologically disparate mixing is considered by some to be bad form. Others, however, argue that, since both (or all) parts already exist in the English lexicon, such mixing is merely the conflation of two (or more) English morphemes in order to create an English neologism (new word), and so is appropriate. [25] Examples include:

Automobile – a wheeled passenger vehicle, from Greek auto “self-” and Latin mobilis “moveable”
Homosexual – from Greek homos meaning “same” and Latin sexus meaning “gender” (this example is remarked on in Tom Stoppard’s The Invention of Love, with A. E. Housman’s character saying “Homosexuality? What barbarity! It’s half Greek and half Latin!”)
Hyperactive – from Greek hyper meaning “over” and Latin activus
Hypercorrection – from Greek hyper meaning “over” and Latin correctio
Hyperextension – from Greek hyper meaning “over” and Latin extensio meaning “stretching out”
Liposuction – from Greek lipos meaning “fat” and Latin suctio meaning “sucking”
Minneapolis – from Dakota mni meaning “water” and Greek polis meaning “city”
Monoculture – from Greek monos meaning “one, single” and Latin cultura
Monolingual – from Greek monos meaning “one” and Latin lingua meaning “tongue”; the non-hybrid word is unilingual
Mormon – alleged by Joseph Smith to come from English “more” and Reformed Egyptian mon meaning “good”
Neuroscience – from Greek neuron, meaning “sinew”, and Latin sciens, meaning “having knowledge”
Neurotransmitter – from Greek neuron, meaning “sinew”, and Latin trans meaning “across” and mittere meaning “to send”
Nonagon – from Latin nonus meaning “ninth” and Greek gonon meaning “angle”; the non-hybrid word is enneagon
Pandeism – from Greek pan meaning “all” and Latin deus meaning “God”; the non-hybrid word is pantheism
Sociology – from Latin socius, “comrade”, and Greek logos meaning “word”, “reason”, “discourse”
Television – from Greek tele meaning “far” and Latin visio, from videre meaning “to see”

English further abounds with hybrid compounds, i.e. words made up from different languages. Many of these are due to the use of prefixes and suffixes. Thus in a-round, the prefix is English but round is French; so also in be-cause, fore-front, out-cry, over-power, unable. In aim-less, the suffix is English, but aim is French; so also in duke-dom, false-hood, court-ship, dainti-ness, plenti-ful, fool-ish, fairy-like, trouble-some, genial-ly, &c. But besides these we have perfect compounds, such as these: beef-eater, i.e. eater of beef, where eater is English and beef is French; so also black-guard, life-guard, salt-cellar, smallage. On the other hand, French is followed by English in eyelet-hole, heir-loom, hobby-horse, kerb-stone, scape-goat.” An initial wave of hybridization took place in the early Middle Ages between Anglo-Saxon and Danish that included, among many other items, that apparently most English of words, the. A second process began after the Norman Conquest in 1066, when English mixed with French, and began to draw, both through French as well as directly, on Latin and Greek for a wide range of cultural and technical vocabulary. Indeed, rather than being an exception, such hybridization is a normal and even at times predictable process, and in the twentieth century a range of such flows of material has been commonplace. [24]

1.2.3.4.5. International Words

It is often the case that a word is borrowed by several languages, not just by one. Such words usually convey concepts which are significant in the field of communication.

Many of them are of Latin and Greek origin. In linguistics, an internationalism or international word is a loanword that occurs in several languages with the same or at least similar meaning and etymology. These words exist in “several different languages as a result of simultaneous or successive borrowings from the ultimate source”. Pronunciation and orthography are similar so that the word is understandable between the different languages. [13] Most names of sciences are international (e. g. philosophy, mathematics, physics, chemistry, biology, medicine, linguistics, lexicology).

There are also numerous terms of art in this group: music, theatre, drama, tragedy, comedy, artist, primadonna, etc. ; and the sports terms: football, volley-ball, baseball, hockey, cricket, rugby, tennis, golf, etc. It is quite natural that political terms frequently occur in the international group of borrowings: politics, policy, revolution, progress, democracy, communism, anti-militarism. 20th century scientific and technological advances brought a great number of new international words: atomic, antibiotic, radio, television, sputnik (a Russian borrowing).

Fruits and foodstuffs imported from exotic countries often transport their names too and become international: coffee, cocoa, chocolate, banana, mango, avocado, grapefruit. The similarity of such words as the English “son”, the German “Sohn” and the Russian «сын» should not lead one to the quite false conclusion that they are international words. They represent the Indo-European group of the native element in each respective language and are cognates, i.e. words of the same etymological root, and not borrowings. [7] It is debated how many languages are required for a word to count as an internationalism.

The term is uncommon in English linguistics, although English has contributed a considerable number of words to world languages, e.g. the sport terms football, baseball, cricket and golf. So-called internationalisms (international words, or the initial or final parts and roots of words) are widely used in different European languages. Mainly these words consist of word-elements of Latin and/or Greek origin and are used with similar spelling and pronunciation in different European languages.

Terms built on the basis of Latin and Greek elements quickly spread into other languages and become internationally intelligible. These are such terms as appropriation, communication, comparator, control, descriptor, examination, identification, inspection, regulation, technique, technology, and many others. Such terms are either borrowed from Latin and Greek or built in different modern languages on the basis of Latin and/or Greek word-elements.

Due to the same or similar spelling and/or pronunciation, these words form the part of the vocabulary which translators usually do not translate. As a result, such words are transferred from one language into another without translation, on the assumption that they have the same meaning. This creates problems if the meaning differs: such “internationalisms” then become the so-called false friends of translators. [33] An understanding of so-called “internationalisms” can be found in the definition developed as a result of the investigation of this group of borrowings.

For now it is enough to recall only some aspects of this definition: for a word to qualify as an “internationalism”, it must be used in different languages (or different groups of languages) with the same or similar spelling and pronunciation and, in addition, with the same or a close meaning. The meaning of a word, especially when it functions as a term, is very relevant in terminology. In fact, all the requirements placed on a scientifically motivated term are based on the semantic aspect.

The specific role of the semantic aspect in terminology is underlined by a number of terminologists. The necessity of unity between concepts and the terms we spell and pronounce is one of the characteristic features of terminology as such. But if we compare the equivalents given in ISO standards, for instance on energetics, in English, German, Russian and other languages, we will sometimes see that international terms given as equivalents (for expressing the same concept) do not have the same meaning:

English         – German          – Russian
telecontrol     – Fernwirken      – telemehanika
telemonitoring  – Fernüberwachen  – telekontrolj
teleindication  – Fernanzeigen    – telesignalizacija
telecommand     – Fernsteuern     – teleupravlenije
teleinstruction – Fernanweisen    – telekomandovanije

Taking into account that internationalisms based on Latin and Greek word-elements are widely used in EU legislative acts and ISO standards, and that the semantic discrepancies of such internationalisms cause serious misunderstandings among users of legislative acts, one of the relevant tasks for today’s linguists is to find ways to bring the semantics of such words in different languages closer together. This task belongs to the interlingual level of terminology. [11] One of the ways of bringing the semantics of internationalisms built on the basis of Latin and Greek closer together is to respect the meaning of every word-element in the source language. Let us compare the meanings of the elements bi- (from Latin bi ‘two’), tri- (from Latin tres ‘three’) and multi- (from Latin multus ‘much, many’) given in the Oxford dictionary (Oxford 1995):

bi-:
  biannual – ‘occurring twice a year’
  biaxial – ‘having two axes’
  bicycle – ‘a vehicle with two wheels’
  bikini – ‘a two-piece swimsuit for women’
  bilingual – ‘able to speak two languages’
tri-:
  triangle – ‘a plane figure with three sides and angles’
  triathlon – ‘consisting of three different events’
  triaxial – ‘having three axes’
  tricycle – ‘a vehicle having three wheels’
  trilingual – ‘able to speak three languages’
multi-:
  multiaxial – ‘of or involving several axes’
  multicellular – ‘having many cells’
  multicolour – ‘of many colours’
  multilateral – ‘in which three or more parties participate’
  multilingual – ‘using several languages’

As we can see from these examples, the elements bi-, tri- and multi- are used in different terms according to their meaning in Latin as the source language of these elements.

Consequently, if we use the term bilingual or bilingualist, we can attribute it only to an individual who is able to communicate in two languages, not in three or more. Such a person would instead be a multilingual person, or multilingualist. The semantic “creativity” sometimes applied in language practice by some lawyers, clerks or other language users can only be qualified as a deviation which contradicts the notional content of the word and may create misunderstandings. A serious problem, not only linguistic but political as well, is the semantic discrepancy between the same international term in English and Latvian in the field of politics. These are such terms as nationalism and nationalist, occupation and occupant, national minority and ethnic minority, integration and assimilation, etc.

These terms are internationalisms at the level of spelling and pronunciation but differ at the semantic level. [15] On the one hand, the semantic difference within one and the same internationalism has objective reasons: 1) the polysemy of a word or word-element in the source language; 2) the specific historical development of each national language. On the other hand, such internationalisms are a factor which results in contradictions. The choice of the more appropriate form is made on the basis of a semantic investigation of each word-element in the source language, using the appropriate manuals and the appropriate structural-semantic models of internationalisms.

The following are identified as international models of terms in English, German, Russian and Latvian (examples are given in English only):
1) derivatives with the postfixal element -logy: biology, geology, immunology, lexicology, philology;
2) derivatives with the postfixal element -graphy: geography, orthography;
3) derivatives with the postfixal element -sphere: atmosphere, lithosphere, stratosphere;
4) derivatives with the postfixal element -eme.
Some models are current in German, Russian and Latvian, but not in English (examples are given in German only):
1) derivatives with the postfixal element -thek: Bibliothek, Diskothek;
2) derivatives with the postfixal element -ur: Doktorantur.

These models are still active for deriving new terms from Latin and Greek word-elements. There are some groups of internationalisms whose word-elements have a common origin but a different structure, for instance such elements as dermo- and dermato- from Greek derma (dermatos) ‘skin’, or ferri- and ferro- from Latin ferrum ‘iron’. Terminology practice shows that there is a tendency to fix each of these forms for expressing different content: the element derm[a]- is used to express the content of ‘that which belongs to, or is like, a skin’ (dermal), while dermato- is used to express the content of ‘that which refers to skin diseases’ (dermatology).

In chemistry the different forms ferri- and ferro- are used to express compounds with different iron content (ferrimagnetism; ferroelectricity, ferromagnetism). [9] The meaning of such word-element variants is not the same in different languages. Therefore it is a very difficult task to harmonize the semantics of such elements on the international scale. It is necessary to establish an appropriate meaning system for such elements first of all in a particular national language. The considerations expounded here do not mean that all internationalisms in a number of European languages must be revised and unified. The main idea is that common structural-semantic models could help us achieve unambiguous communication. Therefore it is recommended to fix such models and use them, if necessary, for new derivations.

In cases where the meaning of the same internationalism differs, we can try to bring it closer to the appropriate original meaning. [30] The multilingual investigation of international terminology shows that Latin and Greek word-elements are still vital in new structural-semantic models. These models may exert a positive influence on unambiguous communication if they are interlingually coordinated, and they are also a good aid in translating EU regulations and ISO standards. If possible, a special board or committee could be established whose task would be to provide unambiguous international term-models with coordinated meanings. A lot of such models are already in use in many languages.

1.3. Assimilation of borrowings, its types and degrees

The degree of assimilation of borrowings depends on the following factors:
a) from what group of languages the word was borrowed: if the word belongs to the same group of languages as the borrowing language, it is assimilated more easily;
b) in what way the word is borrowed, orally or in the written form: words borrowed orally are assimilated more quickly;
c) how often the borrowing is used in the language: the greater the frequency of its usage, the quicker it is assimilated;
d) how long the word has lived in the language: the longer it lives, the more assimilated it is.
Accordingly, borrowings are subdivided into completely assimilated, partly assimilated and non-assimilated (barbarisms). [3] Completely assimilated borrowings are not felt as foreign words in the language, cf. the French word «sport» and the native word «start». Completely assimilated verbs belong to the regular verbs, e.g. correct – corrected. Completely assimilated nouns form their plural by means of the s-inflexion, e.g. gate – gates.

In completely assimilated French words the stress has been shifted from the last syllable to the last but one. Semantic assimilation of borrowed words depends on the words already existing in the borrowing language; as a rule, a borrowed word does not bring all its meanings into the borrowing language if it is polysemantic, e.g. the Russian borrowing «sputnik» is used in English in only one of its meanings. [10] Partly assimilated borrowings are subdivided into the following groups:
a) borrowings non-assimilated semantically, because they denote objects and notions peculiar to the country from whose language they were borrowed, e.g. sari, sombrero, taiga, kvass, etc.;
b) borrowings non-assimilated grammatically, e.g. nouns borrowed from Latin and Greek which retain their plural forms (bacillus – bacilli, phenomenon – phenomena, datum – data, genius – genii, etc.);
c) borrowings non-assimilated phonetically. Here belong words with the initial sounds /v/ and /z/, e.g. voice, zero. In native words these voiced consonants are used only in the intervocalic position as allophones of the sounds /f/ and /s/ (loss – lose, life – live). Some Scandinavian borrowings have consonants and combinations of consonants which were not palatalized, e.g. /sk/ in the words sky, skate, ski, etc. (in native words we have the palatalized sounds denoted by the digraph «sh», e.g. shirt); the sounds /k/ and /g/ before front vowels are not palatalized, e.g. girl, get, give, kid, kill, kettle. In native words we have palatalization, e.g. German, child. Some French borrowings have retained their stress on the last syllable, e.g. police, cartoon. Some French borrowings retain special combinations of sounds, e.g. /ɑ:ʒ/ in the words camouflage, bourgeois; some of them retain the combination of sounds /wɑ:/ in the words memoir, boulevard;
d) borrowings can be partly assimilated graphically, e.g. in Greek borrowings «y» can be spelled in the middle of the word (symbol, synonym), «ph» denotes the sound /f/ (phoneme, morpheme), «ch» denotes the sound /k/ (chemistry, chaos), and «ps» denotes the sound /s/ (psychology).

Latin borrowings retain their polysyllabic structure and have double consonants; as a rule, the final consonant of the prefix is assimilated with the initial consonant of the stem (accompany, affirmative). French borrowings which came into English after 1650 retain their spelling, e.g. the consonants «p», «t», «s» are not pronounced at the end of the word (buffet, coup, debris). The specifically French combination of letters «eau» /ou/ can be found in the borrowings beau, chateau, trousseau. Some digraphs retain their French pronunciation: «ch» is pronounced as /ʃ/, e.g. chic, parachute; «qu» is pronounced as /k/, e.g. bouquet; «ou» is pronounced as /u:/, e.g. rouge. Some letters retain their French pronunciation: «i» is pronounced as /i:/, e.g. chic, machine; «g» is pronounced as /ʒ/, e.g. rouge. [31] Non-assimilated borrowings (barbarisms) are borrowings which are used by Englishmen rather seldom and are non-assimilated, e.g. addio (Italian), tete-a-tete (French), dolce vita (Italian), duende (Spanish), un homme à femmes (French), gonzo (Italian), etc.

Part II: Textual Characteristics of Types of Borrowed Elements in Modern English

Most linguists categorize borrowings in this way: loanwords are words that keep their meaning and phonetic shape when they find their way into another language.

The word “pizza”, for example, which has its origin in Italian, has the same “shape” – in other words, it is pronounced and written in the same way in both English and Italian – which makes it a “real” loanword. It is also important that the word is inflected in the same way; the plural forms therefore also have to be identical in both languages. A calque or “loan translation” is a “one-to-one translation of a foreign model”. An example of a calque is the English word “embody”, which has its origin in the Latin equivalent “incorporare”. The word “loanword” is itself a calque. The names of the days of the week are further examples of loan translations. They were borrowed from Latin around 400 A.D.

All Germanic peoples, except the Goths, used the Germanic equivalents of the Roman gods when they named the days of the week, and the names are therefore from Germanic mythology. [4] The word “calque” can also stand for a “loan transfer”, which is almost the same as a loan translation, the only difference being that “at least one part is semantically different from the model”. An example of such a calque is the German word “Wolkenkratzer”, which literally means “cloud-scraper”. Here “cloud” is used instead of “sky”, while the word “scraper” is translated directly. A loan creation is another form of borrowing. It is a rather complicated type of borrowing, since a word, or the meaning of a word, is not actually borrowed.

If a new word is created in a language under some degree of influence from other languages, even if only a small one, it is called a loan creation. Usually, words that refer to exotic ideas, concepts or objects are borrowed. An example of this is how the names of animals that are not native to Great Britain are often loanwords in English: the name of the animal is borrowed from the language spoken in the country the animal originally comes from or lives in. When we examine loanwords in different languages, we find that most of these borrowings are nouns. Nouns, and lexical words in general, are borrowed more frequently than grammatical words.

This can be explained by the fact that a major reason for borrowing lexical items is “to extend the referential potential of a language. Since reference is established primarily through nouns, these are the elements borrowed most easily”. [5] There are certain structural features which enable us to identify some words as borrowings and even to determine the source language. We have already established that initial sk- usually indicates Scandinavian origin. You can also recognise words of Latin and French origin by certain suffixes, prefixes or endings.

Conclusions

This paper has shown that linguistic borrowing is an old way of acquiring new vocabulary, not a new phenomenon of our globalized world.

People of different cultures have always interacted with each other, and there has always been an exchange of lexis due to this interaction. Loanwords enrich a language: the vocabulary gets larger and each word thereby acquires a more specific and subtle meaning, and this should be kept in mind before one simply criticizes and dismisses borrowings. The research carried out for this course paper showed that the actual process of borrowing is complex and involves many usage events. Conventionalization is a gradual process in which a word progressively permeates a larger and larger speech community. As part of its becoming more familiar to more people, with conventionalization a newly borrowed word gradually adopts sound and other characteristics of the borrowing language.

In time, people in the borrowing community do not perceive the word as a loanword at all. Generally, the longer a borrowed word has been in the language, and the more frequently it is used, the more it resembles the native words of the language. English has gone through many periods in which large numbers of words from a particular language were borrowed. These periods coincide with times of major cultural contact between English speakers and those speaking other languages. The waves of borrowing during periods of especially strong cultural contact are not sharply delimited and can overlap. For example, the Norse influence on English began already in the 8th century A.D. and continued strongly well after the Norman Conquest brought a large influx of Norman French to the language. [2] It is part of the cultural history of English speakers that they have always adopted loanwords from the languages of whatever cultures they have come in contact with.
Bibliography

1. Arnold I. V. The English Word. M., 1973 (1986), Chapter XIV, ‘Native words versus loan words’.
2. Arnold I. V. The English Word. M., 1973, Chapter XIII, pp. 236-247.
3. Bolton W. F. A Living Language: The History and Structure of English. Random House, 1982.
4. Crystal D. English as a Global Language. Cambridge: Cambridge University Press, 1997.
5. Francis W. N. The Structure of American English. New York, 1998.
6. Ginsburg R. S. et al. A Course in Modern English Lexicology. M., 1979, Chapter VI, ‘Etymological survey of the English Word Stock’, pp. 160-175, 200-209.
7. Gordon E. M., Krylova I. P. A Grammar of Present-day English. Moscow, 1971.
8. Hornby A. S. Oxford Advanced Learner’s Dictionary of Current English. Oxford: Oxford University Press, 1989.
9. Klimenko A. P., Tokareva I. I. Varieties of English. Mn., 2002, pp. 55-133.
10. Lefevere A. ‘Translation: Its Genealogy in the West’, in Translation, History and Culture, ed.

Susan Bassnett and Andre Lefevere (London and New York: Pinter Publishers, 1990), 14.
11. Lescheva L. M. Words in English. Mn., 2001, Chapter 8, pp. 123-136.
12. McCrum R. The Story of English, 1992.
13. Metcalf A. Predicting New Words. Houghton Mifflin, 2002.
14. Murray J. A. H., Bradley H., Craigie Sir W. A., Onions C. T. The Oxford English Dictionary. A corrected re-issue of A New English Dictionary on Historical Principles. London: Oxford University Press, 1933.
15. Pyles Th., Algeo J. The Origins and Development of the English Language. 1982, Chapter 12, pp. 292-316.
16. Sereda L. et al. Introduction to the History and Varieties of the English Language. Bialystok, 2003, Chapter X, pp. 81-101.
17. Strang B. Modern English Structure. London, 1974.
18. McArthur T. ‘English World-Wide in the Twentieth Century’, in The Oxford History of English, ed. by Lynda Mugglestone. Oxford: Oxford University Press, 2006.
19. Skeat W. Principles of English Etymology. Clarendon Press, 1892.
20. ??????????? ?. ?. ??????? ??????????? ?????. – ??????, 2001, pp. 296-328 (Development of English vocabulary from the 12th to the 19th century).
21. ??????????? ?. ?. ???? ?????? ?? ??????? ??????????? ?????. – ??????, 1972, pp. 130-150.
22. ??????? ?. ?. ????? ???????????? ??????????? ????? ? ???. – M., 1963, pp. 7-12, 84-89.

Electronic Sources
23. http://dooku.miun.e/engelska/englishB/languagesruvey/Compendium/History%20of20kEnglish%20compendium.htm
24. http://www.etymonline.com/abbr.php
25. http://www.orbilat.com/Influences_of_Romance/English/RIFL-English-Periodization.html
26. http://french.about.com/library/bl-frenchinenglish.htm
27. http://en.wikipedia.org/wiki/Language_contact
28. http://www.wordorigins.org/histeng.htm
29. http://www.anglistik.uni-kiel.de/chairs/LingHist/English-History.pdf
30. http://www.ruf.rice.edu/kemmer/words/loanwords.html
31. http://en.wikipedia.org/wiki/Etymology
32. http://en.wikipedia.org/wiki/Loanword
33. http://en.wikipedia.org/wiki/Old_English_language

Leadership Research Essay

Introduction

Leadership is a process of influencing the activities of a particular group of people with the aim of attaining certain stipulated goals. In defining leadership there is a need to consider the particular group, the common goals and the duties that are allocated to specific members of the group depending on their abilities (Fiedler 1976). Leadership therefore cannot successfully occur unless members of the group are given different considerations in terms of personality, traits and responsibilities.

In considering leadership, it is important to look at the leader, the group or organization they are leading, the members as individuals and the situation; these are variables for interaction in the leadership processes, which are paramount for the success of the whole process. A leader is in essence the person who influences a group of people with the aim of attaining specific goals. Therefore, leaders in a group are separated from the rest of the group members by the extent to which they exert influence towards activities in the organization.

It is to this effect that this essay will consider leadership using psychodynamic theory, in an effort to set out a plan to improve my motivation and leadership skills.

Various Models/Approaches of Leadership

According to the psychodynamic approach, leaders are aware of their own personality and that of their followers. This rests on the assumption that the personalities of individuals are deep-rooted and that very little can be done to change them (Northouse, 2007).

The personality of an individual is therefore important in establishing an individual’s leadership potential. The approach in a sense serves to strengthen the relationship between the leader and the followers, in that the leader’s understanding of the followers’ personalities makes the leader accommodating of those various personalities. The leader is thus in a position to determine the most suitable work for their followers based on their preferences in terms of making decisions and structuring work efforts (Northouse, 2007).

The psychodynamic approach to leadership is important because it makes leaders aware of their own personality and that of their followers. This is advantageous to leaders since it enables them to allocate tasks more easily. For instance, if a follower is quick-tempered, then the leader will avoid giving that follower a task that requires direct involvement with customers who might sometimes be difficult to handle. There are other approaches that can be used to improve individuals’ skills; these include the trait approach and the style approach.

On the one hand, the trait approach emphasizes characteristics as important to leadership status, while the style approach considers certain behaviors as indicators of leadership (Northouse, 2007). Situational approaches focus on matching leadership styles and behaviors to the needs of the subordinates, unlike the psychodynamic approach, which considers the personality types of individuals. It is to this effect that leaders can use elements of these approaches to improve their skills.

In defining leadership, the leadership-situation variable in the Contingency Model is essential. This variable entails the need to control the situation and make it favorable for solid leadership. In dealing with this concept, several subscales are considered, including: the degree to which the leader is or feels accepted and supported by the group, the clarity of the task in terms of identification of the goals, and the ability of the leader to reward and punish accordingly.

Good leadership can be reflected by the Contingency Model in the sense that task-motivated leaders manifest their best performance when situational control is high rather than when it is low (Fiedler 1976). On the other hand, relationship-motivated leaders equally tend to perform best when the situation is under their control. The trait approach to leadership is a theory that stemmed from the late nineteenth and early twentieth centuries, when it was believed that leaders were born and that leading was a man’s innate ability (Fiedler 1976).

The result, therefore, was the belief that an individual destined to lead had inborn traits or characteristics that enabled him to lead. Various researchers took up the task of finding out the truth behind this belief, but no conclusive evidence was found to support it.

Leadership Theory that Describes Me as a Leader

The leadership model that best describes me as a leader is the psychodynamic approach. The basic concept that underlies this approach is personality, which entails the consistent way an individual does things.

Tendencies and qualities are equally considered; in this case my style of leadership is characterized by independence, creativity and spontaneous reaction (Gastil, 1994). The concept of personality therefore comes into play in the sense that I am able to decide accurately on very crucial issues spontaneously. The psychodynamic approach to leadership best describes my way of leading because I usually consider my followers’ personalities before allocating tasks to them. In this case, each follower has to exhibit skills relevant to the task that they are allocated.

I also consider it difficult to change individuals’ personalities and therefore encourage them to perfect the skills they have rather than to change them. This is an element of the psychodynamic approach to leadership. The concept of the predictability of an individual is an essential psychodynamic element in my way of leading, in the sense that the theory holds that human behaviors are predictable and understandable (Northouse, 2007). In this sense I easily predict certain individuals’ behaviors based on their personality, and this makes my leadership style best described by the psychodynamic approach.

This can be illustrated by Jung’s way of classifying personality: people’s personalities can be classified by understanding that human behavior is predictable and understandable, that people have preferences in how they feel and think, and that these preferences become the basis of how they work and play their specific roles (Northouse, 2007).

My Distinguishing Leadership Traits

One of my most distinguishing leadership traits is the ability to instill a sense of passion in my team and hence leave in the team members a conviction and the will to move on.

I also have the quality of listening to my team; in this case I listen carefully and consider various options to the issues raised before giving feedback. In essence, as a good leader, I involve everyone, give everyone responsibility according to their identified abilities and make everybody accountable; thus I am responsible for my actions and the actions of my followers. Another trait that distinguishes my leadership skills is the confidence I manifest. I communicate to my followers with a lot of confidence and endeavor to develop them, display trust and dedication.

In addition, as a leader I lead by example through my behavior and thus I am a role model to some of my followers. Moreover, I promote innovation by nurturing creative attributes in my team. I am equally responsible enough to be decisive; I am able to make rational decisions under pressure and at short notice. These outstanding traits are reflected in the positive results I get from my followers.

As a follower, what leadership approach do you prefer? (Situational leadership: high supportive and low directive behavior)

As a follower, I prefer the situational leadership approach.

This is a contingency theory that focuses on the followers, in the sense that the success or failure of leadership lies in the followers’ ability to either accept or reject a leader (Mullins, 2000). In this case the actions of the followers are given prominence. I prefer this approach as a follower because it considers the extent to which I am willing to accomplish specific tasks. Equally important is the fact that this approach enables leaders to adjust their leadership styles to the followers’ readiness level, and this gives the followers an easier path in handling tasks (Mullins, 2000).

It therefore provides for high support and low direction from leaders; it is follower-friendly, and this is why I prefer it.

Goals and Action Plan for Improvement

One of the goals I have set to improve my leadership and motivational skills is to ensure that I empower my followers so that they are motivated in the course of my leadership. I will do this by improving my own skills to the point where I can train my followers in specific skills.

I will also improve my motivational skills by trying to identify the exact incentives that motivate my followers; by doing this, my followers will be able to appreciate the incentives I give them and thus be motivated. I will ensure my followers are empowered and motivated by establishing teams that will compete internally. Competition within the teams will be rewarded with the identified incentives, and this will ensure the competition’s sustainability. Within each team various talents and abilities will be identified, and thus each team will be blended with numerous skills.

In implementing the leadership skills that I will use to develop the teams, I will use the psychodynamic approach, because it will allow me to mix individuals with various personalities to come up with strong teams. On the other hand, in implementing my motivational strategy I will use the Contingency Model, taking into account that task-motivated leaders manifest their best performance when situational control is high rather than low, while relationship-motivated leaders equally tend to perform best when the situation is under their control (Fiedler 1976).

Timeline

My leadership empowerment goal will take one month to accomplish. The most challenging task is to put individuals into groups that will enhance competition. The motivational self-improvement goal, on the other hand, will be implemented over two months, that is, once the results from the groups start to be seen, so that they can be reinforced by rewards or incentives. However, these self-improvement goals will be continuous, in the sense that empowering and motivating my followers will be done consistently and continuously as the need arises, since we are living in an ever-changing world.

References

Baldoni, J. (2005). Great Motivation Secrets of Great Leaders. McGraw-Hill.
Fiedler, F. (1976). ‘Situational Control and a Dynamic Theory of Leadership’, in B. King, S. Streufert and F. E. Fiedler (eds.), Managerial Control and Organizational Democracy. Washington: Winston & Sons, 107-31.
Gastil, J. (1994). ‘A Definition and Illustration of Democratic Leadership.’ Human Relations, 953-75.
Mullins, L. (2000). Management and Organizational Behavior. Berkshire: Penguin.
Northouse, P. G. (2007). Leadership: Theory and Practice (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.

Depreciation at Delta & Singapore Airlines

Financial Accounting
Depreciation at Delta Airlines & Singapore Airlines (Solution to Case #2)
24th November, 2009

1. Calculate the annual depreciation expense that Delta and Singapore would record for each $100 gross value of aircraft.

a. Delta:
i. Prior to July 1, 1986, Delta's flight equipment was depreciated on a straight-line basis over 10 years to a salvage value of 10%.
Depreciation expense = (cost of asset – salvage value) / number of years = (100.00 – 10.00) / 10 = $9.00 for every $100 of flight equipment.
ii. From July 1, 1986 to March 31, 1993, depreciation was straight-line over 15 years to a salvage value of 10%.
Depreciation expense = (100.00 – 10.00) / 15 = $6.00 for every $100 of flight equipment.
iii. After April 1, 1993, depreciation was straight-line over 20 years to a salvage value of 5%.
Depreciation expense = (100.00 – 5.00) / 20 = $4.75 for every $100 of flight equipment.

b. Singapore:
i. Prior to April 1, 1989, depreciation was over 8 years to a salvage value of 10%.
Depreciation expense = (100.00 – 10.00) / 8 = $11.25 for every $100 of flight equipment.
ii. After April 1, 1989, depreciation was over 10 years to a residual value of 20%.
Depreciation expense = (100.00 – 20.00) / 10 = $8.00 for every $100 of flight equipment.

2. Are the differences in the ways the two airlines account for depreciation expense significant? Why would the companies depreciate aircraft using different depreciable lives and salvage values? What reasons could be given to support these differences? Is different treatment proper?

We can build an analytical picture from the above assumptions and calculations. The annual depreciation per $100 ranges from $4.75 (Delta) to $11.25 (Singapore), which is a vast difference even though both airlines use the same straight-line method. In the airline industry the depreciation policy is very important, as the amounts involved can run to billions of dollars. We can see in this case that the two airlines take very different approaches to depreciation. The average useful life assumed by Delta for its flight equipment is 15-20 years with a low salvage value, whereas for Singapore Airlines it is 8-10 years with a high salvage value.
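These per-$100 figures can be checked mechanically. The short Python sketch below is not part of the original case solution; it simply recomputes the straight-line expense for each policy period from the salvage percentages and useful lives stated above (the function and variable names are my own).

def annual_depreciation_per_100(salvage_pct, useful_life_years):
    # Straight-line depreciation: (cost - salvage value) / useful life, per $100 of gross value.
    cost = 100.0
    salvage_value = cost * salvage_pct
    return (cost - salvage_value) / useful_life_years

# (label, salvage percentage, useful life in years) -- figures taken from the case text above.
policies = [
    ("Delta, before 1 July 1986",          0.10, 10),   # expected 9.00
    ("Delta, 1 July 1986 - 31 March 1993", 0.10, 15),   # expected 6.00
    ("Delta, after 1 April 1993",          0.05, 20),   # expected 4.75
    ("Singapore, before 1 April 1989",     0.10,  8),   # expected 11.25
    ("Singapore, after 1 April 1989",      0.20, 10),   # expected 8.00
]

for label, salvage_pct, life in policies:
    print(f"{label}: ${annual_depreciation_per_100(salvage_pct, life):.2f} per $100")

Running the sketch reproduces the five figures used in the rest of the discussion ($9.00, $6.00, $4.75, $11.25 and $8.00).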

While Delta over the years increases the useful life and decreases the salvage value of its assets, thereby reducing the depreciation expense, Singapore Airlines charges very high depreciation and over time increases both its salvage-value and useful-life estimates. As a result, the ratio of depreciation expense to total operating expense for Delta (average depreciation expense from 1989 to 1993 is 5.3%) is much lower than for Singapore Airlines (average depreciation expense from 1989 to 1993 is 14. %). Companies would depreciate aircraft differently depending on the following factors:
1. The nature of the technology employed: technologically newer aircraft probably last longer than earlier, technologically less advanced aircraft. According to Exhibits 3 and 7, Singapore has more of the Boeing 747-400s, while Delta doesn't have any 747-400s.
2. The specific use to which the aircraft is put: the case indicates that Singapore is a much longer-haul carrier than Delta.

The average passenger trip length for Delta is about 900 miles, whereas the average passenger trip length for Singapore is about 2,700 miles.
3. Maintenance: the better maintained aircraft are, the longer they are likely to last. Given the maintenance standards that exist for aircraft, we can assume that major airlines like Delta and Singapore both have good maintenance programs.
Every company has its own way of depreciating fixed assets based on its requirements and situation. The main reason for such a difference in strategies is the amount of profit shown in a particular period.

In the case of Delta, they have increased the life of the assets, showing lower depreciation, which leads to lower operating expenses and results in higher profits. For Singapore Airlines, however, the operating profit is good and there is not much need to show lower depreciation; moreover, showing a higher salvage value for the equipment they carry adds to their asset value. The difference in policies is proper, as the useful life of an asset and its salvage value largely depend on the experience the organization has in the field and on the usage of the equipment.

In this case we can clearly see that Singapore Airlines has a much smaller scale of operations than Delta.

3. Assuming the average value of flight equipment that Delta had in 1993, how much of a difference do the depreciation assumptions it adopted on April 1, 1993 make? How much more or less will its annual depreciation expense be compared to what it would be were it using Singapore's depreciation assumptions?

Delta Airlines, 1993 (in $ millions):
Value of owned aircraft:  9,043.00
Value of leased aircraft:   173.00
Total value of aircraft:  9,216.00

Using the calculations from question 1, we worked out that the depreciation per $100 prior to April 1993 was $6.00. After the change in policy the depreciation per $100 changed to $4.75. Looking at this, we can say that there has been a difference of roughly 20% in the depreciation expense.

Hence the depreciation after changing the policy will be:
= Accumulated depreciation for 1993 – 20% (accumulated depreciation for 1993)
= $3,559.00 – $711.80 = $2,847.20mn
We also worked out that the depreciation expense of Singapore Airlines per $100 during 1993 was $8.00, an increase of 33.33% over Delta's pre-change rate. Hence, if the policy of Singapore Airlines were followed, the accumulated depreciation would be:
= Accumulated depreciation for 1993 + 33.33% (accumulated depreciation for 1993)
= $3,559.00 + $1,186.20 = $4,745.20mn
Assuming Delta Airlines calculated depreciation like Singapore Airlines, it would accumulate higher depreciation on its flight equipment.

4. Singapore Airlines maintains depreciation assumptions that are very different from Delta's. What does it gain or lose by doing so? How does this relate to the company's overall strategy?

The operating expenses of Singapore Airlines are lower than those of Delta Airlines. Hence there is less need for Singapore to show a small depreciation charge.

Moreover, there is always a possibility that Singapore Air can recover the high depreciation charge in the future. Recording high depreciation over fewer years means that the asset can later be sold at a price higher than its salvage value; it is evident in the case that Singapore Airlines has done this and has made significant gains on sales of flight equipment. Moreover, if Singapore Airlines decides to keep the fleet and not sell it, it will have to make only a very small provision for depreciation on those assets in the future.

This can be a very useful strategy for Singapore Airlines, enabling it to upgrade its assets on a shorter cycle. This helps them in marketing, lowers maintenance costs and improves customer satisfaction.

5. Does the difference in the average age of Delta's and Singapore's aircraft fleets have any impact on the amount of depreciation expense they record? If so, how much?

The average fleet age for Delta is 8.8 years and for Singapore 5.1 years. Assume Delta and Singapore Airlines buy an aircraft at the same price and at the same time.
Price of aircraft: $100,000
Depreciation for Delta Airlines under its post-1993 policy:

Depreciation = (100,000 – 5,000) / 20 = $4,750 per year
Average fleet age = 8.8 years
Hence the total depreciation recorded by Delta is $4,750 × 8.8 = $41,800.
Depreciation for Singapore Airlines under its policy in effect after April 1989:
Depreciation = (100,000 – 20,000) / 10 = $8,000 per year
Average fleet age = 5.1 years
Hence the total depreciation recorded by Singapore is $8,000 × 5.1 = $40,800.
We can conclude that Delta Airlines records slightly more accumulated depreciation per aircraft, because the average age of its fleet is higher.
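As a cross-check, the short Python sketch below (again, not part of the original case solution; the names are my own) recomputes the question-5 comparison under the stated assumptions: a $100,000 aircraft, each carrier's current salvage and useful-life estimates, and average fleet ages of 8.8 and 5.1 years.

def straight_line(cost, salvage_pct, useful_life_years):
    # Annual straight-line depreciation charge.
    return (cost - cost * salvage_pct) / useful_life_years

price = 100_000
delta_per_year = straight_line(price, 0.05, 20)       # 20-year life, 5% salvage -> $4,750/yr
singapore_per_year = straight_line(price, 0.20, 10)   # 10-year life, 20% salvage -> $8,000/yr

delta_total = delta_per_year * 8.8          # average fleet age 8.8 years -> $41,800
singapore_total = singapore_per_year * 5.1  # average fleet age 5.1 years -> $40,800

print(f"Delta:     ${delta_per_year:,.0f}/yr x 8.8 yrs = ${delta_total:,.0f}")
print(f"Singapore: ${singapore_per_year:,.0f}/yr x 5.1 yrs = ${singapore_total:,.0f}")

The output confirms the figures above: despite its lower annual charge, Delta's older fleet carries the larger cumulative depreciation per aircraft.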

Drug Court Taught Me How to Live

By: Robin Howell

In the past, drug addicts who were convicted of drug related crimes were most commonly either sent to treatment or incarcerated. I have experienced both and did not benefit much from either one. In 2001 I was in a fairly new program called Drug Court. Drug Court is a unique and extraordinary program that gives addicts the tools they need to endure life without using drugs.

I was very nervous about this program because it was so different than anything else I had been through. I was always able to tell the treatment counselors or judges what they wanted to hear so that I could get back to my normal way of life: using. I was worried that I would not be able to get over this road block so easily. Drug court consisted of a team of probation officers, drug treatment counselors, a defense attorney, a county attorney, and a judge. The entire team was specially trained in the area of drug addiction.

This was a very tough team to contend with for the addict who just wanted to get it over and use again. In the beginning, I had to ride the bus to the probation office Monday through Thursday to check in by 7:00 in the morning. I didn’t have to go on Fridays because that was the day I had to go to the courthouse and sit in front of the judge, the rest of the team, and most of the other participants in the program. It was held in the courtroom with the judge at the stand, the remainder of the team arranged in a semi circle around the “hot seat”, and the other participants in the benches.

The jury box was reserved for those currently incarcerated. During the big meeting on Fridays I would have to sit, in front of everyone, and answer questions about what was going on in every aspect of my life. All of the participants took their turn in the hot seat, even the ones who had messed up and were going to jail. I watched several of my peers break down sobbing, be taken away in handcuffs, be placed in a residential facility, and even be revoked from the program and sent to prison. I was able to learn from others' mistakes and achievements, which was beneficial.

The team understood that no one is perfect, especially an addict, and looked at most mistakes as learning experiences. I did not use while in the program but I witnessed the Friday sessions of many who did. Instead of being revoked right away, they were given encouragement and support in one form or another. This was amazing to me because the judicial system doesn’t offer much in the way of second chances. A convicted criminal battling addiction is usually looked upon as a lost cause, but Drug Court sees past the stigmas.

I was terrified of the team and resented them in a lot of ways. It wasn’t until my husband was incarcerated (right before my due date) that I realized the team was there to help. I was jobless, soon to be homeless, and scared to death. The judge pulled some strings and got me a room at the YWCA; he also gave me his personal phone numbers and volunteered to take me to the hospital when I went into labor and stay with me during delivery. Fortunately I still had family in Ottumwa and my in-laws in Des Moines, so even though I was touched by it, I did not take him up on his offer.

It was surprising to find out that someone in the system actually cared about me and didn't just want to lock me up somewhere. I stayed at the YWCA until my daughter was three months old and a room opened up at House of Mercy. The Drug Court team, against my wishes, decided that I should complete the long-term residential program there. They looked at my individual situation, not just that I was an addict/criminal, and they based their decisions on my personal needs. I took classes on parenting, job search skills, criminal thinking, budgeting, and basic life skills. I also completed over a hundred hours of community service. At the time, I looked at these opportunities as punishments, but looking back I realize that I was learning how to lead a healthier, more productive lifestyle. I also learned skills that could not be taught in a classroom. Among the most important are: patience, tolerance, responsibility, accountability, and humility. It took me almost two years to complete the program, but the amount of time is different for everybody. Upon completion, each participant must then complete regular probation based on the severity of the crime that brought them to Drug Court.

I was given two years of probation after I graduated. It is also mandatory for graduates to participate in the Drug Court Alumni while on probation. In a sense, graduates still had the support of the team even after completing the program. The alumni group focused on enjoying life without drugs and supporting the current Drug Court participants. They had holiday parties, picnics, bowling, and other fun activities. They gave out gifts to members and their families at Christmas; Thanksgiving dinner was prepared for those who had no family to share it with; birthdays were remembered. It was like a second family to me for a while.

Unfortunately it did not work for me right away; just because you are given tools does not mean you will use them. I relapsed while still on probation and finished my sentence in state prison. Since the age of fourteen, I have been in six treatment programs, spent a total of forty-eight days in county jail and did eighteen months in prison. But then I made the choice to change my life; I look back and Drug Court is where I learned how. It gave me the tools I needed, and now that I use them I am living a successful life.

Nintendo Strategy

Competitive Strategy in Game Consoles
Jay Conrod, Klimka Szwaykowska; Mar 7, 2007

The interactive entertainment industry has grown remarkably quickly in recent years. Since 2001, the market has been dominated by three major players: Sony, Microsoft, and Nintendo. Of these, Nintendo had the smallest market share, even though the company had historically dominated the market. In 2004, faced with strong competition from larger and wealthier rivals, Nintendo had to come up with an innovative strategy to maintain profitability.

At that time, the optimal strategy was differentiation into a neglected segment of the market: casual gamers who wanted a simpler, more intuitive gaming experience.

Nintendo's status in 2004

Unlike its competitors, both of which are powerful players in consumer electronics and business software, Nintendo is primarily a video game company. Nintendo has three main products: consoles, handhelds, and software (games). Typically, only one console product is sold at a time; production of one generation ceases shortly after the next generation is released.

Nintendo's console in 2004 was the GameCube, which had been on the market for three years. The competitors' products (Sony's PlayStation 2 (PS2) and Microsoft's Xbox) were approximately the same age, but had several advantages over the GameCube. Though they cost a little more ($150 for the Xbox and $130 for the PS2, compared to $100 for the GameCube), they had more advanced networking and media-playback features. By March 31, 2004, only 14.6 million GameCubes had been sold worldwide (Nintendo 2004 Annual Report). Nintendo's position was stronger in the handheld gaming market.

The handheld Game Boy Advance (GBA) was introduced at roughly the same time as the GameCube, and sold 51.4 million units by March 2004. Nintendo develops software for all of its gaming systems. Although software is expensive to develop, it can be sold at a high margin. Nintendo has a wide variety of recognizable franchises, such as Mario and Pokemon, that keep sales strong. However, most of the software developed by Nintendo is targeted at a younger audience and does not appeal to older gamers.

Porter Forces in the game console market

Customers

The customers in this market are almost all individuals or families who purchase consoles from Nintendo through retailers. Customers tend to buy only one console at a time. Since console manufacturers suggest retail prices over entire countries or regions, individual customers have no bargaining power. Software purchased for one console cannot be played on other consoles so switching costs are high; if an individual wants to play a particular game, he or she is usually locked into the console that plays it. By 2004, there was a tendency for console games to be increasingly complicated.

Becoming involved in a game required a significant time investment to learn how to play. Game companies aimed mainly at servicing the “hard-core” demographic, which enjoyed this kind of game. Relatively few games were produced for the larger demographic of “casual gamers”.

Suppliers

Suppliers are companies which make hardware and games for the consoles. Nintendo designs some of the hardware components for its consoles, but manufacturing and assembly are often outsourced, and many components are purchased “off the shelf” from large companies.

This keeps costs higher than those of competitors like Sony and creates a threat of forward integration by parts suppliers, who could potentially manufacture their own consoles. Switching costs are also high, as Nintendo software is made to be compatible with technologies supplied by the outside companies. The situation is different for software. Nintendo does much of the game development for its consoles, though most of its games are made for a fairly young audience. It also licenses a software development kit (SDK) to outside game developers. In a sense, these firms are Nintendo’s customers. Firms which have made successful games in the past will probably have some bargaining power in this transaction, since Nintendo is interested in keeping their services for future game development (as will be discussed below, games are an essential complement for game consoles). Overall, Nintendo’s SDK tends to be priced lower and have better support than similar packages offered by competitors.

Threat of new entrants

The console market has a strong threat of new entry. There is very little patentable technology in game consoles, and most consoles tend to have similar features and functionality.

The greatest barrier to new entry is economies of scale; producing consoles is prohibitively expensive unless done on a very large scale. In addition, a potential entrant would have to develop games to sell alongside the console. An exceptionally strong marketing campaign would also be required, since Nintendo, Microsoft, and Sony are already household names in many countries, which gives them a strong advantage in this sort of competition. Nevertheless, many large companies could potentially enter the market, as Microsoft did in 2001 with the introduction of the Xbox.

Substitutes

Though they have negligible bargaining power, customers have a wide range of available substitutes, spanning all sorts of entertainment. In addition to competitors’ products, they may choose television, movies, PC games, board games, literature, sports, etc., in their leisure time. Thus, game consoles have to make an effort to be wanted, since they are not needed.

Complements

Strong complements are an important part of getting customers to choose game consoles. The greatest complement to consoles is games, without which a console is useless.

Controllers and memory cards are also complements, as is the Internet, which allows players to network with each other and play video games with their friends. Nintendo’s handheld devices are also a complement for the consoles: the GBA could be connected to the GameCube, unlocking special features in some of the games.

Rivalry

Overall, there is strong rivalry in the console market. In 2004, there were three main competitors, all going after the same general audience. Product diversity was fairly low; the most significant difference between the consoles manufactured by Sony, Microsoft, and Nintendo was in the games available for each.

This led to price competition that drove prices down from the initial $200-300 per unit to $100-150 per unit. Nintendo, as the weakest competitor, would have suffered the most from price competition. This situation gave a fairly strong indication that Nintendo would have to stop competing directly with Sony and Microsoft if it were to remain profitable.

Strategies Available to Nintendo

In 2004, a number of potentially successful strategies were available to Nintendo. It had become clear that a “stay-the-course” tactic would not allow Nintendo to beat Microsoft and Sony.

Vertical integration presented the opportunity to lower costs and expand into other markets at the same time. Exiting the game console market would allow Nintendo to continue making money on software and handhelds. A third, and ultimately most promising, strategy was differentiation from competitors, which would enable Nintendo to capture other market segments that had been largely ignored.

Vertical integration

Sony entered the game console market when it introduced the PlayStation in the mid-90s. Sony was able to make much higher margins than Sega and Nintendo because it was already a large manufacturer of consumer electronics.

Sony could design and manufacture parts for the PlayStation in-house, while Nintendo had to buy parts off the shelf and outsource manufacturing and assembly. This became a significant weakness for Nintendo, one that could be addressed by vertical integration into both manufacturing and Internet service. Integration of manufacturing has a number of advantages. Although it would cost a significant amount of money to expand, Nintendo would be able to dramatically reduce the cost of its consoles as well as complementary products such as controllers.

A lower cost would also allow Nintendo to introduce additional features to its consoles while maintaining lower retail prices than competitors. Nintendo could then expand into other consumer electronics markets such as cellular phones and digital music players. Integration of Internet service would involve developing an online gaming service similar to Xbox Live. This would greatly enhance the multiplayer capability of its consoles while providing an additional source of revenue: subscriptions. Although both of these integration tactics could be effective, they would not allow Nintendo to beat Microsoft or Sony.

Nintendo would never be able to match Sony’s capability or experience in manufacturing consumer electronics. Adding Internet services for newer consoles would only be adding a feature that Microsoft already provided to its customers. It would be unwise for Nintendo to base its new strategy on vertical integration when both of its competitors were already very experienced in many other markets.

Exit the market

After Sega’s Dreamcast console flopped, Sega decided to exit the game console market entirely while continuing to develop software for other consoles.

This proved to be a fairly successful strategy. Like Nintendo, Sega had several popular game characters like Sonic the Hedgehog. The Dreamcast had proven to be a money pit; rumors had surfaced that Sega was losing money on each console sold. Slow sales drove down software sales as well. Sega games had the potential to be much more successful on a more popular console, and Sega could reduce its costs by ceasing hardware development. Although Nintendo’s GameCube was doing poorly against competitors, sales were still strong for handhelds and software.

There were no significant exit costs in the market, so the transition could be made cleanly by simply halting console hardware development and developing software for other platforms. Nintendo would still make a significant amount of money on high-margin software development and would still dominate the handheld market, at least in the short term. The main disadvantages of this strategy are that it leaves Nintendo cornered in the long term and that it does not hold the same potential for profitability as other strategies.

The handheld market was still vulnerable, and Sony had already introduced the PlayStation Portable as a competitor to the Game Boy Advance. Software sales could sustain Nintendo indefinitely, but would never restore the company to its former position. As such, this strategy should be used only as a last resort. Sega was forced to exit the market because it did not have enough cash to vertically integrate or to differentiate significantly from Nintendo and Sony. Exiting the market allowed Sega to survive, but nothing more.

Differentiate from competitors

Over the preceding several years, the game market had grown somewhat stale, although profits had remained high. Many game developers were focusing on the “hard-core” demographic: customers who were willing to invest a lot of time and money in a narrow range of games. Many new games were sequels, offering very similar gameplay to their predecessors. At the same time, games were becoming more complicated and difficult for casual gamers to learn. This presented Nintendo with an opportunity: create a console and games that appeal to people who had been ignored by Microsoft and Sony.

Although hard-core gamers were much more willing to spend money on games, they were outnumbered by casual gamers. The console would have to be easier to use than its competitors; a controller with 15 buttons would not be an option. The games would have to be easy enough to learn that someone who had never used a console before could learn and enjoy a game within a few minutes. Both consoles and games would also need to be cheaper to be more appealing to a wider range of customers. The primary advantage of this strategy is that it is one that Microsoft and Sony cannot effectively copy.

In order to appeal to the hard-core market, both companies were competing on hardware performance and features. In order to reduce costs enough to compete with Nintendo, they would need to abandon these goals, alienating their customers.

Why differentiation was the best strategy

The Nintendo Wii was originally conceived in 2001, shortly after the release of the GameCube. At the time, it was known by its codename “Revolution.” It was first released on November 19, 2006, in North America. Its key feature was a new form of player interaction, nicknamed the “Wiimote”: a wireless controller that could detect its position in space.

In addition to using buttons and joysticks to control games, users could swing or point the new controller as they would a sword or tennis racket. This gives gamers a more enjoyable experience and a much more intuitive interface than other consoles, especially if the game involves swinging swords or tennis rackets. Games for the Wii were also targeted at a broader audience, although few games have been released to date. The Wii is also much cheaper than the other consoles. It can currently be purchased for $250, less than half the price of the PS3.

In the months since its release, the Wii is estimated to have sold 4.15 million units, overtaking both of its competitors. Although Nintendo may not be making as large a profit on each unit as Sony or Microsoft, a larger number of consoles on the market will drive software sales in the future. The only thing currently limiting sales is manufacturing capacity, and of the three consoles in the newest generation, the Wii has been in shortest supply. Microsoft and Sony, as predicted, continued to appeal to the hard-core market with very advanced consoles.

For instance, Sony bundled its Blu-ray optical disc technology with the PlayStation 3 and included IBM’s Cell processor as the CPU, causing the price of the PS3 to be between $500 and $600. Microsoft included a wide range of media-center and online capabilities in the Xbox 360, expanding into the broader home entertainment market. While Microsoft and Sony’s strategies will undoubtedly be highly profitable, they will not be able to compete directly with Nintendo without alienating their existing customer base.

Sources
January Game Sales Explode; Wii Dominates. GameDaily BIZ. Feb 21, 2007. http://biz.gamedaily.com/industry/feature/?id=15294=AOLGAM000500000000021.
Nintendo Says Women, Elderly Key to Wii Game Player. Bloomberg.com. Sept 18, 2006. http://www.bloomberg.com/apps/news?pid=20601080=asia=a0kklJ1sNgDI.
Nintendo Annual Reports, 2002-2006. http://www.nintendo.com/corp/annual_report.jsp.

The Nintendo Strategy
Wednesday, December 21, 2005

I’m going to define two phrases for you — the “current video game industry” and the “other video game industry”.

Current Video Game Industry – the current video game industry grew from the idea that better technology leads to better games.

The current video game industry anticipates the next generation because it promises bigger environments full of better and flashier graphics. The current video game industry gives the award for “Game of the Show” at the Electronic Entertainment Expo to technical or video demonstrations rather than playable software.

Other Video Game Industry – the other video game industry is growing from the idea that good technology, good interactivity, good variety, and good access ultimately breed better games for more people. The other video game industry acknowledges that a variety of games within a selection of genres CAN appeal to everyone.

The other video game industry recognizes that, at the end of the day, games are about interactive entertainment. Sony and Microsoft are pursuing the current video game industry. And there’s absolutely nothing wrong with that. The current video game industry, although perhaps not entirely healthy, has nevertheless proven itself to be profitable and highly lucrative. Many fortunes have been made and will continue to be made from this market. Nintendo, however, has identified the other video game industry. Nintendo has nothing to gain from copying the competition and pursuing their current video game environment.

Besides, Sony and Microsoft already adequately satisfy that segment. The reasoning behind Nintendo’s new pursuit is especially apparent when you consider the fact that the current industry has shown over and over again that there is only room for one or two profitable console manufacturers. In 1996, Michael Porter wrote an article for the Harvard Business Review asking the question, “What is Strategy?” That’s a great question and relevant to our discussion. Is strategy achieved by copying competition? “A company can outperform rivals only if it can establish a difference that it can preserve,” notes Porter. “It must deliver greater value to customers or create comparable value at a lower cost, or do both.” These ‘differences’ are ultimately born from the hundreds of activities performed in the creation, sale, and delivery of a product or service. Costs result from performing activities. For example, the cost of acquiring materials or the cost of employing workers. Meanwhile, a ‘cost advantage’ results when one of these activities is performed better or more efficiently than competitors. For example, acquiring cheaper materials or employing workers from overseas.

According to Porter, activities are therefore the basic units of ‘competitive advantage’.

Before improvements in activities: an item costs $4 to manufacture and is sold for $5. You receive $1 in profit.
After improvements in activities: an item costs $3 to manufacture and is sold for $5. You receive $2 in profit.

Makes sense, right? The more efficient the activities in your operations are, the more likely you are to generate a greater profit. The video game industry is built upon this premise. It begins when a manufacturer sells a technologically advanced console at a price less than it costs to create.

The manufacturer ultimately hopes to earn a profit by making the activities that produce the hardware (or acquire its technical components) more efficient and also by launching software which is built from an engine that is then efficiently re-used to release a sequel. This series of steps is conducted and repeated by multiple console manufacturers. By the conclusion of a console cycle, there is usually at least one manufacturer who successfully improves the efficiency of its activities to gain an enormous profit. As a result, they also reinforce the allure of this flawed model.
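To make the economics of that subsidized-console model concrete, here is a minimal sketch in Python. Every number in it (hardware costs, prices, attach rates, per-game margin) is an illustrative assumption, not a figure from this post.

# Illustrative sketch of the subsidized-console model described above.
# All numbers are assumptions for demonstration, not actual industry figures.

def per_console_profit(hardware_cost, hardware_price, attach_rate, software_margin):
    """Net profit per console sold: hardware gain or loss plus the
    software margin earned on the games each owner buys (attach_rate)."""
    hardware_profit = hardware_price - hardware_cost
    software_profit = attach_rate * software_margin
    return hardware_profit + software_profit

# Early in the cycle: hardware sold below cost, few games per owner.
early = per_console_profit(hardware_cost=350, hardware_price=300,
                           attach_rate=2, software_margin=10)

# Later in the cycle: manufacturing gets cheaper (operational effectiveness)
# and each owner has bought more games.
late = per_console_profit(hardware_cost=220, hardware_price=300,
                          attach_rate=8, software_margin=10)

print(f"Early in the cycle: {early:+d} dollars per console")  # -30: a loss
print(f"Late in the cycle: {late:+d} dollars per console")    # +160: a profit

The catch, as the post argues next, is that this catch-up depends entirely on staying more efficient than your rivals, and efficiency is exactly what gets imitated.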

This management practice — called operational effectiveness — is neither sustainable nor a strategy. “Constant improvement in operational effectiveness is necessary to achieve superior profitability,” says Porter. “However, it is not usually sufficient. Few companies have competed successfully on the basis of operational effectiveness over an extended period, and staying ahead of rivals gets harder every day.” Competition assures that it’s only a matter of time before your activities are mimicked and your competitors acquire similar operational efficiencies.

Porter describes this as follows: “The more benchmarking companies do, the more they look alike. The more that rivals outsource activities to efficient third parties, often the same ones, the more generic those activities become. As rivals imitate one another’s improvements in quality, cycle times, or supplier partnerships, strategies converge and competition becomes a series of races down identical paths that no one can win. Competition based on operational effectiveness alone is mutually destructive, leading to wars of attrition that can be arrested only by limiting competition.”

Is it any surprise, then, that the entire video game industry is consolidating? Merger after merger, from Sega Sammy, to Square Enix, to Namco Bandai. As long as the current video game industry continues down the path it is on now, this consolidation has no hope of ending.

The Nintendo Strategy

On the other hand, there is strategic positioning. This superior and legitimate strategy is accomplished by performing different activities, or even similar activities but in different ways. “Competitive strategy is about being different,” says Porter. “It means deliberately choosing a different set of activities to deliver a unique mix of value.” Strategic positioning is ultimately a unique set of sustainable activities that use your company’s valuable, rare, and inimitable resources to change the rules of the game. As a result, Nintendo’s unique set of chosen activities is bound to cause controversy among the “current video game industry” — you know who I’m referring to: the audience that is content playing sequels with updated graphics and using the same input method (i.e. a controller) multiple generations in a row on several consoles that are, for all intents and purposes, the same.

Strategic positioning is about being different. Nintendo has therefore decided to uniquely devote its resources to expanding the game-playing audience through two key things: “strong community” and “immersive games”. You can see these goals mirrored in, for example, the restructuring of Nintendo’s development divisions, the Nintendo Wi-Fi Connection, and the makeover of Nintendo Power — these aren’t coincidences.

They are a chosen part of Nintendo actively piecing together its activities to fulfill its strategic position to expand the game playing audience. “A company can outperform rivals only if it can establish a difference that it can preserve,” says Porter. The difference Nintendo is establishing with the Revolution is a low-cost device with an accessible interface. The accessible interface will largely come in the form of its revolutionary controller, but also in the simple, yet elegant, design of the console. In addition, Nintendo will not only have low-cost hardware, but also low-cost software and a free online community.

Nintendo is also changing things up with its virtual library of NES, SNES, and N64 games. This is a new and unique distribution model that has the potential for great success. Low-cost hardware and software, a new intuitive gameplay input, and original game genres are a unique set of sustainable activities that, when combined, form Nintendo’s strategic position. “But why can’t Nintendo have high-definition graphics?” exclaim current gamers. Don’t worry, I hear ya, and I’ll explain exactly why they can’t. There is an absolute need for trade-offs.

When there are no trade-offs, you encounter a situation that Microsoft is currently in — billions of dollars in losses within its Xbox division. This shouldn’t come as a surprise, but trade-offs are an intimate part of our daily lives. We make trade-offs with our time (do I spend it studying or playing Zelda), trade-offs with our money (do I buy school books or buy Barbie dolls), trade-offs with what we watch on television (Oprah or Jenny Jones), etc. This is why the “Nintendo Difference” is seen much more clearly today than it was during the GameCube generation.

The GameCube was the tragedy of not recognizing that trade-offs are needed. Nintendo wanted to offer an accessible low-cost box with a handle, yet it also wanted to directly compete with Sony and Microsoft in the current video game industry. Nintendo wanted to create small pick-up and play software that appealed to casual gamers, yet it had graphics and a controller intended for the current video game industry. Conflicting activities, contradicting goals, and lack of trade-offs — from the very beginning, the GameCube and Nintendo’s strategy were doomed to take second place to Sony and Microsoft.

Ultimately, Sony and Microsoft could also sell their consoles for cheap, create a new controller and develop new game genres. They could without a doubt try to cater to Nintendo’s other video game industry. However, just like GameCube proved to us, they are doomed to fail if they refuse to make trade-offs. “Attempts to compete in several ways at once create confusion and undermine organizational motivation and focus,” notes Porter. This is exactly why high-definition graphics don’t mesh with Nintendo’s strategy. A company simply cannot be anything and everything.

A dollar can only be stretched so much, advertisements directed only so far, and resources allocated only so deep. Nintendo has learned that it cannot be Sony nor Microsoft. And without drastically altering their business, Sony and Microsoft will likewise never be able to replicate the structure of Nintendo’s development studios, the low cost of producing Revolution hardware, the unique interface in the Revolution controller, or the virtual catalog of NES, SNES and other retro titles. Nintendo’s trade-offs are therefore as much part of its strategy as are the chosen activities just mentioned.

Apple’s set of activities formed a strategy that revolutionized the music industry. I’ve mentioned before that a ‘revolution’ is the sum of its parts. If Nintendo successfully molds its already unique set of activities (Nintendo, please don’t forget innovative marketing — I’m looking at you Reggie) to fit into a single cohesive strategy… the other video game industry is theirs for the taking. “It is harder for a rival to match an array of interlocked activities than it is to merely imitate a particular sales-force approach, match a process technology, or replicate a set of product features,” says Porter.

As we’ll soon see in 2006, Nintendo’s strategic position will be hard, perhaps even impossible, to match. Best of luck to the imitators.

The Nintendo Difference

“Nintendo stays in hardware because it has no choice in the matter…”

No company frustrates the soothsayers quite as much as Nintendo. Across the land, divining rods are being snapped, crystal balls are being smashed and tea leaves are being stamped on in fury – as the firm whose death has been predicted countless times reveals itself once again to be in rude good health and ready to take on the world. I refer, of course, to the launch of the Wii in Europe, which saw the firm clocking up a record-breaking 325,000 sales over the weekend; but even more astonishing, and more laudable, is the stunning success of the Nintendo DS in the same week. Over half a million units of the handheld were sold in Europe last week, and the installed base now tops 8.5 million units in this territory alone. If this is an indication of how the Wii’s sales will go, then Nintendo’s risky gamble with the motion-sensing Wiimote could actually turn out to be the stroke of genius which hands dominance of the console space back to its one-time master.

As the Kyoto-based firm continues to confound the doom-mongers who have gleefully predicted its demise for the best part of a decade, it’s worth pausing for a moment to think about the other common prediction which is associated with Nintendo – namely that the company will (or at least, should) abandon the hardware market entirely, and instead focus on bringing its unique range of IPs and franchises to other platforms. Going third-party – or “doing a Sega”, as industry slang would have it.

The most common argument for this strategy is that while Nintendo may be hugely profitable, the company’s home consoles are in distant second or even third place behind those of the market leader – so in theory, by moving franchises like Mario and Zelda to the PlayStation and the Xbox, the firm would have a much larger target market, would sell more units, and would ultimately be much more successful. This is particularly relevant now, proponents of this model argue, because the astonishing cost of the new generation of consoles has forced Nintendo out of the arms race, leaving its games confined to an innovative but underpowered system.

On the face of it, it’s a compelling argument – and it certainly worked for Sega, which has turned around its fortunes since bailing out of the Dreamcast (aided, admittedly, by being acquired by wealthy Japanese gambling firm Sammy) and is now one of the most influential third-party publishers in the industry. Why shouldn’t Nintendo follow Sega’s example, then, and leave the CPU and GPU arms race to the multinational giants with cash to burn?

The simple answer is because “The Nintendo Difference” isn’t just a cunning marketing slogan; Nintendo genuinely is different. Its structure and business model are a radical departure from how every other company in the interactive entertainment industry works, and the comparisons between Nintendo and Sega are merely skin deep. Sega left hardware because it had no choice; the failure of the Dreamcast was a nail in the coffin, and the structure of its internal studios was perfect for transplanting into a third party publisher.

Nintendo stays in hardware because it, too, has no choice in the matter. Of course, on a very simple level, if Nintendo was to leave hardware then it would lose a major revenue stream, because the company notoriously designs and prices its consoles such that hardware is a profit-making enterprise. Making up for that lost revenue would also be tougher than it looks, because as a third-party publisher, Nintendo would be forced to pay a significant license fee on each game it sold, so its profit margin from software would be reduced.

As such, the company would have to vastly increase its software sales in order to make up both for reduced margins and for the loss of the hardware revenue stream – an incredibly daunting task, even for a firm with franchises like Mario and Zelda. Bear in mind that those franchises already sell millions of copies, and have an astonishingly high attach rate with Nintendo hardware – even on a system with five times the installed base, achieving higher sales would be a challenge.
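To see why that jump is so daunting, here is a rough back-of-the-envelope sketch in Python. The per-unit figures (hardware profit, game margins, license fee, unit volumes) are purely hypothetical assumptions chosen for illustration; only the shape of the argument comes from the article.

# Rough sketch of the "going third-party" arithmetic described above.
# Every per-unit figure here is a hypothetical assumption for illustration.

FIRST_PARTY_GAME_MARGIN = 25.0   # profit per game when Nintendo owns the platform
LICENSE_FEE = 8.0                # fee paid per game to the platform holder as a third party
HARDWARE_PROFIT_PER_CONSOLE = 40.0
CONSOLES_SOLD = 20_000_000
GAMES_SOLD = 60_000_000

# Profit today: hardware profit plus full-margin software sales.
current_profit = (CONSOLES_SOLD * HARDWARE_PROFIT_PER_CONSOLE
                  + GAMES_SOLD * FIRST_PARTY_GAME_MARGIN)

# As a third party: no hardware revenue, and a lower margin per game.
third_party_margin = FIRST_PARTY_GAME_MARGIN - LICENSE_FEE

# How many games would need to be sold just to stand still?
games_needed = current_profit / third_party_margin

print(f"Current profit: ${current_profit:,.0f}")
print(f"Games needed as a third party to match it: {games_needed:,.0f}")
print(f"That is {games_needed / GAMES_SOLD:.1f}x current software sales")

Even with generous assumptions, matching today’s profit purely through third-party software means selling a large multiple of the games sold now, which is the author’s point.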

Even more important, though, is the change which would have to be made to Nintendo’s entire culture, to its business and creative models, if it were to abandon the hardware market. Considering this gives an insight into the workings of one of the most fascinating companies in the videogames market – a firm which is quite unlike its competitors, with an approach which owes more to that of a toy company than to the videogame publishing model. Nintendo’s entire philosophy is focused on the platform – not on hardware or software as separate entities or businesses, but as the platform as a whole.

Unlike Sony and Microsoft, where it’s apparent that Chinese walls have been erected between the designers of the hardware and the creators of first-party software, Nintendo actually places its top software designers at the helm of hardware design. Consoles are designed to suit the game concepts which will run on them – a working model which is apparent in the design of both the Nintendo DS and the Wii, and which allows the company to create early first-party titles which really showcase the hardware.

This top-down approach, which creates consoles based on the games which will run on them, is the antithesis of Microsoft and Sony’s approach, which designs from the bottom up – first creating a console and then worrying about what games will run on it. It gives Nintendo an enormous competitive advantage which would not be evident if it were a third-party publisher, and allows its top first-party software to innovate and evolve in ways which would be impossible on another company’s hardware.

It’s also the approach which has informed the decision to restrain the specifications of the Wii – and indeed the DS – to a manageable level, which allows development to take place faster and less expensively than on rival consoles. These factors combine to make Nintendo into the company it is today – a company whose low development costs, tight integration between hardware and software and enormous profit margins allow it to take creative risks, drive forward innovation and promote the growth of the gaming market as a whole.

Without Nintendo’s unique business model and first-party status, games like Nintendogs, Brain Age, Animal Crossing and Wario Ware simply could not exist; they either rely heavily on the hardware which supports them, or are so far off the beaten track that creating them on a system with higher development costs and lower profit margins would be commercially untenable. That’s why Nintendo will remain in the hardware business – because its consoles are more than just a platform to run its software on.

They are part of a platform strategy which defines the entire company’s approach to the market, and which means Nintendo is more than just one of the world’s leading videogame companies – it is also, and arguably more importantly, one of the world’s leading toy companies, and remains a powerhouse of innovation and development which is a driving force for the entire games sector. “Doing a Sega” is not on the cards for this firm, and probably never will be – especially not when it’s still in the enviable position of being able to shift the better part of a million units in Europe in a single week.

The New Sony PSP Handheld: a Clear Victory of Form Over Function

Sony’s innovative new PSP (PlayStation Portable) gaming and media handheld – aka “the iPod killer” – was introduced last Thursday, and then… today (Monday 28 March) it was ordered withdrawn as a result of a lawsuit by Immersion Corp., a San Jose company that, in 2002, accused Sony of patent infringement over the Dual Shock controller for the PlayStation and PlayStation 2. Dual Shock technology makes the controller shake in rhythm with what’s going on in the game.

Sony denies that Dual Shock violates Immersion’s patents and, while the district court decision included an order to suspend PlayStation sales, that order does not hold while an appeal is being heard, so Sony will continue to sell its game machines in the United States. But the bigger question may be: will anybody buy this thing? The PSP faces tough competition from the Nintendo DS as it sparks a battle for the $4.5 billion global handheld entertainment market, just at a time when Sony is in the midst of a pitched internal battle to get back on its feet after recent products fell short of expectations.

Then, the PSP launches as more of a legacy product than anything – c’mon guys, the Memory Stick is a big failure and your failure to use non-proprietary technology standards will lead to the ultimate failure of the consumer electronics business in the long-run! I cannot believe you people can’t see this!?! Simply stunning. Anyways, Red Herring broke it down for us on how the competitive battle lines are drawn: The PSP’s unique features are console-quality graphics, a 24-title movie lineup, Wi-Fi capabilities, and the amalgamation of games, music, and movies in one gadget.

Sony is expected to ship at least 3.7 million units to North America during 2005, according to research firm IDC. Nintendo, so far, has been the leader in the portable gaming market with the Game Boy Advance and, more recently, the $150 Nintendo DS. The $250 PSP is the “first legitimate competitor to Nintendo’s dominance” in the handheld market, said IDC analyst Shelly Olhava. Other competitors in the market are Nokia’s N-Gage portable and Gizmondo Europe’s portable. David Cole, an analyst with DFC Intelligence, thinks that the PSP could become a long-term product and build a base for Sony for several years. “[Sony] is so strong in the game industry, it should do very well,” said Mr. Cole. “It really satisfies the need of the portable audience.” The target audience for the PSP is adults between the ages of 18 and 34, rather than the younger audience gaming companies usually target. Nintendo, on the other hand, is more popular with the younger audience. “I think Sony decided that’s where they were really strong,” said Mr. Cole. The PSP is a black gadget weighing just under 10 ounces with a 4.3-inch widescreen and high-resolution TFT display.

It also has digital photo display and supports digital music playback in MP3 and ATRAC formats. Its storage medium is the high-capacity Universal Media Disc (UMD), an optical format enabling feature films and high-quality games to be played on the portable. The 60-mm disc has a storage capacity of 1.8 GB. This format will be utilized across the Sony family of products and is available for outside hardware makers and non-game entertainment content providers to use. The portable gaming market worldwide was about $4.5 billion in 2004 and is expected to grow to $9 billion in 2009, according to DFC Intelligence. The PSP first launched in Japan on December 12 and has sold 1.18 million units there so far. Mr. Cole expects the PSP to get a better reception in North America, where Sony plans to ship 1 million units for the launch. Company officials said that most U.S. stores are on their third and fourth waiting lists for the PSP. “The Japan market hasn’t been doing very well in general. Any product tends to do better [in the U.S.],” he said.

European launch uncertain

Analysts are expecting long lines outside stores on the night of the launch in North America. The demand for the PSP has reached such a peak that its European launch, which was scheduled for March 31, could take several more months. Ms. Olhava said Sony hasn’t been able to handle shipments because of logistical problems. “I have heard that Sony has manufacturing issues,” she said. “It’s a brand-new product and it’s bound to have some hiccups along the way.” One problem could be the $250 price. “It’s an unproven price point and that will be a real challenge,” said Mr. Cole.

Early adopters are price-insensitive, he said, but consumers will get tighter with their wallets after the first 1 million sales. The Nintendo DS has already launched in the three major markets—North America, Europe, and Japan. The DS, which launched in North America on November 21, sold 1.5 million units by February. Company officials have said that Nintendo plans to ship 6 million DS units globally by the end of March. Analysts feel the 2005 holiday season and software availability will determine which portable product succeeds. “Both the DS and PSP are excellent portable systems,” said Mr. Cole. “You really will be able to get the analysis going into the holiday season.”

Meanwhile, every review I’ve read of the device itself leaves me wondering if it’s worth the trouble. Jim Louderback has a few backhanded compliments in that regard: “it’s going to redefine handheld gaming. But it’s not going to be as popular or as successful as everyone claims. If Sony’s expecting an iPod killer, this isn’t it. Here’s what I see as the good and the not-so-good in Sony’s latest platform.” More of his review is excerpted below:

Screen: A standout display, for sure.

It’s big, wide, and captivating. Colors are rich and detailed. Response rates seemed superb while I was playing Ridge Racer. But there’s a downside to all those pulsating pixels, too. First, Sony opted for a very reflective coating. This makes the image look great, but also turns the screen into a mirror in bright light. Even in lower light, the reflections can become annoying in some situations. Don’t plan on taking it hiking; this is not a player for the great outdoors.

Graphics: Far better than the competition’s, the graphics engine made the smallish screen look much bigger.

Although some of the early titles probably won’t take advantage of all the power, Ridge Racer at least looked fantastic.

Sound: I have no complaints here. The audio quality was simply stunning in my tests, especially when paired with high-quality headphones. The built-in speakers are weak and tinny, as you can imagine, but the top-notch audio—when combined with the zippy screen—creates a truly immersive gaming experience on the go.

Controls: The PSP includes the standard complement of PlayStation 2 controls—although it has only one joystick and one pair of shoulder buttons—and pads that are reasonably easy to use.

It has no touch screen, unlike the Nintendo DS, but includes a real portable-gaming breakthrough: a tiny round nub that appears to be the twisted progeny of a joystick and the IBM TrackPoint mouse replacement. Instead of having to be yanked back and forth, this “pointing pad” glides almost effortlessly across a small part of the PSP’s surface. It provided a perfect stand-in for a steering wheel in Ridge Racer, and it’ll probably become the controller of choice for all but the most precise and demanding tasks.

Games: The PSP’s launch library is good for a new platform, with about two dozen titles available now. Over time, expect to see PS2 retreads and brand extensions galore. But those titles will only reinforce one of the PSP’s problems: It’s a portable version of a home console, but nothing more. The Nintendo DS, with a touch screen, microphone, and unique dual-screen design, offers more potential for breakthrough styles of portable gaming that don’t rely on the archetypes established by console games. Just because you build it, however, doesn’t mean they’ll come.

Even though the DS has been out for four months, only a paltry number of titles are available, and few take much advantage of the unique DS features. The DS has one ace card: it’s compatible with the huge library of Game Boy Advance titles too, which makes it a better upgrade for existing Nintendo handheld customers.

Movies: The PSP has also been widely touted as a portable movie player. The device includes a new optical disc format, called UMD (for Universal Media Disc). Each disc is about twice the size of a quarter and can hold an entire movie. In fact, the first million PSPs here in the U.S. will come bundled with Spider-Man 2 on UMD. Sony’s penchant for launching unsuccessful proprietary media formats is legendary (witness Beta, Memory Stick, etc.), and I believe UMD as a broad media storage technology will fail here, too. Why? First, because it’s highly unlikely that many users will purchase movies in a format that works only on portable players—and no one will replace their home DVD player to go with UMDs. Movie availability is likely to be limited to Sony’s back catalog and a smattering of other titles at first, so there won’t be much to watch. What about rentals?

The picture is murky there, too. Shernaz Daver, from Netflix, said that the company “will support any format as long as it becomes popular,” but wasn’t ready to commit at launch. The big bugaboo here is that you can’t make your own discs. And if Junior can’t drop Letterman or the X Games onto a disc at night and watch it the next day, then the idea that any significant number of people are going to buy the PSP to watch videos is moot. About five years ago, a company called Data Play released a nifty new quarter-sized optical media format. It was recordable, tiny and promised a revolution in media players.

But before Data Play could get it to market, tiny hard-drive and flash-based players took off. Data Play sank without a trace, and even though Sony has far bigger resources to bring to bear, UMD will too. Oh, one other fundamental drawback for the PSP as a movie and video player: it lacks a kickstand or other way to keep it upright. Playing games is interactive; you want to hold the player while you frag. Watching video is passive and, based on my experience with first-generation portable video systems from Archos and Creative, if it doesn’t stand on its own, it just isn’t worth carrying.

Music: The PSP has the potential to be a great music player, but unfortunately it relies on a flash-based Memory Stick to store music. The system comes with a 32MB Memory Stick, enough for an hour or so of very compressed music—if you didn’t have to share the Memory Stick with saved games. But even if you also picked up a 1GB Memory Stick—for an additional $130—you still wouldn’t have enough space for music. I frequently hear iPod Mini users complain that even 4GB isn’t enough for them. Sure, you can pick up a 4GB Memory Stick, if you’ve got a spare $500 lying around.

I suggest a Creative Zen Xtra or Apple iPod instead. In a pinch, the PSP can stand in as a music player. But until you can load 10GB or more onto the system—without spending as much on the memory card as you would on a brand new iPod—few people will use it as their primary music player. To support music and movies, Sony will have to add a mini hard drive to the PSP, which will only make it heavier and more power-hungry.

Battery Life: Speaking of power, Sony claims you can get six hours of hard-core game play or movie playback on a single charge. If the PSP delivers on that promise, that’s good.

Based on my own experience with battery-powered devices, though, you’re better off cutting that number in half. Even three hours of game play or movie watching is pretty good, except when your batteries cut out during a long flight or a boring class. Better pack a spare battery or two.

Price: $250 for a game-playing, movie-watching, music-playing device is pretty darn good, especially for one with a screen as beautiful as the PSP’s. It must cost them more than that to make each one, which means they intend to profit on the games and the movies, instead.

To justify that price, though, the PSP will have to do more than just play games, as Nintendo’s offerings cost half as much or less. Many hard-core gamers will certainly pony up, but the jury is out on whether enough casual gamers will adopt it to make it a success. My best guess is no.

Connectivity: Like the DS, the PSP will ship with built-in wireless networking. That’s great for group gaming, but why is there no built-in Web browser or e-mail client? And no way to connect your PSP to your PC wirelessly to transfer music and movies to the Memory Stick?

All the parts are there, but the whole is sadly lacking. I, for one, would love to see Skype for the PSP—that would have been a real breakthrough!

Reliability: This is the great unknown. How well will the PSP hold up to months and years of heavy playing and portable jostling? I’m not particularly bullish, especially because that large screen is unprotected. Sure, the PSP comes with a slip-on foam case, but it’s so nondescript that I almost lost it five times in one week. In just a few short months, a scratched screen will take much of the luster off of the PSP.

The Nintendo DS’s clamshell design makes it much more likely to survive years on the road, especially in the backpacks of all those hyperactive kids and one clumsy journalist. I was almost scared to travel with the fragile-seeming PSP, particularly because we only had one in the entire company. And how long will the battery last? Regular gamers will probably need a new one every year or so, which creates a tremendous after-market opportunity. Finally, what about the internal software? Is it robust enough for all the banging—and hacking—that’s bound to go on?

Will it need regular flash updates? And how do you distribute a flash update to the PSP if you don’t have a wireless network? Via UMD? Memory Stick? I don’t know about you, but I certainly don’t have a Memory Stick reader for my PC. Fortunately, there’s also a standard USB 2.0 port. Perhaps you’ll download updates off the Web site and send them to the PSP via this port. All in all, I think the PSP will be extremely popular among hard-core gamers, especially those who spend hours each week banging on their PS2s.

I wouldn’t buy it for kids, though, because it’s too fragile. And I think the lack of robust media playback—non-writable UMD, paltry and expensive Memory Stick storage options—makes it less than ideal for casual gamers. In the end, the PSP excels at just one thing: portable gaming. Casual gamers who already own a satisfactory portable gaming platform, whether it’s an old Game Boy Advance or even a game-playing cell phone, have little incentive to switch. And anyone looking for a portable media player that will unseat Apple’s iPod needs to keep looking.

Because when it comes to everything else, the PSP just doesn’t cut it. And PC Magazine sums it up even more concisely, as a victory of form over function: Those in the target demographic have eagerly awaited its arrival. And even people other than 15- to 25-year-old males may have more than a passing interest in one of the year’s most anticipated pieces of gadgetry: the Sony PSP. Originally conceived as the PlayStation Portable (and now simply called the PSP), the slick, gorgeous device succeeds spectacularly as a portable gaming console.

If you view its music- and video-playback capabilities as bonus features, you’ll be thrilled; if you were hoping it would be best-in-class at all its endeavors, you’ll be slightly disappointed. Clearly, breakthrough product innovation can make or break the company that gets it to market; but there must be a compelling customer value proposition inherent in the product itself, it must be differentiated in the way it is built, sold, and positioned, or it must be disruptive to existing markets for there to be any hope of success. It sounds to me like the Sony PSP falls short on all three counts, despite all the hype and lawsuit PR. Arik

How the Wii is creaming the competition

Business 2.0 Magazine tells the inside story of how Nintendo outfoxed Sony and Microsoft and got itself back in the game. By John Gaudiosi, Business 2.0 Magazine, April 25, 2007: 9:58 AM EDT

(Business 2.0 Magazine) — A year ago it looked like game over for Nintendo’s storied console business. The Kyoto-based gamemaker–whose Nintendo Entertainment System ushered in the modern age of videogames–was bleeding market share to newer, more powerful systems from Sony and Microsoft. Even as the videogame business grew into a $30 billion global industry, Nintendo saw its U.S. hardware sales shrink to almost half of what they had been nearly 20 years earlier.

[Photo caption: The Wii is reversing 20 years of declining Nintendo console sales.]
[Photo caption: The DS broadened Nintendo’s market. The Wii goes even further; grade-schoolers and grandmas are getting into the swing.]

Today, as anybody within shouting distance of a teenager knows, Nintendo is the comeback kid of the gaming world.

Instead of joining Sony and Microsoft in the arms race to pack their consoles with ever-higher-performance graphics chips (to better attract sophisticated gamers), Nintendo built the Wii–a cuddly, low-priced, motion-controlled machine that broke the market wide open by appealing to everyone from grade-schoolers to grandmas. Unorthodox? Maybe. Effective? You bet. The Wii is a pop culture smash of such dimensions that Nintendo still can’t make consoles fast enough. Even so, it’s outselling Sony’s PlayStation 3 and Microsoft’s Xbox 360–at least since January. (The Xbox had blowout pre-Christmas sales.) And while its competitors lose money on every console they build, expecting to make it back selling high-margin games, the Wii was designed to sell for a profit from the get-go.

Nintendo blows by forecasts

Nintendo’s turnaround began five years ago, when the company’s top strategists, including CEO Satoru Iwata and legendary game designer Shigeru Miyamoto, zeroed in on two troubling trends: As young consumers started careers and families, they gradually cut back on game time. And as consoles became more powerful, making games for them got more expensive.

Studios thus became more conservative, putting out more editions of Madden NFL and fewer new, inventive games that might actually grow the market. Iwata and Miyamoto eventually concluded that to gain ground, Nintendo would have to do something about the game controllers, whose basic design had hardly changed since the first NES paddles. Changing how the controllers interacted with the consoles would mean changing how engineers designed a system’s electronics and casing and eventually the games themselves. The first product to test the new strategy was not the Wii but the DS handheld game system, released in 2004.

To appeal to a broader audience, Nintendo abandoned the kid-friendly Game Boy name it had given its other popular handhelds, while building in Wi-Fi networking, voice recognition, and two screens. The idea was not to load the DS with technology but to help draw in new gamers by offering options other than the old button-based controls. Some DS games would work through the tap of a pen and simple voice commands.

The trouble with gee-whiz gadgets

The $150 gadget got off to a tepid start. Until gamers tried it, they tended to be wary. “People thought it was weird,” says Perrin Kaplan, vice president for marketing at Nintendo of America. “It took about two years for people to warm up to it.” But warm up they did, largely thanks to Miyamoto. The creator of Nintendo’s blockbuster franchises–Donkey Kong, Super Mario Bros., Legend of Zelda–offered up Nintendogs, a Tamagotchi-like simulation in which players use every feature of the DS to nurture virtual puppies. The game struck a chord with female gamers in particular, says John Taylor, an analyst at Arcadia Research. During the first holiday season after Nintendogs hit the market, Nintendo sold 5. million DS units–a standout performance that was nearly twice its total for the rest of the year. Soon after Nintendogs, the company released Brain Age, a game designed for more mature players in which they solve a series of puzzles by filling in answers or speaking phrases aloud. “That further bolstered the market by attracting older boomers and even senior citizens,” Taylor says. The DS surge encouraged Nintendo executives, who saw their strategy to grow the market taking shape. They wouldn’t have to wait long to put it to a bigger test. Work had already begun on the console, code-named Revolution, that would become the Wii.

Nintendo’s top strategists knew early on that they wanted to build a machine with a wireless, motion-sensitive controller. But equally important was the chip that would be the brains of the Wii console itself. The more powerful processors that Sony and Microsoft were using would make the screen action look better but would also guzzle more electricity. What if Nintendo used a cheaper, lower-power chip instead? After all, the DS, with its efficient mobile processor, had already proven that you could create new gaming experiences without the fastest chips.

A low-power chip also meant that the machine could be left on overnight to download new content. It was settled: The design team made the risky decision to build the Wii around a chip similar to the one that powered the GameCube, an earlier Nintendo entry that posted disappointing sales. If the Wii succeeded, it wouldn’t be on the strength of breathtaking graphics. Next, engineers settled on a new approach for the Wii’s looks. Just as the DS shunned the Game Boy name to appeal to a broader audience, the Wii would adopt a sleek white exterior instead of the toylike loud colors used on the GameCube.

Even CEO Iwata got involved in the design process; at one point he handed engineers a stack of DVD jewel cases and told them the console should not be much bigger. Why so small? To work with the motion-sensitive wireless controller Nintendo planned, Iwata reasoned, the console would have to sit directly beside the TV. Make it any larger and customers would hesitate to leave it there.

Videogames get real

While the console team worked on the shell, Miyamoto and another team perfected the controller.

He was determined that its design be as simple as possible–he insisted on several revisions that enlarged the “A” button to make its importance obvious. When design work was done, players could arc the Wii remote to throw a football in Madden NFL 07, tilt it to steer off-road vehicles in Excite Truck, and swing it to play sports like Wii tennis and baseball. Market tests suggested that the product was everything its designers hoped: engaging enough that nongamers might give it a go, and simple enough that newbies could quickly get up to speed. Finally it came time for Nintendo to market the Wii to the world.

In addition to its standard TV campaigns targeting schoolkids, the company pumped 70 percent of its U.S. TV budget into programs aimed at 25- to 49-year-olds, says George Harrison, senior vice president for marketing at Nintendo of America. He even put Wii ads into gray-haired publications like AARP and Reader’s Digest. For Nintendo’s core users, he took a novel, Web-based approach: “To reach the under-25 audience,” he says, “we pushed our message through online and social-networking channels” including MySpace. But Nintendo’s most effective marketing trick was to give away its killer app, Wii Sports, with every $250 console.

It was a calculated attempt to speed up the process that brought success to the DS. And because Nintendo makes about $50 in profit on every Wii sold, it can afford to give away a game. To be sure, not everything has gone according to plan. Although Nintendo shipped more than 3 million Wiis in 2006, supply-and-demand problems have plagued the machine since its launch. Demand continues to outpace supply and may continue to do so until summer. It’s a problem many businesses wouldn’t mind having, but it means that Nintendo might be leaving money on the table–something no company can afford to do for long, not even the newly revived Nintendo.
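A quick sketch of that bundling arithmetic, in Python, may help. Only the roughly $50 per-console profit and the $250 price come from the article; the disc cost and the rival's per-console subsidy and game margin are assumptions for illustration.

# Minimal sketch of the bundle economics discussed above.
# Only the ~$50 Wii hardware profit and $250 price come from the article;
# the disc cost and the rival's figures are assumed for illustration.

WII_HARDWARE_PROFIT = 50.0     # approximate per-console profit (from the article)
BUNDLED_GAME_COST = 10.0       # assumed marginal cost of the packed-in Wii Sports disc

wii_net = WII_HARDWARE_PROFIT - BUNDLED_GAME_COST

RIVAL_HARDWARE_LOSS = -100.0   # assumed per-console subsidy for a rival machine
RIVAL_GAME_MARGIN = 15.0       # assumed per-game profit the rival needs to recoup it

games_to_break_even = -RIVAL_HARDWARE_LOSS / RIVAL_GAME_MARGIN

print(f"Nintendo: ~${wii_net:.0f} profit per console even with the free game")
print(f"Rival: needs ~{games_to_break_even:.1f} games sold per console to break even")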

John Gaudiosi is a freelance journalist in Raleigh, N.C.

Nintendo’s marketing strategy impresses Microsoft
Submitted by Shubha Krishnappa on Sat, 06/09/2007 – 07:43

After year-long speculation that Microsoft is planning to cut Xbox 360 prices, the software giant on Friday gave hints that its most powerful video game and entertainment system, the Xbox 360, will soon see a price drop, apparently realizing that consumers react well to lower-cost consoles.

In an interview with Bloomberg, Microsoft’s David Hufford, a director of Xbox product management, admitted that the USD 199 price point is the “sweet spot”. “We are well aware that the sweet spot of the market is really 199 bucks,” Hufford said in the interview. As the world’s largest software maker has recently committed to adding more family-oriented games, Hufford thinks US$199 is the ideal price for the family audience.

With its plans to cut the price of the Xbox 360 and add more family titles, Microsoft apparently is trying to follow its rival Nintendo's success in the gaming industry. Among the trio of next-generation gaming consoles, Nintendo led the U.S. market in April for the fourth consecutive month, selling 360,000 of its popular video game devices and again outshining both of its rivals, Microsoft's Xbox 360 and Sony's PlayStation 3.

The more powerful systems, Sony's PlayStation 3 and Microsoft's Xbox 360, again lagged behind in the fierce battle for dominance in the booming console market, moving 82,000 and 174,000 units in April, respectively. Launched in November last year, Nintendo's Wii console retails for US$250, while the PS2 costs only $129. The PS3 starts at $499, with its high-end version priced at $599; the Xbox 360 starts at $299, with the Premium System at $399 and the Elite System at $479.

During his discussion with Bloomberg, Hufford acknowledged that, beyond younger players, the Nintendo Wii has drawn an audience that wouldn't normally play games, such as soccer moms and the elderly. Remarking on Nintendo's low-cost gaming device, Hufford said, "When Mom walks into the store and sees she can get a console with a game for USD 250, she sees it as a USD 300 value. They've done a good job." Impressed with the way Nintendo has attracted women, children and the elderly, Microsoft now intends to adopt the Japanese company's marketing strategy to win a broader audience than the first Xbox attracted.

The Redmond giant hopes this move will help it shake off the 'hardcore' image that hampered sales of its sixth-generation video game console, the original Xbox, which was first released in North America on November 15, 2001. "If we don't make that move, make it early and expand our demographic, we will wind up in the same place as with Xbox 1, a solid business with 25 million people," said Peter Moore, Microsoft's Head of Interactive Development. "What I need is a solid business with 90 million people."

Although Microsoft has not yet clearly confirmed a price cut for the console, Heather Bellini, analyst for the Institutional Investor, predicts that a price reduction may happen as soon as September. "If they really are going to have a good Christmas games line-up, then they just have to have the largest number of boxes out there so that they sell the largest number of games," commented Bellini. Earlier this month, Microsoft added Blinky, Clyde and other Pac-Man hallmarks to its Xbox 360 game consoles.

Microsoft, which has always tried to hook more and more customers, has now made Pac-Man, the legendary game, its weapon to boost sales of the Xbox 360. In March, Microsoft Game Studios and Bungie Studios revealed the first details of the three editions, "Standard", "Limited" and "Legendary", of the most eagerly anticipated Xbox 360 game, "Halo 3", which will hit stores later this fall, giving gaming enthusiasts three different purchase options.

Announcing the three different SKUs of Halo 3, the companies have divided the gaming community into three camps: those who are really, truly desperate for Halo 3, those who are sort of desperate, and those who are just your garden-variety desperate. The companies said that the three editions are tailored to the tastes of all kinds of gamers.

New challenges for Nintendo

Ryan Kim, Chronicle Staff Writer, Monday, November 30, 2009

The Nintendo Wii, the motion-control darling of the video game console world, faces new challenges and questions about its future like never before. The industry-leading hardware has been the hottest gift three holiday seasons in a row since it was released in November 2006. This summer, the Wii topped 50 million consoles sold worldwide, making it the fastest-selling console ever. But in recent months, sales have fallen off and holiday supplies are plentiful for the first time, removing some of the cachet of being a sold-out product. Some independent game publishers are restless as well, unhappy that they aren't having the success on the platform that Nintendo's games are.

Rivals Microsoft and Sony have both announced plans to add motion-control systems, which will appear next year, negating some of the innovation advantage that Nintendo has enjoyed. The question is whether the Wii can maintain its market lead or whether it will fall back into the ranks of the more traditional game systems, Microsoft's Xbox 360 and Sony's PlayStation 3. "The Wii … has passed a significant milestone in that supply has met demand," said analyst Billy Pidgeon of Game Changer Research. "It's not an issue that you can't get a Wii. A lot of people … are eager to see what happens now." Nintendo executives dismiss concerns about the Wii platform, saying it has legs to compete for years. Sales of the console, however, dropped 43 percent from April to September this year, and are expected to fall far short of the 10 million units sold a year before. Nintendo has also revised its revenue forecast for the fiscal year ending March 31 from $20.6 billion to $17 billion.

Regaining lead

Nintendo lost the title o

How Scientific Management Influenced Management Thinking

Butler (1991, p. 23) believes "Many of Taylor's ideas, concepts, and rules seem even more appropriate today than at the time he promulgated them. Furthermore, today's technology and developments enable a more effective implementation." The four principles of scientific management according to Butler (1991, p. 24) are as follows:

1. Scientific development of the best work methods through observation, measurement and analysis, replacing the rule-of-thumb method.
2. Scientific selection and development of the workmen through training; under the previous system the workman chose his own work and was self-taught.
3. Relating and bringing together the best work methods and the training and development of the selected workmen.
4. Cooperation between employers and workmen, which should include the division of work and the manager's responsibility for work; under the previous system almost all work and responsibility was placed on the worker.

I believe the modern workplace still has a system very similar to this in place today. The world is still competitive, advancing in technology and searching further for efficiency and earning power.

Now I will turn to the introduction of scientific management and how these principles changed management thinking. The introduction of this system in the United States was well received and agreed with Taylor's (1911) suggestion that employers and workmen who adopt scientific management will eliminate disputes and disagreements, in particular relating to wages, through scientific investigation. The result of scientific management was huge gains in productivity and prosperity, and it also seemed to ease the working conditions and industrial unrest of the time. The introduction of scientific management in France was a somewhat different story. According to Witzel (2005, p. 90), "Subsequent studies have shown that fewer than 100 French companies adopted scientific management methods during this period. The debate which raged in intellectual and academic circles had very little impact on French management or French industry." Perhaps French culture was concerned about the deskilling and dehumanising that would occur and the dominance of capital over labour. It is interesting to note that scientific management was, and still is, used in France today. Witzel (2005, p. 91) affirms that "scientific management became what its detractors in the West always feared it could become: a tool for driving workers harder rather than a means of rewarding them for efficiency gains." The Soviet Union welcomed Taylor's system with open arms, but in my opinion it ended tragically there due to the communist government of the day and a lack of education in the workplace. This is an example of scientific management techniques being used without the correct philosophy applied. In this situation it is most likely that the employer gained higher production levels leading to higher profits, while the workmen would have been disadvantaged in many ways. Butler (1991, p. 25) believes "There are strong economic incentives to invest in selection, training, human resource development at levels in order to insure the success and survival of the firm." This very statement is why the principles of scientific management are relevant today and changed management thinking in the early 1900s.

I have looked at the principles of scientific management as it was introduced to the world and believe it is even more relevant today due to developments in economic conditions, technology, education and social skills. The world is a changing place and is currently in the process of globalisation, which has many aspects, such as politics, economics, financial markets and the environment, that will require some form of management to succeed; whether scientific management is the valid approach remains a controversial debate.

REFERENCE LIST

Butler, G. R. (1991). 'Frederick Winslow Taylor: The Father of Scientific Management and His Philosophy Revisited', Business Premier Database, pp. 23-27.
Taylor, F. W. (1911). The Principles of Scientific Management, Harper, New York.
Witzel, M. (2005). 'Where Scientific Management Went Awry', Business Premier Database, pp. 89-91.

Uncovered Interest Parity

Let us take a simple example in order to understand the uncovered interest parity condition. The one-year interest rate in the Eurozone is slightly above 4%, compared with a Czech interest rate of less than 3% for one year. Yet despite this negative interest rate differential, many investors still prefer to hold Czech assets. This is because financial market participants expect the Czech crown to appreciate in the future, and that expected appreciation compensates for the negative interest rate differential.
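A short worked version of the condition may make this logic clearer. This is a minimal sketch using the round numbers from the example above (3% for the Czech deposit, 4% for the euro deposit) and writing S_t for the spot price of one euro in crowns; the figures are purely illustrative.

\[
(1 + i_{CZK}) \;=\; (1 + i_{EUR})\,\frac{E_t[S_{t+1}]}{S_t}
\qquad\Longleftrightarrow\qquad
i_{CZK} \;\approx\; i_{EUR} + \frac{E_t[S_{t+1}] - S_t}{S_t}
\]
\[
0.03 \;\approx\; 0.04 + \Delta s^{e} \quad\Rightarrow\quad \Delta s^{e} \approx -0.01
\]

In words, the crown must be expected to appreciate by roughly 1% against the euro over the year for an investor to be indifferent between the two deposits, which is exactly the compensation described above.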

Now let us see why uncovered interest parity may not hold, particularly in the long run. Because of the uncertainty surrounding future exchange rates, uncovered interest parity may not hold, and indeed the empirical evidence suggests that it generally does not. Compared with covered interest parity, uncovered interest parity is also more difficult to test, because the expected exchange rate changes it involves are unobservable.
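One standard way around this unobservability, under the additional assumption of rational expectations, is to replace the expected change with the realised change and estimate the following regression; this is a textbook construction added here for illustration rather than something spelled out in the passage:

\[
s_{t+1} - s_t \;=\; \alpha + \beta\,(i_t - i_t^{*}) + \varepsilon_{t+1}
\]

Here s is the log spot rate and i - i* the interest rate differential. UIP together with rational expectations implies alpha = 0 and beta = 1, whereas empirical estimates of beta are often well below one and sometimes negative, which is the sense in which the parity "does not hold generally".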

One of the reasons uncovered interest parity may not hold is inefficiency in the capital markets, which gives money managers, speculators and hedge funds the chance to exploit that inefficiency for profit. A newer area of research investigates whether uncovered interest parity holds for emerging markets. Bansal and Dahlquist (2000) found a basic asymmetry in when UIP holds: in particular, UIP holds when the U.S. interest rate is lower than foreign rates, while it fails to hold when the U.S. rate is higher.

The other important factor determining the extent of UIP's failure is the gross domestic product per capita of the foreign country. For example, uncovered interest parity does not hold for the G-7 countries. It has been argued, however, that this is because under the uncovered interest parity model a country's nominal interest rate is determined by the world interest rate plus the expected change in exchange rates, where such expectations are formed on neoclassical assumptions.

Given this, one can say that central banks have the power to act autonomously in their respective countries and will act according to economic changes. Assuming for a moment no other sources of deviation from UIP, when agents are confident in their forecasts capital will move and UIP will tend to hold; when they are not, however, investors in aggregate may believe that the return that can be earned in one nation exceeds that in the other, yet lack the conviction to act. As a result, capital will not flow in sufficient volume and UIP will not hold. One of the main drawbacks of the UIP condition is that it does not say, as is sometimes thought, that a one percentage point increase in the sterling interest rate will cause a one percent depreciation of the exchange rate.

In fact, in response to the higher return on sterling assets there would be a capital inflow and sterling would appreciate, not depreciate, and this would happen instantly. The explanation for this can also be found in the UIP condition. On the other hand, uncovered interest parity and covered interest parity will only hold when the government or authorities place no obstacles in the way of the movement of money or financial capital.
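Since covered and uncovered parity have now been mentioned together, it may help to sketch how the two conditions combine, anticipating the conclusion drawn at the end of this section. Writing F_t for the one-period forward rate and S_t for the spot rate, both expressed as domestic currency per unit of foreign currency, the two conditions are:

\[
\text{CIP:}\quad \frac{F_t}{S_t} = \frac{1 + i_t}{1 + i_t^{*}}
\qquad\qquad
\text{UIP:}\quad \frac{E_t[S_{t+1}]}{S_t} = \frac{1 + i_t}{1 + i_t^{*}}
\]

If both hold at the same time, the right-hand sides are equal, so F_t = E_t[S_{t+1}]: the forward rate must equal the market's expectation of the future spot rate.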

Many governments in earlier periods laid down a number of restrictions on the movement of capital, in particular on how much cash could be taken out of or brought into a country. Compared with today, this has changed drastically, especially in developed countries such as the US, the UK and Germany; on the other hand, it is still much the same in developing countries such as India, Sri Lanka and China, where there are restrictions on the movement of capital.

There is also always a danger of the government imposing restrictions on savings held in the country. In such circumstances it is very unlikely that the parity conditions will hold. To conclude, if both covered and uncovered parity hold, then the forward rate must be equal to the market's expectation of the future spot rate.

Reference

Copeland, L. (2005). Exchange Rates and International Finance. Pearson Education Limited, Essex.

Five Layers in the Internet Network Model and What They Do

1. Physical Layer. This is the physical connection between the sender and the receiver. It includes all the hardware devices (computers, modems, and hubs) and physical media (cables and satellites). This layer specifies the type of connection and the electrical signals, radio waves, or light pulses that pass through it.

2. Data Link Layer. This layer is responsible for moving a message from one computer to the next computer in the network path from the sender to the receiver. It has three functions: a. Controlling the physical layer by deciding when to transmit messages over the media. b. Formatting the messages by indicating where they start and end. c. Detecting and correcting any errors that have occurred during transmission.

3. Network Layer. This layer performs the following functions: a. Routing, that is, selecting the next computer to which the message should be sent. b. Finding the address of that computer if it does not already know it.

4. Transport Layer. This layer performs three functions: a. Linking the application layer software to the network and establishing end-to-end connections between the sender and receiver when such connections are needed. b. Breaking long messages into several smaller messages to make them easier to transmit. c. Detecting lost messages and requesting that they be resent.

5. Application Layer. This is the application software used by the network user. With this layer the user defines what messages are sent over the network. Examples of this layer are Internet Explorer and web pages. (A minimal sketch of layers 4 and 5 in action follows below.)
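To make layers 4 and 5 concrete, here is a minimal sketch in Python using only the standard socket and threading libraries. TCP (SOCK_STREAM) stands in for the transport layer, providing the end-to-end connection and reliable delivery, while the tiny echo exchange defined on top of it plays the role of the application layer. The loopback address and port number are arbitrary choices for illustration, not anything prescribed by the model.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address and an arbitrary unused port
ready = threading.Event()         # lets the client wait until the server is listening


def run_server():
    """Accept one connection and echo back whatever the client sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                           # server is now listening
        conn, _addr = srv.accept()            # transport layer: end-to-end connection
        with conn:
            data = conn.recv(1024)            # receive up to 1024 bytes
            conn.sendall(b"ECHO: " + data)    # application layer: our own tiny protocol


def run_client():
    """Connect, send one user-defined message, and return the reply."""
    ready.wait()                              # do not connect before the server is up
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))             # TCP handshake establishes the connection
        cli.sendall(b"hello, network")        # the message the user chose to send
        return cli.recv(1024).decode()


if __name__ == "__main__":
    server = threading.Thread(target=run_server)
    server.start()
    print(run_client())                       # expected output: ECHO: hello, network
    server.join()

The physical, data link and network layers do not appear in this sketch; the operating system and the network hardware handle them, which is exactly the layering idea the list above describes.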